44,601
Bayesian inference and testable implications
I'm not a Bayesian expert and I'm happy to stand corrected, but to me the most straightforward and principled way to test this would be to define a region of practical equivalence (ROPE) around $c$ and then estimate how much posterior density falls inside this region.

For example, let's say that, based on theory and domain knowledge, you know that for all practical purposes, if $c$ deviates from exactly 1 by less than 0.01 then it might as well be 1 (outside of simulation, $c$ is never going to be exactly 1 anyway, so with enough data you will always reject a point null hypothesis). Using the deviation of 0.01, you define a ROPE of $[0.99, 1.01]$. You then run your model and estimate how much posterior density falls inside the ROPE. If the proportion of density $k$ that falls inside the ROPE is smaller than whatever you decide your alpha is, then you should feel comfortable rejecting your model (the posterior probability that $c$ is practically equivalent to 1 is only $k$). See this vignette: https://easystats.github.io/bayestestR/articles/region_of_practical_equivalence.html

PS: You'll probably want a large tail effective sample size (ESS) for this kind of testing. Monte Carlo samplers tend to explore the typical set and give increasingly less precise estimates towards the tails of the distribution, which is where your ROPE might be, so you'll want to run your sampler with a lot of iterations.
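Once you have posterior draws for $c$ from your sampler, the ROPE proportion is just the fraction of draws inside the interval. A minimal Python sketch (the posterior draws here are simulated stand-ins; in practice they come from your MCMC fit):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior draws for c (stand-in for real MCMC output).
posterior_c = rng.normal(loc=1.002, scale=0.03, size=10_000)

# ROPE of +/- 0.01 around 1, as in the text.
rope_low, rope_high = 0.99, 1.01

# Proportion of posterior density (approximated by draws) inside the ROPE.
k = np.mean((posterior_c >= rope_low) & (posterior_c <= rope_high))
print(f"proportion inside ROPE: {k:.3f}")
```

The `bayestestR::rope()` function in R performs the same computation, with a default ROPE chosen from the data when you do not supply one.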
44,602
Bayesian inference and testable implications
EDIT: innisfree is right. Bayes factors seem like a better approach than what I have provided here. I'm leaving it up for posterity, but it isn't the right approach.

Because this problem really relies on a single assertion (namely, that $c$ has some value), we can simply estimate the model $$ y \sim \mathcal{N}(b_0 + b_1x, \sigma)$$ and determine the posterior probability that either $b_0/(b_0+b_1)<c$ or $b_0/(b_0+b_1)>c$.

Here is an example. Say we had a hypothesis that $c=1$, and we know that the variance is 4 and that the intercept (or the mean of one population) is 2. We can fit the following model in Stan:

stan_model = '
data{
  int n;
  vector[n] x;
  vector[n] y;
}
parameters{
  real b;
}
model{
  b ~ normal(0, 1);
  y ~ normal(2 + b*x, 2);
}
'

This allows us to freely estimate the parameter $b_1$, assuming we know $b_0$ and $\sigma$. After fitting the model with a standard normal prior on $b_1$, the histogram of the posterior yields a 95% posterior credible interval for $c$ covering (0.465, 0.686). We can be fairly certain that the value of $c$ is not 1.
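Because $b_0$ and $\sigma$ are fixed and the prior on $b_1$ is normal, the posterior for $b_1$ is available in closed form (standard normal-normal conjugacy), so the logic of the Stan fit can be checked without MCMC. A NumPy sketch, using simulated stand-in data with an assumed true $b_1 = 2$ (so the true $c = b_0/(b_0+b_1) = 0.5$):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in data: b0 = 2, sigma = 2, assumed true b1 = 2.
n = 200
x = rng.normal(size=n)
y = 2 + 2 * x + rng.normal(scale=2, size=n)

# Conjugate update for b1 with prior N(0, 1) and known sigma^2 = 4:
# posterior precision = prior precision + sum(x^2) / sigma^2.
post_prec = 1 + np.sum(x**2) / 4
post_mean = (np.sum(x * (y - 2)) / 4) / post_prec
post_sd = 1 / np.sqrt(post_prec)

# Push posterior draws of b1 through c = b0 / (b0 + b1).
b1_draws = rng.normal(post_mean, post_sd, size=20_000)
c_draws = 2 / (2 + b1_draws)
lo, hi = np.quantile(c_draws, [0.025, 0.975])
print(f"95% credible interval for c: ({lo:.3f}, {hi:.3f})")
```

With the interval concentrated near 0.5, the hypothesis $c = 1$ would be rejected, mirroring the conclusion in the text.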
44,603
Which data is "more normal"?
If you want to quantify departure from normality, then a good measure is the Kolmogorov-Smirnov test statistic $D.$ Let's compare two samples of size $n = 5000.$ The sample x below is taken using an excellent algorithm in R that is known to sample from an essentially perfect normal population, $\mathsf{Norm}(\mu=1.5, \sigma=0.5).$ The sample y is based on sums of three standard uniform random variables. By the Central Limit Theorem, we can guess that such a sum might be nearly normal, but the actual, slightly non-normal population is known. It also has $E(Y) = 1.5,\; SD(Y) = 0.5.$

set.seed(1021)
x = rnorm(5000, 3/2, 1/2)
mean(x); sd(x)
[1] 1.492946
[1] 0.5032069
summary(x)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
-0.4434  1.1552  1.4951  1.4929  1.8283  3.4453
ks.test(x, "pnorm", 3/2, 1/2)

        One-sample Kolmogorov-Smirnov test
data:  x
D = 0.013255, p-value = 0.3434
alternative hypothesis: two-sided

y = replicate(5000, sum(runif(3)))
mean(y); sd(y)
[1] 1.503185
[1] 0.500952
summary(y)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
 0.09379  1.15050  1.49884  1.50319  1.86148  2.90054

A key non-normal feature of the Y-population is that it has no probability outside the interval $(0,3).$

ks.test(y, "pnorm", 3/2, 1/2)

        One-sample Kolmogorov-Smirnov test
data:  y
D = 0.018057, p-value = 0.07674
alternative hypothesis: two-sided

Histograms. Histograms of the two samples are shown below, along with densities of $\mathsf{Norm}(1.5, 0.5).$

ECDF plots. Empirical CDFs of the two samples are shown below, along with CDFs of $\mathsf{Norm}(1.5, 0.5).$ At the scale of these cumulative plots, it is difficult to see a difference between ECDFs and CDFs. However, there are slight discrepancies.

K-S test statistic. The Kolmogorov-Smirnov test statistic measures the maximum vertical absolute difference between ECDF and CDF in each case. For the $X_i$s, that absolute difference is $D \approx 0.013,$ and for the $Y_i$s it is a little larger, $D \approx 0.018.$

A closer look. In order to show the maximum absolute difference between ECDF and CDF more clearly, we show an ECDF plot of a sample of size $n = 5$ from the Y-population.

y1 = replicate(5, sum(runif(3)))
ks.test(y1, "pnorm", 1.5, .5)$stat  # '$'-notation shows test stat
        D
0.3368526
plot(ecdf(y1), main="n=5: 'Nearly' Normal Population")
curve(pnorm(x,1.5,.5), add=T, col="red")

The maximum vertical distance $D = 0.3369$ between the ECDF and CDF occurs at observation $0.7356.$

For two samples of the same size, the one with the smaller K-S normality test statistic $D$ could be said to be more nearly normal. However, there are other ways to measure differences between ECDFs and CDFs.
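The "maximum vertical distance between ECDF and CDF" definition of $D$ is easy to compute by hand. A Python sketch (re-creating the y-type sample of uniform sums, not the exact R draws above) that computes $D$ directly and checks it against SciPy's implementation:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1021)

# Sample analogous to y: sums of three standard uniforms, compared to Norm(1.5, 0.5).
y = rng.uniform(size=(5000, 3)).sum(axis=1)

# Hand-rolled one-sample KS statistic: the ECDF jumps at each sorted data point,
# so the maximum gap occurs just before or just after an observation.
y_sorted = np.sort(y)
n = len(y_sorted)
cdf = stats.norm.cdf(y_sorted, loc=1.5, scale=0.5)
ecdf_hi = np.arange(1, n + 1) / n   # ECDF value just after each point
ecdf_lo = np.arange(0, n) / n       # ECDF value just before each point
D = max(np.max(ecdf_hi - cdf), np.max(cdf - ecdf_lo))

# SciPy's built-in should agree.
D_scipy = stats.kstest(y, "norm", args=(1.5, 0.5)).statistic
print(D, D_scipy)
```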
44,604
Which data is "more normal"?
Let us begin with the assumption that you have data collected across time that is drawn from a normal distribution. If it is, then the frequency is irrelevant, even if one level of frequency looks nicer than another; that is due to Donsker's theorem.

As to

"My question is, is it valid to say that on the basis of a lower test statistic between two tests that one of the data is 'more normal'?"

the answer is no, at least as you have constructed it. Your null hypothesis is that $x$ is drawn from a normal distribution in both cases, and it is rejected. You cannot, at least in this manner, make statements about the differences between the samples: you did not perform a difference test, such as one on $\mu_1-\mu_2$. Hypothesis tests are with regard to population parameters, not samples.

You have two choices on how to consider this, subject to the assumptions of the Anderson-Darling test and any instrumentation issues that may have existed in gathering the sample. You can either use the p-values as evidence against the null and reject that the data is normal, or you can assume that the sample is an extreme case, because the p-value only states that if the null is true, then the sample was unlikely. If the latter may hold, then you should perform another investigation. By themselves, p-values cannot distinguish the case where your sample was bad but your hypothesis good from the case where your sample was good but your hypothesis bad.

The better question, regarding your residuals not being normal, is "so what?" Why would they be something else? What might be going on in your model?
44,605
Unbiased estimator of $\lambda(1 - e^\lambda)$ when $x_1,\ldots,x_n$ are i.i.d Poisson($\lambda$)
Actually, an unbiased estimator does exist. Let us define $\tau = \lambda e^\lambda$ so that $$\lambda(1-e^\lambda) = \lambda - \tau.$$ Since the sample mean $\bar{X}$ is unbiased for $\lambda$, really all we need is an unbiased estimator for $\tau$. An obvious starting place is to use the invariance property of the MLE: $$\hat\tau_\text{mle} = \bar{X}e^\bar{X}.$$ For reasons which will shortly become clear, let's adjust this estimator by introducing a quantity $m$ in the exponential term: $$\hat\tau_m = \frac{T}{n}e^{T/m},$$ where $T = \sum_{i=1}^n X_i$ has a $\text{Poisson}(n\lambda)$ distribution. The expected value of $\hat\tau_m$ can be found directly. \begin{aligned} E(\hat\tau_m) &= \sum_{t=0}^\infty \left(\frac{t}{n}e^{t/m}\right)\left(\frac{e^{-n\lambda}(n\lambda)^t}{t!}\right) \\[1.2ex] &= \cdots && \text{show this on your own} \\[1.2ex] &= \lambda\left(e^{\lambda(e^{1/m} - 1)n}\right)e^{1/m} \end{aligned} This estimator is clearly biased (for now). To reduce the exponent to $\lambda$, we need $(e^{1/m}-1)n = 1$. Solving this equation, we obtain $m_\star = (\log(1+1/n))^{-1}$. Using this value for $m$ yields $$E(\hat\tau_{m_\star}) = \left(1+\frac{1}{n}\right)\lambda e^\lambda.$$ I'll leave the rest of the details up to you, but this estimator can now be adjusted (divide by the constant $1+1/n$) so that it is unbiased for $\tau$.
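Carrying out the final adjustment, $e^{1/m_\star} = 1+1/n$, so the bias-corrected estimator is $\hat\tau_{m_\star}/(1+1/n) = (T/n)(1+1/n)^{T-1}$. A quick Monte Carlo sanity check of its unbiasedness (my own verification, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)

lam, n, reps = 2.0, 10, 400_000
T = rng.poisson(lam * n, size=reps)      # T = sum of n iid Poisson(lam) draws

# Bias-corrected estimator: (T/n) * (1 + 1/n)^(T - 1),
# i.e. tau_hat_mstar divided by its extra factor (1 + 1/n).
tau_hat = (T / n) * (1 + 1 / n) ** (T - 1)

target = lam * np.exp(lam)               # tau = lam * e^lam
print(tau_hat.mean(), target)
```

The Monte Carlo mean should land on $\lambda e^\lambda \approx 14.78$ up to simulation noise.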
44,606
Hyperparameter Optimization Using Gaussian Processes
As a result of doing that, you will also overfit the validation set (the more so the more you tuned the hyperparameters: if you tried two or three configurations, the effect is smaller than if you did some systematic search, e.g. using the Gaussian process approach). The standard solution to this is to have not just a training and a validation set, but also a third set (a test set). You would only ever look at the test set once, with your very final model after hyperparameter tuning.
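A minimal sketch of the three-way split described above, with illustrative 60/20/20 proportions (the fractions are an assumption, not a recommendation from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shuffle indices once, then carve out the three disjoint subsets.
n = 1000
idx = rng.permutation(n)
train_idx = idx[:600]
val_idx = idx[600:800]    # used repeatedly during hyperparameter tuning
test_idx = idx[800:]      # looked at exactly once, with the final model

print(len(train_idx), len(val_idx), len(test_idx))
```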
44,607
Why does non-parametric bootstrap not return the same sample over and over again?
Each member of the bootstrap sample is selected randomly with replacement from the data set. If we were to sample without replacement, then every sample would simply be a re-ordering of the same data. But, as a consequence of replacement, the bootstrap samples differ in how many times they include each data point (which may be once, multiple times, or not at all). On average, about 63% of data points (a fraction $1-(1-1/n)^n \to 1-1/e$) appear at least once in a given bootstrap sample.
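The ~63% figure is easy to verify by simulation. A Python sketch that draws bootstrap index samples and measures the fraction of distinct points in each:

```python
import numpy as np

rng = np.random.default_rng(0)

n, reps = 100, 2000
unique_fracs = []
for _ in range(reps):
    boot = rng.integers(0, n, size=n)            # bootstrap: n indices with replacement
    unique_fracs.append(len(np.unique(boot)) / n)

# P(a given point appears at least once) = 1 - (1 - 1/n)^n, which tends to 1 - 1/e.
print(np.mean(unique_fracs))
```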
44,608
Why does non-parametric bootstrap not return the same sample over and over again?
@user20160's explanation is fine. Here's an example of 10 bootstrap samples of the sequence from 1 to 5, showing that some values will be represented more than once and other values will not be represented at all:

x <- 1:5
t(replicate(10, sort(sample(x, replace=TRUE))))

      [,1] [,2] [,3] [,4] [,5]
 [1,]    2    2    4    4    5
 [2,]    1    1    1    2    4
 [3,]    3    3    3    5    5
 [4,]    1    1    1    2    3
 [5,]    1    1    2    3    3
 [6,]    1    2    3    4    4
 [7,]    2    2    3    4    5
 [8,]    3    3    3    4    4
 [9,]    1    1    2    3    5
[10,]    1    1    2    4    4
44,609
Why does non-parametric bootstrap not return the same sample over and over again?
Just to confirm the answers here: the key misunderstanding is that the questioner believes there is no replacement in the sampling. If there were no replacement, then with 10 elements, 10 random sampling events, and 2 replications, each replication would be identical to the other, and the number of random sampling events could never exceed the original sample size. With replacement, however, the number of sampling events could in theory exceed the number of elements, so the original sample size could be increased to any given number. In practice this would be erroneous, because you would artificially lower the variance (which is a no-no); the mean, however, would remain the same. Just to clarify, increasing the number of replications is the correct approach to stabilise both the mean and the variance. I'll refrain from elaborating.

As an aside, nonparametric bootstrapping is handy when you have no idea how to derive the 95% confidence interval of the mean: sort the bootstrap estimates and remove the upper and lower 2.5%. The technique has its critics, however.
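The "sort and remove the upper and lower 2.5%" recipe is the percentile bootstrap interval. A Python sketch on simulated stand-in data (an exponential sample, where a normal-theory interval would be less natural):

```python
import numpy as np

rng = np.random.default_rng(0)

data = rng.exponential(scale=2.0, size=200)   # skewed stand-in sample

# Percentile bootstrap CI for the mean: resample with replacement, collect the
# bootstrap means, and cut off the upper and lower 2.5%.
boot_means = np.array([
    rng.choice(data, size=len(data), replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.quantile(boot_means, [0.025, 0.975])
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```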
44,610
Should I gloss over the linear algebra chapter in the book "Deep Learning" by Ian Goodfellow?
This is a question that often pops up when reading mathematical literature. The initial chapters, of this book or any other math book, lay out tools that you will be using in later chapters, so strictly speaking, you will not understand the rest of the book without understanding these foundational chapters.

Realistically speaking, don't worry if you don't understand something. Continue reading until the topic actually appears and is applied. Then, and only then, re-read the earlier section, and try to make sense of it in the light of the later application. By then, you will have seen a lot of other material and may be able to understand it much better against this background. In addition, it is often very good to look at other sources at this point, when you actually need to understand the application of something. Different authors have different ways of explaining stuff. Looking at things from different angles can be very helpful.

It has been said that good mathematical writing is the kind where you can mentally replace every formula by "foo" and still understand the gist. Read the formulas when you need to understand something in depth and detail.

Regarding the two specific topics you mention:

The Moore-Penrose pseudoinverse is fundamental when you want to create an actual estimation algorithm. If you are mainly interested in applying algorithms someone else has developed and implemented, then you need to understand that algorithm, but much less so the gory details. I have never needed to understand the Moore-Penrose pseudoinverse. We only have very few threads here on it, too.

PCA is much more useful to someone actually applying a tool. Conversely, someone building a tool will likely not use it very much. It's really good to understand this and related ways of reducing dimensionality or compressing information. If you come across a situation where PCA can be helpful in preprocessing, there will not be a big sign pointing this out, so you need to develop your own intuition and understand that this method exists. Happily enough, we have an astronomically upvoted mother-of-all-canonical-threads on PCA, along with an entire pca tag. Go through that thread, then re-read Goodfellow et al. on PCA. Enlightenment is almost sure to follow.
44,611
Is it wise to use predicted values to model predicted values further down the line?
I will answer your questions in reverse order:

2) Your approach is correct. This is called recursive forecasting: generate a forecast for one step ahead, $\hat{y}_{t+1} = f(y_t)$, then use that to generate a forecast for two steps ahead, $\hat{y}_{t+2} = f(\hat{y}_{t+1})$, etc., until you have $\hat{y}_{T}$ for your desired $T$ steps ahead. This approach is used by most statistical forecasting models, such as ARIMA and Exponential Smoothing. We could say that it is the standard approach. Another possibility is direct forecasting, where you build a model for forecasting $\hat{y}_T$ directly. Although direct forecasting shows promise in theory, I haven't seen it widely used, except occasionally when using neural networks for forecasting. See here for details.

1) You could do that, and it should work (depending on your data, obviously), but you would get a similar result using Holt-Winters, STL or Seasonal ARIMA. I suspect you are not applying ARIMA correctly if you think that your data is seasonal but you are still getting bad results.

In response to @Ben's comment that

The auto-regression is at a fixed lag, but I don't agree that this leads to a seasonal part with fixed frequency and phase angle. (I should have said: it is the phase angle that gets thrown off here.) Run a seasonal ARIMA for a long time and you will see that random error eventually pushes the seasonal fluctuation out-of-sync with what it was at the start of the series. As I understand it, you cannot mimic a periodic regression with seasonal ARIMA for this reason.

This is not correct. The seasonality is structurally built into a Seasonal ARIMA model (in the same way that it is in a Holt-Winters or Seasonal BSTS model), so it can't deviate from the fitted frequency, even in long-term forecasts.
Below is an example of an ARIMA model of a monthly seasonal series where a long-term forecast maintains a fixed seasonality even with a very, very long forecast horizon (216 steps ahead), generated using the auto.arima() function from the R forecast package:
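The recursion itself is easy to see in code. Below is a bare-bones sketch (illustrative only, not the forecast package's implementation) that fits an AR(1) coefficient by least squares and then feeds each forecast back in as the input for the next step:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series: y_t = 0.8 * y_{t-1} + noise
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.8 * y[t - 1] + rng.normal(scale=0.5)

# Least-squares estimate of the AR(1) coefficient from (y_{t-1}, y_t) pairs
phi = (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

# Recursive forecasting: each forecast becomes the input for the next step
horizon = 10
forecasts = []
last = y[-1]
for _ in range(horizon):
    last = phi * last          # y_hat_{t+h} = f(y_hat_{t+h-1})
    forecasts.append(last)
```

The key line is `last = phi * last`: once the observed data runs out, the model's own output is recycled as the next input, exactly as ARIMA and Exponential Smoothing do internally.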
44,612
Is it wise to use predicted values to model predicted values further down the line?
(1) You should "mix" the approaches by using a model that captures both features. When your data shows multiple features (e.g., drift and seasonality) it is a good idea to use a model that captures all of these features together. This is preferable to attempting to make ad hoc changes to a model that only captures one feature of the data. If you have a seasonal component with a fixed frequency, you can add this into your model by using an appropriate seasonal variable. In the case of monthly data with an annual seasonal component, this can be done by adding factor(month) as an explanatory variable in your model. By having both a drift term and a seasonal term in your model, you are able to estimate both effects simultaneously, in the presence of the other. You can then forecast from your fitted model without having to make ad hoc changes. (2) Predictions are functions of observed data; they are not new data. When you want to make forward predictions in time-series data, your predictions will be functions of the observed data and the parameter estimates from your fitted model. For time-series models with an auto-regressive component, the form of the predictions is simplified by expressing the later predictions in terms of earlier predictions. The later predictions are implicitly still functions of the observed data and the estimated parameters; they are just expressed in a simplified form through previous predictions. For example, suppose you observe $y_1,...,y_T$ and you estimate parameters $\hat{\tau}$ for a model. Then if your model has an auto-regressive component, you make predictions $\hat{y}_{T+1} = f(y_1,...,y_T, \hat{\tau})$ and $\hat{y}_{T+2} = f(y_1,...,y_T, \hat{y}_{T+1}, \hat{\tau})$, where the later prediction is expressed as a function of the earlier prediction. 
The prediction $\hat{y}_{T+2}$ is still an implicit function of $y_1,...,y_T, \hat{\tau}$, so this is just a shorthand way of simplifying the expressed predictions, to take advantage of the auto-regression. If you are doing this correctly, your uncertainty about your predictions (e.g., confidence intervals, etc.) should account for the uncertainty in earlier predictions, and so your uncertainty should tend to "balloon" as you get further and further from the observed data. You must make sure that the earlier predictions are not treated as new observed data - i.e., the prediction $\hat{y}_{T+1}$ is not the same as the actual data point $y_{T+1}$. So long as you treat this correctly, accounting for the additional uncertainty, there is no problem with expressing later predictions as being dependent on earlier predictions.
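As a concrete sketch of point (1) (simulated data, with numpy least squares standing in for a proper forecasting package), a drift term and monthly dummies can be estimated simultaneously and then used to forecast without ad hoc adjustments:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated monthly series: linear drift + a fixed annual pattern + noise
n = 120                                   # ten years of monthly data
t = np.arange(n)
month = t % 12
y = 0.05 * t + 2.0 * np.sin(2 * np.pi * month / 12) \
    + rng.normal(scale=0.3, size=n)

# Design matrix: drift column plus one dummy per month
# (the numpy analogue of adding factor(month) to a regression)
def design(times):
    months = times % 12
    return np.column_stack(
        [times] + [(months == m).astype(float) for m in range(12)]
    )

beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)

# Forecast 24 steps ahead from the same fitted model
t_new = np.arange(n, n + 24)
y_hat = design(t_new) @ beta
```

Forecasts twelve months apart differ by exactly twelve drift steps, confirming that the seasonal shape stays fixed while the trend carries on.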
44,613
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
I think it's much easier to solve this problem using the triangle inequality rather than using a squaring approach. Since $|X+Y| \le |X| + |Y|$, the event $\{|X|+|Y| \le 2|X|\}$ is contained in the event $\{|X+Y| \le 2|X|\}$, so we have $$P(|X+Y|\le 2|X|) \ge P(|X|+|Y|\le 2|X|) = P(|Y|\le|X|) = 1/2$$ Do you specifically need to show that the probability is strictly greater than 1/2?
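A quick Monte Carlo check (my own illustration, taking $X, Y$ standard normal) confirms that the probability is not only at least $1/2$ but comfortably above it:

```python
import numpy as np

rng = np.random.default_rng(3)

# X, Y i.i.d. from a symmetric continuous distribution (standard normal)
n = 200_000
x = rng.normal(size=n)
y = rng.normal(size=n)

p_hat = np.mean(np.abs(x + y) <= 2 * np.abs(x))
# The triangle-inequality bound only guarantees p_hat >= 1/2;
# the simulated value is strictly larger.
```

For normal variables the simulated value comes out around 0.65, well above the bound.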
44,614
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
If you draw the lines Y = X and Y = -3X, the first has slope 1, and the second has slope -3. The two lines divide the plane into four quadrants, with the solution set being the left and right quadrants. So you just have to show that more than half of the probability mass is in those two quadrants. Call the two quadrants together figure1. Now rotate this by 90 degrees, and call this figure2. Rotating by 90 degrees is the same as the transformation Y' = X, X' = -Y. The distributions are identical, so exchanging X and Y leaves the probability the same. They are symmetric about zero, so multiplying by -1 leaves the probability the same. Thus, this transformation leaves the probability the same. Since every point in the plane is covered by either figure1 or figure2, and some points are covered by both, and this transformation doesn't affect the probability mass, it follows that figure1 contains at least half of the probability mass; and if you can prove that the overlapped areas contain a positive amount of probability mass, then it follows that figure1 contains more than half of the probability mass.
44,615
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
The image below demonstrates how a partitioning of the region $|x+y| \leq 2|x|$ helps to prove $P[|X+Y| \leq 2|X|] > \frac{1}{2}$. The hatched region (region 2) corresponds to your region $$|x+y| \leq 2|x| \qquad \text{or equivalently} \qquad (y-x)(3x+y) \leq 0$$ Part of this region (the pink coloured hatched region, region 2a) is a mirror image of the complement of region 2 (the pink coloured region, region 1). From this you can deduce that: $$P[|X+Y| \leq 2|X|] = \frac{1 + P[(X-Y)(3Y-X) \geq 0] + P[(-X-Y)(3X+Y) \geq 0]}{2}$$ where the extra terms $P[(X-Y)(3Y-X) \geq 0]$ and $P[(-X-Y)(3X+Y) \geq 0]$ relate to the probability that $X,Y$ are in the grey coloured hatched regions. Note that the lines are not included in the complement, region 1. Thus the discrete uniform example $x,y \sim U(-1,1)$ by jbowman does not work: the points (1,1), (-1,-1), (-1,1), (1,-1) are inside the grey hatched regions and hence inside the region for which $|x+y| \leq 2|x|$ is true. In the same way, for any distribution with a finite pdf, or with probability mass at least somewhere, there will be some positive contribution near the line $Y=X$ such that the inequality is strict ($>$ instead of $\geq$). For instance, in the discrete case, $P(X=x, Y=y)$ for $x=y$ equals $P(X=x)^2$, which follows from the property that $f_X = f_Y$. In the continuous case you could evaluate: $$\begin{aligned} P[|X+Y| \leq 2|X|] &= \frac{1 + 4 \int_{x=0}^{x=\infty} \left( \int_{t=\frac{1}{3}x}^{t=x} f_X(t) \, dt \right) f_X(x) \, dx}{2}\\ &= \frac{1 + 4 \int_{x=0}^{x=\infty} \left( F(x)-F(\tfrac{1}{3}x)\right) f_X(x) \, dx}{2} \end{aligned}$$ where the integral must be non-zero if $f_X(x)$ is non-zero in at least some continuous region of non-zero size (such that $F(x)-F(\frac{1}{3}x)$ is non-zero in a region with non-zero probability). Another equality, obtained by noting that the two grey-region probabilities together equal $P[\frac{1}{3} |Y| \leq |X| \leq |Y|]$ (exchange $X$ and $Y$ in the first term), is $$P[|X+Y| \leq 2|X|] = \frac{1}{2} + \frac{1}{2} P[\tfrac{1}{3} |Y| \leq |X| \leq |Y| ]$$
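The integral formula can be checked numerically. For standard normal $X, Y$ the angle of $(X, Y)$ is uniform, so the exact answer is the angular measure of the two cones, $(\pi/4 + \arctan 3)/\pi \approx 0.648$ (my own computation, used here only as a reference value):

```python
from math import erf, exp, sqrt, pi, atan

def F(x):                      # standard normal CDF
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def f(x):                      # standard normal pdf
    return exp(-0.5 * x * x) / sqrt(2.0 * pi)

# Trapezoidal approximation of the integral on (0, 8];
# the integrand is negligible beyond x = 8.
h = 1e-3
xs = [i * h for i in range(8001)]
vals = [(F(x) - F(x / 3.0)) * f(x) for x in xs]
integral = h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

p = (1.0 + 4.0 * integral) / 2.0
exact = 0.25 + atan(3.0) / pi  # angular measure of the two cones
```

The numerically evaluated formula matches the exact angular value, and both exceed $1/2$ as claimed.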
44,616
Proving $P(|X+Y|\leq 2|X|) > \dfrac{1}{2}$
Changing this probability into an expectation is the key to solving this problem easily. As noted by @Accumulation and @Martijn Weterings, the absolute value function divides the (x,y)-plane into conic regions. Note that $$ P(|X+Y|<2|X|)=E[I(|X+Y|<2|X|)] $$ where $I()$ is the usual zero-one indicator function. (Note that we can use "$<$" in place of "$\le$" by assuming $X$ and $Y$ are continuous, a requirement that's needed to ensure the strict inequality you want.) Then, the indicator can be rewritten as $$I(|X+Y|<2|X|)=U+V$$ where $$U=I(X < 0, -X < Y < -3X) + I(X > 0, -3X < Y < -X)$$ and $$V=I(X < 0, X < Y < -X) + I(X > 0, -X < Y < X)$$ Based on the definitions of $U$ and $V$, it is clear that the regions on which each random variable equals one are non-overlapping (i.e. $P(V=1,U=1)=0$). Additionally, we can rewrite $V$ as $$V=I(|X|+|Y| \le 2|X|)=I(|Y| \le |X|)$$ Therefore, $$P(|X+Y|<2|X|)=E[I(|X+Y|<2|X|)]=E[U+V]=E[U]+E[V]=P(U=1)+P(|Y| \le |X|)=P(U=1)+1/2$$ All that remains is to show that $P(U=1)>0$. Let $A\times B \subset \mathbb{R}^2$ be a rectangle centered about the origin. Then, by independence, $P(X\in A, Y\in B)=P(X\in A)P(Y\in B)$. Necessarily, there exist such intervals $A$ and $B$ over which $f_X(x)$ integrates to some number greater than zero. Also, any such $A$ and $B$ will contain area over which $U=1$; this follows from the fact that the area over which $U=1$ takes the form of two cones extending from the origin. Therefore, $P(U=1)>0$ and the result follows.
44,617
two questions; how to interpret the AUROC (area under the ROC curve)
Would it be correct to say that there is 85% chance that $A$ has the disease?

No. Assuming your model is correct and well-calibrated, the probability that $A$ has the disease is the model's predicted probability for $A$. The meaning of AUROC (area under the ROC curve, to distinguish it from the less-common area under the precision-recall curve) is exactly what you state: given a randomly-selected diseased person and a randomly-selected healthy person, there is an 85% chance that your model ranks the diseased person higher than the healthy person.

Can you give me some examples on how I can utilize my regression model knowing that it has strong discriminatory power?

Suppose you need to construct a procedure that makes binary decisions without human intervention. For example, the test results are reported in an automated fashion for some purpose. It is possible to find all diseased individuals (perfect TPR) by labeling everyone as diseased, but your FPR will also be 1.0. Alternatively, you could capture no false positives, but at the cost of also capturing no diseased individuals. A ROC curve traces out the tradeoff between these two extremes, i.e. the estimated TPR and FPR at every decision-value cutoff. ROC curves are commonly summarized by AUROC, but this does not imply that a model with a higher AUROC necessarily has a better TPR/FPR tradeoff at a specific decision value. It's common in the machine learning community to compare two or more alternative models on the basis of AUROC, but this does not imply that AUROC is useful in general, or even for the particular purpose of that machine learning project.
44,618
two questions; how to interpret the AUROC (area under the ROC curve)
If the regression model gives me a subject AA with a predicted probability of 0.6 and this seems to be a high probability compared to other subjects. Would it be correct to say that there is 85% chance that AA has the disease? The answer is "no". The AUROC does not care about the actual value of your probability predictions, only the order of your predictions. You could divide all your prediction probabilities by 10 and still get the same AUC. In fact, you can come up with some ordering criteria that is entirely independent of what your prediction probabilities are, and still get an AUC score. To get a good intuitive idea of how to interpret an AUROC, it helps to look at an ROC curve for a small number of samples. Here's one I whipped up: Note that each step to the right represents a "wrong" guess, and each step upwards represents a "right" guess. (Larger steps mean more guesses.) I filled out a grey area for a single step to the right (i.e. "wrong guess"). The AUC is simply the sum of all the dark rectangles over all wrong guesses. The height of the rectangle is the proportion of "true" samples that have been listed so far. That is, for the "false" sample individual who caused the horizontal step of the rectangle, if we stop at that individual then the true positive rate is given by the height of the rectangle. The width of the rectangle is the proportion of "false" guesses that we're running through when taking the horizontal step in the rectangle. The area of the highlighted rectangle can be interpreted as follows. Suppose we choose a "false" sample individual at random. The probability of choosing that sample, multiplied by the true positive rate of all selections before that sample is given by the area of a dark rectangle. Thus the sum of dark rectangles is the expected true positive rate before a false sample, where the expectation is taken over all false samples. 
Put another way, if you pick a false sample individual at random and stop your "chosen" list at that individual, the expected value of the TPR up until that sample is the AUC. The TPR, of course, is the probability that a positive sample, when chosen at random, will be in your "chosen" list. So another way to interpret the AUC is that, if you choose a positive sample at random and a negative sample at random, the AUC is the probability that the positive sample will appear on your list before the negative sample. As to your last question: based on what I said earlier, AUROC is an indicator of how well you ranked your samples, not how good your probability predictions are. Remember, any monotonic function of your probability outcomes (such as dividing them by ten, or taking their sigmoid) will yield the exact same ROC curve. So a good AUROC shouldn't tell you how to gauge the probability of an event. (For example, if you have a very high AUROC for disease classification, and you predict that someone has a 99% chance of having the disease, that person shouldn't necessarily act as if they almost definitely have the disease. It could be that they only have a 5% chance, but your model is still great at determining that their chance is much higher than that of someone else.) Because AUC is a good indicator of ranking, its main utility should be in prioritizing candidates. For example, if you have a model with high AUC for disease classification, then your prediction results should determine whom you choose first for further diagnosis or treatment.
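Both readings of the AUC, the summed rectangle areas under the step-shaped ROC curve and the pairwise ranking probability, can be checked against each other on toy scores (an illustrative sketch, not tied to any library):

```python
import random

random.seed(4)

# Toy scores: positives tend to score higher, with plenty of overlap
pos = [random.gauss(1.0, 1.0) for _ in range(200)]
neg = [random.gauss(0.0, 1.0) for _ in range(300)]

# (1) AUROC as a pairwise ranking probability (ties count half)
wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
auc_pairs = wins / (len(pos) * len(neg))

# (2) AUROC as the summed rectangles under the step-shaped ROC curve:
# walk the scores from highest to lowest; a positive steps up,
# a negative steps right and contributes a rectangle of height TPR.
ranked = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg], reverse=True)
tpr = auc_steps = 0.0
for score, label in ranked:
    if label == 1:
        tpr += 1.0 / len(pos)
    else:
        auc_steps += tpr / len(neg)
```

The two numbers agree, and both are unchanged if every score is passed through any monotonic function, which is the point about AUROC measuring ranking rather than calibration.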
two questions; how to interpret the AUROC (area under the ROC curve)
If the regression model gives me a subject AA with a predicted probability of 0.6 and this seems to be a high probability compared to other subjects. Would it be correct to say that there is 85% chanc
two questions; how to interpret the AUROC (area under the ROC curve)? If the regression model gives me a subject AA with a predicted probability of 0.6, and this seems to be a high probability compared to other subjects, would it be correct to say that there is an 85% chance that AA has the disease?

The answer is "no". The AUROC does not care about the actual value of your probability predictions, only the order of your predictions. You could divide all your prediction probabilities by 10 and still get the same AUC. In fact, you can come up with some ordering criterion that is entirely independent of what your prediction probabilities are, and still get an AUC score.

To get a good intuitive idea of how to interpret an AUROC, it helps to look at an ROC curve for a small number of samples. Here's one I whipped up:

Note that each step to the right represents a "wrong" guess, and each step upwards represents a "right" guess. (Larger steps mean more guesses.) I filled out a grey area for a single step to the right (i.e. a "wrong" guess). The AUC is simply the sum of all the dark rectangles over all wrong guesses. The height of each rectangle is the proportion of "true" samples that have been listed so far: that is, for the "false" individual who caused the horizontal step of the rectangle, if we stop at that individual then the true positive rate is given by the height of the rectangle. The width of the rectangle is the proportion of "false" guesses that we run through when taking that horizontal step.

The area of the highlighted rectangle can be interpreted as follows. Suppose we choose a "false" individual at random. The probability of choosing that sample, multiplied by the true positive rate of all selections before that sample, is given by the area of a dark rectangle. Thus the sum of the dark rectangles is the expected true positive rate before a false sample, where the expectation is taken over all false samples.

Put another way, if you pick a false individual at random and stop your "chosen" list at that individual, the expected value of the TPR up until that sample is the AUC. The TPR, of course, is the probability that a positive sample, chosen at random, will be in your "chosen" list. So another way to interpret the AUC is this: if you choose a positive sample at random and a negative sample at random, the AUC is the probability that the positive sample will appear on your list before the negative sample.

As to your last question: based on what I said earlier, AUROC is an indicator of how well you ranked your samples, not how good your probability predictions are. Remember, any monotonic function of your probability outcomes (such as dividing them by ten, or taking their sigmoid) will yield the exact same ROC curve. So a good AUROC by itself doesn't tell you how to gauge the probability of an event. (For example, if you have a very high AUROC for disease classification, and you predict that someone has a 99% chance of having the disease, that person shouldn't necessarily act as if they almost definitely have it. It could be that they only have a 5% chance, but your model is still great at determining that their chance is much higher than everyone else's.) Because AUC is a good indicator of ranking, its main utility is in prioritizing candidates. For example, if you have a model with a high AUC for disease classification, then your prediction results should determine whom you choose first for further diagnosis or treatment.
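The rank-based interpretation above is easy to check numerically. Here is a small sketch (with made-up labels and scores) that computes the AUC directly as the probability that a randomly chosen positive is scored above a randomly chosen negative, and verifies that monotone transforms of the scores leave it unchanged:

```python
import numpy as np

# Made-up labels (1 = positive, 0 = negative) and prediction scores.
y = np.array([1, 1, 0, 1, 0, 0, 1, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.7, 0.65, 0.6, 0.5, 0.45, 0.4, 0.3, 0.2])

def auc_pairwise(y, s):
    """AUC as P(randomly chosen positive is scored above a randomly
    chosen negative); ties count as half a win."""
    pos, neg = s[y == 1], s[y == 0]
    wins = (pos[:, None] > neg[None, :]).sum() \
         + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

auc = auc_pairwise(y, scores)
print(auc)                                             # 0.68

# Only the ordering matters: monotone transforms leave the AUC unchanged.
assert np.isclose(auc_pairwise(y, scores / 10), auc)
assert np.isclose(auc_pairwise(y, 1 / (1 + np.exp(-scores))), auc)
```

Dividing the scores by ten, or squashing them through a sigmoid, changes every "probability" but not a single pairwise ordering, which is exactly why the AUC stays at 0.68.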
Given an adjacency matrix, how can we fit a covariance matrix based on that for a graph without running into a NON-positive definite matrix?
Yes, for example if you choose $\rho$ small enough to ensure that your matrix is strictly diagonally dominant, then it is guaranteed to be positive definite. In this case "small enough" means $|\rho|<1/r$, where $r$ is the valency of the regular graph. But possibly you do not want to choose $\rho$ so small. A useful thing to remember here is that a symmetric matrix is positive definite if and only if its eigenvalues are all positive. And since you are constructing your matrix as $$ M = I + \rho A$$ where $A$ is the adjacency matrix of the graph, it follows that the eigenvalues of $M$ are of the form $1+\rho\lambda$ as $\lambda$ ranges over the eigenvalues of $A$. So if you have a particular $\rho>0$ in mind, then the graphs that will work are precisely those for which all eigenvalues (of the adjacency matrix) satisfy the bound $\lambda > -1/\rho$. In other words, you need graphs whose negative eigenvalues aren't too large in magnitude. Note that for a regular graph with valency $r$, all its eigenvalues satisfy $|\lambda| \leq r$, which leads to the same sufficient condition $|\rho| < 1/r$ described above. There is quite a bit of information available about graphs whose most negative eigenvalue isn't too large in magnitude; this falls within the subject of spectral graph theory. In particular, the problem of characterizing graphs whose eigenvalues satisfy $\lambda \geq -2$ is treated in the book Spectral Generalizations of Line Graphs: On Graphs with Least Eigenvalue -2. It contains the following result, showing that with fairly trivial exceptions the bound $\lambda\geq -2$ is the best that we can hope for a regular graph to satisfy: Corollary 2.3.22. If G is a connected regular graph with least eigenvalue greater than −2 then G is either a complete graph or an odd cycle. There are methods of constructing broad families of regular graphs which attain this bound, i.e. whose least eigenvalue is -2. The most basic one is the construction of a line graph. 
If you start with any graph $G$, you can construct a new graph $L(G)$, whose vertices correspond to the edges of $G$ and whose edges correspond to edge-incidences of $G$. This graph $L(G)$ is called the line graph of $G$, and it is guaranteed that its eigenvalues satisfy $\lambda \geq -2$, no matter which graph $G$ you start with. Moreover, if you start with a regular graph $G$ with valency $r$, then $L(G)$ will also be regular, with valency $2(r-1)$. This gives you a way to construct regular graphs for which you can take $\rho$ to be any value satisfying $|\rho|<1/2$ and end up with M being positive definite. In light of the result cited above, this is the best that is possible, unless you want to go with a complete graph (which allows $\rho$ to be arbitrarily close to 1) or an odd cycle (which allows $\rho$ to be a little larger than $1/2$, but by an amount which approaches zero as the size of the cycle increases), or a disjoint union of complete graphs and odd cycles. If it is unsatisfactory to restrict to regular graphs with even valency, it's worth noting that you don't have to start with a regular graph $G$ in order for $L(G)$ to be regular. For instance, you could instead start with $G$ being a semiregular bipartite graph, where one of the two valencies is even and the other is odd, and this would result in $L(G)$ being regular with odd valency.
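The eigenvalue condition above is easy to verify numerically. As a sketch (assuming $M = I + \rho A$ as in the answer), take the 6-cycle: it is 2-regular and, being an even cycle, attains the least eigenvalue $-2$, so $M$ is positive definite exactly when $|\rho| < 1/2$:

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of the n-cycle (a 2-regular graph)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
    return A

def is_pd(M):
    """Symmetric matrix is positive definite iff all eigenvalues > 0."""
    return np.linalg.eigvalsh(M).min() > 0

A = cycle_adjacency(6)                  # even cycle: least eigenvalue is exactly -2
lam_min = np.linalg.eigvalsh(A).min()
print(round(lam_min, 6))                # -2.0

I = np.eye(6)
assert is_pd(I + 0.49 * A)              # 1 + 0.49 * (-2) = 0.02 > 0
assert not is_pd(I + 0.51 * A)          # 1 + 0.51 * (-2) = -0.02 < 0
```

The same check works for any graph: compute the least eigenvalue $\lambda_{\min}$ of $A$ and test $1 + \rho\,\lambda_{\min} > 0$.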
Given an adjacency matrix, how can we fit a covariance matrix based on that for a graph without running into a NON-positive definite matrix?
Here is an explanation which might provide some intuition about what is going on here. Suppose that in your graph you have three vertices where vertex 1 is adjacent to both vertices 2 and 3, but vertices 2 and 3 are not adjacent to each other. Let $X_1$, $X_2$, and $X_3$ be the corresponding random variables being modeled. In this case you are wanting to have \begin{align*} \text{Corr}(X_1,X_2) &= \rho \\ \text{Corr}(X_1,X_3) &= \rho \\ \text{Corr}(X_2,X_3) &= 0 \end{align*} There is some tension here, in the sense that these requirements become incompatible with each other as $|\rho|$ grows too large. Namely, if $X_1$ is strongly correlated with both $X_2$ and $X_3$, then at a certain point it is no longer possible for $X_2$ and $X_3$ to be uncorrelated with each other. In other words, there is some degree of transitivity that holds with correlations. This is quantified in general here, and in particular it can be shown that for $|\rho|>1/\sqrt 2$, the above conditions are incompatible. And as the other answer shows, in practice you will run into trouble even sooner, namely at the point $|\rho| \geq 1/2$, even for carefully chosen families of graphs. An alternative approach would be to relax the constraint that the non-adjacent variables have exactly correlation 0. A natural way to do this is to change our focus away from directly modeling the covariance matrix $\Sigma$, to instead modeling the precision matrix $\Lambda = \Sigma^{-1}$. Namely, we model $\Lambda$ in basically the same way that we were modeling $\Sigma$ before: $$\Lambda = I - \rho A$$ where $A$ is the adjacency matrix of the graph. Here $\rho$ no longer represents the correlation between neighboring variables; instead it represents their partial correlation, after controlling for all the remaining variables. 
To ensure that $\Lambda$ is positive definite (which is equivalent to $\Sigma$ being positive definite), we need to impose the restriction $|\rho| < 1/r$, where $r$ is the valency of the (regular) graph. Although this restriction may appear superficially similar to the crude sufficient condition $|\rho| < 1/r$ that arises when modeling $\Sigma$ directly, the situation here is completely different. This time, as $\rho \to 1/r$ the random variables approach correlation 1 with one another (in each connected component of the graph), and we could not ask for more than that. An example might help illustrate how this works. Consider the cycle graph on 10 vertices. Because of the symmetry of the graph, the value of the $(i,j)$ entry of $\Sigma$ only depends on the distance between vertices $i$ and $j$, so we can concisely summarize the resulting correlations for various choices of $\rho$: \begin{array}{lllllll} \text{Distance}& 0& 1& 2& 3& 4& 5& \\ \rho = 0& \text{1}& \text{0}& \text{0}& \text{0}& \text{0}& \text{0}& \\ \rho = 0.1& \text{1}& \text{0.101}& \text{0.01021}& \text{0.001031}& \text{0.0001052}& \text{2.104e-05}& \\ \rho = 0.4& \text{1}& \text{0.5015}& \text{0.2537}& \text{0.1327}& \text{0.07805}& \text{0.06244}& \\ \rho = 0.49& \text{1}& \text{0.865}& \text{0.7654}& \text{0.697}& \text{0.657}& \text{0.6439}& \\ \rho = 0.499& \text{1}& \text{0.9826}& \text{0.9691}& \text{0.9596}& \text{0.9538}& \text{0.9519}& \\ \rho = 0.4999& \text{1}& \text{0.9982}& \text{0.9968}& \text{0.9958}& \text{0.9952}& \text{0.995}& \\\end{array} Here the correlations shown in the table are defined by $\Sigma_{ij}/\sqrt{\Sigma_{ii}\Sigma_{jj}}$ where $\Sigma$ is given by $$\Sigma = \Lambda^{-1} = (I - \rho A)^{-1}$$ Again, the idea here is that even though non-neighboring variables are now correlated with each other, this correlation is only due to the mutual influence of neighbors connecting them. 
In the case of a multivariate Gaussian distribution this can be made more precise, as then each variable satisfies the Markov property that, given its direct neighbors, it is conditionally independent of all the non-neighboring variables (e.g., see here).
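The $\rho = 0.4$ row of the table can be reproduced with a few lines of linear algebra (a sketch; the cycle size, $\rho$, and rounding match the table above):

```python
import numpy as np

n, rho = 10, 0.4
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1   # adjacency of the 10-cycle

Lam = np.eye(n) - rho * A        # precision matrix I - rho*A
Sigma = np.linalg.inv(Lam)       # implied covariance matrix
d = np.sqrt(np.diag(Sigma))
Corr = Sigma / np.outer(d, d)    # Sigma_ij / sqrt(Sigma_ii * Sigma_jj)

# Correlation depends only on distance along the cycle, so row 0 covers
# distances 0..5; distances 1 and 2 match the rho = 0.4 row of the table.
print(round(Corr[0, 1], 4), round(Corr[0, 2], 4))   # 0.5015 0.2537
```

Note how the partial correlation $\rho = 0.4$ induces a marginal correlation of about 0.5 between neighbours, with the correlation decaying smoothly with distance rather than dropping to zero.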
Given an adjacency matrix, how can we fit a covariance matrix based on that for a graph without running into a NON-positive definite matrix?
For the special case of precision matrices $K = \Sigma^{-1}$, some approaches use condition number theory (see the article Condition number on Wikipedia). It helps to find a constant by which the diagonal elements can be adjusted when the obtained matrix is not positive definite. The graph2prec function in the SpiecEasi R package (Sparse and Compositionally Robust Inference of Microbial Ecological Networks, Zachary Kurtz et al. 2015) implements that. Other approaches are based on the Laplacian matrix; the rNetwork function in the simone R package (SIMoNe: Statistical Inference for MOdular NEtworks, Julien Chiquet et al. 2009) implements that.
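The diagonal-adjustment idea can be sketched as follows. This is an illustration of the general trick, not the actual graph2prec or rNetwork code: shift the diagonal of the adjacency matrix until the eigenvalues hit a target condition number, which automatically forces positive definiteness.

```python
import numpy as np

def adjacency_to_precision(A, target_cond=10.0):
    """Sketch: build a positive-definite precision matrix from an
    adjacency matrix by shifting the diagonal so that the resulting
    condition number equals target_cond (> 1)."""
    K = A.astype(float)
    lam = np.linalg.eigvalsh(K)
    # Choose d so that (lam_max + d) / (lam_min + d) == target_cond;
    # for target_cond > 1 this makes the smallest eigenvalue positive.
    d = (lam.max() - target_cond * lam.min()) / (target_cond - 1.0)
    return K + d * np.eye(len(A))

# Path graph on 4 nodes as a toy example
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])
K = adjacency_to_precision(A)
lam = np.linalg.eigvalsh(K)
assert lam.min() > 0                             # positive definite
assert np.isclose(lam.max() / lam.min(), 10.0)   # target condition number hit
```

The target condition number doubles as a knob for how close to singular (and hence how strongly correlated) the implied covariance $\Sigma = K^{-1}$ is allowed to be.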
Does it make sense to interact 2 dummy variables?
Sure, you can include an interaction between categorical variables in your regression. The interpretation is particularly easy if the categorical variables are binary (i.e. have only two categories). Let's look at your example and how to interpret it. You only told us one of the binary variables, $\mathrm{Education}$ ($0:$ high school, $1:$ more than high school). For the sake of illustration, I'm going to assume another binary variable, $\mathrm{Age}$ ($0:$ <35 years old, $1:$ $\geq$35 years old). The logistic model containing an interaction between $\mathrm{Education}$ and $\mathrm{Age}$ is: $$ \operatorname{logit}(p_{i}) = \beta_{0} + \beta_{1}\mathrm{Education}_{i} + \beta_{2}\mathrm{Age}_{i} + \beta_{3}\underbrace{\mathrm{Education}_{i}\times\mathrm{Age}_{i}}_{\text{= interaction term}} $$ where $p_{i}$ is the probability that the $i$th subject is married.

We have four possibilities to consider. Below is a table of all four possibilities and the corresponding coefficients that remain. Please note that if a binary dummy variable is 0, the corresponding coefficient vanishes. $$ \begin{array}{l|l|l|l} & \text{Education = 0} & \text{Education = 1} & \text{Difference} \\ \hline \text{Age = 0} & \beta_{0} & \beta_{0} + \beta_{1} & (\beta_{0} + \beta_{1}) -\beta_{0} = \beta_{1} \\ \text{Age = 1} & \beta_{0} + \beta_{2} & \beta_{0} + \beta_{1} + \beta_{2} + \beta_{3} & \beta_{1} + \beta_{3} \\ \hline \text{Difference} & \beta_{2} & \beta_{2} + \beta_{3} \end{array} $$

To summarize the interpretation: $\beta_{0}$ is the log-odds for women below 35 with only a high school education. $\beta_{1}$ is the difference in log-odds between women with higher education and women with only a high school education, among women below 35. $\beta_{2}$ is the difference in log-odds between women 35 or older and women below 35, among women with only a high school education.
$\beta_{3}$ is how much that age difference in log-odds changes when education goes from 0 to 1; equivalently, it is the difference between the education effects in the two age groups. This may still be cryptic. Have a look at the graphic below, which illustrates all the coefficients. We can draw an important conclusion from the picture: the interaction tests whether the lines are parallel. If $\beta_{3}$ is $0$ or very small, we can conclude that the lines are more or less parallel, and a corresponding hypothesis test helps to quantify the evidence for parallelism. This analysis extends easily to categorical variables with more than two categories.
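With made-up coefficient values (purely illustrative), the table can be verified in a few lines: the double difference of the four cell log-odds recovers $\beta_{3}$ exactly.

```python
# Hypothetical coefficients, for illustration only.
b0, b1, b2, b3 = -0.5, 0.8, 0.4, -0.6

def log_odds(edu, age):
    """logit(p) = b0 + b1*edu + b2*age + b3*edu*age"""
    return b0 + b1 * edu + b2 * age + b3 * edu * age

# Education effect within each age group, as in the "Difference" column.
diff_age0 = log_odds(1, 0) - log_odds(0, 0)            # equals b1
diff_age1 = log_odds(1, 1) - log_odds(0, 1)            # equals b1 + b3
assert abs(diff_age0 - b1) < 1e-12
assert abs((diff_age1 - diff_age0) - b3) < 1e-12       # "difference in differences"
```

This is the "parallel lines" statement in miniature: if `b3` were zero, the education effect would be identical in both age groups.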
Does it make sense to interact 2 dummy variables?
It makes sense, but only when all possible combinations of those variables occur in the data, that is: when there are cases with 0;0, 0;1, 1;0 and 1;1 in your data (the first number is the value of the first dummy variable, the second number the value of the second). In that situation each of these combinations can have a different effect on the dependent variable. Otherwise (say, in the common situation where no cases have the 1;1 combination) the interaction variable is degenerate: it is identically zero, so its coefficient is not identifiable and a logistic regression without regularization cannot estimate it. Likewise, if there are no 0;0, 0;1 or 1;0 cases it does not make sense, because two coefficients would already be enough to fit all the conditional means of the predicted variable across the observed combinations of these two variables. It would also make no sense logically. If there are, say, employed and unemployed women and employed and unemployed men, it makes sense to build a model that predicts a different value for an employed woman than for an employed man. But if there are no men who have given birth to a child, it does not make sense for the model to predict a different value for a man who has given birth than for a woman who has given birth. In a model with two dummy variables (each coded 0/1), the effect of any combination of them is just the sum of their individual effects: $$ y = \beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2} $$ For a case with both variables equal to one, this model predicts simply the sum of the two effects.
With an interaction term there is also a separate effect of having both of them: $$ y = \beta_{0} + \beta_{1}x_{1} + \beta_{2}x_{2} + \beta_{3}x_{1}x_{2} $$ Here the model assigns an individual, extra adjustment $\beta_{3}$ to a case that has both variables equal to one, so its prediction can differ from the simple sum of the two separate effects.
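The identifiability problem when a combination is missing can be seen directly from the design matrix. In this sketch (toy data), no case has $x_1 = x_2 = 1$, so the interaction column is identically zero and the design matrix is rank-deficient; a single 1;1 case restores full rank:

```python
import numpy as np

# Cases covering only three of the four combinations: (0,0), (0,1), (1,0).
x1 = np.array([0, 0, 1, 0, 1, 0])
x2 = np.array([0, 1, 0, 0, 0, 1])
X = np.column_stack([np.ones(6), x1, x2, x1 * x2])
print(np.linalg.matrix_rank(X))   # 3, not 4: the interaction column is all zeros

# Add a single (1,1) case and the design matrix becomes full rank.
x1b, x2b = np.append(x1, 1), np.append(x2, 1)
Xb = np.column_stack([np.ones(7), x1b, x2b, x1b * x2b])
print(np.linalg.matrix_rank(Xb))  # 4
```

Rank 3 with 4 columns means one coefficient (here $\beta_{3}$) is a free parameter the data cannot pin down, which is exactly why the fit fails without regularization.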
Does it make sense to interact 2 dummy variables?
I don't know if anyone else can chime in here with a better answer, but I have seen this. There's a lot of debate as to whether to include interaction terms at all, but it is possible with 2 binary variables. You didn't tell us what the binary variables were, so it's hard to answer your question. But let's say one variable is gender (1: F, 0: M) and the other is diabetes (1: individual has diabetes, 0: no); if you interact the two, the interaction equals 1 only for females with diabetes. So your coefficient reads as the effect for diabetic females, in other words.
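As a minimal sketch of that coding (hypothetical data), the interaction dummy is just the elementwise product of the two indicators, and it is 1 only where both are 1:

```python
import numpy as np

female   = np.array([1, 1, 0, 0])   # 1: F, 0: M
diabetes = np.array([1, 0, 1, 0])   # 1: has diabetes, 0: no
interaction = female * diabetes
print(interaction)                  # [1 0 0 0]: only the diabetic female gets a 1
```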
Confusion about interpreting log likelihood (and likelihood ratio test) output
We take the log-likelihood because each case in the dataset gets a likelihood, and the likelihood of the whole dataset is the product of these likelihoods. But each of these likelihoods is less than 1, and when you multiply lots of numbers less than 1 together you tend to get really, really small numbers. Nothing wrong with those really small numbers - except that we hit precision limits on the computers that we use. Try exponentiating your log-likelihoods of -3000 and -2000. I suspect that your computer will say that they are both zero - so the likelihood of both models is zero - and they are indistinguishable. So instead of trying to multiply lots of values together, it is easier to take their logs and add them together. So we don't get a likelihood, we get a log-likelihood, and we don't hit precision limits. Then @PlayStarCraftOkLetsGo's answer.
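The precision limit is easy to demonstrate. In this sketch (values chosen purely for illustration), the product of many sub-1 likelihoods underflows to exactly zero in double precision, while the sum of their logs stays perfectly representable:

```python
import math

# 400 cases, each with likelihood 0.1: the product is 1e-400, which is
# below the smallest positive double (~5e-324), so it underflows to 0.0
likelihoods = [0.1] * 400
product = 1.0
for lk in likelihoods:
    product *= lk
print(product)          # 0.0 -- numerically indistinguishable from zero

# The sum of logs is a perfectly ordinary number
log_likelihood = sum(math.log(lk) for lk in likelihoods)
print(log_likelihood)   # about -921

# Exponentiating a log-likelihood like -2000 hits the same wall
print(math.exp(-2000))  # 0.0
```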
44,626
Confusion about interpreting log likelihood (and likelihood ratio test) output
We try to minimise the negative log-likelihood function, which is equivalent to maximising the log-likelihood function. The model with the lower negative log-likelihood value would be a better fit.
44,627
How did Generative Adversarial Networks get their name?
In GANs, there are two networks. The first network generates fake data. The second network is shown examples of both real data and fake data generated by the first network. Its goal is to determine whether its input is real or fake. The second network is trained to better distinguish real from fake data, and the first network is trained to produce fake data that better fools the second network. The overall training procedure amounts to a competition between the two networks, which is why the model is called 'adversarial'. An equilibrium point of this competition occurs if the first network learns to perfectly model the 'true' distribution, at which point the second network can do no better than chance.
44,628
How did Generative Adversarial Networks get their name?
From the paper that introduced GANs {1}: In the proposed adversarial nets framework, the generative model is pitted against an adversary: a discriminative model that learns to determine whether a sample is from the model distribution or the data distribution. The generative model can be thought of as analogous to a team of counterfeiters, trying to produce fake currency and use it without detection, while the discriminative model is analogous to the police, trying to detect the counterfeit currency. Competition in this game drives both teams to improve their methods until the counterfeits are indistinguishable from the genuine articles. Two side notes: the term "network" is misleading as neither the generative model nor the discriminative model has to be a neural network. (same issue with the term "memory networks": Where is the network in memory networks?) Jürgen Schmidhuber claims to have performed similar work earlier in that direction. He called it predictability minimization. (Were generative adversarial networks introduced by Jürgen Schmidhuber?) References: {1} Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. Generative Adversarial Networks. arXiv:1406.2661 https://arxiv.org/abs/1406.2661
44,629
How to prove this decomposition of sum of squares?
I am afraid that the statement you show is wrong. Adding and subtracting $\bar{x}$: $$\sum_{i=1}^{n}(x_i-\mu)^2=\sum_{i=1}^{n}((x_i-\bar{x})-(\mu-\bar{x}))^2$$ Expanding $(a-b)^2=a^2-2ab+b^2$: $$=\sum_{i=1}^{n}\left((x_i-\bar{x})^2-2(x_i-\bar{x})(\mu-\bar{x})+(\mu-\bar{x})^2\right)$$ Rearranging the sums $$=\sum_{i=1}^{n}(x_i-\bar{x})^2-\sum_{i=1}^{n}2(x_i-\bar{x})(\mu-\bar{x})+\sum_{i=1}^{n}(\mu-\bar{x})^2$$ The last sum does not depend on the index $i$ $$=\sum_{i=1}^{n}(x_i-\bar{x})^2-\sum_{i=1}^{n}2(x_i-\bar{x})(\mu-\bar{x})+n(\mu-\bar{x})^2$$ Pulling the factors that do not depend on $i$ out of the second sum $$=\sum_{i=1}^{n}(x_i-\bar{x})^2-2(\mu-\bar{x})\sum_{i=1}^{n}(x_i-\bar{x})+n(\mu-\bar{x})^2$$ Let us focus on $\sum_{i=1}^n(x_i-\bar{x})=\sum_{i=1}^n x_i-n\bar{x}=\frac{n}{n}\sum_{i=1}^n x_i-n\bar{x}=n\bar{x}-n\bar{x}=0$ Thus, $$\sum_{i=1}^{n}(x_i-\mu)^2 = \sum_{i=1}^{n}(x_i-\bar{x})^2+n(\mu-\bar{x})^2$$ Your formula is wrong as it misses the power in the second term.
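The identity is easy to verify numerically. A quick sketch with an arbitrary sample and an arbitrary reference point $\mu$:

```python
import random

random.seed(1)
n = 50
mu = 3.0                                   # arbitrary fixed reference point
x = [random.gauss(0, 1) for _ in range(n)]
xbar = sum(x) / n

# Left side: sum of squared deviations from mu
lhs = sum((xi - mu) ** 2 for xi in x)
# Right side: sum of squared deviations from the sample mean, plus n*(mu - xbar)^2
rhs = sum((xi - xbar) ** 2 for xi in x) + n * (mu - xbar) ** 2

print(abs(lhs - rhs) < 1e-9)   # True: the identity holds for any sample and any mu
```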
44,630
How to prove this decomposition of sum of squares?
I see while I was writing this @KarelMacek (+1) gave an identical proof. \begin{align} &\sum_{i=1}^{n}(x_i-\mu)^2= \\ &\sum_{i=1}^{n}x_i^2-2\sum_{i=1}^{n}x_i\mu+\sum_{i=1}^{n}\mu^2=\\ &\sum_{i=1}^{n}x_i^2-2\mu\left(\sum_{i=1}^{n}x_i\right)+n\mu^2=\\ &\sum_{i=1}^{n}x_i^2-2\mu n\overline x+n\mu^2=\\ &\sum_{i=1}^{n}(x_i-\overline x)^2+2\overline x\sum_{i=1}^{n}x_i+\sum_{i=1}^{n}\overline x^2-2\mu n\overline x+n\mu^2=\\ &\sum_{i=1}^{n}(x_i-\overline x)^2+2 n \overline x ^2-n\overline x^2-2\mu n\overline x+n\mu^2=\\ &\sum_{i=1}^{n}(x_i-\overline x)^2+ n \left(\overline x ^2-2\mu \overline x+\mu^2\right)\\ &\therefore \sum_{i=1}^{n}(x_i-\mu)^2=\sum_{i=1}^{n}(x_i-\overline x)^2+ n \left(\overline x -\mu\right)^2 \end{align} Which is clearly not what Fink stated.
44,631
Jaccard similarity coefficient vs. Point-wise mutual information coefficient
These two are quite different. Still, let us try to "bring them to a common denominator", to see the difference. Both Jaccard and PMI could be extended to a continuous data case, but we'll observe the primeval binary data case. Using the a,b,c,d convention of the 4-fold table, as here,

          Y
        1   0
       ---------
     1 | a | b |
X      ---------
     0 | c | d |
       ---------

a = number of cases on which both X and Y are 1
b = number of cases where X is 1 and Y is 0
c = number of cases where X is 0 and Y is 1
d = number of cases where X and Y are 0
a+b+c+d = n, the number of cases.

we know that $\text{Jaccard}[X,Y]= \frac {a}{a+b+c}$. PMI by the Wikipedia definition is $\text{PMI}[X,Y]= \text{log}\frac {P(X,Y)}{P(X)P(Y)}$. Let us first forget about the "log" - because Jaccard implies no logarithm. Then plug the a,b,c,d notation into the PMI formula to obtain: $$\frac {P(X = 1,Y = 1)}{P(X = 1)P(Y = 1)} = \frac{a/n}{\frac{a+b}{n}\frac{a+c}{n}} = \frac{an}{(a+b)(a+c)} = \frac{\frac{a}{\sqrt{(a+b)(a+c)}}}{\sqrt{\frac{a+b}{n}\frac{a+c}{n}}} = \frac{\text{Ochiai}[X,Y]}{\text{gm}[P(X),P(Y)]}$$ where "gm" is the geometric mean of the two probabilities, and Ochiai similarity between the X and Y vectors is just another name for cosine similarity in the case of binary data: $\sqrt {\frac{a}{a+b} \frac{a}{a+c}}$. So, you can see that PMI (without the logarithm) is the Ochiai coefficient further "normalized" (or I'd say, de-normalized) by the overall probability of the two-way positive (eventful) data. But Jaccard and Ochiai are comparable. Both are association measures ranging from 0 to 1. They differ in the accents they put on the potential discrepancy between frequencies $b$ and $c$. I've described it in the answer "Ochiai" above links to.
To cite: Because product (seen in Ochiai) increases weaker than sum (seen in Jaccard) when only one of the terms grows, Ochiai will be really high only if both of the two proportions (probabilities) are high, which implies that to be considered similar by Ochiai the two vectors must share the great shares of their attributes/elements. In short, Ochiai curbs similarity if b and c are unequal. Jaccard does not.
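A quick numeric check of the identity derived above, with hypothetical counts a, b, c, d:

```python
import math

# Hypothetical 2x2 counts
a, b, c, d = 30, 10, 20, 40
n = a + b + c + d

jaccard = a / (a + b + c)
ochiai = a / math.sqrt((a + b) * (a + c))        # cosine for binary data
gm = math.sqrt(((a + b) / n) * ((a + c) / n))    # geometric mean of P(X=1), P(Y=1)

# PMI before taking the log
pmi_no_log = (a / n) / (((a + b) / n) * ((a + c) / n))

# Unlogged PMI equals Ochiai divided by the geometric mean, as derived
print(abs(pmi_no_log - ochiai / gm) < 1e-9)      # True
print(round(math.log(pmi_no_log), 4))            # the usual (logged) PMI
```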
44,632
Jaccard similarity coefficient vs. Point-wise mutual information coefficient
To supplement the top answer: You want high Jaccard similarity if you care about whether the two items co-occur frequently. You want high PMI if you care about how much more likely than chance it is that the two items co-occur. For two items with low probabilities and moderate co-occurrence, Jaccard will have really low scores, while PMI could give high scores.
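A small numeric illustration of this point (hypothetical counts for two rare items that co-occur half the time they appear at all):

```python
import math

# Two rare items: each occurs in 10 of 1000 cases, co-occurring 5 times
a, b, c, d = 5, 5, 5, 985
n = a + b + c + d

jaccard = a / (a + b + c)
pmi = math.log((a / n) / (((a + b) / n) * ((a + c) / n)))

print(round(jaccard, 3))   # 0.333 -- modest: they don't co-occur often in absolute terms
print(round(pmi, 3))       # 3.912 -- high: 50x more often than chance would predict
```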
44,633
Is the normal distribution a better approximation to the binomial distribution with proportions near or far from 0.5?
NOTE: Following up on @whuber's comment, I realized that I was imposing aesthetic constraints on the plotting of the values in terms of the breaks option in hist(). Running the same simulation with the same seed, a symmetrical illustration is now generated. I believe this addresses the issue. You may want to refer to this post by Glen_b. This would be the shape of the simulation: I ran $100,000$ simulations of random values extracted from a binomial distribution of $10$ trials with a probability of success of the individual Bernoulli experiments of $0.2$, $0.5$ and $0.8$, respectively. Clearly $p=0.5$ approaches a normal distribution much more closely, and the more extreme probability values result in markedly skewed distributions.
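The asymmetry seen in those histograms can also be quantified directly: the skewness of a Binomial$(N, p)$ distribution has the closed form $(1-2p)/\sqrt{Np(1-p)}$, which is zero at $p=0.5$ and grows in magnitude as $p$ moves toward 0 or 1. A quick sketch for the $N=10$ setup above:

```python
import math

def binomial_skewness(N, p):
    # Closed-form skewness of Binomial(N, p)
    return (1 - 2 * p) / math.sqrt(N * p * (1 - p))

for p in (0.2, 0.5, 0.8):
    print(p, round(binomial_skewness(10, p), 3))
# 0.2 -> 0.474 (right-skewed), 0.5 -> 0.0 (symmetric), 0.8 -> -0.474 (left-skewed)
```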
44,634
Is the normal distribution a better approximation to the binomial distribution with proportions near or far from 0.5?
The rule of thumb says that both $N\pi $ and $N(1-\pi)$ should be $>10$. For $\pi=.5$ this demands $N>20$. But for $\pi=0.2$ (as well as for $\pi=0.8$) it demands $N>50$. So we see that the "approximability" kicks in a lot earlier when $\pi=.5$.
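The rule of thumb reduces to a one-liner (using the threshold of 10 stated above): the binding constraint always comes from the smaller of $\pi$ and $1-\pi$.

```python
def required_n(pi, threshold=10):
    # N must exceed both threshold/pi and threshold/(1-pi);
    # the smaller of pi and 1-pi is the binding one
    return threshold / min(pi, 1 - pi)

for p in (0.5, 0.2, 0.8):
    print(p, round(required_n(p)))   # N > 20 for 0.5; N > 50 for 0.2 and 0.8
```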
44,635
Unconditional mean and variance of a stationary VAR(1) model
Taking the variance of both sides of the equation $$ y_t = \nu + A_1 y_{t-1} + u_t $$ leads to $$ \operatorname{Var}y_t = A_1\operatorname{Var}y_{t-1}A_1^T+\Sigma_u. $$ Stationarity implies that $\operatorname{Var}y_t =\operatorname{Var}y_{t-1}=\Gamma_0$, so you need to solve the matrix equation $$ \Gamma_0 = A_1\Gamma_0 A_1^T+\Sigma_u. $$ Applying the vec-function, this can be rewritten (see wikipedia) as $$ \operatorname{vec}\Gamma_0 = (A_1\otimes A_1) \operatorname{vec}\Gamma_0 + \operatorname{vec}\Sigma_u $$ and solved using standard methods for the unknown covariances given by $$ \operatorname{vec}\Gamma_0 = (I-A_1\otimes A_1)^{-1} \operatorname{vec}\Sigma_u. $$ So you don't need to work out the infinite sum from the MA$(\infty)$-representation.
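A numeric sketch of this (hypothetical stable $A_1$ and $\Sigma_u$, chosen for illustration), building the Kronecker-product system and then verifying the original matrix equation:

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])          # hypothetical VAR(1) coefficients (eigenvalues inside unit circle)
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.8]])    # hypothetical innovation covariance
K = A.shape[0]

# vec(Gamma0) = (I - A (x) A)^{-1} vec(Sigma_u), with column-major vec
vec_Sigma = Sigma_u.reshape(-1, order="F")
vec_Gamma0 = np.linalg.solve(np.eye(K * K) - np.kron(A, A), vec_Sigma)
Gamma0 = vec_Gamma0.reshape(K, K, order="F")

# Check the matrix equation Gamma0 = A Gamma0 A' + Sigma_u
print(np.allclose(Gamma0, A @ Gamma0 @ A.T + Sigma_u))   # True
```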
44,636
Unconditional mean and variance of a stationary VAR(1) model
According to Lütkepohl (2005), p. 14-15, if we have a $K$-variate VAR(1) process of the form $$ y_t = \nu + A_1 y_{t-1} + u_t, $$ then the unconditional mean is $$ (I_K-A_1)^{-1}\nu $$ (where $I_K$ is an identity matrix of dimension $K\times K$) and the unconditional covariance for lag $h$ (i.e. $\text{Cov}(y_t,y_{t-h})$) is $$ \sum_{i=0}^\infty A_1^{h+i}\Sigma_u {A_1^i}' $$ where $\Sigma_u$ is the covariance matrix of the error term $u_t$. Then the unconditional variance can be obtained by taking $h=0$ in the above expression. The same applies to VAR($p$) after having expressed the process in its alternative $Kp$-dimensional VAR(1) representation. These results are obtained using the vector moving-average (VMA) representation of the VAR(1) process. References Lütkepohl, Helmut. New Introduction to Multiple Time Series Analysis. Springer Science & Business Media, 2005.
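A sketch of these formulas with hypothetical $\nu$, $A_1$, $\Sigma_u$: compute the closed-form mean $(I_K-A_1)^{-1}\nu$, truncate the infinite sum for $h=0$, and check that the truncated $\Gamma_0$ (approximately) satisfies the stationarity equation $\Gamma_0 = A_1\Gamma_0 A_1' + \Sigma_u$:

```python
import numpy as np

A = np.array([[0.5, 0.1],
              [0.2, 0.3]])          # hypothetical stable VAR(1) coefficient matrix
nu = np.array([1.0, 2.0])           # hypothetical intercept
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.8]])    # hypothetical innovation covariance
K = A.shape[0]

# Unconditional mean: (I_K - A_1)^{-1} nu
mu = np.linalg.solve(np.eye(K) - A, nu)

# Gamma(0): truncate the infinite sum sum_i A^i Sigma_u (A^i)'
Gamma0 = np.zeros((K, K))
Ai = np.eye(K)
for _ in range(200):                # terms decay geometrically; 200 is plenty here
    Gamma0 += Ai @ Sigma_u @ Ai.T
    Ai = Ai @ A

# The truncated sum solves the stationarity equation up to numerical precision
print(np.allclose(Gamma0, A @ Gamma0 @ A.T + Sigma_u))   # True
print(mu)
```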
44,637
Name for 1 minus Bernoulli variable
As @Tim has already shown in his answer, if $X$ is a Bernoulli random variable, then so is $Y = 1-X$. I would call $Y$ the "Complementary Bernoulli random variable" to $X$. I don't know that I've ever heard it called that, or anything else, but if I needed a short and sweet name, that would be it. Edit: I guess it hasn't caught on, at least exactly as in quotes. Now, 17 months after the post, googling "Complementary Bernoulli random variable" only brings up this post. :(
44,638
Name for 1 minus Bernoulli variable
It is still a Bernoulli variable; for example, if $Y = 1-X$ where $X \sim \mathrm{Bern}(p)$, then $$ Y \sim \mathrm{Bern}(1-p) $$ moreover $$ \Bbb{1}_{Y=0} \sim \mathrm{Bern}(p) $$ where $\Bbb{1}$ is an indicator function, so it is just a matter of labeling the categories. Notice that the labeling is arbitrary, since it is always your choice whether you code "heads" as $1$ or as $0$, males or females as $1$, etc.; it doesn't matter. If you want to name the relationship between the two variables, you can say that $Y$ is $X$ with reversed or switched labels, which leads to a Bernoulli variable with probability $1-p$.
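A quick simulation check of the label switch (arbitrary $p$, chosen for illustration):

```python
import random

random.seed(42)
p = 0.3
n = 100_000

x = [1 if random.random() < p else 0 for _ in range(n)]   # X ~ Bern(p)
y = [1 - xi for xi in x]                                  # Y = 1 - X

# Empirical success probabilities: Y behaves as Bern(1 - p)
print(round(sum(x) / n, 2))   # ~0.3
print(round(sum(y) / n, 2))   # ~0.7
```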
44,639
a regression through the origin
Here is an illustration that simulates $y$ and $x$ independently of each other so that the true slope is zero. The mean of $y$ is nonzero, such that the true intercept is also nonzero. The LS line without intercept must start at $(0,0)$ and will try to "catch up" with the data points as quickly as possible if $y$ has nonzero mean, which induces a clear slope (purple line), while the blue line with intercept may start at the right level for $y$ right away, such that it "needs" no slope. Note however that this example will typically exhibit a significant intercept in the model with intercept.

n <- 100
mu <- 10
y <- rnorm(n, mean=mu)
x <- runif(n)
plot(x, y, ylim=c(0, mu+3))
abline(v=0, lty=2)
abline(h=0, lty=2)
abline(lm(y~x), col="lightblue", lwd=2)
abline(lm(y~x-1), col="purple", lwd=2)
abline(h=mu, lwd=2)
legend("bottom", legend=c("with intercept","without intercept","truth"),
       col=c("lightblue","purple","black"), lty=1, lwd=2)

We can also analyze the issue theoretically. Suppose the true model is $$ y_i=\alpha+\epsilon_i, $$ i.e., $$ y_i=\alpha+\beta x_i+\epsilon_i\qquad\text{with}\qquad\beta=0 $$ or $E(y_i|x_i)=E(y_i)=\alpha$. Under this model and assuming $E(x_i\epsilon_i)=0$ for simplicity (i.e. no further misspecification than a missing intercept), the plim for the OLS estimator $\hat\beta=\sum_ix_iy_i/\sum_ix_i^2$ of a regression of $y_i$ on $x_i$ without constant is given by \begin{align*} \text{plim}\frac{\sum_ix_iy_i}{\sum_ix_i^2}&=\text{plim}\frac{\sum_ix_i(\alpha+\epsilon_i)}{\sum_ix_i^2}\\ &=\text{plim}\frac{\frac{1}{n}\sum_ix_i(\alpha+\epsilon_i)}{\frac{1}{n}\sum_ix_i^2}\\ &=\text{plim}\frac{\alpha\frac{1}{n}\sum_ix_i+\frac{1}{n}\sum_ix_i\epsilon_i}{\frac{1}{n}\sum_ix_i^2}\\ &=\frac{\alpha E(x_i)}{E(x_i^2)} \end{align*} For example, in the numerical illustration, we have $\alpha=10$, $E(x_i)=1/2$ and $E(x_i^2)=1/3$.
Hence, unless we are in the special cases that $E(y_i)=0$ or $E(x_i)=0$, OLS is inconsistent for $\beta=0$, $\text{plim}\hat\beta\neq0$. In the first case, we do not need a sloping $\hat\beta$ anyway, in the second, a flat line is "best" for OLS as smaller squared mistakes for positive fitted values for positive $x_i$ (in the case of a positive estimated slope) would be overcompensated by much larger squared mistakes for negative fitted values for negative $x_i$.
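As a cross-check of the limit derived above, here is a small simulation (in Python rather than R, with my own variable names) that estimates the no-intercept slope and compares it to $\alpha E(x_i)/E(x_i^2)=10\cdot(1/2)/(1/3)=15$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
mu = 10.0                      # true intercept alpha; the true slope beta is 0

x = rng.uniform(size=n)        # E[x] = 1/2, E[x^2] = 1/3
y = mu + rng.normal(size=n)    # y is generated independently of x

# OLS slope of a regression through the origin: sum(x*y) / sum(x^2)
beta_no_intercept = np.sum(x * y) / np.sum(x * x)

# Theoretical plim: alpha * E[x] / E[x^2] = 10 * (1/2) / (1/3) = 15
print(beta_no_intercept)       # close to 15, far from the true slope 0
```

With a million observations the estimate sits very close to 15, illustrating the inconsistency of the no-intercept fit when $E(y_i)\neq 0$.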
44,640
a regression through the origin
Basically, when you force a regression through zero, the usual R^2 formula is no longer appropriate: it compares the residual sum of squares against deviations of y from its mean, and without an intercept the residuals need not even sum to zero. So the software switches to a different (uncentered) R^2 formula, which compares the residuals against the raw sum of squares of y instead. The result of this different R^2 formula is almost always very high. You can go to this link to get more specifics- https://www.riinu.me/2014/08/why-does-linear-model-without-an-intercept-forced-through-the-origin-have-a-higher-r-squared-value-calculated-by-r/
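To see what the different formula does in practice, here is a small sketch in Python (my own simulation; the names are illustrative). The no-intercept R^2 is computed against the raw sum of squares of y rather than deviations from its mean, which is why it comes out so high even when x is completely uninformative:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(size=100)
y = 10 + rng.normal(size=100)          # no real relationship with x

# Slope-only fit through the origin
b = np.sum(x * y) / np.sum(x * x)
resid = y - b * x

# Centered R^2 uses sum((y - mean(y))^2) in the denominator; with no
# intercept, software typically reports the uncentered version instead
r2_uncentered = 1 - np.sum(resid**2) / np.sum(y**2)
r2_centered = 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

print(r2_uncentered)   # large, despite x being pure noise
print(r2_centered)     # the centered version can even be negative here
```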
44,641
Correlation of signs of a jointly Gaussian RV
For convenience let's call $\operatorname{sgn}(X_1),\operatorname{sgn}(X_2)$ as $S_1$ and $S_2$, respectively. There are only $9$ possible combinations of $(S_1,S_2)$: $(\pm1,\pm1)$, and at least one of the $S$ being $0$. Now, since we are looking for $E[S_1S_2]$, ignoring the states of $S=0$ will not affect the result. Hence, \begin{align*}E[S_1S_2]&=1\times P(S_1=1,S_2=1)+1\times P(S_1=-1,S_2=-1)\\ &\quad+(-1)\times P(S_1=1,S_2=-1)+(-1)\times P(S_1=-1,S_2=1)\\ &=1\times P(X_1>0,X_2>0)+1\times P(X_1<0,X_2<0)\\ &\quad+(-1)\times P(X_1>0,X_2<0)+(-1)\times P(X_1<0,X_2>0).\end{align*} Further, one can show that $$P(X_1>0,X_2>0)=P(X_1<0,X_2<0)=\frac{1}{4}+\frac{1}{2\pi}\sin^{-1}(\rho),$$ and $$P(X_1>0,X_2<0)=P(X_1<0,X_2>0)=\frac{1}{2\pi}\cos^{-1}(\rho).$$ So \begin{align*}E[S_1S_2]&=\frac{1}{2}+\frac{1}{\pi}\sin^{-1}(\rho)-\frac{1}{\pi}\left(\frac{\pi}{2}-\sin^{-1}(\rho)\right)\\ &=\frac{2}{\pi}\sin^{-1}(\rho).\end{align*}
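The closed form can be verified by simulation; a quick Monte Carlo sketch in Python (my own, not part of the derivation):

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.6
n = 1_000_000

# Correlated standard normals via a Cholesky-style construction
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
x1 = z1
x2 = rho * z1 + np.sqrt(1 - rho**2) * z2

mc = np.mean(np.sign(x1) * np.sign(x2))   # Monte Carlo estimate of E[S1 S2]
theory = (2 / np.pi) * np.arcsin(rho)     # the closed-form result above

print(mc, theory)                          # the two agree to ~3 decimals
```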
44,642
Correlation of signs of a jointly Gaussian RV
$$\mathbb{E}[ \text{sign}(X_1) \text{sign}(X_2)] = 1 \cdot (P(X_1 \ge 0,X_2 \ge 0) + P(X_1 \le 0,X_2 \le 0)) - (P(X_1 \ge 0,X_2 \le 0) + P(X_1 \le 0,X_2 \ge 0))$$ which in turn $$= 2P(X_1 \ge 0,X_2 \ge 0) - 2P(X_1 \ge 0,X_2 \le 0)$$ by symmetry. Plugging in the Bivariate Normal density, this evaluates (integrates) to $\frac{2}{\pi} \sin^{-1}(\rho)$. The details of performing the integration are left to you. Edit: Have changed what was $\sigma^2$ to $\rho$ to match edit of question.
44,643
Standard Error for a Parameter in Ordinary Least Squares [duplicate]
In matrix notation we have data $\left (\mathbf y, \mathbf X\right)$ and we consider the model $$\mathbf y = \mathbf X\beta + \mathbf u$$ where for the moment we only assume that the regressor matrix contains a series of ones, so that we can safely assume that the "error term" $\mathbf u$ has zero mean. We do not as yet make any statistical/probabilistic assumptions. Calculating the unknown betas by Ordinary Least Squares is a mathematical approximation method that needs no statistical assumptions. We obtain $$\hat \beta = \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf y$$ This is the (orthogonal) Linear Projection coefficient vector, and, as a mathematical approximation story, it stops here. Now we want to talk about the "standard error" of the estimates. But that is a statistical concept, and so we must assume something random and probabilistic. Assume that the regressors are all deterministic, but $\mathbf u$ is a random variable. Due to the regressor matrix containing a series of ones, we then have $E(\mathbf u) = \mathbf 0$, where $E$ denotes the expected value. We have $$\hat \beta = \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf y = \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\left(\mathbf X\beta + \mathbf u\right) = \beta +\left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf u$$ $$\implies \hat \beta -\beta = \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf u$$ Since $\beta$ is a constant, we have that $\text{Var}(\hat \beta -\beta) = \text{Var}(\hat \beta)$, where the Variance is the square of the standard deviation. The multivariate version of the variance, is $$\text{Var}(\hat \beta -\beta) = E\Big[(\hat \beta -\beta)(\hat \beta -\beta)'\Big] - E\Big[(\hat \beta -\beta)\Big]E\Big[(\hat \beta -\beta)\Big]'$$ where the prime denotes the transpose. 
Substituting, $$\text{Var}(\hat \beta) = E\Big[\left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf u\mathbf u'\mathbf X\left(\mathbf X' \mathbf X\right) ^{-1} \Big] - E\Big[\left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf u\Big]E\Big[\left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf u\Big]'$$ Since we have assumed that $\mathbf X$ is deterministic, the expected value applies only to $\mathbf u$ so we have $$\text{Var}(\hat \beta) = \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'E(\mathbf u\mathbf u')\mathbf X\left(\mathbf X' \mathbf X\right) ^{-1} - \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'E(\mathbf u)E(\mathbf u)'\mathbf X\left(\mathbf X' \mathbf X\right) ^{-1} $$ Since $E(\mathbf u) =\mathbf 0$ we are left with $$\text{Var}(\hat \beta) = \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'E(\mathbf u\mathbf u')\mathbf X\left(\mathbf X' \mathbf X\right) ^{-1} $$ Now comes another benchmark statistical assumption: the $\mathbf u$ is "homoskedastic" which means $$\text{Var}(\mathbf u) = E(\mathbf u\mathbf u') = \sigma^2I$$ where $\sigma^2 >0$ is the common variance of each element of the error vector, and $I$ is the identity matrix. Substituting we get $$\text{Var}(\hat \beta) = \left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\sigma^2I\mathbf X\left(\mathbf X' \mathbf X\right) ^{-1} =\sigma^2\left(\mathbf X' \mathbf X\right) ^{-1} \mathbf X'\mathbf X\left(\mathbf X' \mathbf X\right) ^{-1}$$ $$\text{Var}(\hat \beta) = \sigma^2\left(\mathbf X' \mathbf X\right) ^{-1}$$ The $\sigma^2$ is estimated by $s^2$ as in the OP question, and the diagonal elements of $\left(\mathbf X' \mathbf X\right) ^{-1}$, each multiplied by $s^2$, is the variance of the corresponding element of the estimated beta vector. Taking the square root leads to the standard error of each element. The off-diagonal elements of the matrix, multiplied by $\sigma^2$, give the estimated covariances between the elements of the beta vector.
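The final formula can be checked by simulation: fix a design matrix, draw many error vectors, and compare the empirical covariance of $\hat \beta$ with $\sigma^2\left(\mathbf X' \mathbf X\right)^{-1}$. A sketch in Python (variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(3)
n, sigma = 50, 2.0

# Fixed (deterministic) design with a column of ones for the intercept
X = np.column_stack([np.ones(n), rng.uniform(size=n)])
beta = np.array([1.0, 3.0])

# Theoretical covariance of the OLS estimator: sigma^2 (X'X)^{-1}
V_theory = sigma**2 * np.linalg.inv(X.T @ X)

# Empirical covariance of beta-hat over many simulated error draws
reps = 20_000
XtX_inv_Xt = np.linalg.inv(X.T @ X) @ X.T
betas = np.empty((reps, 2))
for r in range(reps):
    u = sigma * rng.normal(size=n)          # homoskedastic errors
    betas[r] = XtX_inv_Xt @ (X @ beta + u)  # OLS estimate for this draw

V_emp = np.cov(betas, rowvar=False)
print(V_theory)
print(V_emp)   # should closely match V_theory, entry by entry
```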
44,644
How to choose between logit, probit or linear probability model?
Modeling a dichotomous outcome using linear regression is a big no-no. The error terms will not be normally distributed, there will be heteroskedasticity, and predicted values will fall outside the logical boundaries of 0 and 1. Logit and probit differ in the assumption of the underlying distribution. Logit assumes the distribution is logistic (i.e. the outcome either happens or it doesn't). Probit assumes the underlying distribution is normal which means, essentially, that the observed outcome either happens or doesn't but this reflects a certain threshold being met for the underlying latent variable which is normally distributed. In practice the end result of these different distributional assumptions is that coefficients differ, usually by a factor of about 1.6. However, if you look at marginal effects (meaning the effects on the predicted mean of the outcome holding other covariates at the mean or averaging over observed values) the logit and probit models will make essentially the same predictions. So if you're looking at marginal effects the choice probably doesn't matter. On the other hand, if you're not going to go about calculating the margins then logit has the obvious advantage of generating coefficients that can be transformed into the familiar odds ratio by exponentiating the coefficient. Probit coefficients are essentially uninterpretable - given a probit model I would report average marginal effects for this very reason. Of course most people improperly interpret odds ratios as probabilities which is a big no-no. The odds of an outcome occurring is a ratio of successes to failures (an odds of 1 would correspond to a probability of .5). Odds RATIOS, then, reflect the predicted change in the odds given a 1 unit change in the predictor. Thus, the odds ratio reflects change relative to the base odds of the outcome occurring. 
Given an outcome that either rarely occurs or almost always occurs, a small change in probability can correspond to a large odds ratio. Odds ratios are a ratio of ratios which can be quite confusing and so we arrive at a reason to report marginal effects in the context of a logit model. So, to summarize, don't use a linear probability model. Use logit or probit and report the marginal effects. The choice is, perhaps, of theoretical significance but probably of no practical consequence if reporting marginal effects. If you're not going to report marginal effects then use logit but be sure to properly interpret the odds ratios so you don't look like an uninformed idiot.
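The two claims above (slope coefficients differing by a factor of roughly 1.6, yet near-identical average marginal effects) are easy to check by simulation. A sketch in Python using scipy; the code, data-generating process, and tolerances are my own, not part of the answer:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(4)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))       # logistic truth
y = (rng.uniform(size=n) < p_true).astype(float)

def neg_loglik(b, cdf):
    # Bernoulli log-likelihood with link CDF `cdf` (logistic or normal)
    p = np.clip(cdf(X @ b), 1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

logistic_cdf = lambda t: 1 / (1 + np.exp(-t))
b_logit = minimize(neg_loglik, x0=[0.0, 0.0], args=(logistic_cdf,)).x
b_probit = minimize(neg_loglik, x0=[0.0, 0.0], args=(norm.cdf,)).x

# Slope coefficients differ by a factor of roughly 1.6...
print(b_logit[1] / b_probit[1])

# ...but average marginal effects on P(y=1) nearly coincide
logistic_pdf = lambda t: logistic_cdf(t) * (1 - logistic_cdf(t))
ame_logit = np.mean(logistic_pdf(X @ b_logit)) * b_logit[1]
ame_probit = np.mean(norm.pdf(X @ b_probit)) * b_probit[1]
print(ame_logit, ame_probit)
```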
44,645
How to choose between logit, probit or linear probability model?
Following the response of whauser, I would also add that it depends on your data. I learnt from my professor that if we are dealing with spatial data of high dimensionality in our fixed effects, it would be better to use the LPM to minimize bias (and then use a HAC correction), because logit and probit suffer from the "incidental parameter problem".
44,646
Can p-value be greater than 1?
The $p$-value, as explained very nicely in this post by @fcop, is not the probability of making a type I error, but the probability of getting a value of the test statistic at least as extreme as the one we observed, under the null hypothesis. The type I error rate is fixed in advance: we decide how much risk of rejecting $H_0$ incorrectly we are willing to accept. But say you set a risk $\alpha$ of $0.05$ and the $p$-value obtained ends up being $0.0001$; you will reject $H_0$ because $0.0001 < 0.05$, just as you would if the $p$-value had been $0.04$. In any event, the $p$-value is a probability, and probabilities are bounded between $0$ and $1$.
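As a small illustration that a $p$-value is a tail probability and therefore cannot exceed $1$, here is a sketch in Python (assuming scipy is available; the z statistics are arbitrary):

```python
from scipy.stats import norm

# A two-sided p-value is a tail probability under H0, so by construction
# it lies in [0, 1]; it approaches 1 as the test statistic approaches 0
for z in (0.0, 1.0, 1.96, 5.0):
    p = 2 * norm.sf(abs(z))    # survival function = upper-tail probability
    print(z, p)
```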
44,647
Variable Selection Techniques for Multivariate Multiple Regression
Roman Kh is correct to warn you against ever using stepwise approaches. One of the best discussions of their pitfalls is Peter Flom's paper Stop Using Stepwise http://www.lexjansen.com/pnwsug/2008/DavidCassell-StoppingStepwise.pdf That said, every statistician and their brother has a paper or approach to variable selection – they are legion. To your point, these are all focused on selection with a single response function. I am not aware of anyone who has developed algorithms specifically for use with multiple dependent variables and would be happy to be told that this is incorrect, someone, somewhere has a protocol. Methodological solutions follow demand and, if there is no demand, then no one will bother. To date, there would appear to be, at best, limited demand for selection routines with multiple response functions. I really do not understand why nearly all modeling projects insist on choosing a single response function when multiple functions would give a much better, more informative and insightful answer. There are many possible reasons for this but, in my view, the leading explanations would have to include a deeply engrained bias in favor of “Occam’s Razor-like,” single response models; the paucity of training in the use and interpretation of multiple response models as well as the inevitable consequences of our cognitive limitations in “bounded rationality.” This is true despite the fact that all of the major statistical packages offer MANOVA or canonical correlation routines. What they all lack is a “LASSO-like” algorithm for multiple DVs and large numbers of candidate features. An informative exception to these observations is a paper by Grice and Iwasaki which compares ANOVA with MANOVA, discussing the advantages and pitfalls of each in the context of hypothesis-testing, inference and interpretation. Note that they do not address your specific issue concerned with variable selection. 
http://psychology.okstate.edu/faculty/jgrice/psyc6813/Grice_Iwasaki_AMR.pdf This paper raises a fundamental issue which the OP hasn’t addressed: the objectives of the model. Is it to be used for hypothesis-testing and inference or black box prediction as in a machine learning problem? These really are independent challenges with differing solutions in large part as a function of the amount of information under analysis. For relatively small amounts of data, classic inferential methods are realistic. If one is faced with large amounts of information containing many, even massive quantities of candidate predictors, then the classic approaches break down. Given this, what are the limiting cases for variable selection with multiple dependent variables? Of course, one always has the option of combining the multiple DVs into an a priori composite. In this instance, the variable selection process would be the same as for any single response function. When modeling truly multiple DVs the simplest and most obvious example would be to have such a finite amount of information possessing so few possible features that variable selection becomes moot, permitting the ready fitting of a canonical correlation or MANOVA as in the Grice and Iwasaki paper. This case would be consistent for use with a PhD dissertation or paper employing careful, classic hypothesis-tests. For the more likely case where there are a large number of candidate predictors – making a variable selection step unavoidable -- a brute force solution might be to fit a separate selection process for each dependent variable. This approach should not be recommended and is flawed in that it ignores the linear combinations or composites that are inherent to a truly multivariate approach and begs the question of how a rigorous and final variable selection process would work. 
It would appear to be the case that classic multivariate statistics and analysis does not offer a solution to the problem of variable selection for multiple response functions with large, even massive, numbers of candidate predictors and/or “big” data. In my view, this necessitates employing approximating workarounds that involve extensions of Breiman’s random forests routine. Breiman discussed using RFs as a variable selection method but never said that you could not employ a multivariate tool other than CART as the engine driving the algorithm. Breiman’s classic approach to RFs was limited in that it was developed in the 90s for only a few thousand candidate predictors on a single CPU. In the applied world of today, access to massively parallel platforms (MPP) for crunching massive amounts of data as well as “divide and conquer” routines means that one is no longer limited to his classic solution. For a discussion of “D&C” routines, see this paper by Chen and Minge A Split-and-Conquer Approach for Analysis of Extraordinarily Large Data http://dimacs.rutgers.edu/TechnicalReports/TechReports/2012/2012-01.pdf One example of how a “D&C” approach might work on an MPP for multiple response functions and thousands or hundreds of thousands (or more) candidate predictors – a common challenge with unstructured information -- would be to plug in MANOVA or canonical correlation, run millions of “mini-models” and aggregate the output on the back end to obtain both ensemble predictions as well as a ranking of truly multivariate variable relative importance. This could be done in a few hours on an MPP of reasonable size. Given the approximating nature of this approach, the modeler is forced to give up any notion of finding a final, reduced or fixed set of mathematically unique predictors. Note, however, that this would facilitate the elimination of large numbers of candidate variables. 
At this point, the question becomes one of whether or not this solution is an end in itself -- is the objective prediction or inference? If it is prediction, this could be the end product and retaining the results from the millions of mini-models would enable their later use in scoring new data. If inference is the objective, then it’s not the end of the analysis and further refinement of the variables in additional stages of modeling would further reduce the variables as well as eliminate the inevitable redundancies and pure linear combinations hidden in the rankings. At this stage of the development of D&C routines, there don’t seem to be any good answers as to how best to pursue additional stages of inferential modeling. Anyway, these are just a few thoughts. Hope they’re helpful.
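As a concrete (toy) version of the "separate selection per response, then aggregate" idea discussed above, here is a minimal Python sketch that uses simple correlation screening in place of a full random-forest importance ranking; the data, dimensions, and aggregation rule are entirely illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p, q = 500, 200, 3          # observations, candidate predictors, responses

X = rng.normal(size=(n, p))
# Only predictors 0, 1 and 2 actually drive the (multiple) responses
Y = np.column_stack([
    2.0 * X[:, 0] + rng.normal(size=n),
    1.5 * X[:, 1] - X[:, 2] + rng.normal(size=n),
    X[:, 0] + X[:, 2] + rng.normal(size=n),
])

# Screen each predictor against each response separately, then aggregate
# across responses by taking the maximum absolute correlation
Xs = (X - X.mean(0)) / X.std(0)
Ys = (Y - Y.mean(0)) / Y.std(0)
corr = np.abs(Xs.T @ Ys) / n          # p x q matrix of |correlations|
score = corr.max(axis=1)              # one aggregated score per predictor

top = np.argsort(score)[::-1][:5]
print(top)   # the informative predictors 0, 1, 2 should rank at the top
```

This keeps the multivariate flavor (every response gets a vote) while remaining cheap enough to run inside a divide-and-conquer loop; it does not, of course, resolve the redundancy and inference issues raised above.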
Variable Selection Techniques for Multivariate Multiple Regression
The well-known textbook "Introduction to Statistical Learning" has a nice treatment of this subject. Chapter 6 in the free PDF is easy to read; it covers subset selection (including stepwise methods), shrinkage methods (ridge regression and the lasso), and dimension reduction methods (principal components and partial least squares regression).
Variable Selection Techniques for Multivariate Multiple Regression
Stepwise regression is controversial and can lead to model misspecification. Alternative techniques are the lasso and ridge regression, as well as least angle regression (LARS). (Note that ridge regression shrinks coefficients but does not set them exactly to zero, so on its own it does not perform variable selection.)
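To make the lasso's selection behavior concrete, here is a minimal, self-contained sketch: a plain proximal-gradient (ISTA) lasso in Python on simulated data. The problem sizes and the penalty value are arbitrary choices for illustration, not recommendations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 20
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, -2.0, 1.5]           # only 3 predictors truly matter
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def lasso_ista(X, y, lam, n_iter=5000):
    # Proximal gradient for (1/2n)||y - Xb||^2 + lam * ||b||_1.
    L = np.linalg.norm(X, 2) ** 2 / len(y)  # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        g = X.T @ (X @ b - y) / len(y)
        z = b - g / L
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

b = lasso_ista(X, y, lam=0.2)
selected = np.flatnonzero(np.abs(b) > 1e-8)
print(selected)                             # should recover roughly the first three
```

Unlike ridge, the soft-thresholding step sets small coefficients exactly to zero, which is what turns the fit into a selection procedure.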
Variable Selection Techniques for Multivariate Multiple Regression
Partial Least Squares (PLS) is designed to handle both multivariate and univariate response variables. Check out the "pls" R package for more details.
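The answer points to the R "pls" package; as a language-agnostic illustration of what PLS does with a multivariate response, here is a rough numpy sketch of a PLS2 regression. This is a simplified deflation-based variant written for clarity, not the package's exact algorithm, and the simulated data are made up.

```python
import numpy as np

def pls2(X, Y, n_comp):
    # Simplified PLS2: X-weights from the SVD of the cross-covariance X'Y,
    # score/loading extraction, X-deflation, then regression coefficients.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Xd = Xc.copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        u, _, _ = np.linalg.svd(Xd.T @ Yc, full_matrices=False)
        w = u[:, 0]                        # X-weights
        t = Xd @ w                         # scores
        p = Xd.T @ t / (t @ t)             # X-loadings
        q = Yc.T @ t / (t @ t)             # Y-loadings
        Xd -= np.outer(t, p)               # deflate X
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = (np.column_stack(m) for m in (W, P, Q))
    B = W @ np.linalg.solve(P.T @ W, Q.T)  # coefficients for centered data
    return B, X.mean(0), Y.mean(0)

rng = np.random.default_rng(5)
X = rng.standard_normal((300, 10))
Btrue = rng.standard_normal((10, 2))       # two response columns
Y = X @ Btrue + 0.1 * rng.standard_normal((300, 2))

B, xm, ym = pls2(X, Y, n_comp=5)
Yhat = (X - xm) @ B + ym
r2 = 1 - ((Y - Yhat) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()
print(round(r2, 3))
```

Because the components are extracted from the cross-covariance with the whole response matrix, both response columns drive the same set of latent directions, which is the sense in which PLS is natively multivariate.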
Variable Selection Techniques for Multivariate Multiple Regression
GUESS, or the corresponding R package R2GUESS, is available for variable selection with a multivariate response. The reference is: Liquet, B., Bottolo, L., Campanella, G., Richardson, S., & Chadeau-Hyam, M. (2016). R2GUESS: A Graphics Processing Unit-Based R Package for Bayesian Variable Selection Regression of Multivariate Responses. Journal of Statistical Software, 69(2), 1–32. https://doi.org/10.18637/jss.v069.i02 The R package MBSGS, available on CRAN, can perform variable selection for a multivariate response using a spike-and-slab prior. It is backed by the following paper, which unfortunately is not yet available: B. Liquet, K. Mengersen, A. Pettitt and M. Sutton. (2016). Bayesian Variable Selection Regression of Multivariate Responses for Group Data. Submitted to Bayesian Analysis. Note that the package is intended for cases where the design matrix has a group structure.
Extreme values in the data
A key distinction: mismeasurement or extreme events? Are extreme values due to extreme events or to error? You generally want to include the former but exclude the latter: you don't want your results driven by error. More generally, you don't want results driven by bizarre, weird behavior that's not related to what you're trying to model. Some examples: in finance, excluding extreme events like bankruptcy would be a horrible mistake; it is often the extreme observations (eg. deaths, -100% returns, crashes) that you really care about! On the other hand, financial data isn't perfect. You can find cases where decimal points are in the wrong place, 100.00 is mistakenly recorded as 10000, etc. There's often fuzzy stuff in between.

A key distinction: left hand side or right hand side variables? Dropping observations conditional on the value of a left hand side variable tends to be problematic. It can easily qualify as research misconduct, like trying to estimate the effects of schooling and dropping all the low test scores under some dubious argument that they somehow don't count. Depending on context, transforming right hand side variables can be OK; there's often more flexibility on what you're using to try to predict or explain the data.

Some techniques that can be valid (depending on context): for example, in accounting data you often have a few companies with bizarre, extreme numbers, and you want to give ordinary least squares regression a reasonable shot at fitting something other than the few outliers. To reduce the effect of outliers, you can:

Trim the data (eg. drop the 1 percent most extreme observations). This is most reasonable if the outliers are almost certainly entirely wrong (eg. an entry for a human's height of -2 feet or 135 feet). You can go seriously wrong by trimming the data though. Arguably better is to winsorize the data: eg. replace values above the 99th percentile with the value of the 99th percentile.

More complicated outlier detection systems such as ellipsoidal peeling: find the minimum volume ellipse that encloses the data and then drop points on the boundary.

Robust methods: there are other types of regression that may be more robust to extreme outliers, such as quantile regression (eg. fit the median), or, instead of minimizing the sum of squares, using the Huber loss function or something with less penalty for big outliers.

There are a lot of different approaches people use to deal with outliers, and what's reasonable often depends on context.
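The winsorizing option mentioned above can be sketched in a few lines of numpy (a generic illustration on simulated data, not tied to any particular dataset; the 1st/99th percentile cutoffs follow the text):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1000)
x[:5] = [40.0, -35.0, 60.0, 55.0, -50.0]  # a few gross outliers, e.g. data errors

def winsorize(v, lower=0.01, upper=0.99):
    # Replace values beyond the chosen percentiles by the percentile values.
    lo, hi = np.quantile(v, [lower, upper])
    return np.clip(v, lo, hi)

xw = winsorize(x)
print(x.mean(), xw.mean())                 # the winsorized mean is far less distorted
```

Unlike trimming, winsorizing keeps every observation in the sample; it only caps their influence.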
Extreme values in the data
First off, you should check the nature of your outliers. Are they within the natural range of your variable? E.g. you have measured weight for 100 people. Most would be between 50kg - 120kg. If you then have an outlier at say 200kg, ask yourself, is this possible? Yes, it could be a very heavy person. However, if you have a value that is 1000kg, you would think "This is absolutely impossible". Maybe someone added a "0" too many. This could be an administrative error and should be fixed or removed from the dataset. If you have outliers that are within the possible range of the variable, you can still run a linear regression analysis. However, certain outliers can skew the data and hence skew your analysis. Let's say you're modeling weight vs. blood pressure. You'd expect individuals with low weight to have low blood pressure, and individuals with a heavy weight to have a high blood pressure. If an outlier has a low blood pressure and a heavy weight, it can skew your analysis. To illustrate, here is an example with weight vs. PEF (a value for breathing/lung strength). Compare the regression line fitted with the outlier included against the line fitted without it (plots not shown): in the second case, the regression line appears to show a stronger linear relationship between weight and PEF. The outlier in the first case has a low weight and high PEF, skewing the data. The amount of influence an outlier has on the outcome can be measured. Two methods are Cook's distance and DFBETA. Values that can be problematic have a Cook's distance $> 4/n$ or a DFBETA $> 2/\sqrt{n}$.
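For concreteness, here is a small Python sketch that computes Cook's distance from scratch on simulated data and applies the $4/n$ rule of thumb quoted above (the variable names and numbers are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50
x = rng.uniform(50, 120, n)                # e.g. weight in kg
y = 0.5 * x + 5 * rng.standard_normal(n)   # e.g. a blood-pressure-like response
x[0], y[0] = 45.0, 120.0                   # one influential point: low x, high y

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
H = X @ np.linalg.solve(X.T @ X, X.T)      # hat matrix
h = np.diag(H)                             # leverages
resid = y - X @ beta
p_params = X.shape[1]
s2 = resid @ resid / (n - p_params)
cooks = resid**2 / (p_params * s2) * h / (1 - h) ** 2
flagged = np.flatnonzero(cooks > 4 / n)    # rule-of-thumb cutoff from the text
print(flagged)
```

In practice a regression package will report Cook's distances and DFBETAs directly; the sketch just shows what the numbers mean.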
Extreme values in the data
Typically, it is better to remove these values, called outliers. But I would warn you against using OLS regression to detect such outliers: you will probably construct the wrong model, and the detected outliers will probably be wrong. Instead, use a robust linear regression model and calculate standardized residuals for it (using a robust estimator of the standard deviation), and then remove everything that you could not expect by chance (so tune your threshold according to the sample size). The explanation of outliers is that your data do not follow your theoretical assumptions. There can be several possible reasons: your theoretical assumptions are wrong, or you have data points generated by a random variable with another distribution (so you have a mixture of two or more distributions in some proportion; typically, we call outliers everything that belongs to the smaller proportion, so less than 50% of the data are outliers). It cannot be "diagnosed by photo", without full understanding of what you are trying to do.
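A sketch of this recipe in Python: a hand-rolled Huber-weighted IRLS fit with a MAD-based robust scale, then standardized residuals against that scale. In practice you would use a packaged robust regression (e.g. R's MASS::rlm), and the 3-sigma cutoff here is just an example threshold, not a recommendation.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
x = rng.uniform(0, 10, n)
y = 2 * x + 1 + rng.standard_normal(n)
y[:5] += 30                                # contaminate 5 points

X = np.column_stack([np.ones(n), x])

def huber_irls(X, y, k=1.345, n_iter=50):
    # Iteratively reweighted least squares with Huber weights;
    # scale estimated by the MAD of the residuals at each step.
    b = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        r = y - X @ b
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust scale (MAD)
        u = r / (k * s)
        w = np.minimum(1.0, 1.0 / np.maximum(np.abs(u), 1e-12))  # Huber weights
        Xw = X * w[:, None]
        b = np.linalg.solve(Xw.T @ X, Xw.T @ y)           # weighted normal equations
    return b, s

b, s = huber_irls(X, y)
z = (y - X @ b) / s                        # standardized residuals, robust scale
outliers = np.flatnonzero(np.abs(z) > 3)   # example threshold; tune to sample size
print(b, outliers)
```

Because the fit and the scale are both robust, the contaminated points stand out clearly instead of masking themselves by dragging the OLS line toward them.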
Expected value of sum of cards
By your definition, you have $16$ cards ($10$, $\text{J}$, $\text{Q}$, $\text{K}$) that are worth $10$ points, so with probability $16/52$ you get $10$ points in a single draw. Since $9+\text{anything} \ge 10$, if we take into consideration that there are $4$ nines, then we instantly know that with probability greater than $20/52$ you finish within two draws. However, the result for two draws is simple to obtain by enumerating all $52 \choose 2$ combinations of card pairs and summing their scores.

unique_cards <- c(1:10, 10, 10, 10)  # A, 2, ..., 10, J, Q, K
unique_cards <- rep(unique_cards, 4) # each appears 4 times
comb <- combn(unique_cards, 2)       # take all possible combinations of card pairs

which gives a $79\%$ probability of obtaining a score of at least $10$ within two draws

> sum(colSums(comb) >= 10)/choose(52, 2) # accepted / all combinations
[1] 0.7888386

A lazy solution for more than two draws can be obtained by a simple simulation, where the whole deck is shuffled and then cards are drawn until their total score is at least $10$.

set.seed(123)
sim <- function(target = 10) {
  res <- cumsum(sample(unique_cards)) # shuffle, draw and sum
  n <- which.max(res >= target)       # take first score >= 10
  c(sum = res[n], n = n)
}
R <- 1e4
res <- replicate(R, sim())

and the result is that on average you have to draw about two cards and the average total score is $12.77$

> apply(res, 1, summary)
          sum     n
Min.    10.00 1.000
1st Qu. 10.00 1.000
Median  12.00 2.000
Mean    12.77 1.946
3rd Qu. 15.00 2.000
Max.    19.00 6.000

Moreover, as expected, with approximately $30\%$ probability you finish with one draw, with $79\%$ probability you finish within two draws, and you rarely need more than three draws:

> cumsum(table(res[2,])/R)
     1      2      3      4      5      6 
0.3006 0.7910 0.9653 0.9968 0.9999 1.0000
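The simulation above is in R; for readers who prefer Python, here is a hypothetical port of the same logic (a different RNG, so the figures will match only approximately):

```python
import random

random.seed(123)
# A=1, 2..9 at face value, then 10/J/Q/K all worth 10; four suits.
deck = [min(v, 10) for v in range(1, 14)] * 4

def play(target=10):
    # Shuffle the deck and draw until the running total reaches the target.
    cards = deck[:]
    random.shuffle(cards)
    total = n = 0
    for c in cards:
        total += c
        n += 1
        if total >= target:
            return total, n

R = 100_000
results = [play() for _ in range(R)]
mean_sum = sum(t for t, _ in results) / R
mean_n = sum(n for _, n in results) / R
print(round(mean_sum, 2), round(mean_n, 2))  # roughly 12.75 and 1.94
```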
Expected value of sum of cards
The question asks for the "mean of the sum of points." Because the target is 10 and no value exceeds 10, this is the mean of a distribution defined on the ten integers $10, 11, \ldots, 10+10-1$. It takes about the same amount of computing power to work out this distribution, exactly, as it does to perform a small simulation. Here's how. The deck is determined by the numbers of cards of each point. Let there be $k_1 = 4$ cards worth one point (the aces), $k_2$ worth two points, and so on. Writing $n=10$, the vector $\mathbf{k}=(k_1,k_2, \ldots, k_n)$ for a standard deck is $$\mathbf{k} = (4,4,4,4,4,4,4,4,4,16).$$ When we draw a card from this deck, we remove a card worth $i$ points with probability $$P_{\mathbf{k}}(i) = \frac{k_i} {k_1+k_2+\cdots+k_n}.$$ The deck changes afterwards: $k_i$ is reduced to $k_i-1$. Let's indicate the new vector with the notation $$S_i(\mathbf{k}) = (k_1, k_2, \ldots, k_{i-1}, k_i - 1, k_{i+1}, \ldots, k_n).$$ Let $f_{\mathbf{k}}(t, s)$ be the distribution of the sum of points when the target is $t$ starting with a sum of $s$ points. We wish to find $f_{\mathbf{k}}(10, 0)$. The possible draws are described by $n=10$ distinct, non-overlapping events: event $i$ consists of drawing a card worth $i$ points. When that happens, any starting sum $s$ is increased to $s+i$, the target we need to reach is reduced to $t-i$, and the deck is changed to $S_i(\mathbf{k})$. The Law of Total Probability tells us to sum the chances over all these events. Thus, $$f_{\mathbf{k}}(t, s) = \sum_{i=1}^n P_{\mathbf{k}}(i)f_{S_i(\mathbf{k})}(t-i, s+i).\tag{*}$$ Certainly when $s$ exceeds the original target ($10$) there's nothing left to figure out: the drawing terminates and the distribution will be 100% on the total value $s$. Equivalently, when the target is $0$ or negative then we should put all the probability on whatever value $s$ currently has, because the target obviously has been met. 
These considerations give an effective recursive formula $(*)$ for $f$. Because $t$ decreases by at least $1$ with each iteration, it is guaranteed to terminate within $10$ draws. (It will actually terminate within $7$ draws, because soon all the aces would be used up.) This is short. (Just 4389 calls to $f$ are needed and only 446 of those have to perform the summation in $(*)$.) With double-precision computations the answer takes only a tenth of a second to obtain. It gives these probabilities for the final values $10, 11, \ldots, 19$ (which have been rounded for ease of reading):

0.37906 0.09108 0.08503 0.08203 0.07512 0.07132 0.06356 0.05877 0.04995 0.04408

Their expectation is $$10\times 0.37906 + 11\times 0.09108 + \cdots + 19\times 0.04408 = 12.7534.$$ An R implementation of $f$ is given below. First, though, its results can be checked by simulation. The game is played $10^4$ times, the final sums are tallied, and those tallies are compared to the preceding results with a $\chi^2$ test. (It is applicable and accurate because the smallest expected value of any cell count is a very large $440.8$.)

set.seed(17)
deck <- c(rep(4, 9), 16) # deck[i] counts cards of value `i`.
deck.long <- unlist(sapply(1:length(deck), function(i) rep(i, deck[i])))
sim <- replicate(1e4, {
  x <- cumsum(sample(deck.long, 10))
  x[which(x >= 10)[1]]
})
y <- table(sim)
z <- c(rep(0, 10), y/sum(y))
rbind(x, z)
chisq.test(y, p=x[-(1:10)])

The output is

Chi-squared test for given probabilities
data:  y
X-squared = 3.8856, df = 9, p-value = 0.9188

The large p-value demonstrates consistency between the simulation and the theoretical answers. Here is how $f$ can be computed. 
f <- function(deck, total=0, target=10, maximum=20) {
  x <- rep(0, maximum)
  if (target <= 0) {
    x[total+1] <- 1
    return(x)
  }
  n <- sum(deck)
  x <- sapply(1:length(deck), function(i) {
    k <- deck[i]
    if (k <= 0) return (x)
    d <- deck
    d[i] <- d[i] - 1
    k/n * f(d, total + i, target - i, maximum)
  })
  return(rowSums(x))
}
x <- f(deck)
round(x[-(1:10)], 5)
sum(x * (1:length(x)-1)) # Expected value

Incidentally, the same techniques--with only the tiniest modifications (which I leave to interested readers)--will answer the question "what is the distribution of the number of draws in the game?" The answer (again in double precision) is the vector of probabilities corresponding to 1, 2, ..., 7 draws:

3.076923e-01 4.811463e-01 1.762293e-01 3.176286e-02 3.029212e-03 1.381895e-04 1.866540e-06

A similar simulation--this time of a million iterations, because these probabilities get so small--produces these observed frequencies:

     1      2      3      4      5      6      7 
306897 481652 176242  32052   3019    135      3

These do not differ significantly from the computed values.
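For cross-checking, here is a hypothetical Python port of the recursion $(*)$; it mirrors the R function $f$ line for line and recovers the expectation $12.7534$ reported above:

```python
# Exact distribution of the final sum when drawing cards without replacement
# until the running total reaches 10. START counts cards worth 1..10 points.
START = (4,) * 9 + (16,)

def dist(deck=START, total=0, target=10, maximum=20):
    x = [0.0] * maximum
    if target <= 0:            # target met: all mass on the current total
        x[total] = 1.0
        return x
    n = sum(deck)
    for i, k in enumerate(deck, start=1):   # i = card value, k = cards left
        if k == 0:
            continue
        d = deck[:i - 1] + (k - 1,) + deck[i:]  # remove one card of value i
        sub = dist(d, total + i, target - i, maximum)
        for j in range(maximum):
            x[j] += k / n * sub[j]          # Law of Total Probability
    return x

x = dist()
ev = sum(j * p for j, p in enumerate(x))
print(round(ev, 4))            # approximately 12.7534, matching the text
```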
44,657
Expected value of sum of cards
I ran the simulation in C# .NET and I am getting 12.75x consistently.

private static decimal AvgSumToTen()
{
    Int32 loops = Int32.MaxValue / 10;
    //loops = 100;
    Random rand = new Random();
    int thisSum;
    ulong ttl = 0;
    int b;
    int bTenRaw;
    int bTen;
    HashSet<int> values = new HashSet<int>();
    for (Int32 i = 0; i < loops; i++)
    {
        thisSum = 0;
        values.Clear();
        while (thisSum < 10)
        {
            b = rand.Next(0, 52);
            if (values.Contains(b)) continue;   // card already drawn this game
            values.Add(b);
            bTenRaw = b % 13 + 1;
            bTen = (bTenRaw >= 10) ? 10 : bTenRaw;
            //Debug.WriteLine("bTen " + bTen);
            thisSum += bTen;
        }
        //Debug.WriteLine("thisSum " + thisSum + Environment.NewLine);
        ttl += (ulong)thisSum;
        if (ttl > (ulong.MaxValue - 100))
            Debug.WriteLine("ttl > (ulong.MaxValue - 100)" + thisSum);
    }
    decimal answer = (decimal)ttl / (decimal)loops;
    Debug.WriteLine("answer " + answer.ToString("N4"));
    return answer;
}
44,658
Matrix inverse not able to be calculated while determinant is non-zero
My guess is that the numbers are too big (the determinant is large) and you're running into a computational problem. I was able to replicate your error by running:

> X <- cbind(1, exp(rexp(100, rate=1/50)))
> det(t(X) %*% X)
[1] 5.156683e+126
> solve(t(X) %*% X)
Error in solve.default...

The problem is numerical. You might be able to solve it by making some transformation of your $X$ matrix that makes the numbers smaller but allows you to work out what $\left(X'X\right)^{-1}$ is.
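One concrete version of such a transformation, sketched in Python/NumPy with hypothetical data standing in for $X$: shrink the huge column by a constant $c$, so that $X_s = XD$ with $D = \mathrm{diag}(1, 1/c)$. Then $X_s'X_s = D(X'X)D$ is well scaled, and the original inverse is recovered exactly as $(X'X)^{-1} = D(X_s'X_s)^{-1}D$.

```python
import numpy as np

# Hypothetical stand-in for X <- cbind(1, exp(rexp(100, rate=1/50))):
# an intercept column next to a column of enormous values.
X = np.column_stack([np.ones(100), np.exp(np.linspace(0.0, 150.0, 100))])

A = X.T @ X
print(np.linalg.cond(A))            # astronomically ill-conditioned

# Shrink the offending column by a constant c, i.e. Xs = X D, D = diag(1, 1/c)
c = X[:, 1].max()
D = np.diag([1.0, 1.0 / c])
As = (X @ D).T @ (X @ D)            # equals D A D, with moderate-sized entries
print(np.linalg.cond(As))           # modest: this matrix inverts reliably

# Since As = D A D, the original inverse is recovered as A^{-1} = D As^{-1} D
A_inv = D @ np.linalg.inv(As) @ D
```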
44,659
Matrix inverse not able to be calculated while determinant is non-zero
The method for computing a determinant is different from the method for inverting a matrix. The determinant uses a lower-upper (LU) decomposition, and the determinant of a product is the product of the determinants. Here det(L) is approximately very small and det(U) is approximately very large. At 16 digits of precision the very small number is rounded too high, and the product explodes when it is actually 0. I would trust the solve command: the matrix is singular. The R help says you "shouldn't use det for solving any problems".
44,660
Matrix inverse not able to be calculated while determinant is non-zero
It looks like there's a similar question here, and I'd suggest a similar exploration. What is the condition number of your matrix? Your matrix may be nearly singular, although I suspect that's unlikely. What about the scale of $X$? What are its max values? Your determinant may be overflowing due to scaling issues, in which case you can decrease the values of the matrix by some constant factor. I also agree with the commenters -- there's no need to explicitly invert a matrix to solve linear regression.
44,661
Matrix inverse not able to be calculated while determinant is non-zero
Ok, I think det is the one that's misleading here. The "true" determinant is zero if the product of the eigenvalues of $X^TX$ is zero, which happens iff one of the individual eigenvalues is zero. Given computer arithmetic, the determinant will be computed as zero if one of the individual computed eigenvalues is exactly zero or if enough of them are very small that the computed product underflows. It takes a lot to underflow double precision, so we're talking really, really small: .Machine$double.eps^20 doesn't underflow.

The matrix is truly uninvertible iff one of the individual eigenvalues is zero. Given computer arithmetic, the inverse will be detected as numerically singular if the estimated condition number, the ratio of the largest and smallest eigenvalues, is too large. The default threshold is the reciprocal of the condition number being smaller than machine epsilon, which is only $2^{-52}\approx 2\times 10^{-16}$. So it's a lot easier to get solve to give up on a matrix than to get det to underflow to zero.

@John's answer gives a matrix of rank 2 that has a non-zero computed determinant, because the non-zero eigenvalues are big and presumably the zero ones didn't exactly evaluate to zero. Your example isn't like that because it would have full rank at infinite precision, but it's presumably similar: the smallest eigenvalue is not zero, but it's less than machine epsilon times the largest eigenvalue.

As a final note, while solve and det just use LAPACK, as all sensible people do, functions like lm and glm don't -- and they have a much stricter tolerance for singular matrices, because typically a double-precision design matrix that someone hasn't deliberately set up as a numerical analysis exercise is either actually singular or has a reciprocal condition number much larger than machine epsilon. And if it does fall in the gap, the user probably needs to know. The tolerance (in qr(,LAPACK=FALSE)) is $10^{-7}$.
So, the numerical rank as computed by qr can be zero when solve still works, and that's deliberate and for good reasons. (I mean, on top of the fact that you're probably using qr on $X$ rather than $X^TX$)
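A small numerical illustration of this gap, using a made-up matrix (and singular values in place of the eigenvalues of the symmetric case discussed above): the computed determinant is a perfectly healthy 1, yet the reciprocal condition number falls far below machine epsilon, so an rcond-checking solver such as R's solve() would be expected to declare the matrix computationally singular.

```python
import numpy as np

# Exact upper-bidiagonal entries: det = 1e10 * 1 * 1e-10 = 1, clearly non-zero,
# but the smallest singular value is ~1e-10 while the largest is ~1e10.
A = np.array([[1e10, 1.0, 0.0],
              [0.0,  1.0, 1.0],
              [0.0,  0.0, 1e-10]])

det = np.linalg.det(A)
rcond = 1.0 / np.linalg.cond(A)
print(det)                           # ~1.0: det sees nothing suspicious
print(rcond < np.finfo(float).eps)   # True: numerically singular by the rcond test
```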
44,662
Use of KL Divergence in practice
The Kullback-Leibler divergence is widely used in variational inference, where an optimization problem is constructed that aims at minimizing the KL divergence between the intractable target distribution P and a sought element Q from a class of tractable distributions. The "direction" of the KL divergence then must be chosen such that the expectation is taken with respect to Q, to make the task feasible. Many approximating algorithms (which can also be used to fit probabilistic models to data) can be interpreted in this way. Among those are Mean Field, (Loopy) Belief Propagation (generalizing forward-backward and Viterbi for HMMs), Expectation Propagation, junction graph/tree, tree-reweighted Belief Propagation and many more.

References

Wainwright, M. J. and Jordan, M. I. Graphical models, exponential families, and variational inference. Foundations and Trends® in Machine Learning, Now Publishers Inc., 2008, Vol. 1(1-2), pp. 1-305.

Yedidia, J. S.; Freeman, W. T. and Weiss, Y. Constructing Free-Energy Approximations and Generalized Belief Propagation Algorithms. IEEE Transactions on Information Theory, 2005, 51, 2282-2312.
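The "direction" point is easy to see numerically: KL is asymmetric, so minimizing KL(Q, P) and KL(P, Q) are genuinely different objectives. A minimal sketch with toy discrete distributions (the numbers are made up, and Python is used purely for illustration):

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q) for discrete distributions."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum(p * np.log(p / q)))

p = [0.5, 0.4, 0.1]   # stand-in for an intractable target P
q = [0.6, 0.2, 0.2]   # stand-in for a tractable approximation Q
print(kl(q, p), kl(p, q))   # different numbers: the direction matters
```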
44,663
Use of KL Divergence in practice
KL is widely used in machine learning. The two main ways I know:

Compression: compressing a document is actually all about finding a good generative model for it. Given that the true model has probability distribution $p(x)$ while you use the approximate $q(x)$, you will have to use excess bits to encode a sequence of $X$ values. The extra cost you pay is KL(p,q).

Bayesian approximate inference: Bayesian methods are great for ML, but it is also extremely computationally expensive to obtain the posterior. Two solutions: either you use sampling methods (MCMC, Gibbs, etc.) OR you use approximate inference methods, which aim at finding a simple (for example Gaussian) approximation to the posterior. Most approximate inference methods refer to KL in some way: so-called "variational" (this name sucks) methods minimize KL(q,p), etc.

Approximate inference is present in a lot of machine learning research, so KL is too.
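The compression claim -- that the excess coding cost is exactly KL(p,q) -- is the identity "cross-entropy minus entropy equals KL", checkable in a couple of lines (toy three-symbol source; the numbers are made up):

```python
import numpy as np

p = np.array([0.7, 0.2, 0.1])   # true source distribution
q = np.array([0.5, 0.3, 0.2])   # model actually used to build the code

# Expected bits per symbol with the mismatched code, versus the optimum:
cross_entropy = -np.sum(p * np.log2(q))
entropy = -np.sum(p * np.log2(p))
excess = cross_entropy - entropy
print(excess, np.sum(p * np.log2(p / q)))   # equal: the penalty is KL(p, q)
```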
44,664
In multidimensional scaling, how can one determine dimensionality of a solution given a stress value?
"In multidimensional scaling, how can one determine dimensionality of a solution given a stress value?"

Having a stress value, it is not possible to determine the dimensionality of the dataset. At best you can evaluate whether the value is low or high (even this evaluation is a bit problematic to me).

"From what I understand, stress value is inversely related to the number of dimensions of a MDS solution" -- correct -- "and that higher stress value indicates that there is a lot of error (i.e. badness-of-fit) in the current model" -- correct -- "indicating a solution with more dimensions" -- not a very accurate conclusion. Consider stress as a function; "number of dimensions" is one of the inputs of this function. The others [significant factors] are the model that you are using as your MDS model, the initial configuration of points in the MDS configuration (map), or even the order of rows/columns in the dissimilarity matrix. Therefore you will get different stress values in 2-dimensional space, for instance, just by changing the initial configuration of the points! [Although this change in the stress value is not considerable compared to the one resulting from a change in the number of dimensions.]

Now if you want to figure out the most proper number of dimensions with regard to the stress value, there is a straightforward solution. In multidimensional scaling, the pragmatic way of depicting the inverse relation of number of dimensions and stress is computing the stress for 2, 3, 4, ..., n-1 dimensions, where n is the original number of dimensions of the data. The result of the above computations becomes more lucid and comprehensible through a scree plot of "number of dimensions ~ amount of stress". The example below is from Cox and Cox (2001):

Now we can decide about the number of dimensions based on this relation. It is a trade-off: more dimensions --> lower stress (a more accurate map) but less dimension reduction (more difficult to visualize and interpret). Besides, the proper number of dimensions is not decided solely based on the stress value; your goal also matters. If you want to have a 2D map, then you choose 2 dimensions and then try to minimize the stress as much as possible. Nevertheless, if you are asking "how much stress is too much", then we have another story! One way of evaluating your magnitude of stress is comparing it to the average stress values of different possible configurations of your dataset (have a look at "Multidimensional Scaling in R: SMACOF" by Patrick Mair).

"Are the randomly generated coordinates, number of variables, and number of categories in a variable related?" Sorry, but I don't understand this part of your question.
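The stress-for-each-dimensionality computation behind such a scree plot can be sketched as follows. This is a toy illustration with hypothetical random data, using classical (Torgerson) scaling rather than SMACOF, purely to show the shape of the stress-versus-dimensions curve:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.normal(size=(12, 5))                        # 12 points in 5 dimensions
D = np.linalg.norm(P[:, None] - P[None, :], axis=-1)

n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n                 # centering matrix
B = -0.5 * J @ (D ** 2) @ J                         # double-centered squared distances
w, V = np.linalg.eigh(B)
w, V = w[::-1], V[:, ::-1]                          # eigenvalues in decreasing order

stresses = []
for k in range(1, 6):
    Xk = V[:, :k] * np.sqrt(np.maximum(w[:k], 0))   # k-dimensional configuration
    Dk = np.linalg.norm(Xk[:, None] - Xk[None, :], axis=-1)
    stresses.append(np.sqrt(np.sum((D - Dk) ** 2) / np.sum(D ** 2)))
print([round(s, 4) for s in stresses])              # decreases as k grows
```

Plotting stresses against k gives exactly the scree curve described above: steep drops while dimensions still carry structure, flattening once they don't.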
44,665
In multidimensional scaling, how can one determine dimensionality of a solution given a stress value?
This is old, but one can compute BIC for every dimensionality, and choose the dimensionality with the lowest BIC. BIC is nice in that it accounts for inter-subject variability, model fit (stress), and parametric complexity. See Lee 2001: http://www.socsci.uci.edu/~mdlee/lee_mdsbic.pdf
44,666
The advantages of recurrent neural network(RNN) over feed-forward neural network (MLP)
Theoretically, an MLP can approximate any function to arbitrary precision, so in that sense there is no need for an RNN. However, that doesn't mean it is usable in the wild. Assuming we are talking about time series input, the textbook answer would be that you can feed your time series into a feed-forward network by having an input layer that also contains inputs from previous time points, thereby effectively transforming the time series problem into a feed-forward problem. However, you will have to choose the length of your input beforehand, and you will not be able to learn functions that depend on inputs from a long time ago. You can solve this problem by having an RNN, which can, theoretically, store information from arbitrarily long ago in its context layer. In practice, however, you will run into the exploding/vanishing gradient problem.
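The "input layer containing previous time points" construction is just a sliding window over the series. A minimal sketch (illustrative only; the helper name windowize is made up):

```python
import numpy as np

def windowize(series, L):
    """Turn a series into rows of L lagged inputs plus the next-step target."""
    X = np.array([series[i:i + L] for i in range(len(series) - L)])
    y = np.array(series[L:])
    return X, y

series = np.sin(np.arange(30) * 0.3)
X, y = windowize(series, L=5)
print(X.shape, y.shape)   # (25, 5) (25,) -- ready for any feed-forward net
```

The fixed window length L is exactly the limitation described above: nothing older than L steps can influence the prediction.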
44,667
The advantages of recurrent neural network(RNN) over feed-forward neural network (MLP)
For purposes of discussion I'll assume you are using the RNN for the typical use case of time series analysis, where the recurrence operation allows the response to depend on a time-evolving state; for example, the network can now detect changes over time. This is exactly the added capability you'd want a Recurrent Neural Network for, in this example. That part it sounds like you know. The downside is that an RNN can be much more difficult to train and has multiple issues with convergence. For example, the backpropagation "signal" tends to decay exponentially over "time". The choice of learning algorithm can also be more limited (SGD obviously can't throw out intermediate timesteps without serious modifications). There are other methods that address some of these issues; for example, Long Short-Term Memory, which basically uses a gating approach to build a recurrent circuit that can be "set" or "cleared". Even if you aren't talking about time series (e.g. recurrent neural networks have also been used with convolutional layers to extend the effective pixel neighborhood), you will still have similar convergence and exponential backprop decay issues.
44,668
Simple Log regression model in R
In my opinion, it's a good strategy to transform your data before fitting a linear regression model, as your data show a clear log relation:

> #generating the data
> n=500
> x <- 1:n
> set.seed(10)
> y <- 1*log(x)-6+rnorm(n)
>
> #plot the data
> plot(y~x)
>
> #fit log model
> fit <- lm(y~log(x))
> #Results of the model
> summary(fit)

Call:
lm(formula = y ~ log(x))

Residuals:
     Min       1Q   Median       3Q      Max
-3.06157 -0.69437 -0.00174  0.76330  2.63033

Coefficients:
            Estimate Std. Error t value Pr(>|t|)
(Intercept)  -6.4699     0.2471  -26.19   <2e-16 ***
log(x)        1.0879     0.0465   23.39   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 1.014 on 498 degrees of freedom
Multiple R-squared:  0.5236,    Adjusted R-squared:  0.5226
F-statistic: 547.3 on 1 and 498 DF,  p-value: < 2.2e-16

> coef(fit)
(Intercept)      log(x)
  -6.469869    1.087886

> #plot
> x=seq(from=1,to=n,length.out=1000)
> y=predict(fit,newdata=list(x=seq(from=1,to=n,length.out=1000)),
+           interval="confidence")
> matlines(x,y,lwd=2)

Result of the previous code: a scatterplot of the data with the fitted log curve and its confidence bands overlaid.
44,669
Continuous probability distribution over integers?
By definition your distribution is discrete, because you can obtain all the values by counting. Your confusion may stem from two sources. One is that people often assume that discrete also means finite. This is not true; e.g. the Poisson distribution is defined on the non-negative integers, which form the countably infinite set $\{0, 1, 2, \ldots\}$. Another source could be the use of computer-generated random numbers, i.e. pseudo-random number generators (PRNGs). Since in computers even continuous variables, such as real numbers, are represented by countable sets (e.g. IEEE double-precision floating point), PRNGs generate discrete sequences to approximate continuous variables. So, in some sense, everything we do in computers is discrete. By "computers" I mean digital computers; analog computers are different.
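A quick base-R illustration of the countably infinite but discrete support; the truncation point 200 is just an arbitrary cutoff for the numerical check:

```r
# The Poisson distribution is discrete but has countably infinite support:
# its probability mass sits on {0, 1, 2, ...} and still sums to 1.
lambda <- 3
mass  <- dpois(0:200, lambda)   # truncating at 200 captures essentially all the mass
total <- sum(mass)              # numerically 1

# The mass function at a non-integer point is 0: there is no mass between integers.
between <- suppressWarnings(dpois(2.5, lambda))
```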
44,670
How to plot clusters in more than 3 dimensions?
Calculate distances between data points, as appropriate to your problem. Then plot your data points in two dimensions instead of fifteen, preserving distances as far as possible. This is probably the key aspect of your question. Read up on multidimensional scaling (MDS) for this. Finally, color your points according to cluster membership.
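Here is a minimal base-R sketch of that recipe; the simulated 15-dimensional data and the use of k-means for cluster labels are placeholders for your own data and clustering:

```r
# Sketch of the recipe above: distances -> 2-D MDS -> color by cluster.
set.seed(42)
n <- 60
X <- rbind(matrix(rnorm(n * 15, mean = 0), ncol = 15),
           matrix(rnorm(n * 15, mean = 3), ncol = 15))   # two groups, 15 dimensions

d      <- dist(X)                   # step 1: pairwise distances
coords <- cmdscale(d, k = 2)        # step 2: classical MDS down to 2 dimensions
cl     <- kmeans(X, centers = 2)    # cluster membership (here, from k-means)

plot(coords, col = cl$cluster, pch = 19,   # step 3: color by cluster
     xlab = "MDS dim 1", ylab = "MDS dim 2")
```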
44,671
How to plot clusters in more than 3 dimensions?
I have successfully used a Self-Organizing Map (SOM) in the past for this task. It is a kind of neural network related to clustering, with significant advantages over standard clustering methods for some specific tasks. The main advantage (to me) is that it is an unsupervised method, meaning that you can apply it even with unknown classes in your data. If you know your classes, you can use this information to color the distinctive clusters/regions obtained in the output map. https://en.wikipedia.org/wiki/Self-organizing_map
44,672
Random Forest Overfitting R
The claim that "in random forests, overfitting is generally caused by over-growing the trees", as stated in one of the other answers, is completely WRONG. The RF algorithm, by definition, requires fully grown, unpruned trees. This is the case because RF can only reduce variance, not bias (where $error = bias + variance$). Since the bias of the entire forest is roughly equal to the bias of a single tree, the base model used has to be a very deep tree to guarantee a low bias. Variance is subsequently reduced by growing many deep, uncorrelated trees and averaging their predictions.

I wouldn't necessarily say that a training accuracy of 87% and a test accuracy of 57% indicates severe overfitting. Performance on your training set will always be higher than on your test set. Now, you need to provide more information if you want CV users to be able to diagnose the source of your potential overfitting problem.

- How did you tune the parameters of your random forest model? Did you use cross-validation, or an independent test set?
- What are the sizes of your training/testing sets? Did you properly use randomization to constitute these sets?
- Is your target categorical or continuous? If the former, do you have any kind of class-imbalance issue?
- How did you measure error? If it applies, is your classification problem binary or multiclass?

In practice, Random Forest seldom overfits. But what would tend to favor overfitting would be having too many trees in the forest. At some point it is not necessary to keep adding trees (it does not reduce variance anymore, but can slightly increase it). This is why the optimal number of trees should be optimized like any other hyperparameter (or at least should not be carelessly set too high; it should be the smallest number of trees needed to achieve the lowest error, and you can look for a plateau in the curve of OOB error vs. number of trees).

Other than overfitting, the difference in accuracy between train and test that you observe could be explained by differences between the sets. Are the same concepts present in both sets? If not, even the best classifier won't be able to perform well out of sample. You can't extrapolate for something if you did not even learn about some aspect of it. I would also recommend that you read the section about RF in the excellent Elements of Statistical Learning. Especially, see section 15.3.4 (p. 596) about RF and overfitting.
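To illustrate the variance-reduction argument numerically, here is a toy base-R sketch in which a 1-nearest-neighbour regressor stands in for a fully grown tree. That substitution is my own simplification: this is not the RF algorithm itself, just a low-bias/high-variance base learner plus bootstrap averaging.

```r
# A 1-NN regressor overfits (low bias, high variance), like a fully grown tree.
# Bagging -- averaging many of them fit on bootstrap resamples -- cuts the variance.
set.seed(7)
n  <- 200
x  <- runif(n, 0, 2 * pi);  y  <- sin(x) + rnorm(n, sd = 0.5)   # training data
xt <- runif(n, 0, 2 * pi);  yt <- sin(xt) + rnorm(n, sd = 0.5)  # test data

nn1 <- function(xtr, ytr, xnew)
  ytr[vapply(xnew, function(z) which.min(abs(xtr - z)), integer(1))]

single <- nn1(x, y, xt)                        # one overfit base learner
B <- 50
bagged <- rowMeans(vapply(seq_len(B), function(b) {
  i <- sample(n, replace = TRUE)               # bootstrap resample
  nn1(x[i], y[i], xt)
}, numeric(n)))

mse_single <- mean((yt - single)^2)
mse_bagged <- mean((yt - bagged)^2)            # lower: averaging reduced variance
```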
44,673
Random Forest Overfitting R
One reason your Random Forest may be overfitting is that you have a lot of redundant features, or your features are heavily correlated. If a lot of your features are redundant, then when the algorithm performs the splits in the nodes of the trees, it may often have only poor features to choose from, which makes your model chase noise in your data. It is wise to look at the variable importance of the forest to try to identify features that may not be relevant. You can also try some dimensionality reduction/aggregation on the features.
44,674
Random Forest Overfitting R
In random forests, overfitting is generally caused by over-growing the trees; pruning the trees would also help. Some parameters you can optimize in the cforest call are ntree and mtry: mtry is the number of variables the algorithm draws to build each tree, and ntree is the total number of trees in the forest. Having said that, cross-validation always helps. Consider carrying out k-fold cross-validation. This forum thread on Kaggle will help you understand how to carry out cross-validation with random forests.
44,675
Variance-covariance matrix of logit with matrix computation
@Deep North: You are right, there should not be an 'n'. The covariance matrix of a logistic regression is different from the covariance matrix of a linear regression.

Linear regression: $\widehat{\operatorname{Var}}(\hat\beta) = \hat\sigma^2 (X^T X)^{-1}$

Logistic regression: $\widehat{\operatorname{Var}}(\hat\beta) = (X^T W X)^{-1}$

where $W$ is a diagonal matrix with $w_{ii} = \hat\pi_i(1-\hat\pi_i)$, and $\hat\pi_i$ is the estimated probability that the event = 1 for observation $i$.
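The logistic-regression formula can be checked numerically on simulated data, so the example is self-contained; the data-generating coefficients below are arbitrary:

```r
# (X' W X)^{-1} with w_ii = pihat_i * (1 - pihat_i) should reproduce
# vcov() of the fitted glm, up to numerical precision.
set.seed(123)
n  <- 500
x1 <- rnorm(n); x2 <- rnorm(n)
p  <- plogis(-1 + 0.5 * x1 + 1 * x2)     # true event probabilities
yb <- rbinom(n, 1, p)

fit <- glm(yb ~ x1 + x2, family = binomial)

X <- model.matrix(fit)
w <- fitted(fit) * (1 - fitted(fit))     # w_ii = pihat_i (1 - pihat_i), no n_i
V <- solve(t(X) %*% (w * X))             # (X' W X)^{-1} without forming diag(W)

max(abs(V - vcov(fit)))                  # agrees to numerical precision
```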
44,676
Variance-covariance matrix of logit with matrix computation
The covariance for logistic regression from subra is correct, but $w_{ii}=\hat{\pi_i}(1-\hat{\pi_i})$: there should not be an $n_i$. Ref. David W. Hosmer, Applied Logistic Regression (2nd Edition), p. 35 and p. 41, formula (2.8). I revised your program and compared with the variance estimate from vcov; they are close but not exactly the same.

library(Matrix)
library(sandwich)
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
mylogit <- glm(admit ~ gre + gpa, data = mydata, family = "binomial")
X <- as.matrix(cbind(1, mydata[,c('gre','gpa')]))
n <- nrow(X)
pi <- mylogit$fit
w <- pi*(1-pi)
v <- Diagonal(n, x = w)
var_b <- solve(t(X)%*%v%*%X)

var_b
3 x 3 Matrix of class "dgeMatrix"
              [,1]          [,2]          [,3]
[1,]  1.1558251135 -2.818944e-04 -0.2825632388
[2,] -0.0002818944  1.118288e-06 -0.0001144821
[3,] -0.2825632388 -1.144821e-04  0.1021349767

vcov(mylogit)
              (Intercept)           gre           gpa
(Intercept)  1.1558247051 -2.818942e-04 -0.2825631552
gre         -0.0002818942  1.118287e-06 -0.0001144821
gpa         -0.2825631552 -1.144821e-04  0.1021349526

They are the same to the first five digits.
44,677
Variance-covariance matrix of logit with matrix computation
mydata <- read.csv("http://www.ats.ucla.edu/stat/data/binary.csv")
mylogit <- glm(admit ~ gre + gpa, data = mydata, family = "binomial")
X <- as.matrix(cbind(1, mydata[,c('gre','gpa')]))
beta.hat <- as.matrix(coef(mylogit))

require(slam)
p <- 1/(1+exp(-X %*% beta.hat))
V <- simple_triplet_zero_matrix(dim(X)[1])
diag(V) <- p*(1-p)
IB <- matprod_simple_triplet_matrix(t(X), V) %*% X
varcov_mat <- solve(IB)

round(solve(IB),4) == round(vcov(mylogit),4)
#          1  gre  gpa
# 1     TRUE TRUE TRUE
# gre   TRUE TRUE TRUE
# gpa   TRUE TRUE TRUE
44,678
Alpha parameter in ridge regression is high
The L2-norm penalty term in ridge regression is weighted by the regularization parameter alpha. If the alpha value is 0, the model is just an ordinary least squares regression. The larger the alpha, the stronger the smoothness constraint; conversely, the smaller the alpha, the larger the magnitude of the coefficients can be. Visually, as alpha increases, the fitted curve moves from closely tracking the data (potentially overfitting) to being very smooth and eventually nearly flat. So, the alpha parameter need not be small, but for a larger alpha the fit becomes much less flexible.
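A small numerical sketch using MASS::lm.ridge, where the penalty weight is called lambda rather than alpha; the lambda values below are arbitrary:

```r
# lambda = 0 reproduces OLS, and growing lambda shrinks the
# coefficient magnitudes toward zero.
library(MASS)
set.seed(0)
x <- rnorm(100)
y <- 2 * x + rnorm(100)

fit <- lm.ridge(y ~ x, lambda = c(0, 10, 1000))
cf  <- coef(fit)                     # one row per lambda: (intercept, slope)

cf[, 2]                              # slope shrinks as lambda grows
abs(coef(lm(y ~ x))[2] - cf[1, 2])   # lambda = 0 matches plain OLS
```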
44,679
Correlation coefficient is very small
A large amount of data can only help you to determine the correlation more precisely; it cannot reduce the correlation. The problem with your data seems rather to be that, yes, you have a slight positive relationship between your variables for a large number of useful votes, described by your fitted linear equation, but you also have the bulk of your data close to 0 useful votes, where the effect of this relation is small compared to the large variation in the ratings. I would recommend the following. For visualization, use smaller dots or use a two-dimensional histogram. On the left side of the diagram you have so many superimposed circles that it's hard to see anything. Things should also get clearer if you plot the logarithm of useful votes instead of the useful votes themselves. Of course you cannot compute the log of 0, but it is possible that removing users with 0 useful votes is a good idea anyway. You can then try to make a linear fit to the relation (log useful votes) vs. (rating). Alternatively, you could rank-transform your data (replace each data point with its position in a sorted list of data points) and then attempt a linear fit. The corresponding correlation coefficient is called the Spearman correlation.
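Both suggestions in miniature, in base R; the curve y = x^10 is an arbitrary monotone, non-linear stand-in for your data:

```r
# For a monotone but non-linear relation, the Pearson correlation
# understates the association, while the rank-based Spearman
# correlation recovers it in full.
x <- seq(0.01, 1, by = 0.01)
y <- x^10                                   # monotone, strongly non-linear

pearson  <- cor(x, y)                       # well below 1
spearman <- cor(x, y, method = "spearman")  # essentially 1: the ranks agree perfectly
```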
44,680
Correlation coefficient is very small
related ≠ correlated

The garden-variety Pearson correlation $r$ measures the strength of linear association between two variables $x$ and $y$. The easiest way to think of it (in my opinion) is in terms of fitting a linear model of $y$ against $x$. If the model is a perfect fit (i.e. $y$ plotted against $x$ is a straight line), $r=1$. If the fit can't get any worse (e.g. if the "true" curve is a symmetric parabola), then $r=0$. In fact, for the linear model $\mathrm{E}\left(y\right) = \beta_0 + \beta_1 x$, the familiar goodness-of-fit statistic $r^2$ is literally just the square of the correlation coefficient (hence the notation). So two variables can be (or at least appear) very closely related, as in your case, but have near-zero correlation. For a deeper explanation, and for a very nice graphic showing example scatterplots together with their correlation values, see the Wikipedia article on correlation and dependence.
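The claim that $r^2$ is literally the model's $R^2$ can be checked numerically in a few lines; the simulated data are arbitrary:

```r
# For simple linear regression with an intercept, R^2 from the fit
# equals the squared Pearson correlation between x and y.
set.seed(1)
x <- rnorm(50)
y <- 1 + 2 * x + rnorm(50)

r  <- cor(x, y)
R2 <- summary(lm(y ~ x))$r.squared

c(r^2, R2)   # identical up to floating-point error
```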
44,681
Correlation coefficient is very small
It is probably because the relationship you are seeing is not linear, and the usual correlation coefficient reflects a linear relationship. As @A._Donda said, transform useful votes and you will see a different picture.
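A sketch of what such a transform can do (simulated data, not the asker's; assumes NumPy): on the raw scale Pearson's $r$ understates an exponential relationship, while on the log scale it recovers the near-linear association.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
# y grows exponentially in x with multiplicative noise:
# strongly related to x, but not linearly.
y = np.exp(x) * rng.lognormal(0.0, 0.5, 500)

r_raw = np.corrcoef(x, y)[0, 1]
r_log = np.corrcoef(x, np.log(y))[0, 1]
print(r_raw, r_log)  # r_log is much closer to 1 than r_raw
```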
44,682
How to find conditional distributions from joint
Those distributions you call "marginal" are not marginal. They are conditional distributions because you wrote $x \mid y$. The marginal distribution of $X$, for example, is necessarily independent of the value of $Y$. To see how the conditional distribution is gamma, all you have to do is write $$f_{X \mid Y}(x) = \frac{f_{X,Y}(x,y)}{f_Y(y)} \propto f_{X,Y}(x,y).$$ That is to say, the conditional distribution is proportional to the joint distribution, appropriately normalized. So we have $$f_{X \mid Y}(x) \propto x^2 e^{-x(y^2+4)},$$ completely ignoring any factors that are not functions of $x$. Then we recognize that the gamma distribution has density $$f_S(s) \propto s^{a-1} e^{-bs},$$ so the choice of shape $a = 3$ and rate $b = y^2+4$ demonstrates that the conditional distribution $X \mid Y \sim \operatorname{Gamma}(a = 3, b = y^2+4)$. The conditional distribution of $Y \mid X$ is done in a similar fashion. Just ignore constants of proportionality: $$f_{Y \mid X}(y) \propto e^{-(x+1)y^2+2y},$$ but this one requires us to complete the square to get it to look like a normal density: $$-(x+1)y^2 + 2y = (x+1)\left(-\left(y - \tfrac{1}{x+1}\right)^2 \right) + \tfrac{1}{x+1},$$ and after exponentiating and removing the $e^{1/(x+1)}$ factor, comparing this against $$f_W(w) \propto e^{-(w-\mu)^2/(2\sigma^2)},$$ we see that we have a normal density with mean $\mu = 1/(x+1)$ and variance $\sigma^2 = 1/(2(x+1))$. Now, if you wanted the marginal distributions, you would need to integrate: $$f_X(x) = \int_{y=-\infty}^\infty f_{X,Y}(x,y) \, dy,$$ for example. And as you can see, this expression will not be a function of $Y$. The difference is that if I simulated realizations of ordered pairs $(X_i, Y_i)$ from the joint distribution, the marginal density for $X$ would be what you would see if I only told you the values of $X_i$. 
The conditional distribution of $X$ given $Y = y$ would be what you would see if I only gave you the $X_i$ for which the corresponding $Y_i$ was equal to $y$.
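As a numerical sanity check on the Gamma conditional (a sketch assuming NumPy; the Gamma density is written out directly, using $\Gamma(3)=2$, so no special-functions library is needed): fix $y$, normalize the $x$-slice of the joint density, and compare it to the claimed $\operatorname{Gamma}(3, y^2+4)$ density.

```python
import numpy as np

y = 0.7                      # any fixed value of Y
b = y**2 + 4.0               # rate of the claimed Gamma(3, b) conditional

xs = np.linspace(1e-6, 12.0, 200_001)
dx = xs[1] - xs[0]

# Slice of the joint density at this y; factors free of x drop out
# under normalization, so only x^2 * exp(-x*(y^2+4)) matters.
slice_ = xs**2 * np.exp(-b * xs)
slice_ /= slice_.sum() * dx  # normalize numerically (Riemann sum)

# Gamma(shape=3, rate=b) density, using Gamma(3) = 2! = 2.
gamma_pdf = b**3 * xs**2 * np.exp(-b * xs) / 2.0

err = np.max(np.abs(slice_ - gamma_pdf))
print(err)  # tiny: the normalized slice is the Gamma(3, y^2+4) density
```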
44,683
How to find conditional distributions from joint
The "trick" is to observe that $f(x\mid y)=f(x,y)/f(y)$ is proportional to $f(x,y)$ up to terms that do not involve $x$. Hence, $f(x\mid y)\propto x^2\exp(-(y^2+4)x)$, and this is the "kernel" of a $\mathrm{Gamma}(3,y^2+4)$ density. The other full conditional $f(y\mid x)$ is obtained similarly after completing the square in the exponent.
44,684
How can using Logistic Regression without regularization be better?
As far as I know the idea of regularization is to have the weights as small as possible and so using lambda will penalize large weights. Deep down, regularization is really about preventing your weights from fitting the "noise" in your problem, aka overfitting. If you have more noise (i.e. as measured by the standard deviation of the noise distribution), then you will need more regularization to prevent overfitting. It's not really about keeping weights small. So one should use a large lambda to regularize. With regularization, it's best to avoid such definite statements. Sometimes bigger is better, sometimes not. However, when I used L1 regularization with a lambda=1 the performance was worse than using lambda=0.0001. Actually the best performance I got is when I used lambda=0! By my reasoning above, it is not true that bigger lambda => better performance. It depends on the noise level, among other things. In fact, you can always set lambda = 1000000 and all your weights will be zero. Choosing lambda correctly can be somewhat of a subtle art. To your questions: 1- How can logistic regression without regularization perform better than when using regularization? Isn't the idea of regularization after all is to make the performance better?! More often than not, regularization will improve the performance of your model. It sounds to me like you're considering one specific application and/or dataset, in which case it is very possible that regularization doesn't help for this specific problem. However, without knowing what you mean by "better performance", it's hard to tell. What have you done to test the generalization performance of your model? lambda = 0 is always going to perform better on the training data, but what you should care about is the performance on test data. 2- Should I use large values for the regularization parameter?! See above - this is somewhat of an art and you need to balance it with the noise level in your specific problem. 
Are you familiar with / have you tried techniques such as cross-validation for selecting hyperparameters? 3 - Is using regularization in general always good? See answer to 1).
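A minimal sketch of the "lambda shrinks weights, but bigger is not automatically better" point (synthetic data and a hypothetical hand-rolled fitter, assuming NumPy; not any particular library's API): heavier L2 penalties pull the fitted weight norm toward zero, which may or may not help held-out performance on your problem.

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_logreg(X, z, lam, iters=2000, lr=0.1):
    """Gradient descent on the L2-penalized logistic loss, labels z in {0, 1}.
    A toy fitter, for illustration only."""
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - z) / len(z) + lam * w   # data gradient + L2 term
        w -= lr * grad
    return w

# Small, noisy problem: only the first two features actually matter.
X = rng.normal(size=(60, 10))
z = (X[:, 0] - X[:, 1] + rng.normal(scale=2.0, size=60) > 0).astype(float)

norms = [np.linalg.norm(fit_logreg(X, z, lam)) for lam in (0.0, 0.1, 10.0)]
print(norms)  # larger lambda gives a smaller fitted weight norm
```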
44,685
Can we express logistic loss minimization as a maximum likelihood problem?
It is equivalent to the maximum likelihood approach. The different appearance results from the different coding for $y_i$ (which is arbitrary). Keeping in mind that $y_i \in \{-1,1\}$, and denoting $$\Lambda(z) = [1+\exp(-z)]^{-1}$$ we have that $$\min_w \sum_{i=1}^N \log[1+\exp(-y_iw^Tx_i)] = \max_w \sum_{i=1}^N \log \Lambda(y_iw^Tx_i)$$ $$=\max_w \Big\{\sum_{y_i=1} \log \Lambda(w^Tx_i) + \sum_{y_i=-1} \log [1-\Lambda(w^Tx_i)]\Big\} \tag{1}$$ In the usual maximum likelihood approach we would have used the label $z_i \in \{0,1\}$, ($0$ instead of $-1$), and we would have assumed that $$P(z_i =1 \mid x_i) = \Lambda(w^Tx_i)$$ With an independent sample, we would have arrived at the log-likelihood $$=\max_w \Big\{\sum_{i=1}^N z_i\log \Lambda(w^Tx_i) + \sum_{i=1}^N (1-z_i)\log [1-\Lambda(w^Tx_i)]\Big\} \tag{2}$$ Now note that $$\sum_{y_i=1} \log \Lambda(w^Tx_i) = \sum_{i=1}^N z_i\log \Lambda(w^Tx_i)$$ and $$\sum_{y_i=-1} \log [1-\Lambda(w^Tx_i)] = \sum_{i=1}^N(1-z_i) \log [1-\Lambda(w^Tx_i)]$$ so $(1)$ and $(2)$ are equivalent.
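The algebra can be checked numerically (a sketch assuming NumPy; random data and an arbitrary weight vector): the logistic-loss objective with $\pm 1$ labels and the negative Bernoulli log-likelihood with $\{0,1\}$ labels agree up to floating-point error.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
X = rng.normal(size=(n, d))
y = rng.choice([-1.0, 1.0], size=n)   # +/-1 coding
z = (y + 1) / 2                       # the same labels recoded in {0, 1}
w = rng.normal(size=d)
s = X @ w

# Form (1): the logistic-loss objective with +/-1 labels
loss = np.sum(np.log1p(np.exp(-y * s)))

# Form (2): minus the Bernoulli log-likelihood with {0,1} labels
lam_s = 1.0 / (1.0 + np.exp(-s))      # Lambda(w^T x_i)
nll = -np.sum(z * np.log(lam_s) + (1 - z) * np.log(1 - lam_s))

print(abs(loss - nll))  # agrees up to floating-point error
```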
44,686
Probability of the limit of a sequence of events
A sufficient condition is that the events are nested $A_1 \subset A_2 \subset \ldots$ or $A_1 \supset A_2 \supset \ldots$.
44,687
Probability of the limit of a sequence of events
This is a basic property of probability measures. One item of the definition for a probability measure says that if $B_n$ are disjoint events, then $$ P \left(\bigcup_{n \geq 1} B_n \right) = \sum_{n \geq 1}P(B_n).$$ In the first case, you can define $B_n = A_n-A_{n-1}$, which gives the result immediately. Because $P(\Omega - A) = 1 - P(A)$, the converse is also true, as can be seen by taking the limit of the complement sets. A fairly standard generalization is to say that $\lim A_n$ exists if $\lim \inf A_n = \lim \sup A_n$. In other words, if $$ \bigcup_{n \geq 1} \bigcap_{k \geq n} A_k = \bigcap_{n \geq 1} \bigcup_{k \geq n} A_k = A.$$ Because $\lim \inf A_n$ is a non decreasing union of events, we have in general $P(\lim\inf A_n) = \lim_{n\rightarrow\infty} P(\cap_{k \geq n}A_k) \leq \lim_{n\rightarrow\infty}\inf_{k \geq n} P(A_k)$. Similarly, we also have $P(\lim\sup A_n) = \lim_{n\rightarrow\infty} P(\cup_{k \geq n}A_k) \geq \lim_{n\rightarrow\infty}\sup_{k \geq n} P(A_k)$. If $\lim \inf A_n = \lim \sup A_n = A$ we thus have $$P(A) = P(\lim\inf A_n) \leq \lim\inf P(A_n) \leq \lim\sup P(A_n) \leq P(\lim\sup A_n) = P(A).$$ This shows that $\lim P(A_n)$ exists and that $P(\lim_{n\rightarrow\infty}A_n) = \lim_{n\rightarrow\infty}P(A_n)$.
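A concrete instance of continuity from below (nested increasing intervals under the uniform distribution on $[0,1]$; plain Python, purely illustrative):

```python
# A_n = [0, 1 - 1/n] is a nondecreasing sequence of events under the
# uniform distribution on [0, 1], with union [0, 1).  Continuity of the
# probability measure says P(A_n) = 1 - 1/n must converge to P([0, 1)) = 1.
probs = [1 - 1 / n for n in (1, 10, 100, 10_000)]
print(probs)  # increases toward 1
```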
44,688
Least squares: Calculus to find residual minimizers?
The principle underlying least squares regression is that the sum of the squares of the errors is minimized. We can use calculus to find equations for the parameters $\beta_0$ and $\beta_1$ that minimize the sum of the squared errors, $S$. $$S = \displaystyle\sum\limits_{i=1}^n \left(e_i \right)^2= \sum \left(y_i - \hat{y_i} \right)^2= \sum \left(y_i - \beta_0 - \beta_1x_i\right)^2$$ We want to find $\beta_0$ and $\beta_1$ that minimize the sum, $S$. We start by taking the partial derivative of $S$ with respect to $\beta_0$ and setting it to zero. $$\frac{\partial{S}}{\partial{\beta_0}} = \sum 2\left(y_i - \beta_0 - \beta_1x_i\right)^1 (-1) = 0$$ $$\sum \left(y_i - \beta_0 - \beta_1x_i\right) = 0 $$ $$\sum \beta_0 = \sum y_i -\beta_1 \sum x_i $$ $$n\beta_0 = \sum y_i -\beta_1 \sum x_i $$ $$\beta_0 = \frac{1}{n}\sum y_i -\beta_1 \frac{1}{n}\sum x_i \tag{1}$$ $$\beta_0 = \bar y - \beta_1 \bar x \tag{*} $$ now take the partial of $S$ with respect to $\beta_1$ and set it to zero. $$\frac{\partial{S}}{\partial{\beta_1}} = \sum 2\left(y_i - \beta_0 - \beta_1x_i\right)^1 (-x_i) = 0$$ $$\sum x_i \left(y_i - \beta_0 - \beta_1x_i\right) = 0$$ $$\sum x_iy_i - \beta_0 \sum x_i - \beta_1 \sum x_i^2 = 0 \tag{2}$$ substitute $(1)$ into $(2)$ $$\sum x_iy_i - \left( \frac{1}{n}\sum y_i -\beta_1 \frac{1}{n}\sum x_i\right) \sum x_i - \beta_1 \sum x_i^2 = 0 $$ $$\sum x_iy_i - \frac{1}{n} \sum x_i \sum y_i + \beta_1 \frac{1}{n} \left( \sum x_i \right) ^2 - \beta_1 \sum x_i^2 = 0 $$ $$\sum x_iy_i - \frac{1}{n} \sum x_i \sum y_i = - \beta_1 \frac{1}{n} \left( \sum x_i \right) ^2 + \beta_1 \sum x_i^2 $$ $$\sum x_iy_i - \frac{1}{n} \sum x_i \sum y_i = \beta_1 \left(\sum x_i^2 - \frac{1}{n} \left( \sum x_i \right) ^2 \right) $$ $$\beta_1 = \frac{\sum x_iy_i - \frac{1}{n} \sum x_i \sum y_i}{\sum x_i^2 - \frac{1}{n} \left( \sum x_i \right) ^2 } = \frac{cov(x,y)}{var(x)}\tag{*}$$
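The closed-form estimates marked $(*)$ can be checked against a library fit (a sketch assuming NumPy, with simulated data; note both covariance and variance use the same $n-1$ divisor, so the divisor cancels in the ratio):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + rng.normal(size=100)

# The (*) formulas derived above
b1 = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
b0 = y.mean() - b1 * x.mean()

# NumPy's degree-1 polynomial fit returns [slope, intercept]
b1_np, b0_np = np.polyfit(x, y, 1)
print(b0, b1)  # matches polyfit's intercept and slope
```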
44,689
Least squares: Calculus to find residual minimizers?
A simpler presentation of the calculus can be done in the context of the broader multiple linear regression model, but this requires knowledge of multivariate calculus (i.e., vector calculus). In this broader setting, we have the regression model: $$\boldsymbol{Y} = \boldsymbol{x} \boldsymbol{\beta} + \boldsymbol{\varepsilon} \quad \quad \quad \boldsymbol{\varepsilon} \sim \text{N}(\boldsymbol{0}, \sigma^2\boldsymbol{I}).$$ The corresponding maximum-likelihood estimation (MLE) problem for the regression coefficients is to maximise the conditional log-likelihood $l_\boldsymbol{x,y}(\boldsymbol{\beta}) = - n \ln \sigma - \tfrac{1}{2} || \boldsymbol{y} - \boldsymbol{x} \boldsymbol{\beta} ||^2 / \sigma^2$. Maximisation of the log-likelihood can be written equivalently as minimising the objective function: $$F(\boldsymbol{\beta}) = || \boldsymbol{y} - \boldsymbol{x} \boldsymbol{\beta} ||^2 = (\boldsymbol{y} - \boldsymbol{x} \boldsymbol{\beta} )^\text{T} (\boldsymbol{y} - \boldsymbol{x} \boldsymbol{\beta} ) .$$ This objective function has gradient vector and Hessian matrix given respectively by: $$\begin{equation} \begin{aligned} \nabla F(\boldsymbol{\beta}) &= 2 [(\boldsymbol{x}^\text{T} \boldsymbol{x})\boldsymbol{\beta} - (\boldsymbol{x}^\text{T} \boldsymbol{y}) ], \\[8pt] \nabla^2 F(\boldsymbol{\beta}) &= 2 (\boldsymbol{x}^\text{T} \boldsymbol{x}). \end{aligned} \end{equation}$$ Assuming the design matrix $\boldsymbol{x}$ has full rank (i.e., its columns are linearly independent) then the Hessian matrix is positive definite and the objective is a convex function, with a unique global minimising point at its only critical point. 
Taking $\nabla F(\hat{\boldsymbol{\beta}} ) = \boldsymbol{0}$ to obtain the critical point (which is the global minimising value) yields the well-known OLS solution: $$\hat{\boldsymbol{\beta}} = (\boldsymbol{x}^\text{T} \boldsymbol{x})^{-1} (\boldsymbol{x}^\text{T} \boldsymbol{y}).$$ In the case where the design matrix $\boldsymbol{x}$ is not of full rank, there are an infinite number of minimising coefficient vectors, and the problem can be solved by reducing the design matrix to remove excess variables.
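A quick numerical check of the normal-equations solution (a sketch assuming NumPy; the system is solved directly rather than forming the explicit inverse, which is numerically preferable):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])  # intercept + 4 regressors
beta_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ beta_true + rng.normal(size=n)

# beta_hat = (X'X)^{-1} X'y, computed via a linear solve
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# The same fit from NumPy's least-squares routine
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)  # agrees with beta_lstsq
```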
44,690
Difference between Meta-Analysis, Meta-Regression and Moderator-Analysis
Here are some suggestions for definitions that may help to clarify the terminology: Meta-analysis: A general term to denote the collection of statistical methods and techniques used to aggregate/synthesize and compare the results from several related studies in a systematic manner. Moderator analysis: In the context of a meta-analysis, this refers to using some kind of method in an attempt to find and account for systematic differences in the size of the effect or outcome that is being meta-analyzed. Meta-regression: This is one possible way of conducting a moderator analysis, where we regress the observed effect sizes on one or multiple study characteristics. There are other ways of conducting a moderator analysis. For example, one could simply subgroup studies based on a categorical moderator (using dummy variables in a meta-regression model is quite similar). Not commonly used, but one could also consider mixture models or clustering techniques.
44,691
How to calculate Estimated Arithmetic Mean for a lognormal distribution
For positive data $x_1, x_2, \ldots, x_n$ let $y_i = \log(x_i)$ be their natural logarithms. Set $$\bar{y} = \frac{1}{n}(y_1+y_2+\cdots + y_n)$$ and $$s^2 = \frac{1}{n-1}\left((y_1 - \bar{y})^2 + \cdots + (y_n - \bar{y})^2\right);$$ these are the mean and sample variance of the logs, respectively. The UMVUE for the arithmetic mean when the $x_i$ are assumed to be independent and identically distributed with a common lognormal distribution is given by $$m(x) = \exp(\bar{y}) g_n\left(\frac{s^2}{2}\right)$$ where $g_n$ is Finney's function $$g_n(t) = 1 + \frac{(n-1)t}{n} + \frac{(n-1)^3t^2}{2!n^2(n+1)} + \frac{(n-1)^5t^3}{3!n^3(n+1)(n+3)}+\frac{(n-1)^7t^4}{4!n^4(n+1)(n+3)(n+5)} + \cdots.$$ For the data in the question, $s^2 = 1.23594$, $g_4(s^2/2) = 1.532355$, and the UMVUE is $m(x) = 0.084519.$ Because this might take a while to converge when $s^2/2 \gg 1$, it is best implemented as an Excel macro. Such power series are straightforward to program efficiently: just maintain a version of the current term and at each step update it to the next term and add that to a cumulative sum. The term values will typically rise and then fall again; stop when they have fallen below a small positive threshold. (For less floating point error, first compute all such terms and then sum them from smallest to largest in absolute value.) My version of this macro (in very plain vanilla VBA) follows.

'
' Finney's G (Psi) function as in Millard & Neerchal, formula 5.57
' or equivalently in Gilbert, formula 13.4 (m here = n-1 there).
'
' Typically, m is a positive integer. Z can be positive or negative.
'
' Programmed by WAH @ QD 5 March 2001
'
' This algorithm will be less accurate for large m*z. It could be replaced by
' one that separately computes the descending half of the terms,
' iterating backward over i.
'
' It can be badly inaccurate for very negative m*z.
'
' This function returns 0 (an impossible value) upon encountering
' an input error.
'
Public Function Finney(m As Integer, z As Double) As Double
    Dim i As Integer        ' Index variable
    Dim g As Double         ' Result
    Dim x As Double         ' z * m * m / (m+1)
    Dim a As Double         ' Power series coefficient
    Dim iMax As Integer     ' Maximum iteration count
    Const aTol As Double = 0.0000000001   ' Convergence threshold
    Const iterMax As Integer = 1000       ' Limits execution time

    If (m <= -1) Then ' issue an error
        Finney = 0#
        Exit Function
    End If
    x = z * m * m / (m + 1)
    If (Abs(x) < aTol) Then
        Finney = 1#   ' This is the correct answer.
        Exit Function
    End If
    iMax = Abs(Int(z) + 1) + 20
    If (iMax > iterMax) Then ' issue an error
        Finney = 0#
        Exit Function
    End If
    '
    ' Initialize
    '
    a = 1#
    g = a ' Lead terms
    For i = 1 To iMax
        '
        ' Test for convergence
        '
        If (Abs(a) <= aTol * Abs(g)) Then
            Exit For
        End If
        '
        ' Compute the next term
        '
        a = a * x / (m + 2 * (i - 1)) / i
        '
        ' Accumulate terms
        '
        g = g + a
    Next
    Finney = g
End Function

References

Gilbert, Richard O. Statistical Methods for Environmental Pollution Monitoring. Van Nostrand Reinhold Company, 1987.

Millard, Steven P. and Nagaraj K. Neerchal, Environmental Statistics with S-Plus. CRC Press, 2001.

Appendix

For those using a vectorized implementation it pays to precompute the coefficients of $g_n$ in advance for a given value of $n$. This can also be exploited to determine in advance how many coefficients will be needed, thereby avoiding almost all the comparison operations. Here, as an example, is an R implementation. (It uses the equivalent Gamma-function formula of http://www.unc.edu/~haipeng/publication/lnmean.pdf after correcting a typographical error there: the power series argument should be $(n-1)^2t/(2n)$ rather than $(n-1)t/(2n)$ as written.) 
finney <- function(t, n, eps=1.0e-20) { u <- t * (n-1)^2 / (2*n) tau <- max(u) i.max <- ceiling(max(1, -log(eps), 1 + log(tau)/2)) a=lgamma((n-1)/2) - (lgamma(1:i.max+1) + lgamma((n-1)/2 + 1:i.max)) b <- exp(a[a + log(tau) * 1:i.max > log(eps)]) # Retain only terms larger than eps x <- outer(u, 1:length(b), function(z,i) z^i) # Compute powers of u return(x %*% b + 1) # Sum the power series } For example, finney(1.2359357/2, 4) produces the value $1.532355$. This implementation can compute a million values per second for $n=3$ and about $400,000$ values per second for $n=300$. As another example of its use, here is a plot of $g_4, g_8, g_{16}, g_{32}$. (The higher graphs correspond to larger values of $n$.) par(mfrow=c(1,1)) curve(finney(x/2, 32), 0, 2, lwd=2, main="Finney g(t/2)", xlab="t", ylab="") curve(finney(x/2, 16), add=TRUE, lwd=2, col="#2040c0") curve(finney(x/2, 8), add=TRUE, lwd=2, col="#c02040") curve(finney(x/2, 4), add=TRUE, lwd=2, col="#40c020")
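For anyone working outside Excel and R, the same running-term scheme is easy to port. Below is a plain-Python sketch (an illustration; the function name and structure are mine, not whuber's macro). With the question's data it reproduces the values quoted above:

```python
import math

def finney(n, t, tol=1e-12, max_iter=1000):
    """Sum Finney's series g_n(t) with a running term.

    Each term is the previous one times the ratio implied by the
    series; summation stops once a term is negligible relative to
    the accumulated total."""
    term = 1.0
    total = term
    for i in range(1, max_iter):
        # ratio of the i-th term of g_n(t) to the (i-1)-th term
        term *= t * (n - 1) ** 2 / (n * i * (n + 2 * i - 3))
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total

# The question's data
data = [0.043, 0.236, 0.057, 0.016]
logs = [math.log(v) for v in data]
n = len(logs)
ybar = sum(logs) / n
s2 = sum((y - ybar) ** 2 for y in logs) / (n - 1)
umvue = math.exp(ybar) * finney(n, s2 / 2)
print(round(s2, 5), round(umvue, 6))
```

This prints the same $s^2$ and UMVUE reported in the text above.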
How to calculate Estimated Arithmetic Mean for a lognormal distribution
@whuber already gave a complete answer. For convenience, I want to share an implementation of whuber's algorithm in R along with two other solutions using pre-existing packages.

Using whuber's algorithm

    #-----------------------------------------------------------------------------
    # The data
    #-----------------------------------------------------------------------------

    x <- c(0.043, 0.236, 0.057, 0.016)
    n <- length(x)
    logx <- log(x)
    log.mean <- mean(logx)
    log.sd <- sd(logx)

    #-----------------------------------------------------------------------------
    # R-translation of whuber's algorithm "Finney"
    #-----------------------------------------------------------------------------

    Finney <- function(m, z, maxiter = 1000, aTol = 1e-10){
      iterMax <- maxiter
      if (m <= -1) {
        stop("m must be larger than -1")
      }
      x <- z*m*m/(m + 1)
      if (abs(x) < aTol) {
        return(1)
      }
      iMax <- abs(trunc(z) + 1) + 20
      if (iMax > iterMax) {
        stop("iMax > iterMax")
      }
      a <- 1
      g <- a
      for (i in seq(iMax)) {
        if (abs(a) <= aTol*abs(g)) {
          break
        }
        a <- a*x/(m + 2*(i - 1))/i
        g <- g + a
      }
      return(g)
    }

    # Sanity check
    Finney(n-1, log.sd^2/2)
    [1] 1.532355
    exp(log.mean)*Finney(n-1, log.sd^2/2)
    [1] 0.08451876

Seems correct.

Using the hypergeo package

Now the solution using the R package hypergeo.
The UMVUE for the arithmetic mean can also be calculated using the $_0F_{1}$ Hypergeometric function in the following way: $$ m(x) = \exp{(\bar{y})}\,_0F_{1}\left(;\frac{(n-1)}{2};\frac{(n-1)^{2}s_{y}^{2}}{4n}\right) $$

    #-----------------------------------------------------------------------------
    # Using the package "hypergeo"
    #-----------------------------------------------------------------------------

    require(hypergeo)
    genhypergeo(NULL, (n-1)/2, ((n - 1)^2*log.sd^2)/(4*n))
    [1] 1.532355
    exp(log.mean)*genhypergeo(NULL, (n-1)/2, ((n - 1)^2*log.sd^2)/(4*n))
    [1] 0.08451876

Using the EnvStats package

The package EnvStats has a function elnormAlt that estimates the mean (optionally with a confidence interval) and the coefficient of variation of a lognormal distribution using several methods. Choose the option method = "mvue" to reproduce the results shown above:

    #-----------------------------------------------------------------------------
    # Using the package "EnvStats"
    #-----------------------------------------------------------------------------

    require(EnvStats)
    elnormAlt(x, method = "mvue", ci = FALSE)

    Results of Distribution Parameter Estimation
    --------------------------------------------
    Assumed Distribution:         Lognormal
    Estimated Parameter(s):       mean = 0.08451876
                                  cv   = 1.02389278
    Estimation Method:            mvue
    Data:                         x
    Sample Size:                  4

Timing the three implementations

Finally, here is a comparison of how long it takes to apply the three methods to 1,000 samples of size $n=5,10,15,...,1000$, using @whuber's method as the baseline. The functions from the EnvStats and hypergeo packages presumably have more error handling and more options, which at least partially can explain why they take so much longer.
The R code used for the comparison follows below:

    nvec <- seq(10,1000,10)
    B <- 1000
    reftime <- time1 <- time2 <- time3 <- rep(NA,length(nvec))

    # Compile the COOLSerdash-Whuber function:
    require(compiler)
    Finney <- cmpfun(Finney)

    for(i in 1:length(nvec)) {
      n <- nvec[i]
      cat(n,"\n")

      ## Just generate some LNorm data:
      start.time <- Sys.time()
      for(j in 1:B) {x<-rlnorm(n)}
      end.time <- Sys.time()
      reftime[i] <- end.time - start.time

      ## Whuber's method:
      start.time <- Sys.time()
      for(j in 1:B) {x<-rlnorm(n); exp(log.mean)*Finney(n-1, log.sd^2/2) }
      end.time <- Sys.time()
      time1[i] <- end.time - start.time

      ## Hypergeo:
      start.time <- Sys.time()
      for(j in 1:B) {x<-rlnorm(n); exp(log.mean)*genhypergeo(NULL, (n-1)/2, ((n - 1)^2*log.sd^2)/(4*n)) }
      end.time <- Sys.time()
      time2[i] <- end.time - start.time

      ## EnvStats:
      start.time <- Sys.time()
      for(j in 1:B) {x<-rlnorm(n); elnormAlt(x, method = "mvue", ci = FALSE) }
      end.time <- Sys.time()
      time3[i] <- end.time - start.time
    }

    ## Subtract the data-generation overhead:
    time1 <- time1-reftime
    time2 <- time2-reftime
    time3 <- time3-reftime

    ## Plot the results:
    plot(nvec,time1,type="l",lwd=3,ylim=c(0,max(time3/time1)),ylab="Relative execution time",
         xlab="Sample size n",cex.lab=1.5,cex.axis=1.5,cex.main=1.5,main="Relative execution time")
    lines(nvec,time2/time1,type="l",lwd=3,col=2)
    lines(nvec,time3/time1,type="l",lwd=3,col=4)
    legend(600,19,c("@whuber","hypergeo","EnvStats"),col=c(1,2,4),lwd=2,cex=1.5)
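As an independent check of the $_0F_1$ identity that does not rely on the hypergeo package, the confluent limit function $_0F_1(;b;z)=\sum_k z^k/((b)_k\,k!)$ can be summed directly. A plain-Python sketch (illustrative only; the function name is made up here):

```python
import math

def hyp0f1(b, z, tol=1e-14, max_iter=500):
    """Sum 0F1(; b; z) = sum_k z^k / ((b)_k k!) term by term."""
    term = 1.0
    total = term
    for k in range(1, max_iter):
        # (b)_k = (b)_{k-1} * (b + k - 1), and k! gains a factor k
        term *= z / ((b + k - 1) * k)
        total += term
        if abs(term) <= tol * abs(total):
            break
    return total

n, s2 = 4, 1.2359357   # values from the example above
factor = hyp0f1((n - 1) / 2, (n - 1) ** 2 * s2 / (4 * n))
print(round(factor, 6))
```

The printed factor agrees with the sanity-check value 1.532355 obtained from Finney's series.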
Interpretation of Little's MCAR test
A large p-value (> 0.05) indicates weak evidence against the null hypothesis, so you fail to reject it. In this case the null hypothesis is that the data are MCAR, i.e., that no patterns exist in the missing data. Proving that data are MAR is difficult, but you can check whether the missingness is related to the observed variables. The package Hmisc in R has some graphical tools to see the relationship between each variable. Another idea is to fit a logistic regression with the outcome being missing vs. not missing for each variable and see whether any other predictor is associated with the missingness of that variable. As a final note, think about your data and the definition of MCAR: do you think it is plausible for the data to be MCAR? If so, then I would say there is evidence that the data are MCAR. Hope this helps.
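The logistic-regression idea can be roughed out even without a modelling package: compare each predictor between rows where the variable is missing and rows where it is observed. A plain-Python sketch on simulated data (all variable names and numbers here are made up for illustration):

```python
import random
import statistics

random.seed(1)

# Simulated data: age predicts whether income is missing (so NOT MCAR).
rows = []
for _ in range(2000):
    age = random.gauss(40, 10)
    income = random.gauss(50, 15)
    missing = random.random() < (0.05 if age < 45 else 0.40)
    rows.append((age, None if missing else income))

# Compare the predictor between the missing and the observed group.
age_missing = [a for a, inc in rows if inc is None]
age_observed = [a for a, inc in rows if inc is not None]
gap = statistics.mean(age_missing) - statistics.mean(age_observed)
print(f"mean age, missing minus observed: {gap:.1f}")
```

A clearly non-zero gap like this one is evidence that missingness depends on the predictor, so the MCAR assumption would be suspect.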
Interpretation of Little's MCAR test
tl;dr: Little's test is probably not well-powered enough to detect missingness. You're probably testing for the wrong kind of missingness and won't be able to learn about the kind of missingness you really care about. The things you would do to handle data that are MAR or covariate-dependent-MCAR are things you should just do anyway. First, it's important to understand what MCAR, MAR and MNAR mean, technically. If your data are MCAR, this means that whether an observation of your outcome variable (y) is missing does not depend on either the observed or unobserved values of y, nor on any covariates upon which y depends. As an equation, this gives you: $$ \mathbb{P}(\mathbf{\text{R}}_{i} = {\widetilde{\mathbf{\text{r}}}}_{i}\,|\,\mathbf{\text{Y}}_{i,(1)} = {\widetilde{\mathbf{\text{y}}}}_{i,(1)},\,\mathbf{\text{Y}}_{i,(0)} = \mathbf{\text{y}}_{i,(0)},\,\mathbf{\text{X}}_{i},\,\mathbf{\text{Z}}_{i},\,\mathbf{\alpha}) = \mathbb{P}(\mathbf{\text{R}}_{i} = {\widetilde{\mathbf{\text{r}}}}_{i}\,|\,\mathbf{\text{Z}}_{i},\,\mathbf{\alpha}). $$ In other words, the probability of an observation of y being missing ($\mathbf{\text{R}}_{i} = {\widetilde{\mathbf{\text{r}}}}_{i}$) depends only on some covariates that do not predict y (called $\mathbf{\text{Z}}$) and the coefficients that relate $\mathbf{\text{Z}}$ to y, called $\alpha$. The case in which missingness is dependent on covariates $\mathbf{\text{X}}$ is termed "covariate dependent missingness" and is just a special case of MCAR. Importantly, in this case, $\mathbf{\text{X}}$ is a set of variables y is dependent on. Little's test can tell you whether these variables are related to missingness on y, but only if you are adequately powered to detect such an association. If you're not well powered, you may mistakenly conclude they are not related. So, what would be your recourse if Little's test comes out significant? You would then include those $\mathbf{\text{X}}$ in your model of y somehow. 
But since y is dependent on $\mathbf{\text{X}}$, they should be part of your model anyway, or else you risk omitted variable bias! Little's test, then, is mostly useless: if your data are MCAR, you should include $\mathbf{\text{X}}$ to avoid omitted variable bias. If missingness depends on $\mathbf{\text{X}}$, you should do the same. If Little's test shows that other variables unrelated to y, i.e., $\mathbf{\text{Z}}$, are associated with missingness, you don't need to include them because the definition of MCAR allows missingness to be dependent on $\mathbf{\text{Z}}$. Note again, that you're likely to be underpowered to detect missingness with Little's test anyway. So what about the case of missing at random, or MAR? This is the case when missingness on y is dependent on the values of y that you have observed. Following the above notation: $$ \begin{matrix} {\mathbb{P}(\mathbf{\text{R}}_{i} = {\widetilde{\mathbf{\text{r}}}}_{i}\,|\,\mathbf{\text{Y}}_{i,(1)} = {\widetilde{\mathbf{\text{y}}}}_{i,(1)},\,\mathbf{\text{Y}}_{i,(0)} = \mathbf{\text{y}}_{i,(0)},\,\mathbf{\text{X}}_{i},\,\mathbf{\text{Z}}_{i},\,\mathbf{\alpha}) =} \\ {\mathbb{P}(\mathbf{\text{R}}_{i} = {\widetilde{\mathbf{\text{r}}}}_{i}\,|\,\mathbf{\text{Y}}_{i,(1)} = {\widetilde{\mathbf{\text{y}}}}_{i,(1)},\,\mathbf{\text{Y}}_{i,(0)} = \mathbf{\text{y}}_{i,(0)}^{\prime},\,\mathbf{\text{X}}_{i},\,\mathbf{\text{Z}}_{i},\,\mathbf{\alpha})} \\ \end{matrix}, $$ the important part of which is that on the left you have $\mathbf{\text{Y}}_{i,(0)} = \mathbf{\text{y}}_{i,(0)}$ and on the right you have $\mathbf{\text{Y}}_{i,(0)} = \mathbf{\text{y}}_{i,(0)}^{\prime}$, which just means that the equality holds whether the missing values of y would have taken on the values $\mathbf{\text{y}}_{i,(0)}$ or $\mathbf{\text{y}}_{i,(0)}^{\prime}$. This situation doesn't make (at least to me) much sense outside of a longitudinal data context. 
In a situation where your cross-sectional observations are supposed to be statistically independent, it's not clear to me how an observed value on y for one person could tell you anything about the probability that another person's y observation would be missing. In longitudinal data, this situation arises because someone's observed y at time t-1 could be predictive of whether y at time t is missing. Is Little's test useful here? Maybe. If it's significant, you should include all observed y in a maximum likelihood model if possible. If it's not significant (and the test is well powered enough that you don't risk a false negative), you could simply remove all cases that are missing at least one y. But you'll have better statistical power for your model of y if you just include all available data anyway. Again, your modeling decisions are the same either way, so why bother with Little's test?

Finally, if your data are missing not at random (MNAR), missingness depends on unobserved values of y and so is definitionally not statistically detectable.

For more in-depth discussion and several examples, see Matta, T. H., Flournoy, J. C., & Byrne, M. L. (2018). Making an unknown unknown a known unknown: Missing data in longitudinal neuroimaging studies. Developmental Cognitive Neuroscience, 33, 83-98. 10.1016/j.dcn.2017.10.001
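The longitudinal MAR mechanism is easy to simulate: let dropout at time t depend on the observed value at time t-1, then compare the complete-case mean of y at time t with the full-data mean. A plain-Python sketch (a made-up illustration):

```python
import random
import statistics

random.seed(42)

y2_full, y2_observed = [], []
for _ in range(5000):
    y1 = random.gauss(0, 1)             # observed outcome at time t-1
    y2 = 0.7 * y1 + random.gauss(0, 1)  # outcome at time t, correlated with y1
    y2_full.append(y2)
    # MAR dropout: a HIGH observed y1 makes y2 much more likely to be missing
    p_missing = 0.8 if y1 > 0 else 0.1
    if random.random() >= p_missing:
        y2_observed.append(y2)

bias = statistics.mean(y2_observed) - statistics.mean(y2_full)
print(f"complete-case bias in the mean of y_t: {bias:.2f}")
```

The complete-case mean is visibly biased downward, which is exactly why keeping all observed y in a maximum-likelihood model beats listwise deletion under MAR.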
Interpretation of Little's MCAR test
As far as I know, you can look at either the right or the left tail of a chi-squared test. With a p-value of exactly 1 it is possible to say (with some caution) that your data could be artificially generated and "too random", so it could be an issue with your p-value. (Here is the answer I am referring to.) Another issue: a chi-squared statistic can be considered a sum of squares of normally distributed RVs. If you use a test with such a test statistic and you overestimated the variances of those normal RVs, you will essentially get what you've got: a too-small statistic and hence a p-value near 1.
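The variance point can be checked numerically: scale the squared deviations by an overestimated variance and the chi-squared statistic shrinks, pushing the usual right-tail p-value toward 1. A plain-Python sketch (illustrative; the survival function below uses the closed form valid only for even degrees of freedom):

```python
import math
import random

def chi2_sf_even(x, k):
    """Right-tail P(X > x) for a chi-squared RV with EVEN df k,
    using the closed form exp(-x/2) * sum_{i<k/2} (x/2)^i / i!."""
    assert k % 2 == 0
    return math.exp(-x / 2) * sum((x / 2) ** i / math.factorial(i)
                                  for i in range(k // 2))

random.seed(0)
k = 100
z = [random.gauss(0, 1) for _ in range(k)]   # true variance is 1

stat_correct = sum(v * v for v in z)         # divide by the true variance (1)
stat_inflated = stat_correct / 4             # variance overestimated as 4

p_correct = chi2_sf_even(stat_correct, k)
p_inflated = chi2_sf_even(stat_inflated, k)
print(f"p with correct variance: {p_correct:.3f}; with inflated variance: {p_inflated:.6f}")
```

With the inflated variance the statistic is far below its expected value of k, so the right-tail p-value is essentially 1.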
$p(D)$ in Bayesian Statistics
$P(D)$ is not a prior. It is what is called the model evidence or marginal likelihood. $P(\theta)$ is the prior over the parameters of interest, and $P(D)$ is $\int_{\theta} P(\theta) P(D|\theta) d\theta$. This is the normalising constant needed to ensure that the posterior is a valid distribution. So basically we are marginalising out $\theta$ and asking what the probability of observing $D$ is. These integrals are typically difficult to compute. Also see Conjugate priors.
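A small sketch of what that integral looks like, using a conjugate beta-binomial setup where the marginal likelihood also has a closed form (the data and prior hyperparameters here are made up for illustration):

```python
import numpy as np
from scipy import stats, integrate
from scipy.special import betaln, comb

# Hypothetical data D: 7 successes in 10 trials; prior theta ~ Beta(2, 2)
n, h = 10, 7
a, b = 2.0, 2.0

# P(D) = integral over theta of P(D | theta) * P(theta)
def integrand(theta):
    return stats.binom.pmf(h, n, theta) * stats.beta.pdf(theta, a, b)

p_d_numeric, _ = integrate.quad(integrand, 0.0, 1.0)

# Conjugacy gives the same quantity in closed form (beta-binomial)
p_d_exact = comb(n, h) * np.exp(betaln(a + h, b + n - h) - betaln(a, b))
```

In non-conjugate models this integral rarely has a closed form, which is exactly why $P(D)$ is typically difficult to compute.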
44,697
$p(D)$ in Bayesian Statistics
$P(D)$ is necessary if you want to characterise the full posterior. For example, if you only want a maximum-a-posteriori (MAP) estimate of your parameters, then you do not need to worry about the normaliser, as you are only trying to maximise the posterior probability of the parameters given the observations, i.e. $$ P(\theta|D) \propto P(D|\theta) P(\theta) $$ So you do not need to worry about the denominator $P(D)$, as it does not affect finding the $\theta$ that maximises the posterior. However, MAP gives you only a point estimate and ignores the rich information that the posterior distribution may convey. If you want to quantify uncertainty, do model comparison (see Bayes factor) and probably other things, then you need to compute or approximate $P(D)$. I also suggest reading Chris Bishop's book, "Pattern Recognition and Machine Learning"; he explains a lot of these things in an amazing way! He also has some amazing lectures on probabilistic graphical models and Bayesian inference that can be found in the following links: https://www.youtube.com/watch?v=ju1Grt2hdko https://www.youtube.com/watch?v=c0AWH5UFyOk https://www.youtube.com/watch?v=QJSEQeH40hM
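The MAP point can be sketched concretely: maximising the unnormalised posterior $P(D|\theta)P(\theta)$ gives the same $\theta$ as maximising the full posterior, so $P(D)$ never has to be computed. This uses a hypothetical beta-binomial example (the data and prior are made up), where conjugacy also gives the MAP in closed form for comparison:

```python
from scipy import stats, optimize

# Hypothetical data: 7 successes in 10 trials; prior theta ~ Beta(2, 2)
n, h = 10, 7
a, b = 2.0, 2.0

def neg_log_unnormalised_posterior(theta):
    # log P(D | theta) + log P(theta); log P(D) is a constant and is dropped
    return -(stats.binom.logpmf(h, n, theta) + stats.beta.logpdf(theta, a, b))

res = optimize.minimize_scalar(neg_log_unnormalised_posterior,
                               bounds=(1e-6, 1 - 1e-6), method="bounded")

# Conjugacy: the posterior is Beta(a + h, b + n - h), whose mode is known
theta_map_closed_form = (a + h - 1) / (a + b + n - 2)
```

The numerical optimum of the unnormalised posterior matches the closed-form posterior mode, confirming that the normaliser plays no role in the MAP estimate.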
44,698
Using the gap statistic to compare algorithms
Logically, the answer should be yes: you may compare, by the same criterion, solutions that differ in the number of clusters and/or the clustering algorithm used. The majority of the many internal clustering criteria (the Gap statistic being one of them) are not tied, in a proprietary sense, to a specific clustering method: they can assess clusters whatever method produced them. They simply do not "know" whether the solutions being compared, with their various numbers of clusters, came from the same or from different clustering methods. However, most criteria should be applied to the same clustered dataset, unless a criterion's value is carefully standardized (which is not an easy task). P.S. In their reasonable answer @Anony-Mousse raised an aspect which I had decided to hush up above. By comparing different algorithms with a measure that correlates with some of the objective functions, you will be more likely measuring how similar the algorithm is to [that criterion], but not how good it actually works. There exist no balanced or "universal" clustering criteria; each of them bears some homology to the objective function of one clustering algorithm rather than another, by virtue of which it tends to "prefer" one algorithm over another (and one shape of clusters over another, too). The Gap statistic retains something of the K-means objective, while Silhouette carries a clear trace of the average-linkage hierarchical method. They are not "orthogonal" to anything. A clustering criterion is itself the objective function of some clustering algorithm that simply has not been invented in exactly that form yet. An algorithm is good for us (with respect to cluster separability) if it wins when judged by the criterion we want. And it is unclear what else could serve as the measure of how good it actually works (from the point of view of internal validation, I mean).
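As an illustration of judging different algorithms' solutions on the same dataset by one internal criterion: the Gap statistic is not built into scikit-learn, so this sketch uses the Silhouette mentioned above instead, on a made-up dataset:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.metrics import silhouette_score

# Toy data: three Gaussian blobs
X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.0, random_state=0)

labels_km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
labels_al = AgglomerativeClustering(n_clusters=3,
                                    linkage="average").fit_predict(X)

# One criterion, one dataset: the two scores are directly comparable
sil_km = silhouette_score(X, labels_km)
sil_al = silhouette_score(X, labels_al)
```

Because both solutions are scored on the same dataset with the same criterion, the comparison is meaningful in the sense described above, subject to the caveat in the P.S. that Silhouette itself leans toward certain algorithms.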
44,699
Using the gap statistic to compare algorithms
Note that some algorithms will try to optimize the gap/silhouette/SSQ, others won't. By comparing different algorithms with a measure that correlates with some of the objective functions, you will more likely be measuring how similar the algorithm is to the gap statistic than how good it actually works. A similar problem occurs with pretty much every measure. For example, the "sum of squares" (SSQ) measure is internally used by k-means, and it improves with the number of clusters (down to 0 when k = number of objects). K-means is (approximately, as the common algorithms only find local minima) optimal with respect to this measure. But the optimum k is then the number of objects, with every object in its own cluster (which yields SSQ 0). So obviously, any other algorithm will look bad compared to k-means, and yet the optimum result will be entirely useless. Be careful when relying on such metrics. You measure a mathematical quantity that may not capture your needs. Things such as using the gap statistic or silhouette with k-means sometimes work well, because they are a slightly different objective from the original one used by k-means. Instead of blindly searching for the best k-means result (which would yield a much too high k), you use this secondary measure to compare k-means results. It works because, even with different k, k-means still optimizes SSQ, not the gap statistic. Nevertheless, the gap/silhouette is just yet another heuristic. Note that it already fails when you try different normalizations before running k-means. It's trivial to reduce gaps just by scaling the data set down; the preprocessing has a strong effect on these statistics. When you compare different algorithms, each usually optimizes a different quantity, so the comparison will usually not be fair. Actually, in most cases the result will not be fair; the only situation where it works reasonably well is varying the cluster number of k-means and keeping everything else as is.
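The SSQ point is easy to verify: k-means' own objective (scikit-learn exposes it as `inertia_`) keeps improving as k grows, so it cannot select k by itself. The toy dataset and the set of k values below are made up for illustration:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Toy data: three Gaussian blobs
X, _ = make_blobs(n_samples=150, centers=3, random_state=0)

# SSQ (inertia) shrinks monotonically toward 0 as k approaches n,
# regardless of whether the extra clusters are meaningful
inertias = {k: KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in (1, 2, 3, 5, 10, 20)}
```

Even though the data contain only three clusters, the inertia at k=20 is far below the inertia at k=3, which is exactly why SSQ alone would pick a much too high k.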
44,700
Linear Combination of multivariate t distribution
I am trying to see if the linear combination of multivariate t distribution will give a multivariate t distribution. In general, no, this is not the case, even with univariate t's (see here and here for example; note that the difference of two t random variables is the sum of two t random variables, with the second component's mean being that of the original random variable multiplied by -1). In some very particular cases, yes. Consider: (i) in the limiting case of infinite degrees of freedom, linear combinations of multivariate normals are multivariate normal; (ii) if the component t-variables are perfectly dependent, their sums will be multivariate t; (iii) in the univariate case, sums of independent Cauchy random variables will be Cauchy. I haven't checked, but this may well extend to more of the multivariate case than just vectors of independent Cauchys (and the perfectly dependent case mentioned above); (iv) in the limit of very large numbers of components, where none of the components dominates variance-wise (that is, where the coefficient of each component times the variance of that component doesn't become too large), you may be able to invoke a version of the central limit theorem. For the case where the weights on the components are equal (effectively converting the combination to a scaled sum) and you're dealing with standard t's (rather than ones with general means and variances), this paper has some information. Extending it to the case of a general mean is straightforward, but it doesn't deal with the general case of arbitrary scales, or equivalently arbitrary linear combinations.
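A quick simulation contrasting case (iii) with the general statement, using univariate draws (the sample size and seed are arbitrary choices for this sketch):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200_000

# (iii) Cauchy (t with 1 df) is stable: the average of two independent
# standard Cauchys is again standard Cauchy
c = (rng.standard_cauchy(n) + rng.standard_cauchy(n)) / 2.0
ks_cauchy = stats.kstest(c, stats.cauchy.cdf)

# For t with 5 df, the variance-matched sum is NOT t with 5 df
s = (rng.standard_t(5, n) + rng.standard_t(5, n)) / np.sqrt(2.0)
ks_t5 = stats.kstest(s, stats.t.cdf, args=(5,))
```

With samples this large, the Kolmogorov-Smirnov test typically finds no evidence against the Cauchy claim but rejects the t(5) claim decisively: the sum of two t(5) variables is closer to normal (its excess kurtosis halves) and no longer matches any t(5).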