n observations from a random variable VS. 1 observation from n i.i.d random variables
I think it comes down to the theoretical, mainstream approach. That approach cannot be proved; it functions as an axiom. I looked through my mathematical statistics books, and the authors do not discuss this question. Some points about it:

Mathematical statistical theory is built on tests and results derived from the i.i.d. condition. Under that condition, and with the assumption that the random sample consists of n observations of n random variables, we know how to obtain unbiased, consistent, and efficient estimators. If instead we regard the n observations as coming from a single random variable, we cannot apply the CLT, maximum likelihood theory, the F test, the chi-square test, and so on.

As mentioned above, it is hard to build independence into the statistical framework if we consider all n observations as coming from one random variable. For example, temperature rises on average over the summer, i.e. the previous observation can influence the level of the next one; in other words, the distribution can change over time. With only one random variable, we cannot express this influence. There are also "structural breaks", in time series and in non-time-ordered samples alike, which again means the observations may follow different distributions.

Recall the definition: for a given sample space S of some experiment, a random variable (rv) is any rule that associates a number with each outcome in S. In mathematical language, a random variable is a function whose domain is the sample space and whose range is the set of real numbers. Could you prove that this function does not change over time, or does not depend on anything else? With only one observation per random variable, that question never arises.
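The contrast can be illustrated with a small simulation (a sketch in standard-library Python; the N(20, 4) level and the 0.001-per-step trend are invented for illustration): with n i.i.d. draws the two halves of the sample have nearly identical means, while a trending series, like the summer temperatures mentioned above, shifts its distribution over time.

```python
import random

random.seed(0)
n = 10_000

# n observations modeled as n i.i.d. random variables:
# every draw comes from the same N(20, 2^2) distribution.
iid = [random.gauss(20.0, 2.0) for _ in range(n)]

# A trending series (the average level drifts upward with time t):
# the distribution of X_t changes with t, so there is no single
# common distribution generating all n observations.
trending = [random.gauss(20.0 + 0.001 * t, 2.0) for t in range(n)]

def mean(values):
    return sum(values) / len(values)

# Compare the first and second halves of each sample.
gap_iid = abs(mean(iid[: n // 2]) - mean(iid[n // 2 :]))
gap_trend = abs(mean(trending[: n // 2]) - mean(trending[n // 2 :]))

# The i.i.d. halves agree closely; the trending halves differ by
# roughly 0.001 * n / 2 = 5, revealing the changing distribution.
print(gap_iid, gap_trend)
```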
Maximum likelihood estimator that is not a function of a sufficient statistic
Nothing is wrong with what you said, except the claim that every maximum likelihood estimator has to be a function of any sufficient statistic; that claim is false as stated. A more careful statement of the assertion is:

If $T$ is a sufficient statistic for $\theta$ and a unique MLE $\hat{\theta}$ of $\theta$ exists, then $\hat{\theta}$ must be a function of $T$. If any MLE exists, then an MLE $\hat{\theta}$ can be chosen to be a function of $T$.

This quote is from "Maximum Likelihood and Sufficient Statistics" by D. S. Moore, published in The American Mathematical Monthly; you can find it on JSTOR. There you will also find an example similar to yours and more information about your question.
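As a concrete instance of the unique-MLE case (my own sketch, not an example from Moore's paper): for a sample from Uniform(0, θ), T = max(x) is sufficient, the MLE is unique, and it equals T, so the MLE is indeed a function of the sufficient statistic.

```python
import random

random.seed(1)
theta = 3.0
x = [random.uniform(0.0, theta) for _ in range(1000)]

# For Uniform(0, theta) the likelihood is theta^(-n) for
# theta >= max(x) and 0 otherwise; it is decreasing in theta,
# so the unique MLE is exactly the sufficient statistic max(x).
T = max(x)
mle = T

print(mle)  # just below the true theta = 3.0
```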
Maximum likelihood estimator that is not a function of a sufficient statistic
I think that, to preserve the theorem in cases like this, one should define "the MLE" as the interval of MLEs; that interval is a function of the sufficient statistic. This page takes a different point of view: for every sufficient statistic, there is at least one MLE that is a function of it (so if there is only one MLE, that one is it).
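A numerical sketch of the non-unique case (my illustration, using the standard Uniform(θ, θ+1) example; the seed and sample size are arbitrary): every θ in the interval [max(x) − 1, min(x)] maximizes the likelihood, and that interval is itself a function of the sufficient statistic (min(x), max(x)).

```python
import random

random.seed(2)
theta = 5.0
x = [random.uniform(theta, theta + 1.0) for _ in range(50)]

# For Uniform(theta, theta + 1) the likelihood equals 1 when every
# observation lies in [theta, theta + 1] and 0 otherwise, so the
# set of MLEs is the whole interval [max(x) - 1, min(x)].
lo, hi = max(x) - 1.0, min(x)

def likelihood(th):
    return 1.0 if all(th <= xi <= th + 1.0 for xi in x) else 0.0

# A few interior points of the interval: all attain the maximum.
eps = 1e-9
candidates = [lo + eps + k * (hi - lo - 2 * eps) / 4 for k in range(5)]
assert all(likelihood(c) == 1.0 for c in candidates)

print(lo, hi)  # the interval of MLEs, a function of (min(x), max(x))
```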
Maximum likelihood estimator that is not a function of a sufficient statistic
Strictly, the result that a sufficient statistic of fixed dimension (not growing with the sample size) exists only for exponential family distributions is the Pitman–Koopman–Darmois theorem, and it requires regularity conditions, notably that the support not depend on the parameter. The continuous uniform is not an exponential family distribution, and its support does depend on the parameter, so the theorem does not cover it; the sample maximum is nevertheless sufficient for it. See [DeGroot, Morris H., Optimal Statistical Decisions, McGraw-Hill Book Company, New York, 1970].
Why periodically skip updating a parameter in MCMC?
This type of fine-tuned (Gibbs) MCMC is appropriate when one conditional distribution is more "sticky" than the other conditional distributions in the problem. For instance, updating only one [random] part of $\beta$ may be profitable when updating the whole vector results in high rejection rates or in very small moves. An early reference on mixing several MCMC steps is Tierney (1994). (Gareth Roberts and Jeff Rosenthal have also written several papers comparing such mixtures of MCMC steps.)

However, updating the same parameter twice in a row, using its full conditional distribution given all the others, is a waste of computational time, since it means simulating twice from exactly the same distribution. From the way the code is written, this seems to be the case here: theta <- update_theta(alpha, beta, data) since the update function does not depend on the current value of theta.

But when using a Metropolis-Hastings move, taking several iterations instead of one can make sense when the moves are of limited magnitude. Similarly, if the iterated calls to update modify only parts of the whole parameter theta, and if those parts are chosen at random, this is a form of random-block Gibbs sampling, and it is perfectly valid. Here is an instance of such a strategy. And an older one. (In an even older paper with Christophe Andrieu, we looked at an adaptive choice of the size of the blocks, if not of the number of moves for given blocks. That choice can also be incorporated within an adaptive MCMC algorithm.)

Suggested reading: a most relevant paper that appeared on arXiv a few days ago is "The Recycling Gibbs Sampler for Efficient Learning" by Luca Martino, Victor Elvira and Gustau Camps-Valls, which I reviewed on my blog yesterday.
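A toy version of a valid "skipping" scheme (my own minimal sketch in standard-library Python, not the code from the question): a deterministic-scan Gibbs sampler on a bivariate normal with correlation ρ, in which the second coordinate is refreshed only every fifth sweep. Each conditional draw leaves the joint target invariant, so the chain remains valid; skipping updates only affects mixing speed.

```python
import random
from math import sqrt

random.seed(3)
rho = 0.8
sd_cond = sqrt(1 - rho**2)  # sd of each full conditional
n_iter = 50_000
skip = 5  # refresh y only once every `skip` sweeps

x, y = 0.0, 0.0
xs, ys = [], []
for i in range(n_iter):
    # Full conditional of x given y is N(rho * y, 1 - rho^2).
    x = random.gauss(rho * y, sd_cond)
    if i % skip == 0:
        # Full conditional of y given x is N(rho * x, 1 - rho^2);
        # on the other sweeps y is simply left unchanged.
        y = random.gauss(rho * x, sd_cond)
    xs.append(x)
    ys.append(y)

# Despite the skipped updates, the chain still targets standard
# normal margins with correlation rho.
mx = sum(xs) / n_iter
my = sum(ys) / n_iter
vx = sum((v - mx) ** 2 for v in xs) / n_iter
vy = sum((v - my) ** 2 for v in ys) / n_iter
cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n_iter
corr = cov / sqrt(vx * vy)
print(mx, vx, corr)  # near 0, 1, and 0.8
```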
Understanding max-pooling and loss of information
Max pooling loses information in the sense that it tells you whether a filtered feature was encountered, but forgets where in the data it occurred, how many times, and so on. Suppose your filter is looking for vertical stripes in the image. Without max pooling it will output all the stripes it found; with max pooling, it will only tell you whether there were stripes in the filter output or not. The output is pretty much zero or one, as opposed to the whole image with the stripes marked on it with ones. Max pooling can be viewed as a very crude form of compression in this regard.

It is quite surprising that max pooling actually works, given how crude it is. One reason it does work is that you usually run a battery of filters. For instance, you may run vertical, horizontal, -45 degree, and +45 degree stripe filters and then max pool their outputs. If you are looking for a rectangular box in the image, getting ONE from the -45 and +45 degree stripe filters and ZERO from the vertical and horizontal stripe filters after max pooling may suggest that the box is inclined in your image.
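Here is a tiny standard-library Python sketch of the point (my illustration, with a crude horizontal-difference filter standing in for a learned vertical-stripe filter): two images with an edge in different places give different feature maps, but after global max pooling their outputs are identical, so the location is lost.

```python
def vertical_edge_response(img):
    # Crude vertical-edge "filter": absolute horizontal difference.
    return [[abs(row[j + 1] - row[j]) for j in range(len(row) - 1)]
            for row in img]

def global_max_pool(feature_map):
    # Max pooling over the whole map: "was the feature seen at all?"
    return max(max(row) for row in feature_map)

# Two 4x4 images, each containing one vertical edge,
# but in different columns.
img_left = [[0, 1, 1, 1] for _ in range(4)]
img_right = [[0, 0, 0, 1] for _ in range(4)]

map_left = vertical_edge_response(img_left)
map_right = vertical_edge_response(img_right)

# The full feature maps differ: they encode WHERE the edge is.
assert map_left != map_right

# After max pooling both collapse to the same single number:
# "an edge was present", with its position forgotten.
assert global_max_pool(map_left) == global_max_pool(map_right) == 1
```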
Understanding max-pooling and loss of information
I'm not completely sure, but I think that if any of the pixels in a max-pooling chunk is dark, the layer will output that darkest pixel, no matter what the other pixels are. Without max pooling, weights can be applied to all the pixels of the previous layer, so less data is lost. Even though the network will learn what information is useful to pass to the pooling layer, it may still lose some information. Sometimes it is hard to reason about these things, and it is easier to test them in an actual CNN.
What are the implications of the curse of dimensionality for ordinary least squares linear regression?
Edit: As @Richard Hardy pointed out, the linear model under squared loss and ordinary least squares (OLS) are different things. I revised my answer to discuss the linear regression model only, where we check whether the curse of dimensionality (CoD) is present when solving the following optimization problem: $$ \min_\beta \|X\beta-y\|_2^2. $$

In most cases, the linear regression model will not suffer from the CoD. This is because the number of parameters in OLS will NOT increase exponentially with the number of features / independent variables / columns (unless we include all "interaction" terms for all features, as mentioned in a comment). Suppose we have a data matrix $X$ that is $n \times p$, i.e., we have $n$ data points and $p$ features. In a "machine learning context" it is possible that $n$ is on the scale of millions and $p$ on the scale of thousands to millions. The linear model even works for $p \gg n$ once we add regularization.

To summarize: for the linear model, the number of parameters equals the number of features (assuming we do not have the intercept). The CoD arises when the number of parameters grows exponentially with the number of features. Here is an example: assume we have $p$ discrete (binary) random variables; their joint distribution table has $2^p$ rows. In this case, the CoD does appear.
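The two parameter counts can be tallied directly (a small sketch of the summary above; the choice p = 30 is arbitrary): the linear model needs p coefficients, while a full joint table over p binary variables needs 2^p cells.

```python
p = 30  # number of features (arbitrary illustration)

# Linear model without intercept: one coefficient per feature,
# so the parameter count grows linearly in p.
n_linear_params = p

# Joint distribution of p binary random variables: one cell per
# configuration of the p bits, growing exponentially in p.
n_joint_cells = 2**p

print(n_linear_params, n_joint_cells)  # 30 vs 1073741824
```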
What are the implications of the curse of dimensionality for ordinary least squares linear regression?
I think that everything hxd1011 says is correct; however, if one is interested in prediction rather than description, the CoD can rear its ugly head. For example, if one uses the Akaike Information Criterion (AIC) to judge model quality, its penalty term is proportional to the number $p$ of variables. Since a lower AIC is interpreted as higher model quality, the number of variables used affects model quality. The same thing occurs with the Bayesian Information Criterion (BIC), but there the penalty depends on $\log(n) \cdot p$, so the effect is even more pronounced.

If these examples aren't "exponentialish" enough, then consider best subsets regression. Again, for prediction, it may well be that the best model doesn't contain all the variables. Best subsets looks at all the distinct models one gets by considering all the different subsets of the $p$ variables, and then uses some criterion (frequently AIC or BIC!) to choose the "best" model. If there are $p$ variables, there are $\binom{p}{k}$ such models using exactly $k$ of the variables, and summing over all $k$ we find that one has to compare (via some computation) $\sum_{k=0}^{p} \binom{p}{k} = 2^p$ different models. There is the exponential! One reason for the use of various regularized regression methods is that the number of models one needs to check with best subsets is exponential in $p$.

Originally this was a comment, but it was too long, so I've posted it as an answer.
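The model count in the last paragraph is easy to verify (a quick sketch using math.comb; p = 20 is arbitrary): summing $\binom{p}{k}$ over $k$ gives exactly $2^p$ candidate models for best subsets to compare.

```python
from math import comb

p = 20  # arbitrary number of candidate variables

# Models using exactly k of the p variables: C(p, k) of them.
# Summing over k = 0..p counts every subset of variables once.
n_models = sum(comb(p, k) for k in range(p + 1))

print(n_models, 2**p)  # both are 1048576
```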
Calculate standard deviation given mean and percentage
We can solve this problem almost instantly in our heads using the "68-95-99.7" rule. I will explain the process in detail, because that is what matters; the answer itself is of little interest. The point of this question is to help us learn to think about probability distributions.

The numbers in the 68-95-99.7 rule are (approximately) the percent chances that a Normal variable lies within one, two, and three standard deviations of its mean. Subtracting these numbers from 100% shows that the chances of a Normal variable lying beyond one, two, and three SDs of its mean are about 32, 5, and 0.3 percent, respectively. Since the distribution is symmetric, we can halve each of these numbers to find the chance of lying beyond one, two, and three SDs of the mean in a given direction: about 16, 2.5, and 0.15 percent, respectively. (Slightly more accurate values are shown in the figure.)

The figure uses areas to represent chances. The leftmost value of 16%, for instance, is the proportion of all the area under the curve that lies to the left of -1. The "tail areas" associated with the numbers $Z = -3,-2,-1, 1,2,3$ are labeled. (These areas overlap; for instance, the 16% value includes the regions accounted for by the 2.3% and 0.13% values.) People who think effectively about probabilities use mental figures like this one.

Turn to the data in the question: 0.0275 is 0.0001 to the left of the mean of 0.0276, while 0.0278 is 0.0002 to the right of the mean: twice as far. We therefore need to enclose 98% of the probability between an unknown number of standard deviations to the left of the mean--call this multiple $-Z$ to indicate it's to the left--and twice that number of standard deviations to the right of the mean, which therefore is $2Z.$ Equivalently, 100 - 98 = 2% of the probability must lie beyond this range.

The figure shows 2.3% of the probability lies to the left of $-Z=-2$ and essentially 0% lies to the right of $Z=2\times 2=4,$ so $Z=2$ would be an accurate guess (albeit a tad low). The only arithmetic needed to get to this point involved subtractions, one division (of 0.0002 by 0.0001), and halving. If you would like to get a little closer to "the" answer, look up (or compute) the value of $Z$ for which 2% of the probability is to the left of $-Z$: that's $Z=2.054.$ It's still the case that essentially 0% is to the right of $2Z \approx 4.1.$ (Because there actually is a tiny bit of probability beyond $4.1,$ the correct value of $Z$ must be just a tiny bit more than $2.054.$) Either way, we come up with the result that $Z$ is somewhere around $2$ or $2.054.$

Finally, return to the data in the problem: $Z$ standard deviations equals $0.0001$ (or $2Z$ standard deviations equals $0.0002:$ it's all the same). Our answers therefore are:

Quick and dirty, based on the 68-95-99.7 rule: $0.0001/2 = 0.00005.$

A little more refined, based on a table lookup: $0.0001/2.054 \approx 0.0000486\,9.$

We know both of these answers will be a little too large, but the second must be quite accurate. Having gone through this thought process, we could write down the following R commands immediately, because they directly carry out the calculation (albeit more accurately):

(Z <- uniroot(function(z) pnorm(2*z) - pnorm(-z) - 0.98, c(2,3))$root)
[1] 2.054158

That agrees with the three-decimal-digit table I used to get $2.054.$

(0.0276 - 0.0275) / Z
[1] 4.868176e-05

It agrees with our first answer almost to two significant figures and with the second answer almost to four significant figures--more than we really deserve.
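The same root can be reproduced outside R (my standard-library Python sketch, building the normal CDF from math.erf and bisecting the same bracket [2, 3] used by the uniroot call): solve $\Phi(2z) - \Phi(-z) = 0.98$ for $z$, then divide 0.0001 by it.

```python
from math import erf, sqrt

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def f(z):
    # We want Phi(2z) - Phi(-z) = 0.98, i.e. f(z) = 0.
    return phi(2.0 * z) - phi(-z) - 0.98

# Simple bisection on [2, 3], where f changes sign.
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if f(lo) * f(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
z = 0.5 * (lo + hi)

sigma = (0.0276 - 0.0275) / z
print(z, sigma)  # about 2.054 and 4.87e-05
```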
Calculate standard deviation given mean and percentage
We can solve this problem almost instantly in our heads using the "68-95-99.7" rule. I will explain the process in detail because that is what matters. The answer is of little interest: the point to
Calculate standard deviation given mean and percentage We can solve this problem almost instantly in our heads using the "68-95-99.7" rule. I will explain the process in detail because that is what matters. The answer is of little interest: the point to this question is to help us learn to think about probability distributions. These numbers in the 68-95-99.7 rule are (approximately) the percent chances that a Normal variable lies within one, two, and three standard deviations of its mean. By subtracting these numbers from 100% it follows that the chances of a Normal variable lying beyond one, two, and three SDs of its mean are about 32, 5, and 0.3 percent, respectively. Since this distribution is symmetric, we can split each of these numbers in half to find the chances of lying beyond one, two, and three SDs of the mean in a given direction: the values are about 16, 2.5, and 0.15 percent, respectively. (Slightly more accurate values are shown in the figure.) The figure uses areas to represent chances. The leftmost value of 16%, for instance, is the proportion of all the area under the curve that lies to the left of -1. The "tail areas" associated with the numbers $Z = -3,-2,-1, 1,2,3$ are labeled. (These areas overlap; for instance, the 16% values include regions accounted for by the 2.3% and 0.13% values.) People who think effectively about probabilities use mental figures like this one. Turn to the data in the question: 0.0275 is 0.0001 to the left of the mean of 0.0276 while 0.0278 is 0.0002 to the right of the mean: twice as far. We therefore need to enclose 98% of the probability between an unknown number of standard deviations to the left of the mean--call this multiple $-Z$ to indicate it's to the left--and twice that number of standard deviations to the right of the mean, which therefore is $2Z.$ Equivalently, 100 - 98 = 2% of the probability must lie beyond this range. 
The figure shows 2.3% of the probability lies to the left of $-Z=-2$ and essentially 0% lies to the right of $Z=2\times 2=4,$ so $Z=2$ would be an accurate guess (albeit a tad low). The only arithmetic needed to get to this point involved subtractions, one division (of 0.0002 / 0.0001) and halving. If you would like to get a little closer to "the" answer, look up (or compute) the value of $Z$ for which 2% of the probability is to the left of $-Z$: that's $Z=2.054.$ It's still the case that essentially 0% is to the right of $2Z \approx 4.1.$ (Because there actually is a tiny bit of probability beyond $4.1,$ the correct value of $Z$ must be just a tiny bit more than $2.054.$) Either way, we come up with the result that $Z$ is somewhere around $2$ or $2.054.$ Finally, return to the data in the problem: $Z$ standard deviations equals $0.0001$ (or $2Z$ standard deviations equals $0.0002:$ it's all the same). Our answers therefore are Quick and dirty, based on the 68-95-99.7 rule: $0.0001/2 = 0.00005.$ A little more refined, based on a table lookup: $0.0001/2.054 \approx 0.0000486\,91.$ We know both of these answers will be a little too large, but the second must be quite accurate. Having gone through this thought process, we could write down the following R commands immediately because they directly carry out the calculation (albeit more accurately): (Z <- uniroot(function(z) pnorm(2*z)-pnorm(-z) - 0.98, c(2,3))$root) 2.054 158 That agrees with the three decimal digit table I used to get $2.054.$ (0.0276 - 0.0275) / Z 4.86 8176e-05 It agrees with our first answer almost to two significant figures and with the second answer almost to four significant figures--more than we really deserve.
45,711
Calculate standard deviation given mean and percentage
So you can use R to get the answer: target=function (sd){ b=pnorm(0.0278, mean = 0.0276, sd = sd) a=pnorm(0.0275, mean = 0.0276, sd = sd) return(abs(b-a-0.98)) } sd=optim(1,target) sd$par This gives: > sd$par [1] 4.868167e-05 What we are doing is using a numerical method to calculate $\sigma$ such that $$F(0.0278)-F(0.0275)=0.98$$ where $F()$ is the cdf of $N(0.0276,\sigma)$
45,712
Calculate standard deviation given mean and percentage
It has to be solved numerically. Here is a solution in R using a simple root finding algorithm. We simply solve the equation $$ F_{\mu,\sigma}(b) - F_{\mu,\sigma}(a) - p = 0 $$ where $F_{\mu,\sigma}(\cdot)$ denotes the cumulative distribution function of the normal distribution with mean $\mu$ and standard deviation $\sigma$. $b$ and $a$ (with $b>a$) are the upper and lower bounds, respectively and $p$ ($0<p<1$) is the proportion of values that lies between $a$ and $b$. The function find_sigma is very generic: It accepts fixed arguments for $a$, $b$, $\mu$ and $p$. find_sigma <- function(sigma, a, b, mu, prop) { (pnorm(b, mean = mu, sd = sigma) - pnorm(a, mean = mu, sd = sigma)) - prop } uniroot( find_sigma , lower = .Machine$double.xmin , upper = 1 , a = 0.0275 # lower bound , b = 0.0278 # upper bound , mu = 0.0276 # mean , prop = 0.98 # proportion between a and b , maxiter = 10000 , tol = 1e-20 # , extendInt = "yes" ) $root [1] 4.868168e-05 The standard deviation is $0.000048682$ as the other answers have found.
45,713
Calculate standard deviation given mean and percentage
There is no simple way to calculate this, I believe. I'd suggest looking into numerical solutions for it. Just to explain a bit, this is the normal distribution: $f(x|\sigma, \mu)=\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$ The percentage of values in the interval [a, b] is then given by: $F(a, b)=\int_a^b \! f(x|\sigma, \mu)dx = \int_{-\infty}^b \! f(x|\sigma, \mu)dx - \int_{-\infty}^a \! f(x|\sigma, \mu)dx$ You know F(a,b), you know a and b, and you know the mean $\mu$. What you need to do is to solve this equation for $\sigma$. However, the integral is the error function, you cannot solve it analytically, so you can't solve for $\sigma$. Using a numerical approach, it gets easier - for example, you could calculate the normal distribution with a fixed $\sigma$ for 1000 x values, calculate the area between a and b, and then iteratively change $\sigma$ until you find the value that comes closest to 0.98. There are also functions in most programming languages that calculate the cumulative normal distribution for given parameters, so if you want higher precision, you could use those (again with iterations of $\sigma$).
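Following that recipe literally (a sketch in Python rather than R, with the question's numbers; the bisection bracket and the 1000-point grid are my own choices): integrate the pdf with a trapezoid rule and narrow $\sigma$ by bisection until the enclosed area hits 0.98.

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (math.sqrt(2 * math.pi) * sigma)

def area(a, b, mu, sigma, n=1000):
    # trapezoid rule: approximate the integral of the pdf over [a, b]
    h = (b - a) / n
    s = 0.5 * (normal_pdf(a, mu, sigma) + normal_pdf(b, mu, sigma))
    s += sum(normal_pdf(a + i * h, mu, sigma) for i in range(1, n))
    return s * h

# area() decreases as sigma grows, so bisect sigma until area = 0.98
lo, hi = 1e-6, 1e-3
for _ in range(60):
    mid = (lo + hi) / 2
    if area(0.0275, 0.0278, 0.0276, mid) > 0.98:
        lo = mid  # too much mass in the interval: sigma is still too small
    else:
        hi = mid
sigma = (lo + hi) / 2
print(sigma)  # close to 4.868e-05
```

With 1000 grid points the quadrature error is negligible here, so this agrees with the closed-form solvers to several significant figures.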
45,714
Calculate standard deviation given mean and percentage
I entered on WolframAlpha: integral_0.0275^0.0278 (1/sqrt(2 π))/a exp(-(((x - 0.0276)/sqrt(2))/a)^2) dx = 0.98 Which got me 0.5 erf(0.0000707107/a) + 0.5 erf(0.000141421/a) = 0.98 Solving this, assuming a is real, gives a = 0.0000486817 Several expansions of the error function are listed on its Wikipedia page and on math.SE, but they are not accurate enough to be useful for hand calculations.
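The erf equation above can be checked with Python's math.erf (the bisection bracket is my own choice; the left-hand side is decreasing in a, so bisection applies directly):

```python
import math

def f(a):
    # left-hand side of the WolframAlpha equation minus the target 0.98
    return 0.5 * math.erf(0.0000707107 / a) + 0.5 * math.erf(0.000141421 / a) - 0.98

lo, hi = 1e-5, 1e-3  # f(lo) > 0 and f(hi) < 0 bracket the root
for _ in range(80):
    mid = (lo + hi) / 2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2
print(root)  # close to 0.0000486817
```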
45,715
Inconsistent estimators in case of endogeneity
The mistake is: if $\text{plim}\,N^{-1}X'u = 0$, then we only need to assume that $\text{plim}\,(N^{-1}X'X)^{-1}$ exists to prove consistency. But when $\text{plim}\,N^{-1}X'u \ne 0$, as is the case when $x_k$ is endogenous (suppose $x_k$ is the last variable), we need to inspect $\text{plim}\,(N^{-1}X'X)^{-1}$ to see whether the other $\beta$s are consistent. If all other variables are orthogonal to $x_k$, then the other $\beta$s will still be consistent. This is because the limit matrix of $(N^{-1}X'X)^{-1}$ will contain 0s in the last column and last row except for the diagonal term. Right-multiplying it by $(0,\cdots,0,\epsilon)'$ then gives 0 for every element except the $k$th. Even in the case that only one variable $x_q$ is correlated with $x_k$ and all other variables are not, if some other variable $x_p$ is correlated with $x_q$, then we should not expect $\beta_p$ to be consistent. Intuitively but loosely, inconsistency spreads through correlation.
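A quick way to see the orthogonal case is a simulation (entirely my own construction, pure-stdlib Python, not from the answer): $x_1$ is orthogonal to the endogenous $x_2$, so OLS should still recover $\beta_1 = 2$, while the estimate of $\beta_2 = 3$ converges to $3 + E(x_2u)/E(x_2^2) = 3 + 0.8/2 = 3.4$.

```python
import random

random.seed(42)
n = 100000
rows, ys = [], []
for _ in range(n):
    x1 = random.gauss(0, 1)      # exogenous, orthogonal to x2 and u
    q = random.gauss(0, 1)       # unobservable causing the endogeneity
    x2 = q + random.gauss(0, 1)  # endogenous: E(x2 u) = 0.8, E(x2^2) = 2
    u = 0.8 * q + random.gauss(0, 1)
    rows.append([1.0, x1, x2])
    ys.append(1.0 + 2.0 * x1 + 3.0 * x2 + u)

# OLS via the normal equations (X'X) beta = X'y, Gauss-Jordan elimination
k = 3
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(k)]
A = [XtX[i] + [Xty[i]] for i in range(k)]
for c in range(k):
    p = max(range(c, k), key=lambda r: abs(A[r][c]))
    A[c], A[p] = A[p], A[c]
    for r in range(k):
        if r != c:
            fac = A[r][c] / A[c][c]
            A[r] = [x - fac * y for x, y in zip(A[r], A[c])]
beta = [A[i][k] / A[i][i] for i in range(k)]
print(beta)  # beta[1] near 2 (still consistent), beta[2] near 3.4 (biased)
```

If you instead generate $x_1$ with some correlation to $x_2$, the bias spills over into $\hat\beta_1$ as well, which is the "spreading through correlation" described above.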
45,716
Inconsistent estimators in case of endogeneity
Elaborating on @Paul 's answer, let $\text{plim}(N^{-1}X'X)^{-1} = W$ with typical element $[w^{ij}]$. Then the expression for, say, the $\hat \beta_0$ element of the vector $\hat \beta$ is, when only $X_k$ is correlated with $u$, $$\text{plim} \hat \beta_0 = \beta_0 + w^{1k}\cdot E(X_ku)$$ So only if $w^{1k} = 0$ will the correlation not spread. Wooldridge page 62, ch 4 (1st ed.), provides an example where this is indeed the case and so the other elements of the $\beta$-vector are consistently estimated. Assume that the cause of correlation is the existence of an unobservable variable $q$ in the error term $$u = \gamma q + v$$ where $v$ is uncorrelated with the regressors. Consider the Linear Projection of $q$ on the regressor matrix, $$q = X\delta + r$$ and insert into the main regression to get $$y = X(\beta + \gamma\delta) + \gamma r + v$$ By construction, $r$ and $v$ are uncorrelated with the regressors. So the probability limit of the OLS estimator will be $$\text{plim}\hat \beta = \beta + \gamma\delta$$ We see that if any element of the $\delta$-vector is zero, OLS will consistently estimate the quantity of interest, the corresponding $\beta$ element. But what does it mean if $\delta_j$ is zero? It means that, in the presence of the other regressors, $X_j$ does not belong in the Linear Projection of $q$ on the regressor matrix.
45,717
Inconsistent estimators in case of endogeneity
Let's abstract from technicalities about the Law of Large Numbers and assume that for each $x_j$, $j=1,...,k$, we have $$\frac 1n\sum_{i=1}^{n} x_{i,j}u_i\to E(x_ju)$$ Now assume that for all but index $k$ we have $E(x_ju)=0$, and let $E(x_ku)=\delta\neq0$. Let $X'X$ also have full rank, so that you have an invertible limit for $\frac{1}{n}\sum_{i=1}^{n}x_i'x_i$; call it $\Sigma_x$. Now $$\left(\frac{1}{n}\sum_{i=1}^{n}x_i'x_i\right)^{-1}\left(\frac{1}{n}\sum_{i=1}^{n}x_i'u_i\right)\to\Sigma_x^{-1}\times \begin{bmatrix} 0\\ 0\\ \vdots\\ \delta \end{bmatrix}$$ Write $\Sigma_x^{-1}$ as $$\Sigma_x^{-1}=\begin{bmatrix} [s_0 |r_0]\\ [s_1|r_1]\\ \vdots\\ [s_k|r_k] \end{bmatrix}$$ where $[s_0 |r_0]$ is the first row of $\Sigma_x^{-1}$, which we partition into a row vector $s_0$ of length $k$ and a scalar $r_0$. The other rows of $\Sigma_x^{-1}$ are defined in the same fashion. As such we have $$\Sigma_x^{-1}\times \begin{bmatrix} 0\\ 0\\ \vdots\\ \delta \end{bmatrix}=\begin{bmatrix} r_0\times \delta\\ r_1\times \delta\\ \vdots\\ r_k\times \delta \end{bmatrix}=\delta\begin{bmatrix} r_0\\ r_1\\ \vdots\\ r_k \end{bmatrix}$$ Now if $\delta=0$ the latter limit will just be a vector of zeros, implying consistency. But if $\delta \neq 0$ then the asymptotic bias for each element of $\beta$ is $\delta r_j$ for $j=0,1,\ldots,k$.
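To make the last display concrete, here is a small numeric illustration (the numbers are hypothetical, not from the answer): take a $\Sigma_x$ for three regressors with $x_1$ uncorrelated with $(x_2, x_3)$, $\mathrm{corr}(x_2,x_3)=0.5$, and $\delta = 0.8$ on the endogenous $x_3$. The asymptotic bias vector is $\Sigma_x^{-1}(0,0,\delta)'$, obtained here by solving the corresponding linear system:

```python
# Solve Sigma_x * bias = d with Gauss-Jordan elimination (pure stdlib)
Sigma = [[1.0, 0.0, 0.0],
         [0.0, 1.0, 0.5],
         [0.0, 0.5, 1.0]]
delta = 0.8
d = [0.0, 0.0, delta]

n = 3
A = [row[:] + [rhs] for row, rhs in zip(Sigma, d)]
for c in range(n):
    p = max(range(c, n), key=lambda r: abs(A[r][c]))
    A[c], A[p] = A[p], A[c]
    for r in range(n):
        if r != c:
            fac = A[r][c] / A[c][c]
            A[r] = [x - fac * y for x, y in zip(A[r], A[c])]
bias = [A[i][n] / A[i][i] for i in range(n)]
print(bias)  # [0.0, -0.5333..., 1.0666...]
```

The coefficient on $x_1$ has zero asymptotic bias, while $x_2$ picks up a nonzero bias purely because it is correlated with the endogenous $x_3$ — the bias is $\delta$ times the last column of $\Sigma_x^{-1}$.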
45,718
Can log-transformation, then z-scoring make a positive mean difference negative
Orderings of observations (and hence orderings of quantiles) are preserved under monotonic transformations -- so if medians or upper quartiles are ordered in one direction before taking logs and standardizing by a common location and scale, they will be in that same direction afterward. However, averages are not constrained to remain in the same ordering under a monotonic transformation. It's perfectly possible for the direction to swap. [The standardization by common location and scale values won't change relative means ... the swapping is all due to the nonlinear transformation.] Consider two samples of two observations each -- Sample 1: 1, 10 (mean 5.5; mean of logs 1.15); Sample 2: 4, 6 (mean 5.0; mean of logs 1.59). On the original scale, the first sample has the larger mean (5.5 vs 5.0). On the log scale the second sample has the larger mean (1.15 vs 1.59). [Here I use natural logs, but the base of the logarithms is immaterial.] You have to think very carefully about what it is you actually need to compare, not just transform willy-nilly and hope that averages on whatever scale you transform to will make sense. However, in some cases you can compare means on a transformed scale and draw some conclusions about the original scale. For example, if, on the transformed scale, two distributions are the same apart from a location shift, a difference in population means (which should be the location shift in question, if means exist) does imply an ordering of distributions on the original scale too, in which case the original-scale population means -- if they exist -- will also be in that same order. (You'll note my example works by deliberately making the spreads quite different, and having the slightly larger mean go with the larger spread; that way the log drags down the smallest observation and pulls in the largest observation relatively more than the corresponding observations in the less spread-out sample. 
That's an easy way to make the swap of means on the different scales happen) However, if you have pre- and post- data presumably you have paired data. In that case you should be dealing with some measure of change. You need to figure out what measure of change is best for your situation. If you're interested in absolute change, the pair-differences (post-pre) would make sense to look at. If you're interested in relative change, either the ratios or log-ratios might make sense (post/pre or log(post/pre) ). (It's hard to give precise advice with so little information, though conventions in your application area will also be a consideration.)
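The two-observation example above can be verified in a few lines of Python (this is just a check of the numbers in the answer, nothing new):

```python
import math

sample1 = [1, 10]
sample2 = [4, 6]

def mean(xs):
    return sum(xs) / len(xs)

def mean_log(xs):
    return mean([math.log(x) for x in xs])

print(mean(sample1), mean(sample2))          # 5.5 5.0 -> sample 1 larger
print(mean_log(sample1), mean_log(sample2))  # ~1.151 ~1.589 -> order swaps
```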
45,719
Testing Measurement Invariance with Robust Estimators Yields Bizarre (Improved) Model Fit Indexes
The source of your problem is the 'robust' estimation of standard errors using the robust Satorra-Bentler Chi-square statistic. When testing for measurement invariance, we compare less constrained (configural invariance) to more constrained (metric or scalar invariance) models. The comparison that is usually applied is a Chi-square difference test, which compares the Chi-square of a less constrained model to that of a more constrained model, testing the null hypothesis that both models fit equally well. In addition, some authors argue that one may also look at the change in RMSEA or CFI, but there is no strong advice on what change in these statistics is desired. My advice is therefore to first of all look at the change in model Chi-square and the associated p-value for the above-mentioned null hypothesis. I will therefore first answer your question in terms of Chi-square change and then address the change in CFI and RMSEA. Testing the change in model Chi-square MLR uses a scaled version of Chi-square to find robust standard errors, following a paper by Satorra and Bentler in Psychometrika. The problem you are facing now is that, as you say, the (scaled) Chi-squares decrease across more constrained versions of the model. In fact, the simple scaled Chi-square difference between your models is negative and thus undefined. This behavior can be expected because the difference in scaled Chi-squares is not Chi-square distributed. A Chi-square difference test using scaled Chi-squares needs to be adapted before the Chi-square difference can be interpreted in the usual way. Specifically, the adjustment goes as follows. First we calculate a scaling correction factor: $$s= (d_0c_0-d_1c_1)/(d_0-d_1)$$ where $d_0$ is the degrees of freedom of the nested (constrained) model and $d_1$ that of the unconstrained model. Furthermore, $c_1$ and $c_0$ are the scaling correction factors reported by lavaan or other SEM packages like Mplus. 
Subsequently, we calculate a corrected Chi-square difference $$ \Delta_{\chi} = (T_0c_0 - T_1c_1)/ s $$ where $T_0$ and $T_1$ are the scaled (robust) model Chi-squares. This adjusted Chi-square is then tested against a central Chi-square distribution with degrees of freedom equal to the difference in degrees of freedom of the two models. To provide an example for your data, testing configural against metric invariance in R, we use a short script: d0 = 488 # Enter data as in your output d1 = 444 c0 = 1.186 c1 = 1.105 T0 = 861.367 T1=890.242 (cd = (d0 * c0 - d1*c1)/(d0 - d1)) # scaling correction factor [1] 2.003364 (TRd = (T0*c0 - T1*c1)/cd) # Adjusted difference in model Chi-squares [1] 18.90014 > (df = d0-d1) # Difference in degrees of freedom [1] 44 > 1 - pchisq(TRd,df) # p-value [1] 0.9996636 We can see that the scaled Chi-square difference is 18.9 (and now it has a positive sign!), which when tested with $\alpha=.05$ type-1 error probability is not significant. Hence there is evidence for metric invariance in your data. There is a lot of documentation on this problem on the Mplus website. See here for a discussion of difference testing with scaled Chi-square. The correction I suggest is the simple adjustment variant, which in some cases may still yield a negative Chi-square. There is a more recent and more sophisticated approach called the strictly positive Chi-square difference. It is described on the Mplus website I linked. Decrease in fit indices (RMSEA and CFI): It was remarked that my answer did not yet sufficiently address the RMSEA and CFI changes that were observed over increasingly constrained versions of a baseline model. To understand this, we first of all need to refer to the definitions of the two statistics: $$RMSEA = \sqrt{\frac{\chi^2-df}{df(n-1)}}$$ and $$CFI = \frac{ (\chi_0^2 - df_0) - (\chi_1^2 - df_1) }{ \chi_0^2 - df_0}$$ where $0$ and $1$ indicate the null model and the tested model, respectively. 
It can be seen that both fit measures depend on $\chi^2$ and the $df$ of the model. The scaled $\chi^2$ is designed to be more 'robust' to many practical problems, in particular the violation of multivariate normality in continuous factor analysis. If we assume the scaled $\chi^2$ is a more valid version than the unscaled $\chi^2$, we may conclude that the 'scaled RMSEA' and 'scaled CFI' are likewise more precise versions. In lavaan you therefore need to check that you looked at the correct scaled RMSEA and scaled CFI. Assuming that you did this already, it can be seen from the definitions of the two indices that an improvement in RMSEA and CFI across more constrained versions of the model is actually possible; in fact, it is desirable! To see this, we first of all assume that the Chi-square of the constrained and unconstrained models does not change. This means that the stricter model is true. However, the number of parameters in the model decreases, so the $df$'s increase. Now let $a$ denote the unconstrained (e.g. configural) and $b$ the constrained (e.g. metric) model. So we know that $df_a<df_b$ while assuming $\chi^2_a =\chi^2_b = \chi^2$ (i.e. no decrease in fit; the more constrained model is true). Now we ask whether it is possible that $$RMSEA_a > RMSEA_b$$ as well as $$CFI_b > CFI_a$$ i.e. whether the constrained model can look better on both indices. It is particularly easy to see this for $CFI$, because there we have $$CFI_b > CFI_a \Leftrightarrow (\chi_a^2 - df_a) - (\chi_b^2 - df_b) > 0 \\ \Leftrightarrow (\chi^2 - df_a) - (\chi^2 - df_b) > 0 \\ \Leftrightarrow df_b > df_a $$ which is always true if $\chi^2_a =\chi^2_b = \chi^2$. Hence the $CFI$ of the more constrained model can be larger than that of the unconstrained model, and necessarily is when the fit of the two models is exactly equal. For RMSEA the situation is a little more complicated because the inequality involves ratios and square roots of terms in $\chi^2$, $df_a$ and $df_b$. 
This suggests that, under the assumption $\chi^2_a = \chi^2_b$, the outcome depends on the particular values involved, but for the relevant combinations (those with $\chi^2 > df$) the inequality will hold as well. Hence, in conclusion, what you observe is possible. In particular, we are more likely to find it in situations where the model $\chi^2$ changes only marginally while the number of additionally constrained parameters is large. This is exactly the result we get when the more constrained model is the true model and the less constrained model was specified too 'flexibly' (over-parametrized). Thus the improvement in the two fit measures is even better news than a (small) deterioration would have been!
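For a concrete feel of the RMSEA case, take hypothetical values (my own, purely illustrative): $\chi^2 = 100$ held fixed, $n = 500$, and $df$ rising from 44 to 88 when constraints are added. Under equal fit, the index falls (improves) for the more constrained model:

```python
import math

def rmsea(chi2, df, n):
    # sample RMSEA estimate, truncated at zero
    return math.sqrt(max(chi2 - df, 0) / (df * (n - 1)))

a = rmsea(100, 44, 500)  # less constrained model
b = rmsea(100, 88, 500)  # more constrained model, same chi-square
print(a, b)  # roughly 0.0505 vs 0.0165: RMSEA falls as df rises
```

Both effects work in the same direction here: the numerator $\chi^2 - df$ shrinks while the denominator $df(n-1)$ grows.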
Testing Measurement Invariance with Robust Estimators Yields Bizarre (Improved) Model Fit Indexes
The source of your problem is the 'robust' estimation of standard errors using the robust Satorra-Bentler Chi-square statistic. When testing for measurement invariance, we compare less constrained (co
Testing Measurement Invariance with Robust Estimators Yields Bizarre (Improved) Model Fit Indexes The source of your problem is the 'robust' estimation of standard errors using the robust Satorra-Bentler Chi-square statistic. When testing for measurement invariance, we compare less constrained (configural invariance) to more constrained (metric or scalar invariance) models. The comparison that is usually applied is a Chi-square difference test, which compares the Chi-square of a less constrained to a more constrained model, testing the null hypothesis that both models have the same fit. In addition some authors argue that one may also look at the change in RMSEA or CFI, but there are no strong advices on which change in these statistics is desired. Therefore my advice is to first of all look at change in model Chi-square and the associated p-value for above mentioned null hypothesis. I will therefore first answer your question in terms of Chi-square change and then address the change in CFI and RMSEA Testing the change in model Chi-square MLR uses a scaled version of Chi-square to find robust standard errors following a paper by Satorra and Bentler in Psychometrica. The problem you are facing now is that, as you say, the (scaled) Chi-squares decrease across more constrained version of the model. In fact, the simple scaled Chi-square differences between your models is negative and thus undefined. This behavior can be expected because the difference in scaled Chi-squares is not Chi-square distributed. A Chi-square difference test using scaled Chi-squares needs to be adapted before the Chi-square difference can be interpreted in the usual way. Specifically, the adjustment goes as follows. First we calculate a scaling correction factor: $$s= (d_0c_0-d_1c_1)/(d_0-c_1)$$ where $d_0$ is the degrees of freedom of the nested (constrained) model and $d_1$ in the unconstrained model. 
Furthermore, $c_1$ and $c_0$ are the scaling correction factors reported by lavaan or other SEM packages like Mplus. Subsequently, we calculate a corrected Chi-square difference $$ \Delta_{\chi} = (T_0c_0 - T_1c_1)/ s $$ where $T_0$ and $T_1$ are the scaled (robust) model Chi-squares. This adjusted Chi-square is then tested against a central Chi-square distribution with degrees of freedom equal to the difference in degrees of freedom of the two models.

To provide an example for your data, testing configural against metric invariance in R with a short script:

d0 = 488  # Enter data as in your output
d1 = 444
c0 = 1.186
c1 = 1.105
T0 = 861.367
T1 = 890.242
(cd = (d0*c0 - d1*c1)/(d0 - d1))  # scaling correction factor
[1] 2.003364
(TRd = (T0*c0 - T1*c1)/cd)  # adjusted difference in model Chi-squares
[1] 18.90014
(df = d0 - d1)  # difference in degrees of freedom
[1] 44
1 - pchisq(TRd, df)  # p-value
[1] 0.9996636

We can see that the scaled Chi-square difference is 18.9 (and now it has a positive sign!), which when tested with $\alpha=.05$ type-1 error probability is not significant. Hence there is evidence for metric invariance in your data.

There is a lot of documentation on this problem on the Mplus website. See here for a discussion of difference testing with scaled Chi-square. The correction I suggest is the simple adjustment variant, which in some cases may still yield a negative Chi-square. There is a more recent and more sophisticated approach called the strictly positive Chi-square difference; it is described on the Mplus website I linked.

Change in fit indices (RMSEA and CFI): It was remarked that my answer did not yet sufficiently address the change in RMSEA and CFI that was observed over increasingly constrained versions of a baseline model.
To understand this we first of all need to refer to the definitions of the two statistics: $$RMSEA = \sqrt{\frac{\chi^2-df}{df(n-1)}}$$ and $$CFI = \frac{ (\chi_0^2 - df_0) - (\chi_1^2 - df_1) }{ (\chi_0^2 - df_0)}$$ where $0$ and $1$ indicate the null model and the tested model, respectively. It can be seen that both fit measures depend on the $\chi^2$ and the $df$ of the model. The scaled $\chi^2$ is designed to be more 'robust' to many practical problems, in particular the violation of multivariate normality in continuous factor analysis. If we regard the scaled $\chi^2$ as a more valid version than the unscaled $\chi^2$, we may conclude that the 'scaled RMSEA' and 'scaled CFI' are likewise more precise versions. In lavaan you therefore need to check that you looked at the correct scaled RMSEA and scaled CFI.

Assuming that you did this already, it can be seen from the definitions of the two indices that an apparent improvement in RMSEA and CFI across more constrained versions of the model is actually possible; in fact, it is desirable! To see this, we first of all assume that the Chi-square of the constrained and unconstrained models does not change. This means that the stricter model is true. However, the number of parameters in the model decreases, and thus the $df$'s increase. Now let $a$ denote the unconstrained (e.g. configural) and $b$ the constrained (e.g. metric) model. So we know that $df_a<df_b$, while assuming $\chi^2_a =\chi^2_b = \chi^2$ (i.e. no decrease in fit; the more constrained model is true). Now we wonder whether it is possible that $$RMSEA_a > RMSEA_b$$ as well as $$CFI_a < CFI_b,$$ i.e. whether the more constrained model can look better on both indices. It is particularly easy to see this for $CFI$, because there we have $$CFI_a < CFI_b \Leftrightarrow (\chi_a^2 - df_a) - (\chi_b^2 - df_b) > 0 \\ \Leftrightarrow (\chi^2 - df_a) - (\chi^2 - df_b) > 0 \\ \Leftrightarrow df_b > df_a $$ which is always true if $\chi^2_a =\chi^2_b = \chi^2$.

Hence the $CFI$ of the more constrained model can be larger than that of the unconstrained model, and necessarily is when the fit of the two models is exactly equal. For RMSEA, note that under $\chi^2_a = \chi^2_b$ the numerator $\chi^2 - df$ decreases and the denominator $df(n-1)$ increases as $df$ grows, so $RMSEA_a > RMSEA_b$ holds as well (whenever $\chi^2 \geq df$).

Hence, in conclusion, what you observe is possible. In particular, we are more likely to find it in situations where the model $\chi^2$ changes only marginally while the number of additionally constrained parameters is large. This is exactly the result we get when a more constrained model is the true model and the less constrained model was specified too 'flexibly' (over-parametrized). Thus an apparent improvement in the two fit measures is even better news than a (small) deterioration!
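The adjusted difference test can also be reproduced in Python; the sketch below uses SciPy's Chi-square distribution and the same numbers as the R script above, so the results should match.

```python
from scipy.stats import chi2

# values from the output discussed above (configural vs. metric model)
d0, d1 = 488, 444             # degrees of freedom: constrained, unconstrained
c0, c1 = 1.186, 1.105         # scaling correction factors
T0, T1 = 861.367, 890.242     # scaled (robust) model Chi-squares

cd = (d0 * c0 - d1 * c1) / (d0 - d1)   # scaling correction factor
TRd = (T0 * c0 - T1 * c1) / cd         # adjusted Chi-square difference
df = d0 - d1                           # difference in degrees of freedom
p = chi2.sf(TRd, df)                   # p-value (survival function = 1 - cdf)
print(round(TRd, 2), df, round(p, 4))  # 18.9 44 0.9997
```

As in the R session, the adjusted difference of about 18.9 on 44 degrees of freedom is far from significant.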
Gaussian covariance matrix basic concept
Why do they represent the covariance with 4 separate matrices? What happens if each entry becomes a matrix itself?

In this case the vectors ${\boldsymbol Y}$ and ${\boldsymbol \mu}$ are really block vectors. In the case of an $n$-dimensional ${\boldsymbol Y}$ vector we could expand it as follows: $$\boldsymbol Y= \begin{bmatrix} \color{blue}{Y_1} \\ \color{red}{Y_2} \end{bmatrix}=\begin{bmatrix}\color{blue}{Y_{11}\\Y_{12}\\\vdots\\ Y_{1h}}\\\color{red}{Y_{21}\\Y_{22}\\\vdots\\ Y_{2k}}\end{bmatrix}\tag{$n \times 1$}$$ showing the partition of the $n$ coordinates into two groups of size $h$ and $k$, respectively, such that $n = h + k$. A parallel illustration would immediately follow for the $\boldsymbol \mu$ vector of population means. The block matrix of covariances would hence follow as: $$\begin{bmatrix} \Sigma_{\color{blue}{11}} & \Sigma_{\color{blue}{1}\color{red}{2}}\\ \Sigma_{\color{red}{2}\color{blue}{1}} & \Sigma_{\color{red}{22}} \end{bmatrix} \tag {$n \times n$}$$ where $$\small\Sigma_{\color{blue}{11}}=\begin{bmatrix} \sigma^2({\color{blue}{Y_{11}}}) & \text{cov}(\color{blue}{Y_{11},Y_{12}}) & \dots & \text{cov}(\color{blue}{Y_{11},Y_{1h}}) \\ \text{cov}(\color{blue}{Y_{12},Y_{11}}) & \sigma^2({\color{blue}{Y_{12}}}) & \dots & \text{cov}(\color{blue}{Y_{12},Y_{1h}}) \\ \vdots & \vdots & & \vdots \\ \text{cov}(\color{blue}{Y_{1h},Y_{11}}) & \text{cov}(\color{blue}{Y_{1h},Y_{12}}) &\dots& \sigma^2({\color{blue}{Y_{1h}}}) \end{bmatrix} \tag{$h \times h$}$$ with $$\small \Sigma_{\color{blue}{1}\color{red}{2}}= \begin{bmatrix} \text{cov}({\color{blue}{Y_{11}}},\color{red}{Y_{21}}) & \text{cov}(\color{blue}{Y_{11}},\color{red}{Y_{22}}) & \dots & \text{cov}(\color{blue}{Y_{11}},\color{red}{Y_{2k}}) \\ \text{cov}({\color{blue}{Y_{12}}},\color{red}{Y_{21}}) & \text{cov}(\color{blue}{Y_{12}},\color{red}{Y_{22}}) & \dots & \text{cov}(\color{blue}{Y_{12}},\color{red}{Y_{2k}}) \\ \vdots & \vdots & & \vdots \\ 
\text{cov}({\color{blue}{Y_{1h}}},\color{red}{Y_{21}}) & \text{cov}(\color{blue}{Y_{1h}},\color{red}{Y_{22}}) & \dots & \text{cov}(\color{blue}{Y_{1h}},\color{red}{Y_{2k}}) \end{bmatrix}\tag{$h \times k$} $$ its transpose... $$\small \Sigma_{\color{red}{2}\color{blue}{1}}= \begin{bmatrix} \text{cov}({\color{red}{Y_{21}}},\color{blue}{Y_{11}}) & \text{cov}(\color{red}{Y_{21}},\color{blue}{Y_{12}}) & \dots & \text{cov}(\color{red}{Y_{21}},\color{blue}{Y_{1h}}) \\\text{cov}({\color{red}{Y_{22}}},\color{blue}{Y_{11}}) & \text{cov}(\color{red}{Y_{22}},\color{blue}{Y_{12}}) & \dots & \text{cov}(\color{red}{Y_{22}},\color{blue}{Y_{1h}}) \\ \vdots & \vdots & & \vdots \\ \text{cov}({\color{red}{Y_{2k}}},\color{blue}{Y_{11}}) & \text{cov}(\color{red}{Y_{2k}},\color{blue}{Y_{12}}) & \dots & \text{cov}(\color{red}{Y_{2k}},\color{blue}{Y_{1h}}) \end{bmatrix}\tag{$k \times h$} $$ and $$\small \Sigma_{\color{red}{22}}=\begin{bmatrix} \sigma^2({\color{red}{Y_{21}}}) & \text{cov}(\color{red}{Y_{21},Y_{22}}) & \dots & \text{cov}(\color{red}{Y_{21},Y_{2k}}) \\ \text{cov}(\color{red}{Y_{22},Y_{21}}) & \sigma^2({\color{red}{Y_{22}}}) & \dots & \text{cov}(\color{red}{Y_{22},Y_{2k}}) \\ \vdots & \vdots & & \vdots \\ \text{cov}(\color{red}{Y_{2k},Y_{21}}) & \text{cov}(\color{red}{Y_{2k},Y_{22}}) &\dots& \sigma^2({\color{red}{Y_{2k}}}) \end{bmatrix} \tag{$k \times k$}$$ These partitions come into play in proving that the marginal distributions of a multivariate Gaussian are also Gaussian, as well as in the actual derivation of marginal and conditional pdf's.
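As an illustrative sketch (the matrix below is hypothetical), the same partitioning can be done in NumPy by plain slicing, and one can verify that the off-diagonal blocks are transposes of each other:

```python
import numpy as np

# hypothetical 5x5 covariance matrix, partitioned with h = 2 and k = 3
Sigma = np.array([[2.0, 0.5, 0.3, 0.1, 0.0],
                  [0.5, 2.0, 0.2, 0.4, 0.1],
                  [0.3, 0.2, 3.0, 0.6, 0.2],
                  [0.1, 0.4, 0.6, 2.0, 0.5],
                  [0.0, 0.1, 0.2, 0.5, 1.0]])
h = 2
S11, S12 = Sigma[:h, :h], Sigma[:h, h:]   # (h x h), (h x k)
S21, S22 = Sigma[h:, :h], Sigma[h:, h:]   # (k x h), (k x k)

print(np.allclose(S21, S12.T))   # True: Sigma_21 is the transpose of Sigma_12
```

The diagonal blocks `S11` and `S22` are themselves the covariance matrices of the two sub-vectors, which is exactly what makes the marginal-distribution argument work.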
Why do we use expectation in reinforcement learning?
Because life is uncertain. We don't know what the future might hold. If we knew the future, we'd calculate the reward we'll receive for each possible action, and choose the best one -- but alas, no one can tell the future. Therefore, we can't be sure what the reward of each possible action will be, and we can't be sure which action is best. So, instead, for each action, we calculate the average of all possible rewards (weighted by their likelihood). Roughly speaking, this is our best guess at what the reward of the future might be, given the information available to us right now and the unavoidable uncertainty about the future. Then, we use that information to guide our decisions.
Why do we use expectation in reinforcement learning?
You can view this as a consequence of how we define "optimal" in most reinforcement learning applications: An optimal policy is that which maximizes expected discounted reward in a Markov decision process. MDPs are RL's core and longest-studied problem, making them a natural starting point. Though natural, this definition may not fit every application. Generalized MDPs replace the $\max\limits_a$ and $\mathop{\mathbb{E}}\limits_{s'}$ operators with other non-expansions. For example, replace $\mathop{\mathbb{E}}\limits_{s'}$ with $\min\limits_{s'}$, and you have a risk-sensitive MDP. Several standard planning and learning algorithms—value iteration, policy iteration, model-based RL and Q learning—can be generalized to work in this framework. (Szepesvári and Littman.)
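As a toy, hypothetical illustration of swapping the operator (made-up numbers, not an example from the cited paper): in a one-step decision problem, replacing the expectation over outcomes with a min yields a worst-case, risk-sensitive criterion, and the two criteria can disagree about which action is best.

```python
# each action maps to a list of (probability, reward) outcomes -- made-up numbers
actions = {
    "safe":  [(1.0, 1.0)],
    "risky": [(0.6, 10.0), (0.4, -7.5)],
}

def expected_value(outcomes):
    # risk-neutral criterion: probability-weighted average reward
    return sum(p * r for p, r in outcomes)

def worst_case(outcomes):
    # risk-sensitive criterion: replace E[.] over outcomes with min
    return min(r for _, r in outcomes)

greedy_expect = max(actions, key=lambda a: expected_value(actions[a]))
greedy_robust = max(actions, key=lambda a: worst_case(actions[a]))
print(greedy_expect, greedy_robust)   # risky safe
```

The expectation operator prefers the risky action (expected reward 3.0 vs. 1.0), while the min operator prefers the safe one: same machinery, different non-expansion.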
Why do we use expectation in reinforcement learning?
We use expectations because we want to optimize the long-term performance of our algorithms. This is the weighted sum of all possible outcomes multiplied by their probabilities — the expected reward.
Sampling from an Inverse Gamma distribution
This discrepancy arises because there are two different parameterizations of the Gamma distribution, and each relates differently to the Inverse Gamma distribution. On Wikipedia, the two parameterizations for the Gamma distribution are differentiated by using $(k,\theta)$ and $(\alpha, \beta)$. $$\text{If } X \sim \text{Gamma}(k, \theta) , \,\,\,\, f(x) = \dfrac{1}{\Gamma(k) \theta^k} x^{k-1} e^{-x/\theta}\,.$$ $$\text{If } X \sim \text{Gamma}(\alpha, \beta) , \,\,\,\, f(x) = \dfrac{\beta^{\alpha}}{\Gamma(\alpha)} x^{\alpha-1} e^{-x\beta}\,.$$ Here $\alpha$ and $k$ play exactly the same role in the pdfs, but $\theta$ and $\beta$ are different: $\theta$ is called the scale parameter and $\beta$ is called the rate parameter. The relation between the two is $\beta = 1/\theta$. If $X \sim \text{Gamma}(\alpha, \beta)$ where $\beta$ is the rate parameter, then $1/X \sim IG(\alpha, \beta)$. If $X \sim \text{Gamma}(k, \theta)$, where $\theta$ is the scale parameter, then $1/X \sim IG(k, 1/\theta)$. In both cases, the pdf of the IG is the same. If $Y \sim IG(\alpha, \beta)$, then the pdf of $Y$ is always $$f(y) = \dfrac{\beta^{\alpha}}{\Gamma(\alpha)} y^{-\alpha-1} e^{-\beta/y}.$$
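A quick numerical sanity check of the rate-parameterization rule (a sketch; note that NumPy's Gamma sampler is parameterized by shape and scale, so the rate $\beta$ enters as scale $= 1/\beta$):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta = 3.0, 2.0                 # shape and RATE of the Gamma

# numpy uses (shape, scale), so pass scale = 1/beta for a rate-beta Gamma
x = rng.gamma(shape=alpha, scale=1.0 / beta, size=200_000)
y = 1.0 / x                            # then y ~ InvGamma(alpha, beta)

# mean of IG(alpha, beta) is beta/(alpha - 1) = 1.0 for these values
print(y.mean())
```

The sample mean of the inverted draws should land very close to $\beta/(\alpha-1)$, confirming that inverting a rate-parameterized Gamma gives $IG(\alpha, \beta)$.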
How to select a range of dates in R? [closed]
Assuming you are using the Date class:

If you are using a data.frame:

myData[myData$myDate >= "1970-01-01" & myData$myDate <= "2016-06-27",]

And if you are using a data.table:

myData[myDate >= "1970-01-01" & myDate <= "2016-06-27"]
What selection criteria to use and why? (AIC, RMSE, MAPE) - All possible model selection for time series forecasting
The short answer is that there is no silver bullet, and the few selection criteria you have named are by far not the only ones (as I am sure you are aware). So let us start with the ones most commonly used for time series applications: the Bayesian–Schwarz criterion (BIC), the Akaike criterion (AIC), and the Hannan–Quinn criterion (HQC). The way these model selection criteria are used is to select the lag length of your model (i.e., how many periods of the past affect the present period). These criteria estimate the Kullback–Leibler divergence between your data and the model, and asymptotically select a true model. Notice how I said 'a' true model, because including superfluous lags asymptotically makes no difference (asymptotically, their coefficients will be estimated to be zero). It is noteworthy that AIC asymptotically selects a true model that strictly overfits, i.e. a model that is larger than the smallest true model; in machine-learning terminology, it is prone to overfitting. BIC and HQC, on the other hand, select the smallest true model asymptotically. They have the drawback of under-selecting in finite samples, which is why AIC is often preferred in applications. The main problem with (unpenalized) RMSE is that extending the lag length (i.e., including more lags as explanatory variables) will always yield a better value for RMSE: the in-sample fit cannot get worse by including more explanatory variables, and RMSE is a direct measure of fit. I don't know your exact application, but I feel that many practitioners would compare the optima under AIC, BIC, and HQC and justify their chosen lag length that way.
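As a minimal sketch of lag selection by information criteria (simulated AR(2) data and hand-rolled least squares rather than any particular time-series package; the constant terms of the Gaussian log-likelihood are dropped, since they cancel in comparisons):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# simulate an AR(2) process: y_t = 0.6*y_{t-1} - 0.3*y_{t-2} + e_t
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal()

max_lag = 5
Y = y[max_lag:]          # common estimation sample so criteria are comparable
m = len(Y)
results = {}
for p in range(1, max_lag + 1):
    X = np.column_stack([y[max_lag - i:-i] for i in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = np.sum((Y - X @ coef) ** 2)
    aic = m * np.log(rss / m) + 2 * p            # Gaussian AIC, constants dropped
    bic = m * np.log(rss / m) + np.log(m) * p    # Gaussian BIC, constants dropped
    results[p] = (aic, bic)

best_aic = min(results, key=lambda p: results[p][0])
best_bic = min(results, key=lambda p: results[p][1])
```

Because BIC's per-parameter penalty ($\log m$) exceeds AIC's (2), BIC never selects a longer lag length than AIC on the same data, illustrating its tendency toward smaller models.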
What selection criteria to use and why? (AIC, RMSE, MAPE) - All possible model selection for time series forecasting
Whether it makes sense to create a short-list of just n models or to select a single model depends a lot on how much the data favor the "best" (according to a chosen criterion) model. If a lot of models are "close together", then all of them are plausible models, and selecting a single one or a small subset when you have considered a lot of models is pretty problematic. For that reason model averaging is often a good idea, and for prediction tasks weights for each model based on $\text{prior weights} \times \exp\{ -0.5 |\text{AIC}_i - \min_j \text{AIC}_j| \}$ are popular. I guess that when one model or a small number of models get nearly all the weight (e.g. the best model is ahead of the next model in terms of AIC by, say, 10 to 15 or so, and not too many models were considered), it is probably reasonable to concentrate on those. AIC-based weights are popular for prediction because of the link between these and maximum likelihood estimation. For a very enthusiastic view of this type of approach, you could refer to Burnham and Anderson's "Model selection and multimodel inference". As mentioned in the other response, naive RMSE is not a justifiable option (although one could get a version adjusted for overfitting by using e.g. cross-validation), but other criteria (e.g. BIC etc.) are also potentially interesting, although my personal bias is towards AIC-type approaches.
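The weighting scheme above can be sketched in a few lines; the AIC values here are made up for illustration, and the prior weights are taken to be equal (so they drop out):

```python
import numpy as np

aic = np.array([102.3, 100.1, 100.4, 110.8])  # hypothetical AICs for 4 models
delta = aic - aic.min()                       # AIC differences to the best model
w = np.exp(-0.5 * delta)                      # unnormalized Akaike weights
w /= w.sum()                                  # normalize to sum to 1
print(w.round(3))
```

Here models 2 and 3 are "close together" and share most of the weight, while the model trailing by more than 10 AIC points receives essentially none, which is exactly the situation where averaging beats picking a single winner.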
When should the Pasting ensemble method be used instead of Bagging?
I am not an expert on the subject, but I think I have a sufficient answer: since pasting samples without replacement, each subset of the sample can be used at most once, which means that you need a big dataset for it to work. As a matter of fact, pasting was originally designed for large datasets, for when computing power is limited. Bagging, on the other hand, can use the same subsets many times, which is great for smaller sample sizes, where it improves robustness (in my experience). So, I think size is the major factor in making this decision. If your sample size is small, pasting isn't a real option. When it is an option, I would expect bagging to yield better cross-validation results almost always, but pasting might prove better in external validations (i.e. real-life predictions), as it reaches its conclusion by aggregating predictions from practically independent datasets.
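The sampling difference at the heart of this can be made concrete with a small sketch (illustrative NumPy code, not tied to any particular library's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)
data = np.arange(10)  # a toy 'training set' of 10 item indices

# bagging: draw WITH replacement -> items may repeat, some are left out
boot = rng.choice(data, size=10, replace=True)

# pasting: draw WITHOUT replacement -> every item appears at most once
paste = rng.choice(data, size=10, replace=False)

print(sorted(paste.tolist()))   # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Because pasting never reuses an item within a subset, its subsets exhaust the data quickly, which is why it needs a large dataset, whereas the bootstrap can keep drawing overlapping subsets from a small one.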
When should the Pasting ensemble method be used instead of Bagging?
Bagging means using the same training algorithm for every predictor, but training each on a different random subset of the training set. When sampling is performed with replacement, this method is called bagging (short for bootstrap aggregating). When sampling is performed without replacement, it is called pasting. In other words, both approaches are similar. In both cases you are sampling the training data to build multiple instances of a classifier. In both cases a training item could be sampled and used to train multiple instances in the collection of classifiers that is produced. In bagging, it is possible for a training sample to be sampled multiple times in the training for the same predictor. This type of bootstrap aggregation is a type of data enhancement, and it is used in other contexts as well in ML to artificially increase the size of the training set. Computationally, bagging and pasting are very attractive because in theory and in practice all of the classifiers can be trained in parallel. Thus if you have a large number of CPU cores, or even a distributed-memory computing cluster, you can independently train the individual classifiers all in parallel.

scikit-learn

Using scikit-learn for performing bagging and/or pasting is relatively simple. As with the voting classifier, we specify which type of classifier we want to use. But since bagging/pasting trains multiple classifiers all of this type, we only have to specify one. The n_jobs parameter tells scikit-learn the number of CPU cores to use for training and predictions (-1 tells scikit-learn to use all available cores). The following trains an ensemble of 500 decision tree classifiers (n_estimators), each trained on 100 training instances randomly sampled from the training set with replacement (bootstrap=True). If you want to use pasting, simply set bootstrap=False instead.
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier

bagging_clf = BaggingClassifier(
    DecisionTreeClassifier(max_leaf_nodes=20),
    n_estimators=500,
    max_samples=100,
    bootstrap=True,
    n_jobs=-1
)
bagging_clf.fit(X_train, y_train)
y_pred = bagging_clf.predict(X_test)
45,730
Does the sign of eigenvectors matter? [duplicate]
No, there is no difference. Notice that if $v$ is an eigenvector of $A$ with eigenvalue $\lambda$ and $\alpha$ is a scalar, then $$ A \alpha v = \alpha A v = \lambda \alpha v $$ and thus $\alpha v$ is also an eigenvector with eigenvalue $\lambda$. Since $\alpha$ is any scalar, if you let $\alpha = -1$ then you see that $v$ being an eigenvector implies $-v$ is an eigenvector. So there is no mathematical difference between which "scaling" of the eigenvector you choose ($\alpha$ just scales the eigenvector and flips it). Note: Normally one chooses the normalized eigenvector (norm = 1) but even then that doesn't account for the "flipping".
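A quick numerical check of this (plain Python; the matrix and eigenpair below are just an illustrative example): for $A$ with known eigenpair $(\lambda, v)$, every nonzero scaling $\alpha v$, including $-v$, satisfies the eigenvalue equation.

```python
def matvec(A, v):
    # multiply matrix A (a list of rows) by vector v
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def scale(c, v):
    return [c * x for x in v]

A = [[2.0, 1.0],
     [1.0, 2.0]]
lam, v = 3.0, [1.0, 1.0]        # a known eigenpair of A

for alpha in (1.0, -1.0, 2.5):  # any nonzero scaling works, including -1
    w = scale(alpha, v)
    assert matvec(A, w) == scale(lam, w)
```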
45,731
Verifying that a random generator outputs a uniform distribution
UPDATE 2: struck out wrong points, and replaced some with material I think is correct. UPDATE 1: added direct analysis of both your uniformity tests, and kept my old answer as a uniformity test that I propose as superior. Summary Both you and your student have offered uniformity tests that work. Both of your tests don't work well with continuous numbers (i.e. if the RNG spits out numbers in $[1,k]$ instead of $\{1,2,\ldots,k\}$). Since the output of your uniformity score is not a probability, I think it has the disadvantage of being difficult to interpret. Since your student's uniformity score is a probability, I think it has the advantage of being easy to interpret. While not harmful to the correctness of the test, your student's uniformity test has one aspect that is useless (but also harmless; just redundant), namely: there is no point in adding $t$ on the right side of the inequality, as the output is a probability. One small error in your student's uniformity test is using $<$ instead of $\le$ for the upper bound. I think he/she should have used $\le$. On the other hand, I think my proposed test has these properties: Unlike your tests, mine works with continuous numbers. Like your student's test, mine is also easy to interpret because it's a probability as well. The disadvantage is that my test is possibly harder to compute. But this is not a big deal, as I think the correctness of the test is worth the slight increase in difficulty. Analysing your uniformity tests Your notation confused me a bit. I assume that you mean that $h$ is a histogram, and $h_i$ is the frequency associated with value $i$, where $i$ is some number between $1$ and $k$ (which your RNG produces).
So to rewrite your test, I think you wanted to say this: \begin{equation} \text{teacher uniformity score} = \sum_{i \in \{1,2,\ldots,k\}} \left(h_i - \frac{n}{k}\right)^2 \end{equation} Then the uniformity test: the RNG is uniformly distributed if the uniformity score is less than some threshold $t_{teacher}$ that we agree upon. I.e. the output from the RNG is uniformly distributed if the following statement is true: \begin{equation} \text{teacher uniformity score} < t_{teacher} \end{equation} I see why your method makes sense. Basically, your score is essentially the sum of squared errors of observed frequencies against expected frequencies, where "expected" is what should happen if the RNG is perfectly uniform. So the smaller the sum of squared errors, the more uniform it is. So my opinion about your test is this: Pros: it does have merit, and it does reflect the degree of uniformity of the RNG. Cons: it is not easy to interpret because the output is not a probability. If you augment it such that the output is a probability, then it would be easier to interpret. At least I can't interpret it easily. This interpretability is a subjective thing. It is not friendly with continuous RNGs. You can't use histograms with continuous numbers without approximating numbers into bins, which essentially deletes information and opens a vulnerability that might mask the non-uniformity of some poor RNGs. We need to look at the probability density functions (PDFs) instead. What you wrote about what your student claimed is also confusing, but I rewrite it to what I think is the closest thing that makes the best sense in my view (kindly correct me if you think the student didn't say this): \begin{equation} \text{student uniformity score} = \sum_{i \in \{1,2,\ldots,k\}} \frac{h_i}{n/k} \end{equation} This is essentially the ratio between the observed histogram and the expected histogram (expected being the perfectly uniform one). Clearly, if the ratio is 1 then it's perfectly uniform.
But if it isn't (i.e. greater or less than 1) then it is not perfectly uniform. So your student is suggesting that the closer the ratio is to 1, the more uniform it is. So $t_{student}$ is essentially some kind of error term that accounts for deviations from 1: \begin{equation} 1-t_{student} \le \text{student uniformity score} \le 1+t_{student} \end{equation} My opinion about your student's uniformity test is as follows: Pros: It does have merit for the same reason yours does. It is easy to interpret (thanks to it being a probability). This is subjective, though; I think most humans would agree with me (correct me if you disagree). Cons: It doesn't work well with continuous RNGs, for the same reason yours doesn't. My test, which I think is superior to both of your tests Based on the question, you only seem to be interested in finding whether a sequence is uniformly distributed, i.e. it doesn't need to satisfy any other property. If your PRNG outputs some float, e.g. something in $[0,1]$, I would personally suggest doing this (which seems somewhat similar to your student's suggestion? I don't know..): Repeat the PRNG $n$ times, where $n$ is large enough. Estimate the true probability density function $f_X$. Let $\hat f_X$ be your best estimate. Then your sequence is perfectly uniformly distributed if $\hat f_X(x)$ forms a line with a slope of 0. This is usually unachievable in reality. So you need to define a degree of uniformity that, if satisfied, lets you subjectively declare that the PRNG is uniformly distributed. To do this, I would personally suggest defining the null hypothesis: a sequence is uniform if the probability of each number to appear is $1/n$. You then compute the probability based on your empirical trials. Finally, you do some variant of Fisher's exact test to find the probability that your sequence could exist if the null hypothesis were true. Once you get that probability, that is where you plug in your threshold.
You and your student need to agree on this threshold. It's subjective. In this specific approach, I'd expect you to agree on a very large probability that is very close to $1$. Maybe $0.999$, or whatever you and your student are happy with. Or maybe do the opposite: define the null hypothesis as: your sequence is not uniform, and the probability of each number to appear is not $1/n$. Then, you need to choose a very small threshold in order to reject it.
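For reference, both scores under discussion take only a few lines of plain Python to compute (function names are mine). One caveat I'd add: the student's sum of ratios, as literally written, always equals $k$ regardless of the sample, so the $1 \pm t$ band is only meaningful when applied per bin; the sketch below therefore returns the per-bin ratios.

```python
from collections import Counter

def teacher_score(sample, k):
    # sum of squared deviations of observed counts from the expected n/k
    n = len(sample)
    h = Counter(sample)
    return sum((h.get(i, 0) - n / k) ** 2 for i in range(1, k + 1))

def student_ratios(sample, k):
    # per-bin ratios of observed to expected frequency;
    # all equal to 1 under perfect uniformity
    n = len(sample)
    h = Counter(sample)
    return [h.get(i, 0) / (n / k) for i in range(1, k + 1)]
```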
45,732
Verifying that a random generator outputs a uniform distribution
Note that we cannot conclude on the basis of only a sample that data is drawn from a uniform, only that it is consistent with having come from a uniform (sufficiently small deviations will be undetectable at any given sample size), or that it is not consistent. Since the request is for a test rather than simply a measure, presumably it is goodness of fit of uniformity that is the principal object. There are a host of tests of uniformity that focus on different test statistics, and offer different power against various alternatives. It often makes sense to focus on the known ones instead of casually inventing new ones, because their characteristics are understood and (in particular) the kind of situations where they have good power will often be known. I discuss some possible choices of statistic here. Note that if your observations are an iid sample from a uniform, then the histogram counts will have a multinomial distribution. There are effectively two suggested test statistics in your question. Setting $E_i = n/k$, and slightly rearranging the second: $T_1=\sum_i {(h_i-E_i)^2}$ $T_2=\max_i|\frac{h_i}{E_i}-1|$ Since your $E_i$ is constant over $i$ ($=E$, say), the first statistic is a scaled chi-squared test statistic ($T_1=E\cdot X^2$). So critical values (marking the boundary of the rejection region, i.e. your "t") can be determined quite readily. Note further that 2. is equivalent to using the largest absolute Pearson residual (for either a Poisson or multinomial model) as a test statistic, since $\sqrt{E}\cdot T_2 = \max_i|\frac{h_i-E_i}{\sqrt{E_i}}|$ - noting further that $\sqrt{\frac{E}{1-p}}\cdot T_2 = \max_i|\frac{h_i-E_i}{\sqrt{E_i(1-p_i)}}|$ - where $p_i=p=1/k$. I have suggested$^\dagger$ using just a statistic equivalent to $T_2$ as a quick visual test when testing roleplaying dice for uniformity (fair dice generate discrete uniforms).
This is particularly suitable when the alternative is for a single discrepant bin to have a substantially higher or lower proportion than expected but where the remaining bins are uniformly sharing the rest of the probability. $^\dagger$ - see the dashed and dotted grey lines in the last two plots there, which mark two different choices for $t$ with different type I error rates. If you expect (or are most interested in being able to pick up) particular kinds of deviations, other statistics may be better than either of these.
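The scaling identity $T_1 = E \cdot X^2$ is easy to confirm numerically (a plain-Python sketch; the function name and example counts are mine):

```python
def uniformity_stats(counts):
    # counts: histogram h_1..h_k; under uniformity E_i = n/k for every bin
    n = sum(counts)
    k = len(counts)
    E = n / k
    T1 = sum((h - E) ** 2 for h in counts)      # sum of squared deviations
    T2 = max(abs(h / E - 1) for h in counts)    # largest relative deviation
    X2 = sum((h - E) ** 2 / E for h in counts)  # Pearson chi-squared statistic
    return T1, T2, X2, E

T1, T2, X2, E = uniformity_stats([12, 8, 11, 9])
assert abs(T1 - E * X2) < 1e-12   # the scaling identity T1 = E * X^2
```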
45,733
Verifying that a random generator outputs a uniform distribution
It's like saying $a$ is close to $b$ when $a-b$ is 0 or when $\frac{a}b$ is 1. I don't see why one should be better than the other.
45,734
Verifying that a random generator outputs a uniform distribution
A test that is actually used in testing the R random number generators is based on the cumulative distribution function (rather than the histogram or density), with test statistic $$d=\max_t\left\{\left|\frac{\sum_i \mathbf{1}(X_i\leq t)}{n}-F(t)\right|\right\}$$ This test takes advantage of Massart's inequality, $$P(\sup | F_n - F | > \epsilon) < 2 \exp(-2n\epsilon^2),$$ which gives a bound on the tail probability of $d$ that holds for all continuous and non-continuous $F$, and all $n$ and $\epsilon$.
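A sketch of this statistic and its Massart-derived critical value (plain Python; the function names are mine). Solving $2\exp(-2n\epsilon^2)=\alpha$ gives the level-$\alpha$ threshold $\epsilon=\sqrt{\ln(2/\alpha)/(2n)}$.

```python
import math

def dkw_statistic(sample, cdf):
    # d = sup_t |F_n(t) - F(t)|; for a right-continuous ECDF the supremum
    # is attained at the sample points (check just below and at each one)
    n = len(sample)
    xs = sorted(sample)
    return max(max(abs((i + 1) / n - cdf(x)), abs(i / n - cdf(x)))
               for i, x in enumerate(xs))

def massart_threshold(n, alpha=0.05):
    # smallest eps with 2 * exp(-2 * n * eps^2) <= alpha
    return math.sqrt(math.log(2 / alpha) / (2 * n))
```

For a U(0,1) check, pass `cdf = lambda x: min(max(x, 0.0), 1.0)` and reject uniformity when the statistic exceeds the threshold.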
45,735
What can be inferred from this residual plot?
The person who produced that plot made a mistake. Here's why. The setting is ordinary least squares regression (including an intercept term), which is where responses $y_i$ are estimated as linear combinations of regressor variables $x_{ij}$ in the form $$\hat y_i = \hat\beta_0 + \hat \beta_1 x_{i1} + \hat\beta_2 x_{i2} + \cdots + \hat\beta_p x_{ip}.$$ By definition, the residuals are the differences $$e_i = y_i - \hat y_i.$$ The plot of $(\hat y_i, e_i)$ in the question shows a strong, consistent linear relationship. In other words, there are numbers $\hat\alpha_0$ and $\hat\alpha_1$--which we can find by fitting a line to the points in that plot--for which the values $$f_i = e_i - (\hat\alpha_0 + \hat\alpha_1 \hat y_i)$$ are much closer to $0$ than the $e_i$ (in the sense of having much smaller sums of squares). But this says nothing other than that the revised estimates $$\eqalign{ \hat {y}_i^\prime &= \hat {y}_i + \hat\alpha_0 + \hat\alpha_1 \hat y_i \\ &= \left(\hat\alpha_0 + (1+\hat\alpha_1)\hat\beta_0\right) + \left((1+\hat\alpha_1)\hat\beta_1\right) x_{i1} + \cdots + \left((1+\hat\alpha_1)\hat\beta_p\right) x_{ip}\tag{1} }$$ are better, in the least squares sense, than the original estimates, because their residuals are $$y_i - \hat{y}_i^\prime = e_i - (\hat\alpha_0 + \hat\alpha_1 \hat y_i) = f_i.$$ But this is not possible, because in $(1)$, $\hat y_i^\prime$ has been written explicitly as a linear combination of the original regressors: if it truly had a smaller sum of squared residuals, the original fit could not have been the least squares solution. This result is worth calling a theorem: Theorem: The least squares slope of the residual-vs-predicted plot in an Ordinary Least Squares model is always zero. Residual plots like that in the question can arise only when a different model is used. The two most common situations are (1) when the model includes no intercept and (2) the model is not linear.
The mechanism in (1) becomes evident when you look at an example: Because the model did not include an intercept, the fitted line must pass through $(0,0)$. Since the data points follow a strong linear trend that does not pass through $(0,0)$, the model is poor, the fit is bad, and the best that can be done is to pass the fitted line through the barycenter of the data points. The trend in the residual plot is precisely the difference between the slope of the data points and the slope of the red line at the left. In this case, contrary to what your reference states, a linear model is definitely valid. The only problem is that this fit failed to include an intercept term. You may try this example out for yourself by varying the parameters in the R code that produced the figures.

set.seed(17)
x <- seq(15, 6, length.out=50)             # Specify the x-values
y <- -20 + 4 * x + rnorm(length(x), sd=2)  # Generate y-values with error
fit <- lm(y ~ x - 1)                       # Fit a no-intercept model
par(mfrow=c(1, 2))                         # Prepare for two plots
plot(x, y, xlim=c(0, max(x)), ylim=c(0, max(y)), pch=16, main="Data and Fit")
abline(fit, col="Red", lwd=2, lty=3)
plot(fit, which=1, pch=16, add.smooth=FALSE)  # Residual-vs-predicted plot
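The zero-slope theorem stated above can also be checked directly in a few lines: fit OLS with an intercept, then regress the residuals on the fitted values; the slope comes out zero to machine precision. A plain-Python sketch (simple linear regression on made-up data; helper names are mine):

```python
def ols_fit(x, y):
    # simple linear regression with intercept, via the normal equations
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
          / sum((xi - mx) ** 2 for xi in x))
    b0 = my - b1 * mx
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.3, 3.8, 6.1, 8.2, 9.6]
b0, b1 = ols_fit(x, y)
fitted = [b0 + b1 * xi for xi in x]
resid = [yi - fi for yi, fi in zip(y, fitted)]

_, slope = ols_fit(fitted, resid)   # slope of the residual-vs-predicted plot
assert abs(slope) < 1e-9            # zero, up to floating-point error
```

Had the fit omitted the intercept, as in the R example above, this slope would generally be nonzero.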
What can be inferred from this residual plot?
The person who produced that plot made a mistake. Here's why. The setting is ordinary least squares regression (including an intercept term), which is where responses $y_i$ are estimated as linear co
What can be inferred from this residual plot?
The person who produced that plot made a mistake. Here's why. The setting is ordinary least squares regression (including an intercept term), in which responses $y_i$ are estimated as linear combinations of regressor variables $x_{ij}$ in the form $$\hat y_i = \hat\beta_0 + \hat \beta_1 x_{i1} + \hat\beta_2 x_{i2} + \cdots + \hat\beta_p x_{ip}.$$ By definition, the residuals are the differences $$e_i = y_i - \hat y_i.$$ The plot of $(\hat y_i, e_i)$ in the question shows a strong, consistent linear relationship. In other words, there are numbers $\hat\alpha_0$ and $\hat\alpha_1$--which we can find by fitting a line to the points in that plot--for which the values $$f_i = e_i - (\hat\alpha_0 + \hat\alpha_1 \hat y_i)$$ are much closer to $0$ than the $e_i$ (in the sense of having much smaller sums of squares). But this says nothing other than that the revised estimates $$\eqalign{ \hat {y}_i^\prime &= \hat {y}_i + \hat\alpha_0 + \hat\alpha_1 \hat y_i \\ &= \left((1+\hat\alpha_1)\hat\beta_0 + \hat\alpha_0\right) + \left((1+\hat\alpha_1)\hat\beta_1\right) x_{i1} + \cdots + \left((1+\hat\alpha_1)\hat\beta_p\right) x_{ip}\tag{1} }$$ are better, in the least squares sense, than the original estimates, because their residuals are $$y_i - \hat{y}_i^\prime = e_i - (\hat\alpha_0 + \hat\alpha_1 \hat y_i) = f_i.$$ But this is not possible: in $(1)$, $\hat y_i^\prime$ has been written explicitly as a linear combination of the original regressors, so if its residuals really had a smaller sum of squares, the original fit could not have been the least squares solution. This result is worth calling a theorem: Theorem: The least squares slope of the residual-vs-predicted plot in an Ordinary Least Squares model is always zero. Residual plots like that in the question can arise only when a different model is used. The two most common situations are (1) when the model includes no intercept and (2) when the model is not linear.
The mechanism in (1) becomes evident when you look at an example: Because the model did not include an intercept, the fitted line must pass through $(0,0)$. Since the data points follow a strong linear trend that does not pass through $(0,0)$, the model is poor, the fit is bad, and the best that can be done is to pass the fitted line through the barycenter of the data points. The trend in the residual plot is precisely the difference between the slope of the data points and the slope of the red line at the left. In this case, contrary to what your reference states, a linear model is definitely valid. The only problem is that this fit failed to include an intercept term. You may try this example out for yourself by varying the parameters in the R code that produced the figures.
set.seed(17)
x <- seq(15, 6, length.out=50)             # Specify the x-values
y <- -20 + 4 * x + rnorm(length(x), sd=2)  # Generate y-values with error
fit <- lm(y ~ x - 1)                       # Fit a no-intercept model
par(mfrow=c(1,2))                          # Prepare for two plots
plot(x, y, xlim=c(0, max(x)), ylim=c(0, max(y)), pch=16, main="Data and Fit")
abline(fit, col="Red", lwd=2, lty=3)
plot(fit, which=1, pch=16, add.smooth=FALSE)  # Residual-vs-predicted plot
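The theorem is easy to check numerically (a sketch with synthetic data, not part of the original answer): fit OLS with an intercept, then regress the residuals on the fitted values; the slope comes out as zero up to floating-point error.

```python
import numpy as np

# Synthetic data for an OLS fit that includes an intercept term.
rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 3.0 - 2.0 * x + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x])          # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares coefficients
fitted = X @ beta
resid = y - fitted

# Least-squares slope of residuals regressed on fitted values
slope = np.cov(resid, fitted, bias=True)[0, 1] / np.var(fitted)
print(slope)  # numerically zero
```

The slope vanishes because the residuals are orthogonal to the column space of the design matrix, which contains the fitted values.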
45,736
Downsides of inverse Wishart prior in hierarchical models
Here are some relevant resources (full disclosure: the first link is to a paper of mine): http://newprairiepress.org/agstatconference/2014/proceedings/8 http://www.themattsimpson.com/2012/08/20/prior-distributions-for-covariance-matrices-the-scaled-inverse-wishart-prior/ http://andrewgelman.com/2012/08/29/more-on-scaled-inverse-wishart-and-prior-independence/ A prior over a covariance matrix can be considered as a joint prior over the variances, i.e. the diagonals of the covariance matrix, and the correlations, i.e. the off-diagonal elements divided by the square root of the product of the corresponding diagonal elements. In my opinion, the problems with an IW prior are: (1) the uncertainty for all variance parameters is controlled by the single degrees-of-freedom parameter; (2) the marginal distribution for each variance is an inverse gamma (the IW(7, I) implies a marginal IG(1, 1/2)), which has a region near zero with extremely low density, causing a bias toward larger variances when the true variance is small; and (3) there is a prior dependency between the variances and correlations, such that large variances are associated with correlations near $\pm 1$ while small variances are associated with correlations near zero. Thus, when the true variance is small, the correlation will be estimated to be near zero regardless of its true value, and this bias remains even for relatively large sample sizes. From your description, problem (1) is not so relevant, but problems (2) and (3) could be. Although using more sophisticated priors will resolve these issues, a pragmatic solution is to think more carefully about the scale matrix in the IW prior. Instead of using the identity matrix, use a diagonal matrix whose elements are reasonable given the data you are analyzing. Alternatively, you could perform a prior sensitivity analysis by trying scale matrices of the form $\epsilon$ times the identity matrix.
The above discussion primarily focused on the issues of using an inverse Wishart distribution for any covariance matrix. There is an additional concern when using the inverse Wishart as the prior for a hierarchical covariance matrix. In Gelman (2006), the inverse gamma is shown to be informative for a hierarchical variance and this issue will carry over to an inverse Wishart on a hierarchical covariance matrix. The suggestion in that paper is to use half-Cauchy distributions (or uniforms) on the hierarchical standard deviations. If you separately define priors for the hierarchical standard deviations and correlations, then you will still need a prior over a correlation matrix, e.g. the LKJ prior. So yes, I think you really need to think carefully about this prior and perform a sensitivity analysis to determine how impactful the prior is. With enough data, the likelihood should be able to overwhelm the prior, but it is unclear how much data is enough.
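As a small illustration of the first problem listed above (a sketch using scipy's inverse-Wishart parameterization; the dimension, degrees of freedom, and sample size here are arbitrary choices, not from the text): under an IW(df, Ψ) prior in dimension p, the prior mean is Ψ/(df − p − 1), so with Ψ = I the prior scale of every variance is pinned down by df alone.

```python
import numpy as np
from scipy.stats import invwishart

# Draws from an inverse-Wishart prior IW(df, I) in dimension p.
# Their mean is scale / (df - p - 1), so the single df parameter
# controls the prior scale of all variances simultaneously.
p, df = 2, 7
draws = invwishart.rvs(df=df, scale=np.eye(p), size=50_000, random_state=1)

mean_diag = draws[:, 0, 0].mean()
print(mean_diag)  # close to 1 / (df - p - 1) = 0.25
```

Repeating this with different scale matrices (e.g. ε·I) is one way to run the sensitivity analysis suggested above.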
45,737
Maximum Likelihood Formulation for Linear Regression
In ordinary least squares regression the goal is to model the conditional expectation: $$ E[y_i|x_i] = x_i'\beta $$ $y_i$ and $x_i$ are referred to as the dependent and independent variables respectively because we are literally conditioning $y_i$ on $x_i$. Ordinary least squares is equivalent to maximum likelihood where we assume $$ y_i|x_i \stackrel{iid}{\sim} N(x_i'\beta,\sigma^2) $$ In this instance the $x_i$ are taken as fixed values (we are not calling $x_i$ a random variable and giving it a probability distribution), meaning that the "data", $\mathcal{D}$, is just the set of $y_i$'s: $$\mathcal{D} \equiv \{y_1,\ldots,y_n\}$$ So writing $$ p(\mathcal{D} | \theta) = \prod_{i=1}^n p(y_i | x_i, \theta) $$ where $\theta \equiv \{\beta,\sigma\}$ is actually correct. The likelihood $p(\mathcal{D} | \theta) = \prod_{i=1}^n p(y_i , x_i | \theta)=\prod_{i=1}^n p_y(y_i | x_i, \theta)p_x(x_i|\theta)$, on the other hand, treats the $x_i$ as random variables which, although applicable in some settings, is not linear regression in the traditional sense.
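A minimal sketch of the equivalence (synthetic data, not from the text): with the $x_i$ held fixed, maximizing $\prod_i p(y_i\mid x_i,\theta)$ over $\beta$ is the same as minimizing the sum of squared residuals, so the least-squares solution is the Gaussian MLE.

```python
import numpy as np

# The Gaussian log-likelihood prod_i N(y_i | x_i' beta, sigma^2) is
# maximized over beta exactly by the least-squares solution, because
# maximizing it at fixed sigma^2 means minimizing the SSR.
rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)

def loglik(beta, sigma2):
    r = y - X @ beta
    return -0.5 * n * np.log(2 * np.pi * sigma2) - 0.5 * r @ r / sigma2

beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
sigma2_mle = np.sum((y - X @ beta_ols) ** 2) / n  # MLE of sigma^2

# Any perturbation of beta strictly lowers the likelihood
ll_hat = loglik(beta_ols, sigma2_mle)
ll_alt = loglik(beta_ols + np.array([0.1, -0.1]), sigma2_mle)
print(ll_hat > ll_alt)  # True
```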
45,738
$p$-value for non-standard asymptotics
Assuming you know the $\lambda_i$, simulation is feasible. Consider
library(MASS)
k <- 3
lambda <- c(.2, .3, .4)  # pick your lambdas here
reps <- 100000
distr <- rep(NA, reps)
for (i in 1:reps){
  distr[i] <- sum(lambda * rchisq(k, 1))
}
distr <- sort(distr)
teststat <- 2  # pick your test statistic here
pvalue <- which.min(abs(teststat - distr))/reps  # assuming a left-tailed test
So effectively, we "plug" the test statistic teststat into the empirical cdf: we find the proportion of simulated realizations below the test statistic, which, for reps large, precisely estimates the probability that a random variable from the null distribution takes a value less than the test statistic (we consider a left-tailed test here, with obvious modifications for other alternatives) - i.e., the $p$-value.
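For a sanity check, here is the same simulation sketched in Python (not part of the answer; the numbers are arbitrary): with $\lambda = (1,1,1)$ the weighted sum is exactly a $\chi^2_3$ variable, so the simulated left-tail $p$-value can be compared against the analytic CDF.

```python
import numpy as np
from scipy.stats import chi2

# Simulate the null distribution of sum_i lambda_i * chi2_1 and read a
# left-tail p-value off the empirical CDF. With lambda = (1, 1, 1) the
# sum is exactly chi-squared with 3 df, giving an analytic check.
rng = np.random.default_rng(0)
lam = np.array([1.0, 1.0, 1.0])
reps = 200_000
draws = rng.chisquare(1, size=(reps, lam.size)) @ lam

teststat = 2.0
pvalue = np.mean(draws <= teststat)      # empirical CDF at the test statistic
print(pvalue, chi2.cdf(teststat, df=3))  # the two should agree closely
```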
45,739
$p$-value for non-standard asymptotics
There are two useful approximations and at least three computations that would be exact with infinite precision arithmetic. Let's call the distribution $Q(\lambda)$. And write $\bar\lambda$ for the mean of $\lambda$ and $\tau$ for the mean of $\lambda^2$. The two approximations are implemented in pchisqsum in the R survey package. The Satterthwaite approximation is overwhelmingly the most common way this distribution is evaluated in practice. It approximates $Q(\lambda)$ by $a\chi^2_d$ where $a$ and $d$ are chosen to get the right mean and variance. Specifically, $a=\tau/\bar\lambda$, and $d=k\bar\lambda^2/\tau$. Until you get far out in the right tail, the Satterthwaite approximation is far more accurate than it has any right to be. Also, in the common scenario where the $\lambda$ are eigenvalues of a matrix, you don't need the eigendecomposition: you can compute the Satterthwaite approximation in $O(k^2)$ time for general matrices and faster for specially structured matrices. The saddlepoint approximation is less accurate for modest tail probabilities, but much more accurate for small ones -- it has uniformly bounded relative error, and the error decreases as $k$ increases. It's the only one that works for very small tail probabilities with ordinary double-precision arithmetic. There are two fairly old computational methods that work well. These are both implemented in the CompQuadForm package for R. They both get catastrophic rounding error as the right tail probability approaches machine epsilon and they slow down for large $k$. Farebrother's method represents the probability as an infinite series in Beta functions. It requires the $\lambda$s to be positive, and for large $k$ the biggest one can't be that much larger than the rest.
You might think negative $\lambda$ isn't important, but it lets you do the same trick with $F$ distributions having the same denominator. Davies's method takes advantage of the fact that you can just write down the characteristic function, which can then be inverted by numerical integration. There's a third computational method, due to Bausch, that has very good error/effort bounds in extreme settings as long as you have arbitrary precision arithmetic. He invented it for a problem in string theory. It really needs multiple-precision arithmetic. There are also some improvements on the Satterthwaite approximation matching more than two moments. In my opinion these aren't terribly appealing: if you have all the $\lambda$s you might as well use Davies's or Farebrother's methods. If $k$ is large and you only have the matrix whose eigenvalues are the $\lambda$s, these methods are no faster than a full eigendecomposition. Finally, there is the leading-eigenvalue approximation: when $k$ is large, approximate the sum as $\left(\sum_{i<m}\lambda_i Z_i\right)+a_m\chi^2_{d_m}$, where the last term is a Satterthwaite approximation with the $k-m$ smallest eigenvalues. A student and I reviewed these in the large $k$ case; we have a blog post and a paper.
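The moment matching behind the Satterthwaite approximation can be verified directly (a sketch, not from the post; the $\lambda$ values are arbitrary): with $a=\tau/\bar\lambda$ and $d=k\bar\lambda^2/\tau$, the approximating $a\chi^2_d$ reproduces the mean $\sum_i\lambda_i$ and variance $2\sum_i\lambda_i^2$ of $Q(\lambda)$.

```python
import numpy as np

# Satterthwaite parameters: a = tau / lambda_bar, d = k * lambda_bar^2 / tau.
# Check that E[a chi2_d] = a*d equals sum(lambda) and that
# Var[a chi2_d] = 2 a^2 d equals 2*sum(lambda^2).
lam = np.array([0.2, 0.3, 0.4])
k = lam.size
lam_bar = lam.mean()      # mean of the lambdas
tau = np.mean(lam ** 2)   # mean of the squared lambdas

a = tau / lam_bar
d = k * lam_bar ** 2 / tau

print(a * d, lam.sum())                   # matched means
print(2 * a**2 * d, 2 * np.sum(lam**2))   # matched variances
```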
45,740
Why does a regression tree not split based on variance?
First off, let me just say that there are plenty of metrics that could be used to determine a split in the Regression Tree (that's an altogether different question) but the criterion you mentioned--minimizing the sum of squares--is definitely the most popular. With respect to your second question (i.e. why we don't divide by the number of points in a region), here is an example that might give insight into the potential problems related to splitting based on the minimum sum of variances:
library(MASS)
data(Boston)
tss <- function(x){
  if (length(x) == 0){
    return(0)
  }
  sum((x - mean(x))^2)
}
ss <- tss(Boston$medv)
new.ss <- ss
which.s <- NA
for (i in Boston$lstat){
  mask <- Boston$lstat <= i
  new.y1 <- Boston$medv[mask]
  new.y2 <- Boston$medv[!mask]
  temp.ss <- tss(new.y1) + tss(new.y2)
  if (temp.ss < new.ss){
    new.ss <- temp.ss
    which.s <- i
  }
}
plot(medv ~ lstat, data=Boston,
     col=ifelse(Boston$lstat < which.s, 'red', 'darkblue'))
Now, try switching sum to mean in the tss function and re-run this entire block of code. Notice how different the resulting plot is?
45,741
Why does a regression tree not split based on variance?
So the best split offers the best gain. If the loss function is the sum of squares, the gain function could look like: $\sigma^2 = \frac{\sum{(y_i-\overline{y})^2}}{N} = SS/N$ (SS is the sum of squares), $gain = \frac{SS_{parent}}{n_{parent}} - w_l \frac{SS_{left}}{n_{left}} - w_r \frac{SS_{right}}{n_{right}}$ with $w_l=n_{left}/n_{parent}, \ w_r=n_{right}/n_{parent}$, weighting the daughter nodes by their relative size. Since $\frac{SS_{parent}}{n_{parent}} = k$ is always the same for any split of the parent, maximizing the gain amounts to $argmin: SS_{left} + SS_{right}$ over all splits of the parent node. This simplified cost function yields the same ranking of splits and is much faster to compute. It is possible to perform a rolling SS computation such that SS does not have to be fully recalculated at each split. Instead, $SS_{left}$ and $SS_{right}$ are iteratively adjusted by moving one sample at a time from the left node to the right node. Here's an answer for RF classification where the same trick is used.
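As a sketch of the rolling-SS idea (synthetic data, not from the post): using $SS = \sum y^2 - (\sum y)^2/n$, prefix sums give every split's $SS_{left}+SS_{right}$ in one pass, matching a naive recomputation, and ranking splits by this quantity matches ranking by the standard weighted gain.

```python
import numpy as np

# One-pass computation of SS_left + SS_right for every split, using
# SS = sum(y^2) - (sum(y))^2 / n with running (prefix) sums.
rng = np.random.default_rng(0)
y = np.sort(rng.normal(size=50))  # stand-in for responses ordered by a feature
n = y.size

s, s2 = np.cumsum(y), np.cumsum(y ** 2)
costs = []
for i in range(1, n):             # split: left = y[:i], right = y[i:]
    ss_left = s2[i - 1] - s[i - 1] ** 2 / i
    ss_right = (s2[-1] - s2[i - 1]) - (s[-1] - s[i - 1]) ** 2 / (n - i)
    costs.append(ss_left + ss_right)
costs = np.array(costs)

# Naive recomputation from scratch at every split, for comparison
naive = np.array([np.sum((y[:i] - y[:i].mean())**2)
                  + np.sum((y[i:] - y[i:].mean())**2) for i in range(1, n)])
print(np.allclose(costs, naive))  # True
```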
45,742
Why does a regression tree not split based on variance?
In order to compare the behavior of the different splitting formulas, let's consider a simple example: We have $N$ predictors with value $y=1$ on the left, $N$ predictors with value $y=-1$ on the right, and a single point at the right boundary with value $y^\ast$. There are two reasonable split points in this scenario: point (1) in the middle, and point (2) at the right boundary. When $N$ is large, say $N=100$, I would prefer the split to be made at the point marked by the red (1). Only if the value of $y^\ast$ becomes very low would I find a split at (2) reasonable. So, let's calculate how the split position changes depending on the value of $y^\ast$: for which $y^\ast$ is the split made at (1), and how low may $y^\ast$ get before the split occurs at (2)? For the calculations we first use the criterion the OP posted in the question, which involves the sum of squares (SS): Split at (1): Mean left: $1$, SS left: $0$; Mean right: $\mu_r = (y^\ast - N)/(N+1)$, SS right: $N(-1 - \mu_r)^2 + (y^\ast - \mu_r)^2$. Split at (2): Mean left: $0$, SS left: $2N$; Mean right: $y^\ast$, SS right: $0$. Thus, we get a split at point (1) if $$N(-1 - \mu_r)^2 + (y^\ast - \mu_r)^2 \leq 2N$$ Inserting $\mu_r$ and setting the two sums of squares (SS) equal, Wolfram Alpha gives as solution $$y_{SS}^\ast = -1 - \sqrt{2N} \ \underbrace{\sqrt{\frac{(N+1)^2}{(N^2+1)}}}_{\approx 1} \ \approx \ -1 - \sqrt{2N}$$ That means, for a value $y^\ast$ above $y_{SS}^\ast$, the split is made at (1); below $y_{SS}^\ast$, the split is made at (2). One sees that for large $N$ the split in the middle is preferred. Now, doing the same for the version of the splitting formula which uses the variance, the result is $$y_{VAR}^\ast = -1 - \sqrt{\frac{(N+1)^2}{(N^2+1)}} \ \approx \ -2$$ That is, the factor $\sqrt{2N}$ is missing. With this, the split is made at (2) as soon as the boundary value drops below roughly $-2$, regardless of the number of data points $N$.
Conclusion: The splitting formula using the sum of squares (SS) leads to the more intuitive behavior that the split is preferred at point (1) in the middle. The formula with the variance, in contrast, does not account for the number of points in the respective regions, since each SS value is divided by the number of data points. As seen above, it readily prefers a split that separates a single data point over a balanced split that halves the region. In summary, I would prefer the splitting criterion containing the sum of squares to the one containing the variance.
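The thresholds derived above can be checked numerically (a sketch, not part of the answer; $N$ and $y^\ast$ are chosen to sit between the two thresholds): with $N=10$, $y^\ast=-5$ lies above $-1-\sqrt{2N}\approx-5.47$ but below roughly $-2$, so the SS criterion should pick the middle split while the variance criterion should isolate the boundary point.

```python
import numpy as np

# The worked example: N ones, N minus-ones, and a single boundary point y*.
N, y_star = 10, -5.0
y = np.array([1.0] * N + [-1.0] * N + [y_star])

def ss(v):
    # Sum of squared deviations from the mean (0 for an empty region)
    return np.sum((v - v.mean()) ** 2) if v.size else 0.0

splits = range(1, y.size)  # split i: left = y[:i], right = y[i:]
cost_ss = [ss(y[:i]) + ss(y[i:]) for i in splits]
cost_var = [ss(y[:i]) / i + ss(y[i:]) / (y.size - i) for i in splits]

print(int(np.argmin(cost_ss)) + 1)   # 10 -> the middle split, point (1)
print(int(np.argmin(cost_var)) + 1)  # 20 -> isolates the boundary point (2)
```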
45,743
Why does a regression tree not split based on variance?
It's an old question, but the reason I can think of is as follows: we don't divide each term by the number of points in each region because, when performing the split, we are interested in the absolute deviance rather than the average deviance; as a result, a larger set of observations will usually contribute a larger deviance than a smaller set in an otherwise similar situation.
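A tiny numeric illustration of that point (an assumed example, not from the answer): two regions with identical variance but different sizes contribute very different absolute deviances.

```python
import numpy as np

# Same per-point spread, different sizes: equal variance, unequal SS.
small = np.array([0.0, 2.0])
large = np.array([0.0, 2.0] * 3)

var_small = np.var(small)                       # 1.0
var_large = np.var(large)                       # 1.0 as well
ss_small = np.sum((small - small.mean()) ** 2)  # 2.0
ss_large = np.sum((large - large.mean()) ** 2)  # 6.0
print(var_small, var_large, ss_small, ss_large)
```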
45,744
Adding the probabilities of two events when one is a subset of the other
The inclusion-exclusion principle states $P(A\cup B)=P(A)+P(B)-P(A\cap B).$ Therefore, you know that $P(A\cup B)+P(A\cap B)=P(A)+P(B).$ Without further information/assumptions, it is not possible to uniquely identify $P(A)$ or $P(B).$ You use the word "conditional" in your title, but it's important to note that this is not a problem which involves conditional probabilities. A conditional probability is something of the form "What's the probability that Alice finishes given that Bob finishes?" The notation for this is $P(A|B),$ and the tool for working with it is Bayes' rule, which is just one particularly prominent conditional probability relation. Gung's comment points the way to a solution for identifying $P(A)$ and $P(B)$ using conditional probabilities. For example, if we know $P(B|A)$, we can use the definition of conditional probability, $$P(B|A)=\frac{P(A\cap B)}{P(A)},$$ and solve for $P(A)$ using algebra. You've commented that you're assuming independence. Independence is defined to mean that $P(A\cap B)=P(A)P(B).$ Since we know $P(A)+P(B)=P(A\cap B)+P(A\cup B)$ and also that $P(A\cap B)=P(A)P(B),$ the solution set is the set of points satisfying the following criteria: (1) $P(A)\in [0,1]$; (2) $P(B)\in [0,1]$; (3) $P(B)P(A)=P(A\cap B)$; (4) $P(A)+P(B)=P(A\cap B)+P(A\cup B)$. An obvious way to solve this is to graph the curves from (3) and (4) as functions of $P(A)$ and $P(B)$. The intersection is the answer. One caveat about independence: assuming independence is a very strong assumption. When the independence assumption is violated, it's usually the case that results are not "slightly wrong" but spectacularly wrong.
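The graphical solution can also be carried out algebraically (a sketch; the example probabilities are assumed): since $P(A)+P(B)=P(A\cap B)+P(A\cup B)$ and $P(A)P(B)=P(A\cap B)$, the two probabilities are the roots of $t^2 - St + P(A\cap B) = 0$ with $S = P(A\cap B)+P(A\cup B)$.

```python
import numpy as np

# Under independence, P(A) and P(B) have known sum S and known product
# P(A∩B), so they are the roots of t^2 - S t + P(A∩B) = 0 --
# the algebraic version of intersecting the two curves.
p_and, p_or = 0.12, 0.58
S = p_and + p_or                    # = P(A) + P(B)
roots = np.roots([1.0, -S, p_and])  # solves t^2 - S t + p_and = 0
print(sorted(roots))  # ≈ [0.3, 0.4]
```

Sanity check: $0.3 \cdot 0.4 = 0.12$ and $0.3 + 0.4 - 0.12 = 0.58$, matching the assumed intersection and union.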
Adding the probabilities of two events when one is a subset of the other
The inclusion-exclusion principle states $P(A\cup B)=P(A)+P(B)-P(A\cap B).$ Therefore, you know that $P(A\cup B)+P(A\cap B)=P(A)+P(B).$ Without further information/assumptions, it is not possible to
Adding the probabilities of two events when one is a subset of the other The inclusion-exclusion principle states $P(A\cup B)=P(A)+P(B)-P(A\cap B).$ Therefore, you know that $P(A\cup B)+P(A\cap B)=P(A)+P(B).$ Without further information/assumptions, it is not possible to uniquely identify $P(A)$ or $P(B).$ You use the word "conditional" in your title, but it's important to note that this is not a problem which contains conditional probabilities. A conditional probability is something of the form "What's the probability that Alice finishes given that Bob finishes?" The notation for this is $P(A|B),$ and the technology to work with that is Bayes rule, which is just one particularly prominent conditional probability relation. Gung's comment points the way to a solution to identifying $P(A)$ and $P(B)$ using conditional probabilities. For example, if we know $P(B|A)$, we can use Bayes rule, $$P(B|A)=\frac{P(A\cap B)}{P(A)}$$ and we can solve for $P(A)$ using algebra. You've commented that you're assuming independence. Independence is defined to mean that $P(A\cap B)=P(A)P(B).$ Since we know $P(A)+P(B)=P(A\cap B)+P(A\cup B)$ and also that $P(A\cap B)=P(A)P(B),$ the solution set is the set of points satisfying the following criteria: $P(A)\in [0,1]$ $P(B)\in [0,1]$ $P(B)P(A)=P(A\cap B)$ $P(A)+P(B)=P(A\cap B)+P(A\cup B)$ An obvious way to solve this is to just graph the lines from (3) and (4) as functions of $P(A)$ and $P(B)$. The intersection is the answer. One caveat about independence: Assuming independence is a very strong assumption. When the independence assumption is violated, it's usually the case that results are not "slightly wrong" but spectacularly wrong.
Adding the probabilities of two events when one is a subset of the other The inclusion-exclusion principle states $P(A\cup B)=P(A)+P(B)-P(A\cap B).$ Therefore, you know that $P(A\cup B)+P(A\cap B)=P(A)+P(B).$ Without further information/assumptions, it is not possible to
45,745
How to choose a constant for reject sampling
Let $\pi(x) = M f(x)$, where $M$ is the normalizing constant. In many situations, only $f(x)$ is known and $M$ is unknown. To implement rejection sampling, you want $c$ such that, for all $x$, $$\dfrac{\pi(x)}{h(x)} \leq c. $$ Then for all $x$, $$\dfrac{f(x)}{h(x)} \leq \dfrac{c}{M} := c'. $$ You don't know $c$ or $M$, but you should be able to find $c'$, if you can play around with $f/h$. (This is more difficult to do for higher dimensional distributions) The algorithm accepts when $\pi(x)/ch(x)$ is greater than a realization from a uniform random variable, which is the same as $f(x)/c'h(x)$ being greater than the realization. Thus, the algorithm can be implemented even though $M$ is not known.
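As a sketch of the procedure: the target here is the unnormalized Gaussian $f(x)=e^{-x^2/2}$ with a standard Laplace proposal $h$ (both chosen purely for illustration), for which $c' = \sup_x f/h = 2e^{1/2}$ is found by hand. $M$ is never needed.

```python
import math, random

def target_unnorm(x):            # f(x): target known only up to a constant M
    return math.exp(-x * x / 2)

def proposal_pdf(x):             # h(x): standard Laplace density
    return 0.5 * math.exp(-abs(x))

def proposal_sample():           # inverse-CDF draw from the Laplace proposal
    u = random.random()
    return math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u))

C_PRIME = 2 * math.exp(0.5)      # sup_x f(x)/h(x), attained at |x| = 1

def rejection_sample():
    """Accept x when u * c' * h(x) <= f(x), i.e. u <= f(x)/(c' h(x))."""
    while True:
        x = proposal_sample()
        if random.random() * C_PRIME * proposal_pdf(x) <= target_unnorm(x):
            return x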
How to choose a constant for reject sampling
Let $\pi(x) = M f(x)$, where $M$ is the normalizing constant. In many situations, only $f(x)$ is known and $M$ is unknown. To implement rejection sampling, you want $c$ such that, for all $x$, $$\dfra
How to choose a constant for reject sampling Let $\pi(x) = M f(x)$, where $M$ is the normalizing constant. In many situations, only $f(x)$ is known and $M$ is unknown. To implement rejection sampling, you want $c$ such that, for all $x$, $$\dfrac{\pi(x)}{h(x)} \leq c. $$ Then for all $x$, $$\dfrac{f(x)}{h(x)} \leq \dfrac{c}{M} := c'. $$ You don't know $c$ or $M$, but you should be able to find $c'$, if you can play around with $f/h$. (This is more difficult to do for higher dimensional distributions) The algorithm accepts when $\pi(x)/ch(x)$ is greater than a realization from a uniform random variable, which is the same as $f(x)/c'h(x)$ being greater than the realization. Thus, the algorithm can be implemented even though $M$ is not known.
How to choose a constant for reject sampling Let $\pi(x) = M f(x)$, where $M$ is the normalizing constant. In many situations, only $f(x)$ is known and $M$ is unknown. To implement rejection sampling, you want $c$ such that, for all $x$, $$\dfra
45,746
How to choose a constant for reject sampling
More generally a principle to choose $M$ is $$\inf_{\theta\in\Theta}\sup_{x\in\mathbb{R}}\frac{f(x)}{g_\theta(x)}$$ where $f$ is the normalized target and $g_\theta$ is the proposal density. As an example, for the standard normal target $f(x)=(2\pi)^{-\frac{1}{2}}e^{-x^2/2}$ and the Laplace proposal $g_\theta(x)=\frac{\theta}{2}e^{-\theta |x|}$ we have $\sup_xf(x)/g_\theta(x)=\sqrt{2/\pi}\,\theta^{-1}e^{\theta^2/2}$ (the supremum is attained at $|x|=\theta$), and $\inf_\theta\sqrt{2/\pi}\,\theta^{-1}e^{\theta^2/2}=\sqrt{2e/\pi}\approx 1.316$, attained at $\theta=1$. But if your target is unnormalized then you cannot evaluate this bound directly; you can only bound the unnormalized ratio $f/h$, as in the other answers.
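With the normalized Laplace density $g_\theta(x)=\frac{\theta}{2}e^{-\theta|x|}$, the inf-sup for the standard normal target can be checked numerically by a crude grid search (grid resolutions are arbitrary; this is a sanity check, not a general optimizer):

```python
import math

f = lambda x: math.exp(-x * x / 2) / math.sqrt(2 * math.pi)   # standard normal
g = lambda x, th: 0.5 * th * math.exp(-th * abs(x))           # Laplace(theta) density

def sup_ratio(th):
    # maximum of f/g over a grid on [0, 10]; the analytic max is at |x| = theta
    return max(f(x / 100) / g(x / 100, th) for x in range(0, 1001))

# minimize over theta in [0.5, 2.0]; analytically the inf-sup is
# sqrt(2e/pi) ~ 1.3155, attained at theta = 1
M = min(sup_ratio(th / 100) for th in range(50, 201))
```

The grid value agrees with the closed form $\sqrt{2e/\pi}$ to within the grid resolution.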
How to choose a constant for reject sampling
More generally a principle to choose $M$ is $$\inf_{\theta\in\Theta}\sup_{x\in\mathbb{R}}\frac{f(x)}{g_\theta(x)}$$ where $f$ is the normalized target and $g_\theta$ is the proposal density. As an ex
How to choose a constant for reject sampling More generally a principle to choose $M$ is $$\inf_{\theta\in\Theta}\sup_{x\in\mathbb{R}}\frac{f(x)}{g_\theta(x)}$$ where $f$ is the normalized target and $g_\theta$ is the proposal density. As an example, for the standard normal target $f(x)=(2\pi)^{-\frac{1}{2}}e^{-x^2/2}$ and the Laplace proposal $g_\theta(x)=\frac{\theta}{2}e^{-\theta |x|}$ we have $\sup_xf(x)/g_\theta(x)=\sqrt{2/\pi}\,\theta^{-1}e^{\theta^2/2}$ (the supremum is attained at $|x|=\theta$), and $\inf_\theta\sqrt{2/\pi}\,\theta^{-1}e^{\theta^2/2}=\sqrt{2e/\pi}\approx 1.316$, attained at $\theta=1$. But if your target is unnormalized then you cannot evaluate this bound directly; you can only bound the unnormalized ratio $f/h$, as in the other answers.
How to choose a constant for reject sampling More generally a principle to choose $M$ is $$\inf_{\theta\in\Theta}\sup_{x\in\mathbb{R}}\frac{f(x)}{g_\theta(x)}$$ where $f$ is the normalized target and $g_\theta$ is the proposal density. As an ex
45,747
How to choose a constant for reject sampling
Just as an example consider the density proportional to $e^{-(1+x^4)^\frac14},\quad-\infty<x<\infty \,.$ I definitely know what this function "looks like", since we have everything but the normalizing constant; I can draw a function proportional to the density even though I currently have no idea what the normalizing constant is for that density. (Well I can tell it's not going to be terribly far from 1, so I have some notion, but being able to guess it to within an order of magnitude isn't much help). We can nevertheless obtain majorizing functions; I'll give an example. It's not hard to see that $(1+x^4)^\frac14 > x$ for $x\geq 0$ (if it's not obvious, look at the fourth power of both sides, from which it is surely clear), and so by symmetry $(1+x^4)^\frac14 > |x|$ everywhere, and hence it's clear that $e^{-|x|}>e^{-(1+x^4)^\frac14}$ on the real line. Consequently I can use the standard Laplace distribution scaled up by a factor of 2 as a majorizing function. (There are somewhat more efficient choices, but it suffices to use this simple case. Note that we now know for sure that the integral of that unscaled density must be less than 2.) I still don't know quite what the density in question integrates to, but I can simulate from it. Indeed, here's a histogram of (just over) a million values from it: (And having just run code to simulate those values, I now have some idea what the normalizing constant is, since you can work an approximation out from the acceptance rate. The integral of the unscaled density comes out to about 1.397 -- checking that with the integrate function in R, it gives 1.396785 with absolute error <0.00011 -- so the normalizing constant would approximately be the reciprocal of that.)
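A sketch of this construction in Python rather than R (sample size and seed are arbitrary): draw from the Laplace proposal, accept against the envelope $e^{-|x|}$, and recover the integral of the unscaled density from the acceptance rate, which should come out near the 1.397 quoted above.

```python
import math, random

def f(x):                                  # unnormalized target density
    return math.exp(-(1 + x ** 4) ** 0.25)

def laplace_sample():                      # standard Laplace via inverse CDF
    u = random.random()
    return math.log(2 * u) if u < 0.5 else -math.log(2 * (1 - u))

random.seed(1)
n, accepted = 200_000, 0
for _ in range(n):
    x = laplace_sample()
    # envelope: e^{-|x|} >= f(x) everywhere, as shown above
    if random.random() * math.exp(-abs(x)) <= f(x):
        accepted += 1

# the envelope e^{-|x|} integrates to 2, so (acceptance rate) * 2
# estimates the integral of the unscaled density
integral_estimate = 2 * accepted / n
```

The estimate lands close to the quadrature value 1.396785 reported in the answer.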
How to choose a constant for reject sampling
Just as an example consider the density proportional to $e^{-(1+x^4)^\frac14},\quad-\infty<x<\infty \,.$ I definitely know what this function "looks like", since we have everything but the normalizing
How to choose a constant for reject sampling Just as an example consider the density proportional to $e^{-(1+x^4)^\frac14},\quad-\infty<x<\infty \,.$ I definitely know what this function "looks like", since we have everything but the normalizing constant; I can draw a function proportional to the density even though I currently have no idea what the normalizing constant is for that density. (Well I can tell it's not going to be terribly far from 1, so I have some notion, but being able to guess it to within an order of magnitude isn't much help). We can nevertheless obtain majorizing functions; I'll give an example. It's not hard to see that $(1+x^4)^\frac14 > x$ for $x\geq 0$ (if it's not obvious, look at the fourth power of both sides, from which it is surely clear), and so by symmetry $(1+x^4)^\frac14 > |x|$ everywhere, and hence it's clear that $e^{-|x|}>e^{-(1+x^4)^\frac14}$ on the real line. Consequently I can use the standard Laplace distribution scaled up by a factor of 2 as a majorizing function. (There are somewhat more efficient choices, but it suffices to use this simple case. Note that we now know for sure that the integral of that unscaled density must be less than 2.) I still don't know quite what the density in question integrates to, but I can simulate from it. Indeed, here's a histogram of (just over) a million values from it: (And having just run code to simulate those values, I now have some idea what the normalizing constant is, since you can work an approximation out from the acceptance rate. The integral of the unscaled density comes out to about 1.397 -- checking that with the integrate function in R, it gives 1.396785 with absolute error <0.00011 -- so the normalizing constant would approximately be the reciprocal of that.)
How to choose a constant for reject sampling Just as an example consider the density proportional to $e^{-(1+x^4)^\frac14},\quad-\infty<x<\infty \,.$ I definitely know what this function "looks like", since we have everything but the normalizing
45,748
Difference in hypothesis testing using p-value and confidence interval
With large $N$, your test statistic will be distributed as a normal. Thus, we can refer to your test statistic as "$z$", and we can also use "$z$" to refer to the asymptotic sampling distribution. However, these are not the same $z$'s. When you form your $100(1-\alpha)\%$ confidence interval, you need to multiply the standard error by $z_{1-\alpha/2}$ to get the right increment. You then add (subtract) the product from your observed percentage to get the confidence limits. For example, if you wanted a $95\%$ confidence interval, you would multiply your standard error by $z_{.975} = 1.96$. If you use this value, your confidence interval is: \begin{align} \hat{p} \pm z_{1-\alpha/2}\times \sqrt{\frac{\hat{p}(1-\hat{p})}{N}} &= 0.12 \pm 1.96\times\sqrt{\frac{0.12(1-0.12)}{384}} \\[10pt] &= (0.0875,\ 0.1525) \end{align} which excludes the null value $0.05$ and thus agrees with your hypothesis test.
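In code, the interval works out as follows (plain Python, using the numbers from the example):

```python
import math

p_hat, n, z = 0.12, 384, 1.96                 # sample proportion, N, z_{.975}
se = math.sqrt(p_hat * (1 - p_hat) / n)       # SE computed at the sample proportion
lower, upper = p_hat - z * se, p_hat + z * se
# roughly (0.0875, 0.1525); the null value 0.05 lies well below the interval
```

Because 0.05 is outside the interval, the CI decision matches the z-test decision.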
Difference in hypothesis testing using p-value and confidence interval
With large $N$, your test statistic will be distributed as a normal. Thus, we can refer to your test statistic as "$z$", and we can also use "$z$" to refer to the asymptotic sampling distribution. H
Difference in hypothesis testing using p-value and confidence interval With large $N$, your test statistic will be distributed as a normal. Thus, we can refer to your test statistic as "$z$", and we can also use "$z$" to refer to the asymptotic sampling distribution. However, these are not the same $z$'s. When you form your $100(1-\alpha)\%$ confidence interval, you need to multiply the standard error by $z_{1-\alpha/2}$ to get the right increment. You then add (subtract) the product from your observed percentage to get the confidence limits. For example, if you wanted a $95\%$ confidence interval, you would multiply your standard error by $z_{.975} = 1.96$. If you use this value, your confidence interval is: \begin{align} \hat{p} \pm z_{1-\alpha/2}\times \sqrt{\frac{\hat{p}(1-\hat{p})}{N}} &= 0.12 \pm 1.96\times\sqrt{\frac{0.12(1-0.12)}{384}} \\[10pt] &= (0.0875,\ 0.1525) \end{align} which excludes the null value $0.05$ and thus agrees with your hypothesis test.
Difference in hypothesis testing using p-value and confidence interval With large $N$, your test statistic will be distributed as a normal. Thus, we can refer to your test statistic as "$z$", and we can also use "$z$" to refer to the asymptotic sampling distribution. H
45,749
Difference in hypothesis testing using p-value and confidence interval
I don't have the rep to comment, but I see some flaws in that problem. First, 6.3 sigmas correspond to far more than a 0.95 confidence level; a quick application of Chebyshev's theorem already shows this. A confidence level says that "in $N$ repeated experiments, if at least a fraction $1 - p$ of them are consistent with the null hypothesis, you cannot reject it". Also, a binomial distribution isn't symmetrical, but there are enough observations here that it looks approximately symmetrical in a histogram.
Difference in hypothesis testing using p-value and confidence interval
I don't have rep to comment, but I see some flaws in that problem. First, 6.3 sigmas are way more than 0.95 C.L. Just apply quickly Chebyshev theorem. A confidence level says that "in N experiments, i
Difference in hypothesis testing using p-value and confidence interval I don't have the rep to comment, but I see some flaws in that problem. First, 6.3 sigmas correspond to far more than a 0.95 confidence level; a quick application of Chebyshev's theorem already shows this. A confidence level says that "in $N$ repeated experiments, if at least a fraction $1 - p$ of them are consistent with the null hypothesis, you cannot reject it". Also, a binomial distribution isn't symmetrical, but there are enough observations here that it looks approximately symmetrical in a histogram.
Difference in hypothesis testing using p-value and confidence interval I don't have rep to comment, but I see some flaws in that problem. First, 6.3 sigmas are way more than 0.95 C.L. Just apply quickly Chebyshev theorem. A confidence level says that "in N experiments, i
45,750
Difference in hypothesis testing using p-value and confidence interval
Just to grasp the concept, I've been working on a couple of plots to illustrate why the $z$ statistic calculates the standard error expected around the population or theoretical proportion (from the perspective of the null hypothesis, so to speak), and then figures out how many of these standard errors fit into the distance from the theoretical population proportion to the sample proportion - in this case ~$\small6$ - as shown on this plot with six arrows separating the population from the sample proportions: And why, on the other hand, the confidence interval does the same thing, but from the perspective of the possible alternative hypothesis, or in other words, from the sample proportion: it is the proportion found in the sample that is used to calculate the standard error. This latter calculation is plotted with the confidence interval shown as diverging arrows away from the sample proportion, and covering two standard errors on either side (the confidence interval): In either case the conclusion is the same: rejecting $H_o$ in favor of $H_a: \,p \neq 0.05$. Code for illustrations here.
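Both perspectives from the plots can be reproduced numerically (values taken from the question: $p_0=0.05$, $\hat p=0.12$, $n=384$):

```python
import math

p0, p_hat, n = 0.05, 0.12, 384       # null proportion, sample proportion, sample size

# Test-statistic view: SE computed under H0, around the null proportion
se0 = math.sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / se0               # roughly 6.3 null standard errors away

# Confidence-interval view: SE computed from the sample proportion
se1 = math.sqrt(p_hat * (1 - p_hat) / n)
ci = (p_hat - 1.96 * se1, p_hat + 1.96 * se1)
# either way, p0 = 0.05 falls far outside: reject H0
```

The two standard errors differ (null-based vs. sample-based), but both routes lead to rejection here.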
Difference in hypothesis testing using p-value and confidence interval
Just to grasp the concept, I've been working on a couple of plots to illustrate why the $z$ statistic calculates the standard error expected around the population or theoretical proportion (from the p
Difference in hypothesis testing using p-value and confidence interval Just to grasp the concept, I've been working on a couple of plots to illustrate why the $z$ statistic calculates the standard error expected around the population or theoretical proportion (from the perspective of the null hypothesis, so to speak), and then figures out how many of these standard errors fit into the distance from the theoretical population proportion to the sample proportion - in this case ~$\small6$ - as shown on this plot with six arrows separating the population from the sample proportions: And why, on the other hand, the confidence interval does the same thing, but from the perspective of the possible alternative hypothesis, or in other words, from the sample proportion: it is the proportion found in the sample that is used to calculate the standard error. This latter calculation is plotted with the confidence interval shown as diverging arrows away from the sample proportion, and covering two standard errors on either side (the confidence interval): In either case the conclusion is the same: rejecting $H_o$ in favor of $H_a: \,p \neq 0.05$. Code for illustrations here.
Difference in hypothesis testing using p-value and confidence interval Just to grasp the concept, I've been working on a couple of plots to illustrate why the $z$ statistic calculates the standard error expected around the population or theoretical proportion (from the p
45,751
Why do you put all the exogenous variables into the first and second stage of 2SLS?
Technically, you are actually regressing $[X\;,\; W]$ on $[Z\;,\; W]$ so the resulting fitted values for the second stage regressors are $[\hat X\;,\; \hat W]=[\hat X \;,\;W]$. $\hat W =W$ since the best prediction of $W$ available in the matrix $[Z\;,\; W]$ is obviously $W$ itself. But the trivialities aside, $W$ is included in the first stage regressors because it is exogenous and so excluding $W$ would lead to a loss in efficiency or consistency (most likely both) of the 2SLS estimator. In other words, the purpose of the first stage is to sort of "divide out" the endogenous part of the $X$'s in that $\hat X$ is the part of $X$ which can be associated solely with exogenous movements (i.e. changes in $Z$ and $W$). If $X$ and $W$ are correlated at all, not including $W$ here would result in a large loss of information since the resulting fitted values would not reflect all the exogenous movement in $X$. $W$ is included in the second stage to avoid omitted variable bias in the 2SLS coefficient estimates. At this point $\hat X$ is almost surely correlated with $W$ and so if $W$ has any effect on $Y$, leaving it out of the regression will result in biased coefficient estimates.
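The triviality $\hat W = W$ is easy to confirm numerically. A minimal sketch with simulated data (model, coefficients, and seed are arbitrary choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
Z = rng.normal(size=(n, 1))                    # instrument
W = rng.normal(size=(n, 1))                    # exogenous regressor
X = Z + 0.5 * W + rng.normal(size=(n, 1))      # endogenous regressor (toy model)

# First stage: regress [X, W] on [Z, W] and form the fitted values
A = np.hstack([Z, W])
coefs, *_ = np.linalg.lstsq(A, np.hstack([X, W]), rcond=None)
fitted = A @ coefs
X_hat, W_hat = fitted[:, :1], fitted[:, 1:]
# W_hat equals W: W lies in the column span of [Z, W], so its
# projection onto that span is W itself
```

Since $W$ is among the first-stage regressors, the least-squares projection reproduces it exactly (up to floating-point error).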
Why do you put all the exogenous variables into the first and second stage of 2SLS?
Technically, you are actually regressing $[X\;,\; W]$ on $[Z\;,\; W]$ so the resulting fitted values for the second stage regressors are $[\hat X\;,\; \hat W]=[\hat X \;,\;W]$. $\hat W =W$ since the
Why do you put all the exogenous variables into the first and second stage of 2SLS? Technically, you are actually regressing $[X\;,\; W]$ on $[Z\;,\; W]$ so the resulting fitted values for the second stage regressors are $[\hat X\;,\; \hat W]=[\hat X \;,\;W]$. $\hat W =W$ since the best prediction of $W$ available in the matrix $[Z\;,\; W]$ is obviously $W$ itself. But the trivialities aside, $W$ is included in the first stage regressors because it is exogenous and so excluding $W$ would lead to a loss in efficiency or consistency (most likely both) of the 2SLS estimator. In other words, the purpose of the first stage is to sort of "divide out" the endogenous part of the $X$'s in that $\hat X$ is the part of $X$ which can be associated solely with exogenous movements (i.e. changes in $Z$ and $W$). If $X$ and $W$ are correlated at all, not including $W$ here would result in a large loss of information since the resulting fitted values would not reflect all the exogenous movement in $X$. $W$ is included in the second stage to avoid omitted variable bias in the 2SLS coefficient estimates. At this point $\hat X$ is almost surely correlated with $W$ and so if $W$ has any effect on $Y$, leaving it out of the regression will result in biased coefficient estimates.
Why do you put all the exogenous variables into the first and second stage of 2SLS? Technically, you are actually regressing $[X\;,\; W]$ on $[Z\;,\; W]$ so the resulting fitted values for the second stage regressors are $[\hat X\;,\; \hat W]=[\hat X \;,\;W]$. $\hat W =W$ since the
45,752
Why do you put all the exogenous variables into the first and second stage of 2SLS?
This question is quite old --- however I haven't found a fully satisfactory answer online for this query, so I am adding my 2 cents. PLEASE add a comment if you notice errors! In the case of simultaneous equations (one particular framework for thinking about endogeneity and IV), the stata documentation has a nice page about omitting exogenous variables from the first stage: stata FAQ link However, we aren't always in the simultaneous equations framework and I wanted more intuition. Note that in Wooldridge (panel data, aka papa Wooldridge), 2nd edition, page 97, he writes: In practice, it is best to use a software package with a 2SLS command rather than explicitly carry out the two-step procedure. Carrying out the two-step procedure explicitly makes one susceptible to harmful mistakes. For example, the following seemingly sensible, two-step procedure is generally inconsistent: (1) regress [endogenous] $X_k$ on [only the instruments], and obtain fitted values $\hat{X_k}$ (2) run the regression [$y$ on exog variables and $\hat{X_k}$]... [This] produces inconsistent estimators of the betas In other words, omitting the exogenous variables in the first stage affects the consistency of the betas of interest. (standard errors will also have to be adjusted if 2SLS is run by hand in two stages). To see why we need to include the exogenous variables in the first stage, first consider the following DAG: Say we have the structural equation $Y = a_1 T + a_2 X + \varepsilon_3$ If we were to estimate this with OLS, our estimates would be biased because of the endogeneity of T and Y: $E(T \varepsilon_3 ) \ne 0$ Therefore we turn to 2SLS (IV), noting that Z satisfies the assumptions to be an instrument for T, as long as we control for X. Generally, we can express $T = \hat{T} + u$ (For now, ignore exactly how we estimate the first stage and obtain the fitted values $\hat{T}$.) 
Substituting into our previous equation: $ Y = a_1 \hat{T} + a_2 X + a_1 u + \varepsilon_3 $ To obtain consistent estimates of $a_1$ and $a_2$ with OLS, we need $\hat{T}$ and $X$ to be independent from both errors $u$ and $\varepsilon_3$. From the original setup, $X$ is independent of $\varepsilon_3$. Additionally $\hat{T}$ is independent of $u$ by construction---the linear projection is independent of the error term. So the two things we are concerned about is whether $\hat{T}$ is independent of $\varepsilon_3$, and whether $X$ is independent of $u$. Consider two different ways of estimating the first stage, i.e. obtaining $\hat{T}$ (apologies for slight abuse of notation in subscripts) $T = b_1 Z + u$, $ \hspace{2cm} \hat{T_1} = \hat{b_1} Z$ $T = b_1 Z + b_2 X + u$, $\hspace{1cm} \hat{T_2} = \hat{b_1} Z + \hat{b_2} X$ In the first case, if we use $\hat{T_1}$ in our second stage, $u$ will be correlated with $X$, resulting in inconsistency for estimating $a_2$. This will also cause issues for estimates of $a_1$, because $corr(X,T) \ne 0$ In the second case, if we use $\hat{T_2}$ in the second stage, now $u$ is independent of $X$ by construction! Additionally both $\hat{T_1}$ and $\hat{T_2}$ are independent of $\varepsilon_3$, because both $Z$ and $X$ are independent of $\varepsilon_3$. Note that if $\varepsilon_1 = 0$, i.e. X is not related to T, then X could safely be omitted in the first stage. (but best to always include it anyways to be safe and for efficiency) Additionally, if $\varepsilon_2 = 0$, i.e. X is not related to Z, then if X is omitted from first stage $a_1$ is consistent but $a_2$ is not. If you are still confused I'd encourage you to simulate data based on the DAG, run 2SLS for both cases and check correlations to confirm the conclusions in my answer.
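Following that simulation suggestion, here is one sketch (numpy; the structural coefficients, noise scales, and seed are all arbitrary choices): with $X$ related to both $Z$ and $T$, the correct 2SLS recovers $a_1 = a_2 = 1$, while naive OLS and the faulty first stage (omitting $X$) do not.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50_000
a1, a2 = 1.0, 1.0                        # structural coefficients to recover

X = rng.normal(size=n)                   # exogenous control
U = rng.normal(size=n)                   # unobserved confounder (source of endogeneity)
Z = 0.8 * X + rng.normal(size=n)         # instrument; eps_2 != 0 since Z depends on X
T = Z + X + U + rng.normal(size=n)       # treatment; eps_1 != 0 and T is endogenous
Y = a1 * T + a2 * X + U + rng.normal(size=n)

def lstsq(y, cols):
    return np.linalg.lstsq(np.column_stack(cols), y, rcond=None)[0]

# Naive OLS: biased because E[T * (U + noise)] != 0
a1_ols = lstsq(Y, [T, X])[0]

# Correct 2SLS: X appears in BOTH stages
b = lstsq(T, [Z, X])
T_hat = b[0] * Z + b[1] * X
a1_iv, a2_iv = lstsq(Y, [T_hat, X])

# Faulty "2SLS": X omitted from the first stage -> inconsistent
T_bad = lstsq(T, [Z])[0] * Z
a1_bad, a2_bad = lstsq(Y, [T_bad, X])
```

With this design the correct 2SLS estimates land on the true values, OLS is biased upward on $a_1$, and the faulty version is badly off on both coefficients.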
Why do you put all the exogenous variables into the first and second stage of 2SLS?
This question is quite old --- however I haven't found a fully satisfactory answer online for this query, so I am adding my 2 cents. PLEASE add a comment if you notice errors! In the case of simultane
Why do you put all the exogenous variables into the first and second stage of 2SLS? This question is quite old --- however I haven't found a fully satisfactory answer online for this query, so I am adding my 2 cents. PLEASE add a comment if you notice errors! In the case of simultaneous equations (one particular framework for thinking about endogeneity and IV), the stata documentation has a nice page about omitting exogenous variables from the first stage: stata FAQ link However, we aren't always in the simultaneous equations framework and I wanted more intuition. Note that in Wooldridge (panel data, aka papa Wooldridge), 2nd edition, page 97, he writes: In practice, it is best to use a software package with a 2SLS command rather than explicitly carry out the two-step procedure. Carrying out the two-step procedure explicitly makes one susceptible to harmful mistakes. For example, the following seemingly sensible, two-step procedure is generally inconsistent: (1) regress [endogenous] $X_k$ on [only the instruments], and obtain fitted values $\hat{X_k}$ (2) run the regression [$y$ on exog variables and $\hat{X_k}$]... [This] produces inconsistent estimators of the betas In other words, omitting the exogenous variables in the first stage affects the consistency of the betas of interest. (standard errors will also have to be adjusted if 2SLS is run by hand in two stages). To see why we need to include the exogenous variables in the first stage, first consider the following DAG: Say we have the structural equation $Y = a_1 T + a_2 X + \varepsilon_3$ If we were to estimate this with OLS, our estimates would be biased because of the endogeneity of T and Y: $E(T \varepsilon_3 ) \ne 0$ Therefore we turn to 2SLS (IV), noting that Z satisfies the assumptions to be an instrument for T, as long as we control for X. Generally, we can express $T = \hat{T} + u$ (For now, ignore exactly how we estimate the first stage and obtain the fitted values $\hat{T}$.) 
Substituting into our previous equation: $ Y = a_1 \hat{T} + a_2 X + a_1 u + \varepsilon_3 $ To obtain consistent estimates of $a_1$ and $a_2$ with OLS, we need $\hat{T}$ and $X$ to be independent from both errors $u$ and $\varepsilon_3$. From the original setup, $X$ is independent of $\varepsilon_3$. Additionally $\hat{T}$ is independent of $u$ by construction---the linear projection is independent of the error term. So the two things we are concerned about is whether $\hat{T}$ is independent of $\varepsilon_3$, and whether $X$ is independent of $u$. Consider two different ways of estimating the first stage, i.e. obtaining $\hat{T}$ (apologies for slight abuse of notation in subscripts) $T = b_1 Z + u$, $ \hspace{2cm} \hat{T_1} = \hat{b_1} Z$ $T = b_1 Z + b_2 X + u$, $\hspace{1cm} \hat{T_2} = \hat{b_1} Z + \hat{b_2} X$ In the first case, if we use $\hat{T_1}$ in our second stage, $u$ will be correlated with $X$, resulting in inconsistency for estimating $a_2$. This will also cause issues for estimates of $a_1$, because $corr(X,T) \ne 0$ In the second case, if we use $\hat{T_2}$ in the second stage, now $u$ is independent of $X$ by construction! Additionally both $\hat{T_1}$ and $\hat{T_2}$ are independent of $\varepsilon_3$, because both $Z$ and $X$ are independent of $\varepsilon_3$. Note that if $\varepsilon_1 = 0$, i.e. X is not related to T, then X could safely be omitted in the first stage. (but best to always include it anyways to be safe and for efficiency) Additionally, if $\varepsilon_2 = 0$, i.e. X is not related to Z, then if X is omitted from first stage $a_1$ is consistent but $a_2$ is not. If you are still confused I'd encourage you to simulate data based on the DAG, run 2SLS for both cases and check correlations to confirm the conclusions in my answer.
Why do you put all the exogenous variables into the first and second stage of 2SLS? This question is quite old --- however I haven't found a fully satisfactory answer online for this query, so I am adding my 2 cents. PLEASE add a comment if you notice errors! In the case of simultane
45,753
Is there a closed form solution for L2-norm regularized linear regression (not ridge regression)
You will get the ridge regression solutions, but parametrised differently in terms of the penalty parameter $\lambda$. This holds more generally for convex loss functions. If $L$ is a convex, differentiable function of $\beta$ let $\beta(\lambda)$ denote the unique minimiser of the strictly convex function $$h(\beta) = L(\beta) + \lambda \|\beta\|_2^2$$ for $\lambda > 0$. Let, furthermore, $s(\lambda) = \|\beta(\lambda)\|_2$. Consider now the function $$g(\beta) = L(\beta) + 2 \lambda s(\lambda) \|\beta\|_2.$$ Its Jacobian is $$Dg(\beta) = DL(\beta) + 2 \lambda s(\lambda) \frac{\beta}{\|\beta\|_2}.$$ If we plug in $\beta(\lambda)$ we find that $$Dg(\beta(\lambda)) = DL(\beta(\lambda)) + 2 \lambda \beta(\lambda) = Dh(\beta(\lambda)) = 0,$$ because $\beta(\lambda)$ is a stationary point of $h$. Since $g$ is still convex this shows that $\beta(\lambda)$ is a global minimiser of $g$. It is possible that $\lambda \mapsto \lambda s(\lambda)$ does not map $(0, \infty)$ onto $(0,\infty)$, thus there can be choices of the penalty parameter $-$ when the $\|\cdot\|_2$-penalty and not the $\|\cdot\|_2^2$-penalty is used $-$ that give minimisers that are not of the form $\beta(\lambda)$ for any $\lambda > 0$. With the squared error loss (yielding ridge regression) this will be the case for large choices of the penalty parameter, where the $\|\cdot\|_2$-penalty will give the zero solution.
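A quick numerical illustration of the correspondence (numpy; the data, $\lambda$, and the plain gradient-descent solver are arbitrary choices for this sketch): compute the ridge solution in closed form, then minimise the unsquared-norm objective $g$ with penalty weight $2\lambda s(\lambda)$ and check that the two minimisers coincide.

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(40, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=40)

lam = 5.0
# Ridge (squared L2 penalty) has a closed form: (X'X + lam I)^{-1} X'y
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
s = np.linalg.norm(beta_ridge)           # s(lambda)

# Minimise g(b) = ||y - Xb||^2 + 2*lam*s * ||b||_2 by plain gradient descent;
# per the argument above, its minimiser should coincide with beta_ridge.
b = np.zeros(3)
step = 1e-3
for _ in range(50_000):
    grad = -2 * X.T @ (y - X @ b) + 2 * lam * s * b / max(np.linalg.norm(b), 1e-12)
    b -= step * grad
```

The gradient-descent iterate converges to the ridge solution, matching the stationarity argument in the answer.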
Is there a closed form solution for L2-norm regularized linear regression (not ridge regression)
You will get the ridge regression solutions, but parametrised differently in terms of the penalty parameter $\lambda$. This holds more generally for convex loss functions. If $L$ is a convex, differe
Is there a closed form solution for L2-norm regularized linear regression (not ridge regression) You will get the ridge regression solutions, but parametrised differently in terms of the penalty parameter $\lambda$. This holds more generally for convex loss functions. If $L$ is a convex, differentiable function of $\beta$ let $\beta(\lambda)$ denote the unique minimiser of the strictly convex function $$h(\beta) = L(\beta) + \lambda \|\beta\|_2^2$$ for $\lambda > 0$. Let, furthermore, $s(\lambda) = \|\beta(\lambda)\|_2$. Consider now the function $$g(\beta) = L(\beta) + 2 \lambda s(\lambda) \|\beta\|_2.$$ Its Jacobian is $$Dg(\beta) = DL(\beta) + 2 \lambda s(\lambda) \frac{\beta}{\|\beta\|_2}.$$ If we plug in $\beta(\lambda)$ we find that $$Dg(\beta(\lambda)) = DL(\beta(\lambda)) + 2 \lambda \beta(\lambda) = Dh(\beta(\lambda)) = 0,$$ because $\beta(\lambda)$ is a stationary point of $h$. Since $g$ is still convex this shows that $\beta(\lambda)$ is a global minimiser of $g$. It is possible that $\lambda \mapsto \lambda s(\lambda)$ does not map $(0, \infty)$ onto $(0,\infty)$, thus there can be choices of the penalty parameter $-$ when the $\|\cdot\|_2$-penalty and not the $\|\cdot\|_2^2$-penalty is used $-$ that give minimisers that are not of the form $\beta(\lambda)$ for any $\lambda > 0$. With the squared error loss (yielding ridge regression) this will be the case for large choices of the penalty parameter, where the $\|\cdot\|_2$-penalty will give the zero solution.
Is there a closed form solution for L2-norm regularized linear regression (not ridge regression) You will get the ridge regression solutions, but parametrised differently in terms of the penalty parameter $\lambda$. This holds more generally for convex loss functions. If $L$ is a convex, differe
45,754
Notation for possible values of a random variable
There are sloppy ways and rigorous ways. The sloppy ways are shorthands, like "$X\in\{1,2,3\}$", that are either nonsensical or (in this example) just plain wrong when interpreted according to the correct conventional meanings of the symbols. (The second statement literally means $X$ is one of three specified integers--which aren't random variables at all.) Such a shorthand can be effective in contexts where (a) its meaning is defined and (b) set-theoretic notation will not otherwise be used.

Recalling that all random variables are measurable functions defined on probability spaces, a standard way to stipulate that $X$ can take on a given set of values is to use functional notation to specify its image, as in $$X:\Omega\to\{1,2,3\}$$ or $$X(\Omega) = \{1,2,3\}.$$ Many statistical writers eschew such notation because they prefer to suppress all references to $\Omega$, which is intended to remain abstract, or they even think of random variables as being some kind of class of objects in which $\Omega$ is not even a definite set. An equivalent notation avoids referencing $\Omega$, such as stipulating $$\text{Image}(X) = \{1,2,3\}.$$

Very confusingly, some statistical writers use the term "domain" to refer to the image of $X$ (whereas in mathematics the word "domain" invariably refers to $\Omega$!). A Google search will easily turn up such uses of "domain." Others use phrases like "defined on" or "take on," such as "$X$ takes on the values $\{1,2,3\}$" or "$X$ is defined on $\{1,2,3\}$" (by which they really mean the probability mass function of $X$ rather than $X$ itself).

There are indirect ways to refer to the image of $X$. For instance, real-valued random variables are often thought of as being almost interchangeable with their distribution functions. The support of such a function has a well-established definition in probability and measure theory; in the case of a finite discrete random variable $X$, it will coincide with the image of $X$. People who write about such things typically adopt some mnemonic notation for this, such as $$\text{supp}(X) = \{1,2,3\}.$$

Finally, this helps us understand how such confusion about the meaning of "domain" can arise. If we were to conflate the random variable $X$ with its probability mass function (pmf) $p_X$, given by the probabilities $$p_X(x) = \Pr(X=x),$$ then in the example $p_X(x)\ne 0$ only when $x \in \{1,2,3\}$. We could, if we wished, restrict $p_X$ (which notionally is a function defined on $\mathbb{R}$) to the subset $\{1,2,3\}\subset\mathbb{R}$ without losing any of the information it conveys. This would make $\{1,2,3\}$ the (mathematically correct) domain of the restricted $p_X$.
Notation for possible values of a random variable
A full description of $X$ would include the pmf, yes. For example ... $$ X \in \{1,2,3\} \\ \mathbb{P}(X=1)=\frac{1}{12} \\ \mathbb{P}(X=2)=\frac{7}{12} \\ \mathbb{P}(X=3)=\frac{1}{3} $$ ... which looks a bit clumsy, or do it in words: $X$ is a discrete random variable in $\{1,2,3\}$ with probabilities $p_1=\frac{1}{12}, p_2=\frac{7}{12}, p_3=\frac{1}{3}$ respectively.
R: Test for correlation with a covariate?
Yes, you can use correlation with a covariate. This is called partial correlation. It produces a (partial) correlation coefficient that is normalized to the [-1, 1] range just like a regular correlation coefficient, except that the covariate is "controlled for" in the analysis -- a concept which is kind of subtle, but some good explanations of what it really means can be found here.

One way to get a partial correlation in R is using the ppcor package:

# install/load 'ppcor' package for its pcor.test() function
if(!require("ppcor")){
  install.packages("ppcor", repos='http://cran.us.r-project.org')
  library(ppcor)
}

# make up data
x <- rnorm(50)
y <- rnorm(50)
z <- rnorm(50)

# partial correlation between x and y, controlling for z
pcor.test(x, y, z)
#      estimate   p.value  statistic  n gp  Method
# 1 -0.02288511 0.8752972 -0.1569335 50  1 pearson

You also asked whether this differed from using a linear model with a covariate. It doesn't! Check it out:

summary(lm(y ~ x + z))
# Call:
# lm(formula = y ~ x + z)
#
# Residuals:
#      Min       1Q   Median       3Q      Max
# -2.80457 -0.76631 -0.00539  0.64083  2.79261
#
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)  0.05163    0.17006   0.304    0.763
# x           -0.02486    0.15842  -0.157    0.876
# z            0.07098    0.15333   0.463    0.646
#
# Residual standard error: 1.184 on 47 degrees of freedom
# Multiple R-squared:  0.005408, Adjusted R-squared:  -0.03692
# F-statistic: 0.1278 on 2 and 47 DF,  p-value: 0.8804

Notice that most of the numbers for the "x" row in the lm() output match those that we got from pcor.test(). The only difference is that the "estimate" for pcor.test() is the partial correlation coefficient, while the "estimate" for lm() is the slope. (The two estimates happen to be numerically similar here, but they are NOT the same.)
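As a cross-check on the equivalence described above, the partial correlation of x and y given z can also be computed directly as the ordinary correlation between the residuals of x ~ z and y ~ z. A minimal sketch in Python/NumPy (not part of the original answer; the data here are made up so that x and y are related only through z):

```python
import numpy as np

def partial_corr(x, y, z):
    # Regress x on z and y on z, then correlate the residuals.
    # This is equivalent to what pcor.test() reports for the estimate.
    Z = np.column_stack([np.ones_like(z), z])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(1)
z = rng.normal(size=200)
x = z + rng.normal(size=200)
y = z + rng.normal(size=200)

print(partial_corr(x, y, z))    # near 0: controlling for z removes the link
print(np.corrcoef(x, y)[0, 1])  # around 0.5: marginal correlation is inflated
```

This makes the "controlling for" idea concrete: once the part of x and y explained by z is removed, nothing is left of their association.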
How frequently am I going to lose this game?
We can ignore the rows and columns, and just say that you have 40 positions and 40 cards that each point to a specific position. For the first draw, you have 40 cards but only 1 card will make you lose the game (the card that points to the position you chose). This gives a probability of 39/40 to proceed to the next round. In the second draw, you have 39 cards to choose from, and still only 1 card that will make you lose, because the card that points to the second position is eliminated from the deck. The only losing card is the one that points to your original position. This gives a probability of 38/39 to proceed. This gives the following formula:

39/40 * 38/39 * 37/38 * 36/37 * 35/36 * 34/35 * 33/34 * 32/33 * 31/32 * 30/31 * 29/30 * 28/29 * 27/28 * 26/27 * 25/26 * 24/25 * 23/24 * 22/23 * 21/22 * 20/21 * 19/20 * 18/19 * 17/18 * 16/17 * 15/16 * 14/15 * 13/14 * 12/13 * 11/12 * 10/11 * 9/10 * 8/9 * 7/8 * 6/7 * 5/6 * 4/5 * 3/4 * 2/3 * 1/2
[1] 0.025

So the probability of winning is 2.5%.

We can try this in a simulation using R (you will need to source the code in order for it to run):

set.seed(1)
card.game <- function() {
  cards <- sample(1:40)
  pos <- seq(1:40)
  result <- NA
  for (i in 1:40) {
    if (i == 1) {
      current.pos <- sample(pos, 1)
      pos <- pos[which(pos != current.pos)]
      continue <- length(which(pos == cards[current.pos]))
      if (continue == 0) {
        result <- 0
        return(result)
      }
    } else {
      if (i == 40) {
        result <- 1
        return(result)
      }
      current.pos <- cards[current.pos]
      pos <- pos[which(pos != current.pos)]
      continue <- length(which(pos == cards[current.pos]))
      if (continue == 0) {
        result <- 0
        return(result)
      }
    }
  }
}

result <- NA
for (i in 1:100000) {
  result[i] <- card.game()
}
prop.table(table(result))

result
      0       1
0.97494 0.02506

And the result, as you can see, is very close to 2.5%, which confirms the calculations above.
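The long product above telescopes: every numerator cancels with the following denominator, leaving exactly 1/40. A quick check with exact arithmetic (an illustrative Python sketch, not from the original answer):

```python
from fractions import Fraction

# Surviving draw k has probability (40 - k) / (41 - k), for k = 1..39:
# 39/40 * 38/39 * ... * 1/2.  Everything cancels except the final 1/40.
p_win = Fraction(1)
for k in range(1, 40):
    p_win *= Fraction(40 - k, 41 - k)

print(p_win, float(p_win))  # 1/40 0.025
```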
How frequently am I going to lose this game?
As @Jonas pointed out, in the whole game there's only one losing card, and it is the one you choose at first. If you ever flip that, you lose; if that is the last card you flip, you win. Then a simpler approach to the question might be interpreting the game this way: you name one card; you shuffle your 40-card deck; you win if the card you named is the last in the deck (or the first, equivalently). From this perspective it is clear that the probability you are going to win is the probability for that certain card to occupy a certain position, which is clearly 1/40 = 0.025. So, to answer my question, you are going to lose with p = 0.975.
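This reinterpretation is easy to check directly: shuffle a 40-card deck many times and count how often the named card lands in the last position. A sketch in Python (not from the original answer):

```python
import random

def win_rate(n=40, trials=200_000, seed=1):
    rng = random.Random(seed)
    deck = list(range(n))
    wins = 0
    for _ in range(trials):
        rng.shuffle(deck)
        # win iff the named card (card 0, say) is the last one in the deck
        wins += deck[-1] == 0
    return wins / trials

print(win_rate())  # close to 1/40 = 0.025
```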
Multivariate normal distributions density function: What is the value if the determinant is zero?
Return value for your function

If you want to write code for a function that computes a multivariate normal density, then, in case the determinant of the var-covar matrix is zero, the return value is undefined because the var-covar matrix is singular (the inverse does not exist). Probably the best return value for your function, in case the var-covar matrix is singular, is NA.

Reasons for singular var-covar matrix

The reasons for singularity could be that one of your 'variables' is (as good as) constant such that its variance is (almost) zero, or it could also be that one of your variables is a linear combination of the other variables.
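Both failure modes are easy to reproduce numerically. A short illustration with NumPy (hypothetical data, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = rng.normal(size=200)

# Case 1: one variable is an exact linear combination of the others
z = 2 * x - y
cov1 = np.cov(np.vstack([x, y, z]))
print(np.linalg.det(cov1))  # ~0 up to rounding: the matrix is singular

# Case 2: one variable is constant, so its variance is zero
w = np.full(200, 3.0)
cov2 = np.cov(np.vstack([x, w]))
print(np.linalg.det(cov2))  # 0 (up to floating point)
```

A density routine could test the determinant against a small tolerance and return NaN (the analogue of R's NA) in either case.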
Multivariate normal distributions density function: What is the value if the determinant is zero?
Presume that the covariance matrix as specified is n by n, i.e., the Multivariate Normal random variable X is in n dimensions. Det(covariance matrix) = 0 if and only if the covariance matrix is singular. If the covariance matrix is singular, X does not have a density. There may exist a lower dimensional space (manifold) in which X is concentrated, such that the covariance matrix in that lower dimensional space is nonsingular, in which case X, when projected into that lower dimensional space, would have a density.

Edit: Pertaining to whuber's comment above, based on the thread title, I was presuming the OP wants to know (how to compute) the density. The cumulative probability distribution does exist, even if the covariance matrix is singular (has determinant = 0), and could be computed by integrating the lower dimensional density. I believe the only way that there could not be a lower dimensional manifold on which X was concentrated and which had nonsingular covariance is if X were concentrated at a single point (for example, a one dimensional Normal having variance = 0), in which case the cumulative distribution = 1 for any region containing that point, and 0 otherwise.
Multivariate normal distributions density function: What is the value if the determinant is zero?
Actually it can also be defined by the use of a generalized inverse and a "pseudo-determinant". The solution is not unique, though.
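One common version of this construction uses the Moore–Penrose pseudoinverse together with the pseudo-determinant (the product of the nonzero eigenvalues); the resulting expression is a density on the affine support of X, not on all of $\mathbb{R}^n$. A hedged NumPy sketch (function name and tolerance are illustrative, not from the original answer):

```python
import numpy as np

def mvn_density_pseudo(v, mean, cov, tol=1e-10):
    """Density of a (possibly singular) multivariate normal on its support."""
    cov = np.asarray(cov, float)
    eigvals, eigvecs = np.linalg.eigh(cov)   # cov is symmetric
    keep = eigvals > tol
    rank = int(keep.sum())
    pdet = np.prod(eigvals[keep])            # pseudo-determinant
    # pseudoinverse assembled from the nonzero eigenpairs
    pinv = (eigvecs[:, keep] / eigvals[keep]) @ eigvecs[:, keep].T
    diff = np.asarray(v, float) - np.asarray(mean, float)
    q = diff @ pinv @ diff
    return np.exp(-0.5 * q) / np.sqrt((2 * np.pi) ** rank * pdet)

# Rank-1 example: X2 = X1, so cov is singular with eigenvalues {0, 2}
cov = np.array([[1.0, 1.0], [1.0, 1.0]])
print(mvn_density_pseudo([0, 0], [0, 0], cov))  # 1/sqrt(4*pi) ≈ 0.2821
```

For a nonsingular covariance this reduces to the usual multivariate normal density, which is one way to see that the convention is at least consistent.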
What is the name of this kind of visualization, with arrows showing count of different subsets? [duplicate]
It is called a Sankey Diagram. A notable example is Charles Joseph Minard's visualization of Napoleon's invasion of Russia. (Edit) Also of interest may be these questions: What's a good tool to create Sankey diagrams? What is the proper name for a "river plot" visualisation
How to prove Berkson's Fallacy?
$$ \begin{align} P( A \mid A \cup B) &= \frac{P(A \cap (A \cup B))}{P(A \cup B)} \\ &= \frac{P(A)}{P(A \cup B)} \\ &\geq P(A) , \end{align} $$ since $A \cap (A \cup B) = A$ and $P(A \cup B) \leq 1$.
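The inequality is easy to confirm numerically by conditioning a simulation on $A \cup B$ (an illustrative Python check, not part of the original answer; the probabilities 0.3 and 0.4 are arbitrary and the events are independent):

```python
import random

rng = random.Random(0)
n = 200_000
a_total = union_total = a_in_union = 0
for _ in range(n):
    a = rng.random() < 0.3
    b = rng.random() < 0.4        # independent of A
    a_total += a
    if a or b:                    # condition on the union A ∪ B
        union_total += 1
        a_in_union += a

print(a_total / n)                # P(A) ≈ 0.30
print(a_in_union / union_total)   # P(A | A ∪ B) ≈ 0.30 / 0.58 ≈ 0.52
```

Even though A and B are independent here, conditioning on the union inflates the apparent frequency of A, which is exactly the selection effect behind Berkson's fallacy.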
How to prove Berkson's Fallacy?
I've mostly heard of it as Berkson's paradox, and it refers to the spurious generation of associations when you are comparing an exposure and an outcome and you sample only individuals with either the exposure or the outcome. Suppose the population level association is: $$ \begin{array}{c|ccc} & D & \bar{D} & \\ \hline E & n_{11} & n_{12} & n_{1.} \\ \bar{E} & n_{21} & n_{22} & n_{2.} \\ & n_{.1} & n_{.2} & \\ \end{array} $$ Then the relative risk for disease is given by: $$ RR = \frac{n_{11} / n_{1.} }{n_{21} /n_{2.}}$$ However, in your sample you obtain the following: $$ \begin{array}{c|ccc} & D & \bar{D} & \\ \hline E & n_{11} & n_{12} & n_{1.} \\ \bar{E} & n_{21} & 0 & n_{2.} - n_{22}\\ & n_{.1} & n_{.2} - n_{22} & \\ \end{array} $$ with the cell counts and margins proportional to the "population" above WLOG. The estimated relative risk becomes: $$ RR_{Berkson} = \frac{n_{11}/n_{1.}}{n_{21} / (n_{2.} - n_{22})} $$ which is biased except when $n_{22} = 0$. In a less biostatistical fashion, assume $P(A \cup B ) \neq 1$; then $P(A | A \cup B) = \frac{P(A \cap (A \cup B))}{P(A \cup B)} = \frac{P(A)}{P(A \cup B)}$ and we're done by assumption.
How to prove Berkson's Fallacy?
Yes, it is true that $P(A|A \cup B) \ge P(A)$. I find that it helps to think of the two quantities as fractions. Then the whole thing follows from these two facts: The numerator of both probabilities consists of the number of $A$s present in the population. This is trivial in the case of the unconditional probability $P(A)$, and guaranteed by having the $A$ appear on both sides of the condition in the conditional probability $P(A|A \cup B)$. In other words, by construction, the conditional "restriction" can't throw out any $A$s. The denominator of $P(A)$ is the sum of the number of $A$s in the population, plus the number of $B$s in it, plus the number of $C$s, $D$s, and whatever else happens to be in it. However, the denominator of $P(A|A \cup B)$ is merely the number of $A$s, plus the number of $B$s; the condition rules everything else out. Since these are all non-negative numbers, the denominator of $P(A)$ must be at least as large as the denominator of $P(A | A \cup B)$. Thus, we have $$\begin{align} P(A) &= \frac{||A||}{||A|| + ||\textrm{ Everything else }||}\\ P(A|A\cup B) &=\frac{||A||}{||A|| + ||B|| - ||A \cap B||} \end{align}$$ Since the numerators are the same, but $P(A)$'s denominator is greater than or equal to $P(A|A \cup B)$'s, it must be true that $P(A|A \cup B) \ge P(A). \blacksquare$ People call this a paradox because it is true even if $A$ and $B$ are unrelated or even mutually exclusive. Suppose $A$ is the event that a person plays professional basketball and $B$ is the event that the person is terrible at basketball. The inequality still holds because the condition rules out people who are fair-to-middling at basketball (and don't happen to play for the Knicks), so the denominator is still smaller than the overall population. More generally, it is true that the denominator of a conditional probability will always be less than or equal to the denominator of an unconditional probability.
Should I adjust p-values when investigating an ANOVA interaction?
The next question I'd like to answer is "how many of the 30 subjects could reliably discriminate between the 3 headphone types"

Yes, using a $p<0.05$ criterion will lead to several false positives expected by chance alone. You should either use some formal method of multiple testing adjustment, or perhaps simply lower the cutoff to some more conservative but still conventional value, such as e.g. $p<0.01$ or even $p<0.001$. In addition, or even instead, I would suggest looking at the $p$-values for all 30 subjects. Then instead of writing that e.g. "23 out of 30 subjects could reliably discriminate between headphone types" you will be able to say something like "20 subjects could clearly discriminate between headphone types ($p<0.001$), 7 subjects clearly could not ($p>0.1$), and 3 subjects fell somewhere in between". Finally, note that even a very small $p$-value does not mean that the discrimination is "reliable". To me, "reliable" refers rather to the effect size; e.g., I would say that a subject who can name the headphone type with, say, 90% accuracy or above is reliable. But with 12 repetitions for each type, you can get a significant (even highly significant) but still very small difference, corresponding e.g. to 40% accuracy. It can well be above chance (33%), but is hardly reliable.
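To make the multiple-testing point concrete: with 30 independent tests at $\alpha = 0.05$, the expected number of false positives under the global null, the chance of seeing at least one, and the corresponding Bonferroni per-test cutoff work out as follows (a quick back-of-the-envelope calculation, not from the original answer):

```python
n_tests, alpha = 30, 0.05

expected_fp = n_tests * alpha            # expected false positives: 1.5
p_any_fp = 1 - (1 - alpha) ** n_tests    # P(at least one false positive)
bonferroni = alpha / n_tests             # Bonferroni per-test cutoff

print(expected_fp)               # 1.5
print(round(p_any_fp, 3))        # ~0.785
print(round(bonferroni, 5))      # ~0.00167
```

So with no adjustment, one or two "discriminating" subjects are expected by chance alone, and the chance of at least one false positive is close to 80%, which is why some correction (or a much stricter cutoff) is needed.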
Should I adjust p-values when investigating an ANOVA interaction?
The next question I'd like to answer is "how many of the 30 subjects could reliably discriminate between the 3 headphone types" Yes, using $p<0.05$ criterion will lead to several false positives expe
Should I adjust p-values when investigating an ANOVA interaction? The next question I'd like to answer is "how many of the 30 subjects could reliably discriminate between the 3 headphone types" Yes, using the $p<0.05$ criterion will lead to several false positives expected by chance alone. You should either use some formal method of multiple-testing adjustment, or perhaps simply lower the cutoff to some more conservative but still conventional value, such as e.g. $p<0.01$ or even $p<0.001$. In addition, or even instead, I would suggest looking at the $p$-values for all 30 subjects. Then, instead of writing that e.g. "23 out of 30 subjects could reliably discriminate between headphone types", you will be able to say something like "20 subjects could clearly discriminate between headphone types ($p<0.001$), 7 subjects clearly could not ($p>0.1$), and 3 subjects fell somewhere in between". Finally, note that even a very small $p$-value does not mean that the discrimination is "reliable". To me, "reliable" refers rather to the effect size; e.g. I would say that a subject who can name the headphone type with, say, above 90% accuracy is reliable. But with 12 repetitions for each type, you can get a significant (even highly significant) but still very small difference, corresponding e.g. to 40% accuracy. It can well be above chance (33%), but is hardly reliable.
Should I adjust p-values when investigating an ANOVA interaction? The next question I'd like to answer is "how many of the 30 subjects could reliably discriminate between the 3 headphone types" Yes, using $p<0.05$ criterion will lead to several false positives expe
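The per-subject testing plus multiple-testing adjustment described above can be sketched in Python. This is illustrative, not from the original post: the design (36 forced choices per subject among 3 types, so chance accuracy is 1/3) and the hit counts are assumptions, and a simple Bonferroni correction stands in for whatever formal adjustment one might prefer:

```python
import math

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): an exact one-sided p-value."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def bonferroni(pvals):
    """Bonferroni-adjusted p-values: the simplest multiple-testing correction."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

# Hypothetical data: each subject's number of correct identifications out of 36.
correct = [30, 14, 25, 12, 33]
raw = [binom_sf(k, 36, 1 / 3) for k in correct]
adj = bonferroni(raw)
reliable = [p < 0.05 for p in adj]
```

Subjects near the chance level of 12/36 survive neither the raw nor the adjusted cutoff, while those far above it remain significant even after correction; a Holm or Benjamini-Hochberg procedure would be a less conservative drop-in replacement.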
45,767
Should I adjust p-values when investigating an ANOVA interaction?
This may not fully answer your question, but you might look into mixed-effects models, which allow you to control for subject-level variation ("Joe always estimates high") without actually estimating coefficients for each subject. This lets you get at the original question you posed: "Do the headphones differ in subjective quality?" without running thirty ANOVAs and trying to compare the results.
Should I adjust p-values when investigating an ANOVA interaction?
This may not fully answer your question, but you might look into mixed-effects models, which allow you to control for subject-level variation ("Joe always estimates high") without actually estimating
Should I adjust p-values when investigating an ANOVA interaction? This may not fully answer your question, but you might look into mixed-effects models, which allow you to control for subject-level variation ("Joe always estimates high") without actually estimating coefficients for each subject. This lets you get at the original question you posed: "Do the headphones differ in subjective quality?" without running thirty ANOVAs and trying to compare the results.
Should I adjust p-values when investigating an ANOVA interaction? This may not fully answer your question, but you might look into mixed-effects models, which allow you to control for subject-level variation ("Joe always estimates high") without actually estimating
45,768
Should I adjust p-values when investigating an ANOVA interaction?
Normally subjects are kept in the Error term to reduce the effect of subject variability, but your aim is to determine which subjects are significantly different from the others. Graphics can help identify such subjects. For each subject you can determine the range of ratings given (max rating - min rating) and plot them: Then you can determine if there are any outliers, i.e. those who lie below the 2.5th or above the 97.5th percentile. Also, plots from regression analysis can be used. If the data is arranged as follows (note the data is different from that in the above plot): subject variable value 1 A headph1 4 2 B headph1 5 3 C headph1 6 4 D headph1 5 5 E headph1 4 6 F headph1 2 mod = lm(value ~ variable + subject, data = mydata) Residual plots will show which readings are outliers: plot(mod) Note that outliers are numbered on the plots above. > library(car) > crPlots(mod) Note that subject F clearly stands out here.
Should I adjust p-values when investigating an ANOVA interaction?
Normally subjects are kept in the Error term to reduce the effect of subject variability but your aim is to determine which subjects are significantly different from others. You can take help of graph
Should I adjust p-values when investigating an ANOVA interaction? Normally subjects are kept in the Error term to reduce the effect of subject variability, but your aim is to determine which subjects are significantly different from the others. Graphics can help identify such subjects. For each subject you can determine the range of ratings given (max rating - min rating) and plot them: Then you can determine if there are any outliers, i.e. those who lie below the 2.5th or above the 97.5th percentile. Also, plots from regression analysis can be used. If the data is arranged as follows (note the data is different from that in the above plot): subject variable value 1 A headph1 4 2 B headph1 5 3 C headph1 6 4 D headph1 5 5 E headph1 4 6 F headph1 2 mod = lm(value ~ variable + subject, data = mydata) Residual plots will show which readings are outliers: plot(mod) Note that outliers are numbered on the plots above. > library(car) > crPlots(mod) Note that subject F clearly stands out here.
Should I adjust p-values when investigating an ANOVA interaction? Normally subjects are kept in the Error term to reduce the effect of subject variability but your aim is to determine which subjects are significantly different from others. You can take help of graph
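The range-based screen described above is trivial to sketch outside of R. The Python sketch below is illustrative only; the ratings are invented, and because a 2.5/97.5 percentile cut is too coarse with only a handful of subjects, a simple mean + 2*SD rule is used instead (an assumption, not the original answer's method):

```python
import statistics

# Invented per-subject ratings; subject F swings wildly.
ratings = {
    "A": [4, 5, 4], "B": [5, 5, 6], "C": [6, 5, 5],
    "D": [5, 4, 5], "E": [4, 5, 4], "F": [2, 6, 9],
}

# Rating range (max - min) for each subject, then flag extreme ranges.
ranges = {s: max(v) - min(v) for s, v in ratings.items()}
mu = statistics.mean(ranges.values())
sd = statistics.stdev(ranges.values())
outliers = [s for s, r in ranges.items() if r > mu + 2 * sd]
```

With more subjects the percentile cut from the answer becomes meaningful and can replace the mean + 2*SD rule directly.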
45,769
Define own noninformative prior in stan
You can define a proper or improper prior in the Stan language using the increment_log_prob() function, which will add its input to the accumulated log-posterior value that is used in the Metropolis step to decide whether to accept or reject a proposal for the parameters. In your example, the model block would need to include the new line increment_log_prob(-log(sigmaSquared)); However, some people (e.g. Jaynes) argue that the Jeffreys prior is only appropriate for scale parameters, in which case you could reparameterize your model in terms of the standard deviation (sigmaX) rather than the variance (sigmaSquared). Also, what I assume is your attempt to draw from the posterior predictive distribution of x should be in a generated quantities block. Putting all three pieces together, it would look like: data { int<lower=0> n; // obs in group x real x[n]; } parameters { real muX; real<lower=0> sigmaX; } model { x ~ normal(muX, sigmaX); increment_log_prob(-log(sigmaX)); } generated quantities { real postPred; postPred <- normal_rng(muX, sigmaX); }
Define own noninformative prior in stan
You can define a proper or improper prior in the Stan language using the increment_log_prob() function, which will add its input to the accumulated log-posterior value that is used in the Metropolis s
Define own noninformative prior in stan You can define a proper or improper prior in the Stan language using the increment_log_prob() function, which will add its input to the accumulated log-posterior value that is used in the Metropolis step to decide whether to accept or reject a proposal for the parameters. In your example, the model block would need to include the new line increment_log_prob(-log(sigmaSquared)); However, some people (e.g. Jaynes) argue that the Jeffreys prior is only appropriate for scale parameters, in which case you could reparameterize your model in terms of the standard deviation (sigmaX) rather than the variance (sigmaSquared). Also, what I assume is your attempt to draw from the posterior predictive distribution of x should be in a generated quantities block. Putting all three pieces together, it would look like: data { int<lower=0> n; // obs in group x real x[n]; } parameters { real muX; real<lower=0> sigmaX; } model { x ~ normal(muX, sigmaX); increment_log_prob(-log(sigmaX)); } generated quantities { real postPred; postPred <- normal_rng(muX, sigmaX); }
Define own noninformative prior in stan You can define a proper or improper prior in the Stan language using the increment_log_prob() function, which will add its input to the accumulated log-posterior value that is used in the Metropolis s
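The effect of increment_log_prob(-log(sigma)) is simply to add the Jeffreys prior term to the log posterior. The pure-Python Metropolis sketch below makes that mechanic explicit; it is an illustration of the idea, not how Stan itself samples (Stan uses HMC/NUTS), and the data and tuning constants are invented:

```python
import math
import random

def log_posterior(mu, sigma, x):
    """Normal log-likelihood plus the Jeffreys prior term -log(sigma)."""
    if sigma <= 0:
        return float("-inf")
    ll = sum(-0.5 * math.log(2 * math.pi) - math.log(sigma)
             - 0.5 * ((xi - mu) / sigma) ** 2 for xi in x)
    return ll - math.log(sigma)          # the increment_log_prob(-log(sigma)) term

def metropolis(x, n_iter=5000, seed=0):
    """Random-walk Metropolis over (mu, sigma); returns all visited states."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0
    lp = log_posterior(mu, sigma, x)
    samples = []
    for _ in range(n_iter):
        mu_p = mu + rng.gauss(0, 0.5)
        sigma_p = sigma + rng.gauss(0, 0.5)   # negative proposals get -inf and are rejected
        lp_p = log_posterior(mu_p, sigma_p, x)
        if math.log(rng.random()) < lp_p - lp:
            mu, sigma, lp = mu_p, sigma_p, lp_p
        samples.append((mu, sigma))
    return samples

data = [2.1, 1.9, 2.4, 2.0, 1.8, 2.2, 2.3, 1.7]   # invented observations
draws = metropolis(data)
post_mu = sum(m for m, _ in draws[1000:]) / len(draws[1000:])   # burn-in discarded
```

The posterior mean of mu lands near the sample mean, exactly as the Stan model above would produce; swapping the `- math.log(sigma)` line for a different expression changes the prior.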
45,770
svm functional margin and geometric margin
I think that the proper way to write the functional margin is $$ \hat{\gamma}_i = y_i(w^Tx_i + b), $$ while the geometric margin is simply $$ \gamma_i = \frac{\hat{\gamma}_i}{||w||}. $$ You can find the answer to your first question in here: [...] the functional margin would give you a number but without a reference you can't tell if the point is actually far away or close to the decision plane. The geometric margin is telling you not only if the point is properly classified or not, but the magnitude of that distance in term of units of |w|. Regarding the second question, see what happens with the Perceptron algorithm. It tries to build a hyperplane between linearly separable data, the same as SVM, but it could be any hyperplane. So depending on the training data you used, you could end up with very different hyperplanes, ergo very different predictions in the presence of new data. SVM tries to avoid that by finding the optimal hyperplane; that's why the margin has to be the widest possible, to reduce the chance of misclassification in the presence of new data.
svm functional margin and geometric margin
I think that the proper way to write the functional margin is $$ \hat{\gamma}_i = y_i(w^Tx_i + b), $$ while the geometric margin is simply $$ \gamma_i = \frac{\hat{\gamma}_i}{||w||}. $$ You can find t
svm functional margin and geometric margin I think that the proper way to write the functional margin is $$ \hat{\gamma}_i = y_i(w^Tx_i + b), $$ while the geometric margin is simply $$ \gamma_i = \frac{\hat{\gamma}_i}{||w||}. $$ You can find the answer to your first question in here: [...] the functional margin would give you a number but without a reference you can't tell if the point is actually far away or close to the decision plane. The geometric margin is telling you not only if the point is properly classified or not, but the magnitude of that distance in term of units of |w|. Regarding the second question, see what happens to the Perceptron algorithm. It tries to build a hyperplane between linearly separable data the same as SVM, but it could be any hyperplane. So depending on the training data you used you could have very different hyperplanes, ergo, very different predictions in presence of new data. SVM tries to avoid that by finding the optimal hyperplane, that's why the margin has to be the widest possible, to reduce the chance of misclassification in presence of new data.
svm functional margin and geometric margin I think that the proper way to write the functional margin is $$ \hat{\gamma}_i = y_i(w^Tx_i + b), $$ while the geometric margin is simply $$ \gamma_i = \frac{\hat{\gamma}_i}{||w||}. $$ You can find t
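The two margin formulas above are easy to verify directly. In the Python sketch below (the hyperplane and point are arbitrary choices, not from the original post), rescaling (w, b) by a constant changes the functional margin but leaves the geometric margin, and the decision boundary itself, untouched:

```python
import math

def functional_margin(w, b, x, y):
    """gamma_hat = y * (w . x + b)"""
    return y * (sum(wi * xi for wi, xi in zip(w, x)) + b)

def geometric_margin(w, b, x, y):
    """gamma = gamma_hat / ||w||"""
    norm = math.sqrt(sum(wi * wi for wi in w))
    return functional_margin(w, b, x, y) / norm

w, b = [3.0, 4.0], -2.0          # arbitrary hyperplane, ||w|| = 5
x, y = [2.0, 1.0], 1             # a correctly classified point

f1 = functional_margin(w, b, x, y)
g1 = geometric_margin(w, b, x, y)
# Rescale (w, b) by 10: same boundary, 10x functional margin, same geometric margin.
f2 = functional_margin([30.0, 40.0], -20.0, x, y)
g2 = geometric_margin([30.0, 40.0], -20.0, x, y)
```

This invariance is why the optimization is posed in terms of the geometric margin: the functional margin can be inflated arbitrarily without moving the boundary at all.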
45,771
svm functional margin and geometric margin
but why it says that one should find the maximum geometrical margin? Because the geometric margin is invariant to the scaling of the vector orthogonal to the hyperplane. Please see the answer here. should it not be also to find the maximum of the functional margin from the beginning? Since scaling the parameters w and b results in nothing meaningful, and the parameters are scaled in the same way as the functional margin, we can either arbitrarily make ||w|| equal to 1 (resulting in maximizing the geometric margin) or rescale the parameters so that the functional margin is 1 (and then minimize ||w||). The solution is therefore to transform maximizing the geometric margin into minimizing the magnitude of the vector orthogonal to the hyperplane: $min_{\gamma, w, b} \frac{1}{2}||w||^2$ subject to $y^{(i)}(w^Tx^{(i)}+b)\ge 1$, which is optimizable. It is different from finding the maximum of $\frac{the\_functional\_margin}{||w||}$ with the constraint $y^{(i)}(w^Tx^{(i)}+b)\ge the\_functional\_margin$, which is normally impossible to optimize. For what I know that geometrical margin is only the functional margin normalized by ||w|| to consider the distances between the points to the decision boundary? Let's say $\gamma^{(i)}$ is a geometric margin. w/||w|| is a unit-length vector orthogonal to the hyperplane. A represents $x^{(i)}$; then point B is given by $x^{(i)}-\gamma^{(i)} \cdot w/||w||$. Since B lies on the decision boundary, and $w^Tx+b=0$, then $$w^T\left(x^{(i)}-\gamma^{(i)}\frac{w}{||w||}\right)+b=0$$ Solving for $\gamma^{(i)}$ we obtain: $$\gamma^{(i)}=\frac{w^Tx^{(i)}+b}{||w||}=\left(\frac{w}{||w||}\right)^Tx^{(i)}+\frac{b}{||w||}$$ As you can see, the geometric margin is just the functional margin normalized by ||w||, so it measures the distance from a point to the decision boundary. Another question ... why is it better to find a wide margin instead that a narrow one? The wider the margin, the more confident we are that the hyperplane is tuned well. The more distant a point is from the hyperplane, the more confident we can be that the point is assigned to the right group; otherwise the hyperplane need only move a little for the point to fall on the other side of it and hence be grouped wrongly. reference: http://www.stanford.edu/class/cs229/notes/cs229-notes3.pdf
svm functional margin and geometric margin
but why it says that one should find the maximum geometrical margin? Because the geometric margin is invariant to the scaling of the vector orthogonal to the hyperplane. Please see the answer her
svm functional margin and geometric margin but why it says that one should find the maximum geometrical margin? Because the geometric margin is invariant to the scaling of the vector orthogonal to the hyperplane. Please see the answer here. should it not be also to find the maximum of the functional margin from the beginning? Since scaling the parameters w and b results in nothing meaningful, and the parameters are scaled in the same way as the functional margin, we can either arbitrarily make ||w|| equal to 1 (resulting in maximizing the geometric margin) or rescale the parameters so that the functional margin is 1 (and then minimize ||w||). The solution is therefore to transform maximizing the geometric margin into minimizing the magnitude of the vector orthogonal to the hyperplane: $min_{\gamma, w, b} \frac{1}{2}||w||^2$ subject to $y^{(i)}(w^Tx^{(i)}+b)\ge 1$, which is optimizable. It is different from finding the maximum of $\frac{the\_functional\_margin}{||w||}$ with the constraint $y^{(i)}(w^Tx^{(i)}+b)\ge the\_functional\_margin$, which is normally impossible to optimize. For what I know that geometrical margin is only the functional margin normalized by ||w|| to consider the distances between the points to the decision boundary? Let's say $\gamma^{(i)}$ is a geometric margin. w/||w|| is a unit-length vector orthogonal to the hyperplane. A represents $x^{(i)}$; then point B is given by $x^{(i)}-\gamma^{(i)} \cdot w/||w||$. Since B lies on the decision boundary, and $w^Tx+b=0$, then $$w^T\left(x^{(i)}-\gamma^{(i)}\frac{w}{||w||}\right)+b=0$$ Solving for $\gamma^{(i)}$ we obtain: $$\gamma^{(i)}=\frac{w^Tx^{(i)}+b}{||w||}=\left(\frac{w}{||w||}\right)^Tx^{(i)}+\frac{b}{||w||}$$ As you can see, the geometric margin is just the functional margin normalized by ||w||, so it measures the distance from a point to the decision boundary. Another question ... why is it better to find a wide margin instead that a narrow one? The wider the margin, the more confident we are that the hyperplane is tuned well. The more distant a point is from the hyperplane, the more confident we can be that the point is assigned to the right group; otherwise the hyperplane need only move a little for the point to fall on the other side of it and hence be grouped wrongly. reference: http://www.stanford.edu/class/cs229/notes/cs229-notes3.pdf
svm functional margin and geometric margin but why it says that one should find the maximum geometrical margin? Because the geometric margin is invariant to the scaling of the vector orthogonal to the hyperplane. Please see the answer her
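The projection step of the derivation above (point B = x - γ·w/||w|| lies on the boundary) can be checked numerically. A short sketch with an arbitrary, hypothetical hyperplane and point:

```python
import math

w, b = [3.0, 4.0], -2.0                   # hypothetical hyperplane w . x + b = 0
x = [2.0, 1.0]                            # point A

norm = math.sqrt(sum(wi * wi for wi in w))                    # ||w|| = 5
gamma = (sum(wi * xi for wi, xi in zip(w, x)) + b) / norm     # geometric margin of A
unit = [wi / norm for wi in w]                                # unit normal w / ||w||

# Point B = A - gamma * w/||w|| should satisfy w . B + b = 0 exactly.
B = [xi - gamma * ui for xi, ui in zip(x, unit)]
residual = sum(wi * bi for wi, bi in zip(w, B)) + b
```

The residual is zero up to floating-point error, confirming that gamma is exactly the signed distance from the point to the boundary.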
45,772
svm functional margin and geometric margin
Regarding the following question: but why it says that one should find the maximum geometrical margin? should it not be also to find the maximum of the functional margin from the beginning? You are right that our goal is to maximize both the geometric and the functional margin, but the confusion is that the formula above is for a set of points; ideally we want to maximize the margin width (the functional margin from the decision boundary (separation boundary) to the closest points, the support vectors, belonging to the different classes -- consider 2 classes only, to simplify the explanation here). In this formula we use the minimum to find the support vectors (the points closest to the decision boundary), and once we have found these support vectors (by solving this minimizing optimization problem), we try to maximize the functional margin (that is, we choose the decision boundary that maximizes the margin to them). Please confirm whether this explanation clarifies some of your doubts or just makes it worse (if the latter, my apologies)
svm functional margin and geometric margin
Regarding following question but why it says that one should find the maximum geometrical margin? should it not be also to find the maximum of the functional margin from the beginning? You are very
svm functional margin and geometric margin Regarding the following question: but why it says that one should find the maximum geometrical margin? should it not be also to find the maximum of the functional margin from the beginning? You are right that our goal is to maximize both the geometric and the functional margin, but the confusion is that the formula above is for a set of points; ideally we want to maximize the margin width (the functional margin from the decision boundary (separation boundary) to the closest points, the support vectors, belonging to the different classes -- consider 2 classes only, to simplify the explanation here). In this formula we use the minimum to find the support vectors (the points closest to the decision boundary), and once we have found these support vectors (by solving this minimizing optimization problem), we try to maximize the functional margin (that is, we choose the decision boundary that maximizes the margin to them). Please confirm whether this explanation clarifies some of your doubts or just makes it worse (if the latter, my apologies)
svm functional margin and geometric margin Regarding following question but why it says that one should find the maximum geometrical margin? should it not be also to find the maximum of the functional margin from the beginning? You are very
45,773
svm functional margin and geometric margin
Not going into unnecessary complications about this concept, but in the most simple terms here is how one can think of and relate functional and geometric margin. Think of the functional margin -- represented as 𝛾̂ -- as a measure of correctness of a classification for a data unit. For a data unit x with parameters w and b and given class y = 1, the functional margin 𝛾̂ = y(wx + b) is positive only when y and (wx + b) are of the same sign -- which is to say the data unit is correctly classified. But we do not just rely on whether we are correct or not in this classification. We need to know how correct we are, or what degree of confidence we have in this classification. For this we need a different measure, called the geometric margin -- represented as 𝛾 -- which can be expressed as below: 𝛾 = 𝛾̂ / ||𝑤|| So, the geometric margin 𝛾 is a scaled version of the functional margin 𝛾̂. If ||w|| == 1, then the geometric margin is the same as the functional margin -- which is to say we are as confident in the correctness of this classification as we are correct in classifying a data unit to a particular class. This scaling by ||w|| gives us the measure of confidence in our correctness, and we always try to maximise this confidence. The sign of the functional margin acts like a binary or boolean valued variable: whether we have correctly classified a particular data unit or not. So, by itself this cannot be meaningfully maximised. However, the geometric margin for the same data unit gives a magnitude to our confidence, and tells us how correct we are. So, this we can maximise. And we aim for a larger margin through the geometric margin because the wider the margin, the more confidence we have in our classification. 
As an analogy, say a wider road (larger margin => higher geometric margin) gives higher confidence to drive much faster as it lessens the chance of hitting any pedestrian or trees (our data units in the training set), but on a narrower road (smaller margin => smaller geometric margin), one has to be a lot more cautious to not hit (lesser confidence) any pedestrian or trees. So, we always desire wider roads (larger margin), and that's why we aim to maximise the geometric margin.
svm functional margin and geometric margin
Not going into unnecessary complications about this concept, but in the most simple terms here is how one can think of and relate functional and geometric margin. Think of functional margin -- represe
svm functional margin and geometric margin Not going into unnecessary complications about this concept, but in the most simple terms here is how one can think of and relate functional and geometric margin. Think of the functional margin -- represented as 𝛾̂ -- as a measure of correctness of a classification for a data unit. For a data unit x with parameters w and b and given class y = 1, the functional margin 𝛾̂ = y(wx + b) is positive only when y and (wx + b) are of the same sign -- which is to say the data unit is correctly classified. But we do not just rely on whether we are correct or not in this classification. We need to know how correct we are, or what degree of confidence we have in this classification. For this we need a different measure, called the geometric margin -- represented as 𝛾 -- which can be expressed as below: 𝛾 = 𝛾̂ / ||𝑤|| So, the geometric margin 𝛾 is a scaled version of the functional margin 𝛾̂. If ||w|| == 1, then the geometric margin is the same as the functional margin -- which is to say we are as confident in the correctness of this classification as we are correct in classifying a data unit to a particular class. This scaling by ||w|| gives us the measure of confidence in our correctness, and we always try to maximise this confidence. The sign of the functional margin acts like a binary or boolean valued variable: whether we have correctly classified a particular data unit or not. So, by itself this cannot be meaningfully maximised. However, the geometric margin for the same data unit gives a magnitude to our confidence, and tells us how correct we are. So, this we can maximise. And we aim for a larger margin through the geometric margin because the wider the margin, the more confidence we have in our classification. 
As an analogy, say a wider road (larger margin => higher geometric margin) gives higher confidence to drive much faster as it lessens the chance of hitting any pedestrian or trees (our data units in the training set), but on a narrower road (smaller margin => smaller geometric margin), one has to be a lot more cautious to not hit (lesser confidence) any pedestrian or trees. So, we always desire wider roads (larger margin), and that's why we aim to maximise the geometric margin.
svm functional margin and geometric margin Not going into unnecessary complications about this concept, but in the most simple terms here is how one can think of and relate functional and geometric margin. Think of functional margin -- represe
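The correctness-vs-confidence distinction drawn above can be made concrete. In the sketch below (the hyperplane and points are invented for illustration), both points are correctly classified, i.e. both have positive functional margin, but their geometric margins, and hence our confidence in them, differ greatly:

```python
import math

def margins(w, b, x, y):
    """Return (functional, geometric) margin of point (x, y) for hyperplane (w, b)."""
    f = y * (sum(wi * xi for wi, xi in zip(w, x)) + b)
    return f, f / math.sqrt(sum(wi * wi for wi in w))

w, b = [1.0, 0.0], 0.0            # decision boundary is the vertical axis
# Both points have class y = 1 and positive functional margin (correct),
# but the second sits much further from the boundary (more confident).
f_near, g_near = margins(w, b, [0.1, 3.0], 1)
f_far,  g_far  = margins(w, b, [5.0, -2.0], 1)
```

The sign answers "correct or not"; the geometric magnitude answers "by how much", which is the quantity the wide-margin objective actually maximizes.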
45,774
Homogeneous vs. Inhomogeneous Poisson point process
A homogeneous Poisson point process is also called complete spatial randomness and is described by a single parameter called the intensity (the expected number of points per unit area). It distributes a random number of points completely randomly and uniformly in any given set. The numbers of points falling in two disjoint sets are independent random variables. An inhomogeneous Poisson point process also has independence between disjoint sets, but the points are not uniformly distributed. Rather, the points are unevenly distributed according to the intensity function of the process. If you are only deciding between these two models, you basically need to decide whether there are signs of inhomogeneous intensity in the data. There are many tests for this. A simple one is quadrat counting, where you divide your study region into disjoint subsets of equal area and use a chi-square test statistic to judge whether the count distribution is significantly non-homogeneous. This requires you to choose the size of the subsets, which says something about the spatial scale at which you are looking for inhomogeneity. For point patterns in 2D, the R package spatstat has facilities to do this.
Homogeneous vs. Inhomogeneous Poisson point process
A homogeneous Poisson point process is also called complete spatial randomness described by a single parameter called the intensity (number of points per unit area). It distributes a random number of
Homogeneous vs. Inhomogeneous Poisson point process A homogeneous Poisson point process is also called complete spatial randomness and is described by a single parameter called the intensity (the expected number of points per unit area). It distributes a random number of points completely randomly and uniformly in any given set. The numbers of points falling in two disjoint sets are independent random variables. An inhomogeneous Poisson point process also has independence between disjoint sets, but the points are not uniformly distributed. Rather, the points are unevenly distributed according to the intensity function of the process. If you are only deciding between these two models, you basically need to decide whether there are signs of inhomogeneous intensity in the data. There are many tests for this. A simple one is quadrat counting, where you divide your study region into disjoint subsets of equal area and use a chi-square test statistic to judge whether the count distribution is significantly non-homogeneous. This requires you to choose the size of the subsets, which says something about the spatial scale at which you are looking for inhomogeneity. For point patterns in 2D, the R package spatstat has facilities to do this.
Homogeneous vs. Inhomogeneous Poisson point process A homogeneous Poisson point process is also called complete spatial randomness described by a single parameter called the intensity (number of points per unit area). It distributes a random number of
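The quadrat-counting test sketched above can be written without spatstat. The Python sketch below is illustrative only (the grid size, point counts, and the deliberately clustered pattern are arbitrary choices): it bins points on the unit square and computes the chi-square statistic against equal expected counts per cell:

```python
import random

def quadrat_chisq(points, nx, ny):
    """Chi-square statistic for quadrat counts on the unit square split into
    nx * ny equal cells; under homogeneity every cell has the same expectation."""
    counts = [0] * (nx * ny)
    for x, y in points:
        i = min(int(x * nx), nx - 1)
        j = min(int(y * ny), ny - 1)
        counts[j * nx + i] += 1
    expected = len(points) / (nx * ny)
    return sum((c - expected) ** 2 / expected for c in counts)

rng = random.Random(42)
# Homogeneous pattern: uniform points in the unit square.
homog = [(rng.random(), rng.random()) for _ in range(400)]
# Inhomogeneous pattern: all points piled into the lower-left quarter.
inhom = [(rng.random() * 0.5, rng.random() * 0.5) for _ in range(400)]

stat_h = quadrat_chisq(homog, 4, 4)   # roughly chi-square with 15 df under CSR
stat_i = quadrat_chisq(inhom, 4, 4)   # far out in the tail
```

Comparing the statistic to the chi-square distribution with nx*ny - 1 degrees of freedom gives the p-value; spatstat's quadrat.test does the same computation with more bookkeeping.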
45,775
Why is the curse of dimensionality also called the empty space phenomenon?
I don't think the curse of dimensionality has anything to do with correlation, or at least not in my understanding. The curse is the notion that a local neighborhood of a point in a high dimensional space is not really so local - the number of data points it takes to uniformly "fill" a neighborhood of a point with a fixed volume (think a unit cube centered at that point) grows exponentially with the dimension. Conversely, if you have a fixed number of points and increase the dimension of the space that they reside in, you will very quickly find yourself in the situation where most of your space is empty. This comes up, for example, in $k$ nearest neighbors classification. Here we attempt to classify a new point by searching for the $k$ training points closest to it. In small dimensions, which is what people have concrete experience with and hence intuition for, these $k$ points all tend to be close by, as the entire space is rather densely populated with training examples. But in large dimensions the intuition fails - the $k$ nearest points tend to be quite far away, with much empty space in between. Suppose the dimension of the input space is 100 and we have a huge training set of a trillion (10^{12}) examples, then the examples will cover only a fraction of about 10^{-18} of the input space. Can anyone explain to me why is that? Here's a short explanation of what that may be getting at. Let's suppose all of our features are binary; this simplifies the math but is not essential. Then there are $2^{100}$ possible combinations of features. Now $\log_2(10^{12}) \approx 40$, that is $10^{12} \approx 2^{40}$, so the fraction of feature combinations covered by the training set is approximately $\frac{2^{40}}{2^{100}} = 2^{-60}$. Now just observe that $\log_{10}(2^{60}) \approx 18$, so $2^{-60} \approx 10^{-18}$.
Why is the curse of dimensionality also called the empty space phenomenon?
I don't think the curse of dimensionality has anything to do with correlation, or at least not in my understanding. The curse is the notion that a local neighborhood of a point in a high dimensional
Why is the curse of dimensionality also called the empty space phenomenon? I don't think the curse of dimensionality has anything to do with correlation, or at least not in my understanding. The curse is the notion that a local neighborhood of a point in a high dimensional space is not really so local - the number of data points it takes to uniformly "fill" a neighborhood of a point with a fixed volume (think a unit cube centered at that point) grows exponentially with the dimension. Conversely, if you have a fixed number of points and increase the dimension of the space that they reside in, you will very quickly find yourself in the situation where most of your space is empty. This comes up, for example, in $k$ nearest neighbors classification. Here we attempt to classify a new point by searching for the $k$ training points closest to it. In small dimensions, which is what people have concrete experience with and hence intuition for, these $k$ points all tend to be close by, as the entire space is rather densely populated with training examples. But in large dimensions the intuition fails - the $k$ nearest points tend to be quite far away, with much empty space in between. Suppose the dimension of the input space is 100 and we have a huge training set of a trillion (10^{12}) examples, then the examples will cover only a fraction of about 10^{-18} of the input space. Can anyone explain to me why is that? Here's a short explanation of what that may be getting at. Let's suppose all of our features are binary; this simplifies the math but is not essential. Then there are $2^{100}$ possible combinations of features. Now $\log_2(10^{12}) \approx 40$, that is $10^{12} \approx 2^{40}$, so the fraction of feature combinations covered by the training set is approximately $\frac{2^{40}}{2^{100}} = 2^{-60}$. Now just observe that $\log_{10}(2^{60}) \approx 18$, so $2^{-60} \approx 10^{-18}$.
Why is the curse of dimensionality also called the empty space phenomenon? I don't think the curse of dimensionality has anything to do with correlation, or at least not in my understanding. The curse is the notion that a local neighborhood of a point in a high dimensional
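The arithmetic in the explanation above is quick to verify directly:

```python
import math

n_examples = 10 ** 12          # a trillion training examples
n_combinations = 2 ** 100      # binary features in 100 dimensions

# 10^12 ≈ 2^40, so the covered fraction is about 2^-60 ≈ 10^-18.
fraction_covered = n_examples / n_combinations
order_of_magnitude = math.log10(fraction_covered)
```

The log10 of the covered fraction comes out near -18, matching the "only about 10^{-18} of the input space" figure.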
45,776
Why do real-world high dimensional data often have much lower inherent dimensionality?
It's not that the dimensionality of the data (more precisely, the statistical process at work) is smaller than the coordinate space, it's that there is often not enough available data to get statistically significant results in every direction. This is a manifestation of the famed curse of dimensionality. PCA is an attempt to determine the directions in which you have the best shot at getting reliable and stable results - it does this by finding the directions in which the data is most spread out. Of course there are random processes that do only occur in a low dimensional subspace of the feature space, but in general it's best not to assume this is so without evidence. Nonetheless, in any situation, you are bound to do the best you can with the data you have, and the data you have can only support a finite number of inferences.
45,777
Why do real-world high dimensional data often have much lower inherent dimensionality?
The answer is relatively simple. Typically for a real world problem you don't know which features to use. Very often you end up throwing in too many features and letting the algorithm figure out which ones are discriminative. Let's take MNIST digit classification as an example. You are given 28x28 black&white images centered on the digit. You have a choice: build the classifier based on the individual pixel values (easy, but many dimensions) or come up with more intelligent features (harder, but fewer dimensions). If you go with individual pixel values, you know that you don't need those 784 dimensions to encode the differences between 10 digits. This information is buried somewhere inside.
45,778
Why do real-world high dimensional data often have much lower inherent dimensionality?
What "the inherent dimensionality of data points will be smaller than the number of coordinates" means is this: If you have 2-dimensional data, you need a 2-dimensional coordinate system (x, y for example) in order to be able to show it. If your data has n dimensions, you need an n-dimensional coordinate system, and beyond n=3 it is impossible for us to visualize it geometrically (at least for me). So if you have high dimensional data, it is difficult to show it. However, there is a trick. What you can do is say: OK, I might need a lot of dimensions to show the data in the usual coordinate system, but is there a way to transform the data into some other form so I will need fewer axes to show it - or, equivalently, an alternative coordinate system where I can show my data with fewer axes? The answer is yes, and PCA is how you do it. You say: OK, let me find the direction in which the change in my data is the largest, i.e. the variance is highest, and use this direction as the first axis of my new coordinate system. Then you say: OK, now I have a direction that explains some (most) of the variance in my data, but in which direction, orthogonal to the first, is the variance of the data second highest? You find it, and it is the second direction of your coordinate system. Then you repeat this several times and come up with a few axes that explain most of the variance of your data. The rest of the variance in your data is very small and not so relevant, as you already have most of the change in your data. So now, in this new system, you have nearly all the variance of your original data, but with fewer axes; i.e. your data is represented nearly as well in this new coordinate system as in the previous one, yet with significantly fewer axes. This is what is meant by your data having inherently fewer dimensions (in the new coordinate system) than the number of coordinates (in the old coordinate system).
This representation in the lower-dimensional new coordinate system is possible because your data actually has some dependencies in the bases of the previous coordinate system, i.e. your data can be represented more efficiently in another coordinate system with other basis functions. PCA is a special way of finding these basis functions that define the new coordinate system.
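The procedure described above can be sketched numerically. This is a hedged illustration (the data, seed, and dimensions are made up for the example): a 2-dimensional latent signal is embedded in 10 coordinates, and PCA - computed here via the SVD of the centered data matrix - recovers the fact that two axes carry nearly all of the variance.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical data: a 2-D latent signal linearly embedded in 10 coordinates,
# plus a little isotropic noise.
latent = rng.normal(size=(500, 2))
embedding = rng.normal(size=(2, 10))
X = latent @ embedding + 0.05 * rng.normal(size=(500, 10))

# PCA via SVD of the centered data: the rows of Vt are the new axes
# (principal directions), ordered by the variance they explain.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / np.sum(s**2)

print(np.round(explained, 4))
# The first two axes explain almost all the variance; the other eight axes
# only pick up the noise, so the inherent dimensionality here is 2, not 10.
```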
45,779
Why do real-world high dimensional data often have much lower inherent dimensionality?
In the real world everything is controlled by God. That's why everything depends on God. So, everything is really in one dimension, the dimension of God's will. This is the only true answer, but I doubt that your Prof will accept it. So, here's an easier one. In the real world we're probably not gathering random data. We usually collect data trying to solve some problem. When we do this, it's most likely that we'll be looking for data that is related to the phenomenon of interest. By virtue of this relation it's likely that all the different data points are measuring the same thing from different angles, figuratively. And that thing is probably what interests us in the first place. So, if we somehow can extract the essence of the phenomenon, we'll probably end up with fewer dimensions than all these variables.
45,780
Why do real-world high dimensional data often have much lower inherent dimensionality?
The information in the high-dimensional space can often be captured by a smaller number of latent variables/dimensions because there tends to be dependence (e.g. multicollinearity) between variables in the high-dimensional space. There tends to be dependence because many of the variables are probably going to form networks of causal connections. I am thinking of biological contexts such as genomic or neuroimaging data, for example. Maybe the situation is different in other domains such as astronomy, however.
45,781
Why do real-world high dimensional data often have much lower inherent dimensionality?
The statement in the exam question seems contentious. Take a simple example, like plotting life expectancy of a baby versus various factors including average daily salt consumption. The baby will (on average) die early if the salt consumption is near zero, will thrive if the salt consumption is intermediate, and will die immediately if it's very large. PCA is likely to obscure this crucial information. Usually, if a phenomenon cannot be explained using only a few factors, then it will be inaccessible to the human mind, and too complex to publish. Then we wait until someone in the future thinks of a way to explain the phenomenon simply, perhaps using completely different variables. With real data, it's certainly true that one usually finds that PCA gives rise to a coordinate change, indicating certain directions as being more significant. If these indications cannot be backed up with specifically designed experiments or with existing theory or with previous results, then the research may be discontinued. The overall result may be a bias in the assumptions made by examiners as to the nature of the scientific enterprise.
45,782
Interpretation of eigenvectors of Hessian inverse
"Are the eigenvectors/eigenvalues of the inverse Hessian related to those of the Hessian?" Yes. The hessian is a symmetric matrix which can be diagonalized as $H=Q\Lambda Q^{T}$ where $Q$ is an orthogonal matrix whose columns are eigenvectors of $H$ and $\Lambda$ is a diagonal matrix with the eigenvalues of $H$ on the diagonal. The inverse is $H^{-1}=Q \Lambda^{-1} Q^{T}$ where $\Lambda^{-1}$ is a diagonal matrix with the reciprocals of the original eigenvalues on its diagonal. This means that the eigenvectors of $H$ are also eigenvectors of $H^{-1}$, with eigenvalues that are the reciprocals of the eigenvalues of $H$.
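A quick numerical check of this claim, sketched with a made-up symmetric positive definite matrix standing in for the Hessian:

```python
import numpy as np

rng = np.random.default_rng(0)

# A random symmetric positive definite "Hessian" (hypothetical example).
A = rng.normal(size=(4, 4))
H = A @ A.T + 4 * np.eye(4)

# Eigendecomposition H = Q Lambda Q^T (eigh handles the symmetric case).
eigvals, Q = np.linalg.eigh(H)

# The same Q diagonalizes H^{-1}, with reciprocal eigenvalues.
H_inv = np.linalg.inv(H)
assert np.allclose(Q @ np.diag(1 / eigvals) @ Q.T, H_inv)

# Each eigenvector of H is an eigenvector of H^{-1} with eigenvalue 1/lambda.
for lam, v in zip(eigvals, Q.T):
    assert np.allclose(H_inv @ v, v / lam)
```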
45,783
How to compare two Pearson correlation coefficients
There are various tests you can apply. Diedenhofen & Musch (2015, PLoS ONE) give pointers and describe the cocor package for R, which implements these tests. You can also submit your correlations for testing to a web tool which internally uses the cocor package.
45,784
How to compare two Pearson correlation coefficients
The cocor package seems to be a handy tool. I ran the cocor package with my parameters via the web tool as you suggested. The output of that calculation is the following:
Comparison between r1.jk = -0.747 and r2.hm = -0.885
Difference: r1.jk - r2.hm = 0.138
Group sizes: n1 = 159200, n2 = 2400
Null hypothesis: r1.jk is equal to r2.hm
Alternative hypothesis: r1.jk is not equal to r2.hm (two-sided)
Alpha: 0.05
fisher1925: Fisher's z (1925)
z = 21.0047, p-value = 0.0000
Null hypothesis rejected
This seems pretty promising to me, but how do I have to interpret the result? The correlations are obviously different from one another, but is the difference significant?
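For what it's worth, the reported z statistic can be reproduced by hand with Fisher's (1925) procedure: transform each correlation with the inverse hyperbolic tangent, then divide the difference by its standard error. A sketch in Python (cocor itself is an R package; this just redoes the arithmetic, and small discrepancies come from the rounded r values):

```python
import math

def fisher_z_test(r1, n1, r2, n2):
    """Two-sample z test for the difference of two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)      # Fisher r-to-z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))  # standard error of z1 - z2
    return (z1 - z2) / se

z = fisher_z_test(-0.747, 159200, -0.885, 2400)
print(round(z, 2))  # about 21.0, matching the cocor output above
```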
45,785
What is the test statistics used for a conditional inference regression tree?
If both the regressor $X_{ji}$ and the response $Y_i$ are numeric, then both $g(\cdot)$ and $h(\cdot)$ are chosen to be the identity by default. Thus, the linear test statistic $T_j$ is simply the sum of products $X_{ji} \cdot Y_i$. This corresponds essentially to the main ingredient of a covariance or correlation - and with the subsequent standardization of the linear test statistic $T_j$ it becomes a correlation test statistic. If one of the variables is categorical, then the corresponding transformation ($g(\cdot)$ or $h(\cdot)$) is the matrix of all dummy variables. Consequently, the standardized test statistic for two categorical variables corresponds to a $\chi^2$ test statistic. And if one variable is numeric and the other categorical you obtain an ANOVA-type test. Other transformations are also possible, appropriate for censored survival responses or ordinal responses etc. If you want to carry out the tests "by hand" you can explore the independence_test() function from the coin package for conditional inference. An introduction is available in Hothorn et al.'s "A Lego System for Conditional Inference" (doi:10.1198/000313006X118430), a preprint version of which is also available in the package as vignette("LegoCondInf", package = "coin").
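For the numeric-numeric case the standardization can be written out. Under the permutation null hypothesis, $E(T_j) = \frac{1}{n}\left(\sum_i X_{ji}\right)\left(\sum_i Y_i\right)$ and $\mathrm{Var}(T_j) = \frac{1}{n-1}\sum_i (X_{ji}-\bar X_j)^2 \sum_i (Y_i-\bar Y)^2$, so the standardized statistic is just $\sqrt{n-1}$ times the Pearson correlation. A hedged numpy sketch (the data are simulated purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=50)
y = 0.5 * x + rng.normal(size=50)
n = len(x)

# Linear statistic with identity transformations g, h: T = sum_i x_i * y_i.
T = np.sum(x * y)

# Moments of T under the permutation null hypothesis (y permuted at random).
mu_T = np.sum(x) * np.sum(y) / n
var_T = np.sum((x - x.mean())**2) * np.sum((y - y.mean())**2) / (n - 1)

c = (T - mu_T) / np.sqrt(var_T)  # standardized test statistic

# It coincides with sqrt(n-1) times the Pearson correlation coefficient.
r = np.corrcoef(x, y)[0, 1]
assert np.isclose(c, np.sqrt(n - 1) * r)
```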
45,786
Non-Stationary: Larger-than-unit root [duplicate]
I think this is actually a quite good question, which is often neglected (as you have noticed) and which I myself haven't thought about much before. The main point, I would say, is that processes with larger-than-one roots (called explosive roots) are not as interesting. If you have something which is just slightly above one, the process will fairly quickly just look like a nice curve. An explosive process will therefore reveal itself, but the (visual) difference between a unit root process and a near-unit root process is much more subtle. Consider the AR(1) process $$ y_t=ay_{t-1}+\epsilon_t. $$ I have simulated this with $a=1$ (this is the $y_t$ process in the figures), which is a random walk with a unit root. Also shown is $x_t$ which is the same as above but with a slight perturbation, so $a=1.05$ now. Thus, it has an explosive (not just a unit) root. As you can see, the behavior they exhibit is quite different (granted this is just one simulation, of course). You see the trending-like behavior already with $T=40$, and with $T=1000$ it just looks odd. Therefore, as I see it, you disregard the possibility of an explosive root many times because it is "unrealistic". A process such as what you have in the top right panel might instead, in practice, be modeled using deterministic trends with a possible non-stationary process moving around this trend. So, non-stationarity is definitely implied by explosive roots. But in practice these are much less often found, so we spend quite some time learning about the more realistic situation of non-stationarity, which is a unit root. For the same reason, you often don't learn a whole lot about a negative unit root (i.e. $a=-1$ in the model above). 
eps <- rnorm(1000)
eps2 <- rnorm(1000)
y <- eps
x <- eps2
for (t in 2:1000) {
  y[t] <- y[t-1] + eps[t]
  x[t] <- 1.05*x[t-1] + eps2[t]
}
par(mfrow=c(2,2))
plot(y[1:40], type = "l", ylab = "y, t=1, ..., 40", main = "a = 1")
plot(x[1:40], type = "l", ylab = "x, t=1, ..., 40", main = "a = 1.05")
plot(y, type = "l", main = "a = 1")
plot(x, type = "l", main = "a = 1.05")
45,787
Non-Stationary: Larger-than-unit root [duplicate]
There are several kinds of non-stationarity: 1) The series' expected value is a function of time. 2) The series' variance depends on time and not just on the lag. 3) Etc. A series with a linear trend is non-stationary but stationary around the trend. EDIT: Your example of a stochastic difference equation with an explosive root is of course non-stationary if you take the definition from time-dependent variance or expected value. But a linear trend is more interesting as a mathematical model than an explosive stochastic difference equation.
45,788
Binary outcome in randomized controlled trials -- OLS or logistic?
Edit for clarity: It looks like my responses here have led to some clarifying additions to the question or additional information in comments, which make parts of my answer now at least partially obsolete. However, I plan to leave my answer as is, partly for context and partly because I believe the points raised may be relevant to later readers. Changing the order a little: Logistic: ... This is problematic because the correct model needs to include other covariates (despite balance) and not just the Treat indicator. Both models should include predictors that are likely to have a substantive effect, even if the design is perfectly balanced and there are no interactions between variables. To omit them when you have them would reduce power - in OLS, for example, it inflates the error variance by incorporating their effect into the error term. [Further, if there can be interactions between variables, you won't get the expectation in the model right. You should consider diagnostic checks for potential interactions with such variables, included or not.] OLS: ... This is problematic because the variance of binary Y is not homoskedastic. That's not even the worst problem with OLS here. The even more serious problem is that once you include the other covariates*, the relationship cannot be linear - you will necessarily have a model that predicts probabilities that are negative and others that are greater than 1 (predicted rather than fitted). *(which I strongly believe you should, unless you are confident they are actually unrelated to $Y$)
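The out-of-range prediction problem is easy to demonstrate. A hedged sketch (the data are simulated, not from the question): fit a linear probability model by least squares and evaluate it near the edges of the design - the fitted line happily leaves [0, 1].

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated trial: binary outcome whose probability depends strongly on a covariate.
x = np.linspace(-3, 3, 200)
p = 1 / (1 + np.exp(-2 * x))            # true success probability (logistic)
y = (rng.uniform(size=x.size) < p).astype(float)

# OLS / linear probability model: y ~ a + b*x (polyfit returns slope first).
b, a = np.polyfit(x, y, 1)
pred = a + b * x

print(pred.min(), pred.max())
# The linear fit predicts "probabilities" below 0 and above 1 at the extremes,
# which a logistic model cannot do by construction.
```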
45,789
Binary outcome in randomized controlled trials -- OLS or logistic?
A lot of economists use linear probability models, arguing that the LPM provides the linear approximation of the conditional expectation function, which is often considered "good enough." Consistent (in large samples) standard errors can be gotten by using "robust" variance-covariance matrix estimators. This is an OK argument if you really just want $\beta$, and want it to be interpretable as a conditional expectation in the larger group. You don't want to do this if you have any interest in prediction. In reality though, arguing that $\beta$ increases a probability by a certain amount can only make sense on average (hence conditional expectation in the sample, which you generalize to the population). It can't be a description of what you would expect to happen to unit $i$ if you treat them: if $i$ has covariates that push them up or down, then adding $\beta$ to the effect of those covariates could lead to probabilities outside of 0/1, which wouldn't make any sense. That said, logit models involve assuming that the link between the predictors and the outcome is a logit. This can be restrictive. But you can interpret a simple logit coefficient as an odds ratio by exponentiating it. For example, if $\hat\beta=1$, then you're estimating that the treatment multiplies the odds of $y$ equaling 1 by $e^1\approx 2.7$.
45,790
Cluster Sequences of data with different length
One way to do it (among many other ways) is to treat each element of your sequence as a word. In other words, if you assume your list is a sentence, then you can extract ngrams.

from nltk import ngrams  # only needed if you go the ngram route mentioned below

a = [1, 15, 1, 1, 13, 14]
b = [1, 1, 1, 1, 12, 1, 7, 11, 9, 11, 7, 11, 7, 11, 7, 4, 7, 7, 14, 15, 9, 2]
c = [13, 1, 13, 15, 13, 2, 9, 2, 9, 2, 2, 2, 2, 2, 2, 2]
d = [1, 2, 9, 1, 6, 10, 6, 1, 6, 10, 14, 3, 10]

bb = list()
bb.append(','.join('x' + str(e) for e in a))
bb.append(','.join('x' + str(e) for e in b))
bb.append(','.join('x' + str(e) for e in c))
bb.append(','.join('x' + str(e) for e in d))

I added the x because CountVectorizer seems to ignore single numbers/letters. Let's do a word count - alternatively you can go ahead with ngrams (read the sklearn documentation here) as well:

from sklearn.feature_extraction.text import CountVectorizer
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(bb)
X.toarray()

The output looks like this:

array([[3, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0],
       [5, 0, 4, 1, 0, 1, 1, 1, 0, 1, 0, 6, 2],
       [1, 0, 0, 0, 3, 0, 1, 9, 0, 0, 0, 0, 2],
       [3, 3, 0, 0, 0, 1, 0, 1, 1, 0, 3, 0, 1]])

Basically, the columns correspond to words, which are

print(vectorizer.get_feature_names())
['x1', 'x10', 'x11', 'x12', 'x13', 'x14', 'x15', 'x2', 'x3', 'x4', 'x6', 'x7', 'x9']

and the rows are your samples. Now that you have a feature matrix, you can go ahead and do clustering, for example k-means:

from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=2, random_state=0).fit(X)
kmeans.labels_

which results in

array([0, 1, 0, 0], dtype=int32)
45,791
Cluster Sequences of data with different length
K-means won't work on data of this type. To me, the strings you provided as examples lend themselves to information theoretic approaches to clustering based on MDL (minimum description length https://en.wikipedia.org/wiki/Minimum_description_length) or data compression. By compressing these strings to their unique sequence (removing the redundancy), larger patterns can emerge. There are many data compression algorithms out there. A good overview can be found in Emmert-Streib and Dehmer's Information Theory and Statistical Learning. http://www.amazon.com/Information-Theory-Statistical-Learning-Emmert-Streib/dp/0387848150/ref=sr_1_1?ie=UTF8&qid=1448032965&sr=8-1&keywords=Information+Theory+and+Statistical+Learning And a useful clustering algorithm could be permutation distribution clustering https://cran.r-project.org/web/packages/pdc/pdc.pdf
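As a rough sketch of the compression idea, the normalized compression distance below uses zlib as the compressor (the helper name ncd and the toy sequences are just for illustration, not part of the answer's method); it shrinks as two sequences share more structure:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: smaller when x and y share structure."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# repeat short toy sequences so the compressor has something to exploit
a = bytes([1, 15, 1, 1, 13, 14] * 10)
b = bytes([1, 2, 9, 1, 6, 10, 6, 1, 6, 10, 14, 3, 10] * 10)

# a sequence is "closer" to a copy of itself than to an unrelated sequence
d_same = ncd(a, bytes([1, 15, 1, 1, 13, 14] * 10))
d_diff = ncd(a, b)
```

A matrix of such pairwise distances can then be fed into any distance-based clustering method.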
45,792
Cluster Sequences of data with different length
k-means must be able to compute means, so it won't work for you. Consider using hierarchical clustering with a Levenshtein or similar similarity metric. LCSS is also a good choice, as is any similarity measure designed for sequences.
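A minimal sketch of that pipeline (the levenshtein helper and the toy sequences are illustrative; scipy's hierarchical clustering is assumed available): compute pairwise edit distances, then cut an average-linkage dendrogram into two clusters.

```python
from itertools import combinations

import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def levenshtein(s, t):
    # classic dynamic-programming edit distance; works on any sequences
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (a != b)))  # substitution
        prev = cur
    return prev[-1]

# two pairs of similar toy sequences, of varying lengths
seqs = [[1, 15, 1, 1, 13, 14],
        [1, 15, 1, 2, 13, 14, 9],
        [13, 1, 13, 15, 13, 2, 9, 2],
        [13, 1, 13, 15, 2, 9, 2, 2]]

# condensed pairwise distance vector, the format linkage() expects
dists = np.array([levenshtein(a, b) for a, b in combinations(seqs, 2)],
                 dtype=float)
labels = fcluster(linkage(dists, method="average"), t=2, criterion="maxclust")
```

The same skeleton works with LCSS or any other sequence dissimilarity: only the function filling `dists` changes.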
45,793
Sum of Gaussian is Gaussian?
No, and this is a common fallacy. People tend to forget that the sum of two Gaussian random variables is guaranteed to be Gaussian only if $X$ and $Y$ are independent or jointly normal. Here is a nice explanation.
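A quick simulation makes the fallacy concrete. A standard counterexample takes $Y = WX$ with $W$ an independent random sign: $Y$ is then also standard normal and uncorrelated with $X$, yet $X+Y$ equals zero half the time, which no Gaussian can do (sketch only; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.standard_normal(n)
w = rng.choice([-1.0, 1.0], size=n)  # independent random sign
y = w * x                            # marginally N(0,1), uncorrelated with x

s = x + y                  # exactly 0 when w == -1, and 2x otherwise
frac_zero = np.mean(s == 0.0)
corr = np.corrcoef(x, y)[0, 1]
```

So both marginals are Gaussian and the correlation is essentially zero, but the sum has a point mass at zero: the pair is not jointly normal.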
45,794
Sum of Gaussian is Gaussian?
You are saying in your second point “uncorrelated (hence independent)”. It’s not quite the case. You can have two Gaussian distributions both uncorrelated and dependent. Uncorrelated implies independence only if they are jointly Gaussian.
45,795
Independent replication experiments yielding contrasting results; how to combine them?
The p-value from each experiment should have a uniform distribution between 0 and 1 under the null hypothesis, so tests of the null hypothesis over all experiments can be based on this. Perhaps the most common test statistic is Fisher's: for p-values $p_j$ from $m$ independent experiments the negative log of each follows an exponential distribution $$-\log p_j\sim \mathrm{Exp}(1)$$ and twice their sum a chi-squared distribution with $2m$ degrees of freedom. $$-2\sum_j^m \log p_j \sim \chi^2_{2m}$$ So an overall p-value $p^*$ can be got from the chi-squared distribution function $F_{\chi^2}(\cdot)$: $$p^* = 1-F_{\chi^2}\left(-2\sum_j^m \log p_j; 2m\right)$$ If you only know whether or not $p_j<\alpha$ the number of "successes" follows a binomial distribution with probability parameter $\alpha$ and sample size $m$: $$\sum_j^m I(p_j) \sim \mathrm{Bin}(\alpha,m)$$ where the indicator function $$I(p_j)=\left\{ \begin{array}{ll} 0 & \text{when } p_j\geq\alpha \\ 1 & \text{when } p_j<\alpha \end{array} \right. $$ & so you can use the binomial distribution function $F_\mathrm{Bin}(\cdot)$ to calculate an overall p-value $$ p^*=1-F_\mathrm{Bin}\left(\sum_j^m I(p_j)-1;\alpha,m\right) $$ Read up on meta-analysis for more complicated situations, & for the (often more useful) estimation of an effect size measured over several studies, & for assessment of heterogeneity (are different studies really measuring the same thing?).
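Both recipes are short to code. A sketch using scipy (the function names fisher_combined and binary_combined are just illustrative labels for the two formulas above):

```python
from math import log

from scipy.stats import chi2, binom

def fisher_combined(pvals):
    """Combine independent p-values with Fisher's method."""
    stat = -2 * sum(log(p) for p in pvals)
    # survival function = 1 - CDF of chi-squared with 2m degrees of freedom
    return chi2.sf(stat, df=2 * len(pvals))

def binary_combined(n_significant, m, alpha=0.05):
    """Overall p-value when only 'significant or not' is known per study."""
    # P(at least n_significant successes out of m at rate alpha)
    return binom.sf(n_significant - 1, m, alpha)

# e.g. three studies, one "significant" and two not
p_star = fisher_combined([0.01, 0.20, 0.08])
p_bin = binary_combined(1, 3)
```

For three studies with one hit at $\alpha=0.05$, the binary version gives $1 - 0.95^3 \approx 0.14$: one significant result out of three is unremarkable on its own, whereas Fisher's method, which uses the actual p-values, can still detect a consistent trend.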
45,796
Is the sum of several Poisson processes a Poisson process?
If they're independent of each other, yes. Indeed a more general result is that if there are $k$ independent Poisson processes with rate $\lambda_i, \, i=1,2,\ldots,k$, then the combined process (the superposition of the component processes) is a Poisson process with rate $\sum_i\lambda_i$. It's really only necessary to show the result for $k=2$ since that result can be applied recursively. One way is to show that the conditions for a process to be a Poisson process are satisfied by the superposition of two Poisson processes. For example, if we take the definition here, then the properties of a Poisson process are satisfied by the superposition of two processes: N(0) = 0 $\quad$ (clearly satisfied if it's true for the components) Independent increments (the numbers of occurrences counted in disjoint intervals are independent of each other) $\quad$ (follows from the independence mentioned above) Stationary increments (the probability distribution of the number of occurrences counted in any time interval only depends on the length of the interval) $\quad$ (if it applies to the independent components it will apply to their superposition) The probability distribution of N(t) is a Poisson distribution $\quad$ (see here*) No counted occurrences are simultaneous $\quad$ (simultaneity is an event with probability 0: follows from continuity and independence) * (though I'd regard this as a consequence of the other properties)
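The result is easy to check by simulation: draw each component process on a window, merge the event times, and the per-interval counts of the merged process should have the mean and variance of a Poisson with rate $\sum_i\lambda_i$ (a sketch with arbitrary rates, window length, and seed; it uses the standard representation that, given the total count, Poisson event times are uniform on the window):

```python
import numpy as np

rng = np.random.default_rng(42)
rates = [0.5, 1.2, 2.3]   # component intensities, summing to 4.0
t = 1000.0                # observation window length

# simulate each component process and merge (superpose) the event times
events = np.sort(np.concatenate(
    [rng.uniform(0, t, size=rng.poisson(lam * t)) for lam in rates]))

# counts of the merged process in unit-length intervals
counts = np.histogram(events, bins=int(t), range=(0, t))[0]
mean_c, var_c = counts.mean(), counts.var()
```

For a Poisson process with rate 4, both the mean and the variance of the unit-interval counts should be close to 4, and indeed they are.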
45,797
Confusing Holt-Winters parameters
The small values for $\beta$ and $\gamma$ show that the trend and seasonality do not change much over time. They do not tell you that there is no trend or seasonality.
45,798
Confusing Holt-Winters parameters
All parameters, $\alpha$, $\beta$ and $\gamma$, have values between 0 and 1. In broad terms, a simple exponential smoothing model looks like this (though the idea also works for double and triple exponential smoothing): $$ \mathit{smoothed_t} = \color{blue}{\mathit{parameter}} \cdot \mathit{observation_t} + (\color{blue}{1 - \mathit{parameter}}) \cdot \mathit{smoothed_{t-1}} $$ So the closer a parameter is to 0, the lower the weight of the present observation and the higher the weight of previous estimates in determining the updated statistic. Your model still recognized the presence of level, trend and seasonality, it just weighs older observations higher than newer ones.
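A minimal sketch of that recursion (the function name and toy data are illustrative) makes the parameter's role concrete: with a small parameter the smoothed series barely reacts to a spike, with a large one it chases the spike almost fully.

```python
def exp_smooth(series, alpha):
    """Simple exponential smoothing: blend each new value into the state."""
    s = series[0]          # initialize the state at the first observation
    out = [s]
    for x in series[1:]:
        s = alpha * x + (1 - alpha) * s
        out.append(s)
    return out

data = [10, 12, 11, 50, 12, 11]        # a level series with one spike
low = exp_smooth(data, alpha=0.1)      # sluggish: history dominates
high = exp_smooth(data, alpha=0.9)     # reactive: new observation dominates
```

At the spike, the $\alpha=0.1$ smoother only moves to about 14 while the $\alpha=0.9$ smoother jumps to about 46; the same logic applies to $\beta$ and $\gamma$ updating the trend and seasonal components.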
45,799
How to express "inequality" of a distribution in one number?
Perhaps the best known measure would be the Gini index. The R package ineq (See here) implements the Herfindahl and Rosenbluth concentration measures (in function conc). It also implements a number of inequality indexes (including the Gini) in function ineq -- the Gini coefficient, Ricci-Schutz coefficient (also called Pietra’s measure), Atkinson’s measure, Kolm’s measure, Theil’s entropy measure, Theil’s second measure, the coefficient of variation and the squared coefficient of variation. This answer mentions the Simpson diversity index, and derives a concentration measure from that. There are numerous other diversity indices (and thereby, other concentration measures). You'll probably note that there's a connection to the Herfindahl index (the Simpson diversity index is the Herfindahl, and the corresponding concentration measure is the normalized Herfindahl. In fact I just edited the other answer to point this out.) [When dealing with count data, or proportions derived from counts, it's also possible to define measures derived from chi-square goodness-of-fit statistics (they can be normalized to 0-1), for example. For one such measure, see here.] Many of these are either suitable or can be rescaled to be suitable as measures of the kind of thing you want.
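As a concrete example, the Gini coefficient itself is only a few lines of Python (this uses the standard sorted-values identity for the Gini; the helper name is illustrative):

```python
def gini(values):
    """Gini coefficient: 0 = perfect equality, (n-1)/n = total concentration."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # identity: G = sum_i (2i - n + 1) * x_(i) / (n * sum x), 0-indexed sorted x
    weighted = sum((2 * i - n + 1) * x for i, x in enumerate(xs))
    return weighted / (n * total)

equal = gini([5, 5, 5, 5])     # -> 0.0
skew = gini([0, 0, 0, 100])    # -> 0.75, i.e. (n-1)/n for n = 4
```

For distributions over a fixed number of categories (as in the question), this gives a single 0-to-(almost-)1 number that grows as the mass concentrates in fewer categories.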
45,800
How to express "inequality" of a distribution in one number?
I've developed a method for quantifying "uniformity" that allows you to do what you are asking. It's helped out a couple of other folks too. See: https://math.stackexchange.com/questions/921084/how-to-calculate-peakiness-or-uniformity-in-histogram/921110#921110 Basically, you are just calculating the path length of the associated CDF by connecting consecutive points by a straight line.
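Under that construction, a sketch for a histogram of bin proportions might look like this (the helper name is illustrative; each bin contributes a straight CDF segment with horizontal run $1/k$ and vertical rise $p_i$):

```python
from math import hypot

def cdf_path_length(props):
    """Length of the piecewise-linear CDF of k bin proportions.

    A perfectly uniform histogram traces the diagonal, the shortest path
    (length sqrt(2)); concentrating mass in fewer bins lengthens the path
    toward the L-shaped limit of length 2.
    """
    k = len(props)
    return sum(hypot(1.0 / k, p) for p in props)

uniform = cdf_path_length([0.25, 0.25, 0.25, 0.25])  # sqrt(2) ~ 1.414
peaky = cdf_path_length([0.0, 1.0, 0.0, 0.0])        # longer path
```

Rescaling the result from the $[\sqrt{2}, 2)$ range to $[0, 1)$ would give a normalized "peakiness" score comparable across histograms with the same number of bins.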