Multinomial logistic regression assumptions
The key assumption in the MNL is that the errors are independently and identically distributed with a Gumbel (type I extreme value) distribution. The problem with testing this assumption is that it is made a priori: in standard regression you fit the least-squares curve and measure the residual error, whereas in a logit model you assume the error is already in the measurement of each point and compute a likelihood function from that assumption.

An important assumption is that the sample be exogenous; if it is choice-based, corrections need to be applied. As for assumptions on the model itself, Train describes three: (1) systematic, non-random taste variation; (2) proportional substitution among alternatives (a consequence of the IIA property); (3) no serial correlation in the error term (panel data).

The first assumption you mostly just have to defend in the context of your problem, and the third is largely the same, because the error terms are purely random. The second is testable to a certain extent, however. If you specify a nested logit model and the estimated inter-nest substitution parameter indicates no nesting ($\lambda = 1$), then you could have used the MNL model, and the IIA assumption is valid. But remember that the log-likelihood function for the nested logit model has local maxima, so you should make sure that you get $\lambda = 1$ consistently.

As far as doing any of this in SPSS, I can't help you other than to suggest you use the mlogit package in R instead. Sorry.
Assumptions:

1. The outcome follows a categorical distribution (http://en.wikipedia.org/wiki/Categorical_distribution), which is linked to the covariates via a link function as in ordinary logistic regression.
2. Independence of observational units.
3. A linear relation between the covariates and the (link-transformed) expectation of the outcome.

For assumption 1 to be fulfilled, the categories of your outcome need to be exclusive (non-overlapping) and exhaustive (covering all possible forms the outcome can take). I don't really know whether there are any proper statistical tests for assumption 2. For time-series data there is a test of autocorrelation, the Durbin-Watson test. For other forms of correlated data, I think you would rather make that decision based on theoretical considerations (e.g., if your data are derived from a cluster-sampling procedure, you would expect the data within clusters to be correlated). As for assumption 3, in binary logistic regression you can plot binned residuals against estimated probabilities to see whether the average residual is around 0 over the entire range of estimated probabilities. I suppose this can be generalized to multinomial regression by making (k-1) such plots for the different categories of an outcome with k categories.

EDIT: Concerning alternative models: assumption 1 is fairly straightforward to fulfill. You might run into trouble because you have to estimate a large number of parameters ((k-1) different sets of intercepts and slope parameters). In such a case, you could for example collapse the outcome into a binary outcome and do a simple logistic regression. If assumption 2 is violated, you could use a mixed model, which allows you to specify a dependence structure. As for assumption 3, you could transform variables that you suspect have a non-linear effect. A common transformation is, for example, to include age squared in health-related outcomes.
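As a rough illustration of the binned-residual idea for assumption 3, here is a minimal Python sketch (the helper name and the toy inputs are hypothetical; in practice the fitted probabilities would come from your model):

```python
def binned_residuals(p_hat, y, n_bins=5):
    """Average residual (y - p_hat) within bins of sorted fitted
    probabilities. Averages near 0 in every bin are consistent with a
    correctly specified (link-scale linear) model."""
    pairs = sorted(zip(p_hat, y))
    size = len(pairs) // n_bins
    means = []
    for b in range(n_bins):
        chunk = pairs[b * size:] if b == n_bins - 1 else pairs[b * size:(b + 1) * size]
        means.append(sum(yy - pp for pp, yy in chunk) / len(chunk))
    return means

# toy illustration with made-up fitted probabilities and 0/1 outcomes
print(binned_residuals([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1], n_bins=2))
```

For a k-category outcome, one would make (k-1) such plots, one per non-reference category, as the answer suggests.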
One of the most important practical assumptions of multinomial logistic regression is that the number of observations in the smallest frequency category of $Y$ is large: for example, 10 times the number of parameters on the right-hand side of the model.
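This rule of thumb is easy to automate. The sketch below is hypothetical (the answer does not spell out how to count parameters; here I assume a k-category model with (k-1) sets of an intercept plus one slope per predictor):

```python
def meets_size_heuristic(category_counts, n_predictors, ratio=10):
    """Check whether the smallest outcome category has at least `ratio`
    times as many observations as the number of right-hand-side
    parameters, assuming (k-1) * (n_predictors + 1) free parameters."""
    k = len(category_counts)
    n_params = (k - 1) * (n_predictors + 1)
    return min(category_counts) >= ratio * n_params

# e.g. 3 outcome categories, 3 predictors -> 2 * 4 = 8 parameters,
# so the smallest category should have at least 80 observations
print(meets_size_heuristic([400, 250, 180], 3))  # True
```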
@h_bauer has provided a good answer. Let me add a small complementary point: you can also test for a curvilinear relationship by adding curvilinear terms and performing a nested model test. For example, imagine you have $X_1$ as an explanatory variable, but you aren't sure whether the relationship between it and the link-transformed expectation is a straight line. You could form a new model by adding $X_1^2$ and $X_1^3$, and then test to see whether your new model fits better than your original model.

Another assumption of generalized linear models, like the multinomial logistic, is that the link function is correct. Strictly speaking, multinomial logistic regression uses only the logit link, but there are other multinomial model possibilities, such as the multinomial probit. Many people (somewhat sloppily) refer to any such model as "logistic", meaning only that the response variable is categorical, but the term properly refers only to the logit link. For more on links, it may help you to read my answer here: Difference between logit and probit models.

Regarding addressing violations of these assumptions, it is mostly self-explanatory. If the observations are not independent, you can add the relevant fixed or random effects to make them so. If the relationship with a predictor is not linear, you can add transformed variables so that it is linear in the augmented predictor space. If the link is not appropriate, you can change it, etc. In general, multinomial logistic regression does not make very constraining assumptions.
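The nested-model comparison just described is usually carried out as a likelihood-ratio test. A small Python sketch (the log-likelihood values are made up; with exactly two added terms, the chi-square survival function with 2 degrees of freedom reduces to exp(-LR/2)):

```python
from math import exp

def lr_test_2df(loglik_reduced, loglik_full):
    """Likelihood-ratio test for nested models differing by two
    parameters, e.g. adding X1^2 and X1^3. The statistic
    LR = 2 * (ll_full - ll_reduced) is compared to chi-square with
    2 df, whose survival function is exactly exp(-LR / 2)."""
    lr = 2.0 * (loglik_full - loglik_reduced)
    return lr, exp(-lr / 2.0)

# hypothetical fitted log-likelihoods for the two models
lr, p = lr_test_2df(-420.7, -416.2)
print(lr, p)  # LR close to 9, p close to 0.011: the added terms help here
```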
gmacfarlane has been very clear. But to be more precise, and assuming you are performing a cross-section analysis, the core assumption is IIA (independence of irrelevant alternatives). You cannot force your data to fit the IIA assumption; you should test it and hope it is satisfied. As of 2010, SPSS could not handle the test. R of course does it, but it might be easier for you to migrate to Stata and use the IIA tests provided by the mlogit postestimation commands. If IIA does not hold, mixed multinomial logit or nested logit are reasonable alternatives. The first can be estimated with gllamm, the second with the far more parsimonious nlogit command.
What is the probability of n people from a list of m people being in a random selection of x people from a list of y people?
I interpret the question like this: suppose the sampling was purportedly carried out as if $363$ tickets of white paper were put in a jar, each labeled with the name of one person, and $232$ were taken out randomly after thoroughly stirring the jar's contents. Beforehand, $12$ of the tickets were colored red. What is the chance that exactly two of the selected tickets are red? What is the chance that at most two of the tickets are red?

An exact formula can be obtained, but we don't need to do that much theoretical work. Instead, we just track the chances as the tickets are pulled from the jar. At the time $m$ of them have been withdrawn, let the chance that exactly $i$ red tickets have been seen be written $p(i,m)$. To get started, note that $p(i,0)=0$ if $i\gt 0$ (you can't have any red tickets before you get started) and $p(0,0)=1$ (it's certain you have no red tickets at the outset).

Now, on the most recent draw, either the ticket was red or it wasn't. In the first case, we previously had a chance $p(i-1,m-1)$ of seeing exactly $i-1$ red tickets, and we then happened to pull a red one from the remaining $363 - m + 1$ tickets, making it exactly $i$ red tickets so far. Because we assume all tickets have equal chances at every stage, our chance of drawing a red in this fashion was therefore $(12-i+1) / (363 - m + 1)$. In the other case, we had a chance $p(i,m-1)$ of obtaining exactly $i$ red tickets in the previous $m-1$ draws, and the chance of not adding another red ticket to the sample on the next draw was $(363 - m + 1 - 12 + i) / (363 - m + 1)$. Whence, using basic axioms of probability (to wit, chances of two mutually exclusive cases add and conditional chances multiply),

$$p(i,m) = \frac{p(i-1,m-1) (12-i+1) + p(i,m-1) (363 - m + 1 - 12 + i)}{363 - m + 1}.$$

We repeat this calculation recursively, laying out a triangular array of the values of $p(i,m)$ for $0\le i\le 12$ and $0 \le m \le 232$.
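The recursion can be cross-checked numerically. Here is a hedged Python sketch of the triangular array (the answer works in R; this is just a direct transcription of the recurrence, with variable names of my choosing):

```python
N, K, DRAWS = 363, 12, 232  # tickets, red tickets, tickets withdrawn

# p[i][m] = chance of seeing exactly i red tickets in the first m draws
p = [[0.0] * (DRAWS + 1) for _ in range(K + 1)]
p[0][0] = 1.0
for m in range(1, DRAWS + 1):
    remaining = N - m + 1  # tickets left in the jar before this draw
    for i in range(K + 1):
        red = p[i - 1][m - 1] * (K - i + 1) if i > 0 else 0.0
        not_red = p[i][m - 1] * (remaining - K + i)
        p[i][m] = (red + not_red) / remaining

exactly_two = p[2][DRAWS]
at_most_two = p[0][DRAWS] + p[1][DRAWS] + p[2][DRAWS]
print(exactly_two, at_most_two)
```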
After a little calculation we obtain $p(2,232) \approx 0.000849884$ and $p(0,232)+p(1,232)+p(2,232)\approx 0.000934314$, answering both versions of the question. These are small numbers: no matter how you look at it, these are rare events (rarer than one in a thousand).

As a double-check, I performed this exercise with a computer 1,000,000 times. In 932 of these experiments (a proportion of 0.000932), 2 or fewer red tickets were observed. This is extremely close to the calculated result, because the sampling fluctuation around the expected count of 934.3 is about 30 (up or down). Here is how the simulation is done in R:

> population <- c(rep(1,12), rep(0, 363-12))  # 1 is a "red" indicator
> results <- replicate(10^6, sum(sample(population, 232)))  # count the reds in 10^6 trials
> sum(results <= 2)  # how many trials had 2 or fewer reds?
[1] 948

This time, because the experiments are random, the results changed a little: two or fewer red tickets were observed in 948 of the million trials. That is still consistent with the theoretical result.

The conclusion is that it's highly unlikely that two or fewer of the 232 tickets will be red. If you indeed have a sample of 232 of 363 people, this result is a strong indication that the tickets-in-a-jar model is not a correct description of how the sample was obtained. Alternative explanations include (a) the red tickets were made more difficult to take from the jar (a "bias" against them) and (b) the tickets were colored after the sample was observed (post-hoc data snooping, which does not indicate any bias).

An example of explanation (b) in action would be a jury pool for a notorious murder trial. Suppose it included 363 people. Out of that pool, the court interviewed 232 of them. An ambitious newspaper reporter meticulously reviews the vitae of everyone in the pool and notices that 12 of the 363 were goldfish fanciers, but only two of them had been interviewed. Is the court biased against goldfish fanciers? Probably not.
@whuber gave an exhaustive explanation; I just want to point out that there is a standard statistical distribution corresponding to this scenario: the hypergeometric distribution. So you can obtain any such probabilities directly in, say, R.

Probability of exactly 2 out of the 12 being selected:

> dhyper(2, 12, 363-12, 232)
[1] 0.0008498838

Probability of 2 or fewer out of the 12 being selected:

> phyper(2, 12, 363-12, 232)
[1] 0.000934314
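The same two numbers can be reproduced without R, using only Python's standard library (a hedged sketch; `math.comb` needs Python 3.8+, and the helper mirrors R's `dhyper` argument convention):

```python
from math import comb

def dhyper(x, n_red, n_white, n_drawn):
    """Hypergeometric pmf: P(exactly x red among n_drawn draws without
    replacement from n_red red and n_white white tickets)."""
    return comb(n_red, x) * comb(n_white, n_drawn - x) / comb(n_red + n_white, n_drawn)

exactly_two = dhyper(2, 12, 363 - 12, 232)
at_most_two = sum(dhyper(x, 12, 363 - 12, 232) for x in range(3))
print(exactly_two, at_most_two)  # matches dhyper/phyper: ~0.00084988, ~0.00093431
```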
The odds are much higher than calculated with the simple hypergeometric distribution, because the group was not singled out at random before the draw ("12 fish are painted red before the draw"). From the description of the question, we are testing for fraud in the draw: a specific group of 12 people complained that only 2 of them were selected, while the expected number was about 8 (232/363, roughly 2/3, of 12).

What we really need to calculate is the probability that no group of size 12 has only 2 or fewer members selected. The probability that at least one group has 2 or fewer (and therefore would complain about the fairness of the draw) is much higher. When I run this simulation and check how often at least one of the 30 (approximately 363/12) disjoint groups had 2 or fewer selections, I get about 2.3% of the trials. Odds of 1:42 are low but not impossible.

You should still check the procedure of the draw, as it might be biased against a specific group of people. They might have come together and received a range of the draw with lower probability (the first or last numbers, for example), or some other feature dependent on the procedure of the draw. But if you don't find any flaw in the procedure, you can return to the 1:42 odds that it is simply bad luck for the group.
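The multiple-comparisons point can be checked by simulation. A hedged Python sketch (the group boundaries, trial count, and seed are arbitrary choices of mine; the ~2.3% figure quoted in the answer is itself approximate):

```python
import random

random.seed(7)
N, DRAWN, GROUPS, SIZE = 363, 232, 30, 12  # 30 disjoint groups cover 360 of the 363
TRIALS = 4000

hits = 0
for _ in range(TRIALS):
    chosen = [False] * N
    for idx in random.sample(range(N), DRAWN):
        chosen[idx] = True
    # would any of the 30 groups have grounds to complain (<= 2 selected)?
    if any(sum(chosen[g * SIZE:(g + 1) * SIZE]) <= 2 for g in range(GROUPS)):
        hits += 1

print(hits / TRIALS)  # proportion of trials in which some group could complain
```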
How to test hypothesis that correlation is equal to given value using R?
Using the variance-stabilizing Fisher transformation (the inverse hyperbolic tangent, atanh), you can get the p-value as

pnorm( 0.5 * log( (1+r)/(1-r) ),
       mean = 0.5 * log( (1+0.75)/(1-0.75) ),
       sd = 1/sqrt(n-3) )

or whatever one-sided/two-sided version of the p-value you are interested in. Obviously, you need the sample size n and the sample correlation coefficient r as inputs to this.
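The same test is easy to write outside R. Here is a hedged Python sketch using only the standard library (the function name and the sample values are mine; it returns the lower one-sided p-value for H0: rho = rho0):

```python
from math import atanh, sqrt, erf

def fisher_z_pvalue(r, rho0, n):
    """Lower one-sided p-value for H0: rho = rho0, via Fisher's
    variance-stabilizing z = atanh(r), which is approximately normal
    with mean atanh(rho0) and sd 1/sqrt(n - 3)."""
    z = (atanh(r) - atanh(rho0)) * sqrt(n - 3)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))  # standard normal CDF at z

# hypothetical sample: r = 0.6 from n = 30 observations, testing rho0 = 0.75
print(fisher_z_pvalue(0.6, 0.75, 30))  # about 0.073
```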
The distribution of $\hat{r}$ around $\rho$ is given by this R function, adapted from Matlab code on the webpage of Xu Cui. It's not that difficult to turn this into an estimate of the probability of an observed value r given a sample size n and a hypothetical true value ro:

corrdist <- function(r, ro, n) {
  y <- (n-2) * gamma(n-1) * (1-ro^2)^((n-1)/2) * (1-r^2)^((n-4)/2)
  y <- y / (sqrt(2*pi) * gamma(n-1/2) * (1-ro*r)^(n-3/2))
  y * (1 + 1/4*(ro*r+1)/(2*n-1) + 9/16*(ro*r+1)^2/(2*n-1)/(2*n+1))
}

With that function you can plot the distribution under a null rho of 0.75, calculate the probability that $\hat{r}$ will be less than 0.6, and shade in that area on the plot:

plot(seq(-1,1,.01), corrdist(seq(-1,1,.01), 0.75, 10), type="l")
integrate(corrdist, lower=-1, upper=0.6, ro=0.75, n=10)
# 0.1819533 with absolute error < 2e-09
polygon(x=c(seq(-1,0.6, length=100), 0.6, 0),
        y=c(sapply(seq(-1,0.6, length=100), corrdist, ro=0.75, n=10), 0, 0),
        col="grey")
Another approach that may be less exact than Fisher's transformation, but I think could be more intuitive (and could give ideas about practical significance in addition to statistical significance), is the visual test: Buja, A., Cook, D., Hofmann, H., Lawrence, M., Lee, E.-K., Swayne, D.F. and Wickham, H. (2009) Statistical inference for exploratory data analysis and model diagnostics. Phil. Trans. R. Soc. A 367, 4361-4383. doi: 10.1098/rsta.2009.0120 There is an implementation of this in the vis.test function in the TeachingDemos package for R. One possible way to run it for your example is:

    vt.scattercor <- function(x, y, r, ..., orig=TRUE) {
      require('MASS')
      par(mar=c(2.5,2.5,1,1)+0.1)
      if(orig) {
        plot(x, y, xlab="", ylab="", ...)
      } else {
        mu <- c(mean(x), mean(y))
        var <- var(cbind(x, y))
        var[rbind(1:2, 2:1)] <- r * sqrt(var[1,1]*var[2,2])
        tmp <- mvrnorm(length(x), mu, var)
        plot(tmp[,1], tmp[,2], xlab="", ylab="", ...)
      }
    }

    test1 <- mvrnorm(100, c(0,0), rbind(c(1,.75), c(.75,1)))
    test2 <- mvrnorm(100, c(0,0), rbind(c(1,.5), c(.5,1)))

    vis.test(test1[,1], test1[,2], r=0.75, FUN=vt.scattercor)
    vis.test(test2[,1], test2[,2], r=0.75, FUN=vt.scattercor)

Of course, if your real data are not normal or the relationship is not linear, that will be easily picked up with the above code. If you want to simultaneously test for those, then the above code would do that, or it could be adapted to better represent the nature of the data.
25,212
How to display a matrix of correlations with missing entries?
Building upon @GaBorgulya's response, I would suggest trying a fluctuation or level plot (aka heatmap display). For example, using ggplot2:

    library(ggplot2, quietly=TRUE)
    k <- 100
    rvals <- sample(seq(-1,1,by=.001), k, replace=TRUE)
    rvals[sample(1:k, 10)] <- NA
    cc <- matrix(rvals, nr=10)
    ggfluctuation(as.table(cc)) + opts(legend.position="none") + labs(x="", y="")

(Here, missing entries are displayed in plain gray, but the default color scheme can be changed, and you can also put "NA" in the legend.)

or

    ggfluctuation(as.table(cc), type="color") + labs(x="", y="") +
      scale_fill_gradient(low = "red", high = "blue")

(Here, missing values are simply not displayed. However, you can add a geom_text() and display something like "NA" in the empty cells.)
25,213
How to display a matrix of correlations with missing entries?
Your data may be like

      name1 name2 correlation
    1    V1    V2         0.2
    2    V2    V3         0.4

You can rearrange your long table into a wide one with the following R code:

    d = structure(list(name1 = c("V1", "V2"), name2 = c("V2", "V3"),
        correlation = c(0.2, 0.4)),
        .Names = c("name1", "name2", "correlation"),
        row.names = 1:2, class = "data.frame")
    k = d[, c(2, 1, 3)]
    names(k) = names(d)
    e = rbind(d, k)
    x = with(e, reshape(e[order(name2),], v.names="correlation",
        idvar="name1", timevar="name2", direction="wide"))
    x[order(x$name1),]

You get

      name1 correlation.V1 correlation.V2 correlation.V3
    1    V1             NA            0.2             NA
    3    V2            0.2             NA            0.4
    4    V3             NA            0.4             NA

Now you can use techniques for visualizing correlation matrices (at least ones that can cope with missing values).
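For comparison, the same long-to-wide rearrangement can be sketched in Python with pandas (assuming pandas is available; the column names mirror the R example):

```python
import pandas as pd

# Long table of pairwise correlations, one row per (name1, name2) pair
d = pd.DataFrame({"name1": ["V1", "V2"],
                  "name2": ["V2", "V3"],
                  "correlation": [0.2, 0.4]})

# Duplicate each row with the names swapped so the matrix comes out symmetric
swapped = d.rename(columns={"name1": "name2", "name2": "name1"})
e = pd.concat([d, swapped], ignore_index=True)

# Pivot long -> wide; missing pairs become NaN
wide = e.pivot(index="name1", columns="name2", values="correlation")
print(wide)
```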
25,214
How to display a matrix of correlations with missing entries?
The corrplot package provides a useful function for visualizing correlation matrices. It accepts a correlation matrix as the input object and has several options for displaying the matrix itself. A nice feature is that it can reorder your variables using hierarchical clustering or PCA-based methods. See the accepted answer in this thread for an example visualization.
25,215
Estimating the dimension of a data set
See Levina, E. and Bickel, P. (2004) “Maximum Likelihood Estimation of Intrinsic Dimension.” Advances in Neural Information Processing Systems 17. http://books.nips.cc/papers/files/nips17/NIPS2004_0094.pdf Their idea is that if the data are sampled from a smooth density in $R^m$ embedded in $R^p$ with $m < p$, then locally the number of data points in a small ball of radius $t$ behaves roughly like a Poisson process. The rate of the process is related to the volume of the ball, which in turn is related to the intrinsic dimension.
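A minimal numpy sketch of the resulting k-nearest-neighbor estimator (my reading of the paper's formula; brute-force distances for clarity, so only suitable for small samples):

```python
import numpy as np

def intrinsic_dim_mle(X, k=10):
    """Levina-Bickel MLE of intrinsic dimension from k-NN distances."""
    # Pairwise Euclidean distances, with self-distances pushed to infinity
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    D.sort(axis=1)                 # row i now holds sorted distances T_1..T_{n-1}
    Tk = D[:, k - 1][:, None]      # distance to the k-th nearest neighbor
    # m_k(x_i) = [ (1/(k-1)) * sum_{j<k} log(T_k / T_j) ]^{-1}, averaged over i
    m = 1.0 / np.log(Tk / D[:, :k - 1]).mean(axis=1)
    return m.mean()

# Points on a 1-D line embedded in R^3: the estimate should be close to 1
rng = np.random.default_rng(0)
t = rng.uniform(size=300)
X = np.c_[t, 2 * t, 3 * t]
print(intrinsic_dim_mle(X, k=10))
```

The estimator is known to be somewhat biased for small k, so a value slightly above 1 on this example would not be surprising.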
25,216
Estimating the dimension of a data set
Principal Components Analysis of local data is a good point of departure. We have to take some care, though, to distinguish local (intrinsic) from global (extrinsic) dimension. In the example of points on a circle, the local dimension is 1, but overall the points of the circle lie in a 2D space. To apply PCA to this, the trick is to localize: select one data point and extract only those that are close to it. Apply PCA to this subset. The number of large eigenvalues will suggest the intrinsic dimension. Repeating this at other data points will indicate whether the data exhibit a constant intrinsic dimension throughout. If so, each of the PCA results provides a partial atlas of the manifold.
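The localize-then-PCA recipe can be sketched in a few lines of numpy. This is a toy version: the neighborhood size k and the variance-explained cutoff are assumptions you would tune in practice.

```python
import numpy as np

def local_dim(X, i, k=20, var_explained=0.95):
    """Estimate intrinsic dimension near point i: run PCA on its k nearest
    neighbors and count components needed to reach var_explained."""
    d = np.linalg.norm(X - X[i], axis=1)
    nbrs = X[np.argsort(d)[:k]]            # the k closest points (incl. x_i)
    centered = nbrs - nbrs.mean(axis=0)
    # Squared singular values are proportional to the PCA eigenvalues
    s = np.linalg.svd(centered, compute_uv=False) ** 2
    frac = np.cumsum(s) / s.sum()
    return int(np.searchsorted(frac, var_explained) + 1)

# Noisy circle in R^2: locally 1-dimensional even though it spans the plane
rng = np.random.default_rng(1)
theta = rng.uniform(0, 2 * np.pi, 500)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=1e-3, size=(500, 2))
dims = [local_dim(X, i) for i in range(0, 500, 50)]
print(dims)
```

Repeating the estimate at several base points, as here, is exactly the "constant intrinsic dimension throughout" check described above.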
25,217
Estimating the dimension of a data set
I'm not sure about the 'domain of a function' part, but Hausdorff Dimension seems to answer this question. It has the odd property of agreeing with simple examples (e.g. the circle has Hausdorff Dimension 1), but of giving non-integral results for some sets ('fractals').
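Hausdorff dimension is not directly computable from a finite sample, but the closely related box-counting dimension can be estimated, and for well-behaved sets like the circle the two agree. A rough numpy sketch (grid scales chosen arbitrarily):

```python
import numpy as np

# Dense sample of the unit circle (a 1-dimensional set in the plane)
theta = np.linspace(0, 2 * np.pi, 20000, endpoint=False)
pts = np.c_[np.cos(theta), np.sin(theta)]

# Count occupied grid boxes N(eps) at several scales eps
eps = np.array([0.02, 0.04, 0.08, 0.16])
counts = [len({tuple(b) for b in np.floor(pts / e).astype(int)}) for e in eps]

# Box-counting dimension = slope of log N(eps) versus log(1/eps)
slope = np.polyfit(np.log(1 / eps), np.log(counts), 1)[0]
print(slope)
```

For the circle the slope comes out near 1; for a fractal set it would be non-integral, matching the "odd property" noted above.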
25,218
Estimating the dimension of a data set
I highly recommend reading this survey: Camastra, F. (2003). Data dimensionality estimation methods: a survey. Pattern Recognition, 36(12), 2945-2954. For performing this estimation, I found a very good toolbox in Matlab, the Matlab Toolbox for Dimensionality Reduction. In addition to techniques for dimensionality reduction, the toolbox contains implementations of 6 techniques for intrinsic dimensionality estimation.
25,219
Statistically significant vs. independent/dependent
Significance in an independent-samples t test just means that the probability (if the null were true) of sampling a mean difference as extreme as the mean difference you actually sampled is less than .05. This is totally unrelated to dependent/independent. "Dependent" means the distribution of some individual observations is connected to the distribution of others, for example A) they are the same people taking the same test a second time, B) people in each group are matched on some pre-test variable, or C) people in the two groups are related (e.g. family members). "Independent" means there is no such connection.
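The practical consequence shows up in which test you run. Here is a small simulation (a Python sketch, assuming scipy is available) of case A above — the same people measured twice — where the paired (dependent) test detects a shift that the independent test misses:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
baseline = rng.normal(0, 10, size=50)                  # large between-subject spread
followup = baseline + 1 + rng.normal(0, 0.1, size=50)  # same people, shifted by ~1

# Treating the samples as independent buries the shift in between-subject noise
t_ind, p_ind = stats.ttest_ind(followup, baseline)
# The paired test works on within-subject differences and easily detects it
t_rel, p_rel = stats.ttest_rel(followup, baseline)
print(p_ind, p_rel)
```

Ignoring the dependence here does not make the test "wrong" in the significance sense — it just throws away the within-subject information, which is why the choice of test must follow the design, not the p-value.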
25,220
Statistically significant vs. independent/dependent
Why stop at $t$-tests? You can think of two uncorrelated variables as two orthogonal vectors, exactly like the $x$ and $y$ axes in a two-dimensional Cartesian coordinate system. When either of two vectors, let's say $\mathbf{x}$ and $\mathbf{y}$, is correlated with the other, there will be a certain part of $\mathbf{x}$ that can be projected onto $\mathbf{y}$ and vice versa. With that in mind, it's fairly easy to see that since $$ \begin{align*} \left<\mathbf{x},\mathbf{y}\right>&=\|x\|\|y\|\cos\left(\theta\right)\\ \frac{\left<\mathbf{x},\mathbf{y}\right>}{\|x\|\|y\|}&=\cos\left(\theta\right)=r \end{align*} $$ where $r$ is Pearson's correlation coefficient (provided $\mathbf{x}$ and $\mathbf{y}$ are mean-centered) and $\left<\cdot,\cdot\right>$ is the inner product of the arguments. When I learned this I was totally blown away by how geometrically simple the idea of correlation is. And this is definitely not the only way to measure the correlation between two (or more) variables. Significance testing is a different ball game. Often we want to know by how much two (or more) groups differ on some outcome variable as a result of some manipulation that was performed on said groups. Like Brian said, you want to know whether the two groups come from the same distribution, so you compute the probability of sampling the mean difference (scaled by the standard error of the mean) that you obtained from your experiment, given that the null hypothesis (no difference in the means) is true. In behavioral research (and often elsewhere), if this probability is less than 0.05, you can conclude that the difference in the two (or more) means is likely due to your manipulation. EDIT: Dilip Sarwate pointed out that two uncorrelated variables can be statistically dependent, so I took out the first part. Thanks for that.
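The geometric identity is easy to verify numerically: after mean-centering the two vectors, Pearson's $r$ equals the cosine of the angle between them (numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(size=100)
y = 0.6 * x + rng.normal(size=100)

# Pearson correlation the usual way
r = np.corrcoef(x, y)[0, 1]

# Cosine of the angle between the mean-centered vectors
xc, yc = x - x.mean(), y - y.mean()
cos_theta = xc @ yc / (np.linalg.norm(xc) * np.linalg.norm(yc))

print(abs(r - cos_theta))  # agrees to floating-point precision
```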
25,221
Distribution of the exponential of an exponentially distributed random variable?
Sample from a Pareto distribution. If $Y\sim\mathsf{Exp}(\mathrm{rate}=\lambda),$ then $X = x_m\exp(Y)$ has a Pareto distribution with density function $f_X(x) = \frac{\lambda x_m^\lambda}{x^{\lambda+1}}$ and CDF $F_X(x) = 1-\left(\frac{x_m}{x}\right)^\lambda,$ for $x\ge x_m > 0.$ The minimum value $x_m > 0$ is necessary for the integral of the density to exist. Consider the random sample y of $n = 1000$ observations from $\mathsf{Exp}(\mathrm{rate}=\lambda=5)$ along with the Pareto sample x resulting from the transformation above.

    set.seed(1128)
    x.m = 1;  lam = 5
    y = rexp(1000, lam)
    summary(y)
         Min.   1st Qu.    Median      Mean   3rd Qu.      Max. 
    0.0001314 0.0519039 0.1298572 0.1946130 0.2743406 1.9046195 
    x = x.m*exp(y)
    summary(x)
      Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
     1.000   1.053   1.139   1.245   1.316   6.717 

Below is the empirical CDF (ECDF) of the Pareto sample x along with the CDF (dotted orange) of the distribution from which it was sampled. Tick marks along the horizontal axis show individual values of x.

    plot(ecdf(x), main="ECDF of Pareto Sample")
    curve(1 - (x.m/x)^lam, add=T, 1, 4, lwd=3, col="orange", lty="dotted")
    rug(x)

Ref: See the Wikipedia page on Pareto distributions, under the heading for the relationship to the exponential distribution.
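The same transformation in Python, checking one consequence of the CDF above: a Pareto$(x_m, \lambda)$ median is $x_m 2^{1/\lambda}$, so the sample median of $x_m\exp(Y)$ should land there (numpy sketch):

```python
import numpy as np

rng = np.random.default_rng(1128)
x_m, lam, n = 1.0, 5.0, 100_000

y = rng.exponential(scale=1 / lam, size=n)  # Exp with rate lam
x = x_m * np.exp(y)                         # should be Pareto(x_m, lam)

print(np.median(x), x_m * 2 ** (1 / lam))   # both near 1.1487
```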
25,222
Distribution of the exponential of an exponentially distributed random variable?
First, note that the range of $\DeclareMathOperator{\P}{\mathbb{P}} Y$ is $(1, \infty)$. Find the cumulative distribution function of $Y$ in the usual way: $$\begin{align} F_Y(t) & = \P(Y \leq t) = \P(e^X \le t) \\ & = \P( X \leq \ln(t) ) \\ & = F_X( \ln(t) ) = 1-e^{-\lambda \ln(t)} \\ & = 1- e^{\ln( t^{-\lambda})} \\ & = 1-t^{-\lambda} \end{align}$$ for $t\gt 1$. By differentiation we find the density function $$ f_Y(t) = \lambda t^{-\lambda -1},\quad t>1, $$ which is a Pareto distribution. Note that this is suspiciously similar to the density of a beta prime distribution. Define $U=Y-1$, which has density function $$ f_U(u)= \lambda (u+1)^{-\lambda -1},\quad u>0, $$ which we can rewrite as $$ f_U(u)=\frac{u^{1-1} (u+1)^{-\lambda-1}}{B(1,\lambda)}, $$ which we can see is a beta prime density. So we can reformulate: $e^X -1$ has a beta prime distribution.
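A quick numerical sanity check of the result (a Python sketch): if $U = e^X - 1$ with $X\sim\mathsf{Exp}(\lambda)$, the derivation implies the survival function $P(U>u) = (1+u)^{-\lambda}$, which the empirical frequencies should reproduce:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, n = 2.0, 200_000
x = rng.exponential(scale=1 / lam, size=n)  # Exp with rate lam
u = np.expm1(x)                             # e^X - 1

for t in (0.5, 1.0, 3.0):
    empirical = (u > t).mean()
    theoretical = (1 + t) ** -lam
    print(t, empirical, theoretical)
```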
25,223
Where does the logistic function come from?
I read in Strogatz's book that it originated from modeling human populations by Verhulst in 1838. Assume the population size is $N(t)$; then the per capita growth rate is $\dot N(t)/N(t)$. By assuming the per capita growth rate decreases linearly with the population size, we can write the logistic equation in the following form: $$\dot N(t)=rN(1-\frac{N}{K}),$$ where $K$ is the carrying capacity of the environment. From the equation, we can see that when $N$ is very small, the population grows approximately exponentially. As $N$ grows toward half of the capacity, the derivative $\dot N$ is still increasing but slows down. Once $N$ passes the halfway point, the derivative decreases, so we see a bending curve (interestingly, we can see this trend very roughly in the bent curves of cumulative coronavirus cases) and the population asymptotically approaches the capacity. Up to this point, we observe the properties of a logistic function. Intuitively, when $N$ is larger than the capacity, the population decreases. By further mathematical simplification of the equation, we obtain an equation of the form $$\frac{df(x)}{dx}=f(x)(1-f(x)),$$ which has the analytical solution $$f(x)=\frac{e^x}{e^x + C}.$$ With $C=1$ we have the logistic function.
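The closed form can be confirmed numerically (a Python sketch): integrate $f' = f(1-f)$ from $f(0)=1/2$ (i.e. $C=1$) with a standard Runge-Kutta scheme and compare against $e^x/(e^x+1)$:

```python
import math

def rk4(f, y0, x_end, steps=1000):
    """Classical 4th-order Runge-Kutta for the autonomous ODE y' = f(y)."""
    h, y = x_end / steps, y0
    for _ in range(steps):
        k1 = f(y)
        k2 = f(y + h * k1 / 2)
        k3 = f(y + h * k2 / 2)
        k4 = f(y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    return y

logistic_ode = lambda f: f * (1 - f)
numeric = rk4(logistic_ode, 0.5, 3.0)
closed_form = math.exp(3) / (math.exp(3) + 1)
print(numeric, closed_form)  # both ~0.9526
```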
25,224
Where does the logistic function come from?
I don't know about its history, but logistic function has a property which makes it attractive for machine learning and logistic regression: If you have two normally distributed classes with equal variances, then the posterior probability of an observation to belong to one of these classes is given by the logistic function. First, for any two classes $A$ and $B$ it follows from the Bayesian formula: $$ P(B | x) = \frac{P(x | B) P(B)}{P(x)} = \frac{P(x | B) P(B)}{P(x | A) P(A) + P(x | B) P(B)} = \frac{1}{1 + \frac{P(x | A)P(A)} {P(x | B)P(B)}}. $$ If $x$ is continuous, so that the classes can be described by their PDFs, $f_A(x)$ and $f_B(x)$, the fraction $P(x | A) / P(x | B)$ can be expressed as: $$ \frac{P(x | A)} {P(x | B)} = \lim_{\Delta x \rightarrow 0} \frac{f_A(x) \Delta x}{f_B(x) \Delta x} = \frac{f_A(x)}{f_B(x)}. $$ If the two classes are normally distributed, with equal variances: $$ f_A(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left( -\frac{(x - \mu_A)^2} {2 \sigma^2} \right), ~ ~ ~ ~ ~ ~ ~ ~ ~ f_B(x) = \frac{1}{\sqrt{2 \pi} \sigma} \exp \left( -\frac{(x - \mu_B)^2} {2 \sigma^2} \right) $$ then the fraction $f_A(x) / f_B(x)$ can be written as: $$ \frac{f_A(x)}{f_B(x)} = \exp \left( - \frac{(x - \mu_A)^2} {2 \sigma^2} + \frac{(x - \mu_B)^2} {2 \sigma^2} \right) = \exp \left( \frac{\mu_B^2 - \mu_A^2} {2 \sigma^2} + \frac{\mu_A - \mu_B} {\sigma^2} x \right), $$ and the whole term $$ \frac{f_A(x)P(A)}{f_B(x)P(B)} = \exp \left( \ln \frac{P(A)}{P(B)} + \frac{\mu_B^2 - \mu_A^2} {2 \sigma^2} + \frac{\mu_A - \mu_B} {\sigma^2} x \right). $$ Denoting $$ \beta_0 = \frac{\mu_A^2 - \mu_B^2} {2 \sigma^2} - \ln \frac{P(A)}{P(B)} ~ ~ ~ ~ ~ ~ ~ ~ \text{and} ~ ~ ~ ~ ~ ~ ~ ~  \beta_1 = \frac{\mu_B - \mu_A} {\sigma^2} $$ leads to the form commonly used in logistic regression: $$ P(B | x) = \frac{1}{1 + \exp \left(-\beta_0 - \beta_1 x \right) }. 
$$ So, if you have reasons to believe that your classes are normally distributed, with equal variances, the logistic function is likely to be the best model for the class probabilities.
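This correspondence is easy to verify numerically. The sketch below (class parameters and priors are arbitrary, chosen for illustration) compares the Bayes posterior computed directly from the normal densities with the logistic curve built from the $\beta_0$ and $\beta_1$ formulas above:

```python
import numpy as np

def npdf(x, mu, s):
    """Normal density with mean mu and standard deviation s."""
    return np.exp(-(x - mu)**2 / (2 * s**2)) / (np.sqrt(2 * np.pi) * s)

# hypothetical class parameters and priors
mu_a, mu_b, sigma = -1.0, 2.0, 1.5
p_a, p_b = 0.3, 0.7

# coefficients from the derivation above
beta0 = (mu_a**2 - mu_b**2) / (2 * sigma**2) - np.log(p_a / p_b)
beta1 = (mu_b - mu_a) / sigma**2

x = np.linspace(-5.0, 5.0, 101)
bayes = npdf(x, mu_b, sigma) * p_b / (npdf(x, mu_a, sigma) * p_a
                                      + npdf(x, mu_b, sigma) * p_b)
logistic = 1.0 / (1.0 + np.exp(-beta0 - beta1 * x))
print(np.max(np.abs(bayes - logistic)))   # agreement to machine precision
```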
25,225
Where does the logistic function come from?
Let me provide some perspectives from epidemiology. In epidemiology, we are generally interested in risks, which are roughly equivalent to probabilities $p$. However, one important reason why we so often work with $\text{odds}=\frac{p}{1-p}$ is the case-control study design, in which we sample cases (diseased) and controls (non-diseased) independently and compare the proportion of each sample exposed to a risk factor. This is called a retrospective design, in contrast to the prospective design, where two samples with different risk factors are compared for their risk of developing disease. Now the neat thing about the odds ratio is that it is invariant to both designs -- the retrospective and prospective odds ratios are the same. This means one doesn't need to know the prevalence of the disease to estimate the odds ratio from a case-control study, unlike the risk ratio. More interestingly, this invariance of the odds ratio extends to logistic regression, such that we can analyse a retrospective study as if we were analysing a prospective study, under some assumptions (ref). This makes it a lot easier to analyse retrospective studies, as we don't have to model the exposures $\boldsymbol{x}$ jointly. Finally, although the odds ratio is difficult to interpret, with a rare disease it approximates the risk ratio. That's another reason why modeling the odds is so common in epidemiology.
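The invariance can be illustrated with a hypothetical 2x2 table (the counts below are made up): computed prospectively or retrospectively, the odds ratio reduces to the same cross-product $ad/bc$:

```python
# hypothetical 2x2 counts:           case  control
a, b = 40, 160   # exposed:            a      b
c, d = 10, 190   # unexposed:          c      d

or_prospective   = (a / b) / (c / d)   # odds of disease, exposed vs unexposed
or_retrospective = (a / c) / (b / d)   # odds of exposure, cases vs controls
print(or_prospective, or_retrospective)  # both equal a*d / (b*c) = 4.75
```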
25,226
Where does the logistic function come from?
I think we can see logistic regression from the perspective of the Boltzmann distribution in physics (the Gibbs distribution; also refer to this thread) or the log-linear model in statistics. We can treat the weight matrix (viewed from the softmax perspective) as the potentials between each visible feature variable and each hidden variable (the $y$'s; for logistic regression there are two $y$'s). The $\theta^{(i)}$ is just the sum of the potentials between the $i$th $y$ and all the features, and the exponential turns the sum into a product. And we can see that it dates back to 1868: the Boltzmann distribution is named after Ludwig Boltzmann, who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium.
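One concrete way to see the connection: a two-class softmax (the Boltzmann/Gibbs form) of the potentials reduces exactly to the logistic function of their difference. A minimal sketch, with arbitrary values:

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
z = rng.normal(size=2)                    # two "energies" theta^(1).x, theta^(2).x
p = softmax(z)
sig = 1 / (1 + np.exp(-(z[0] - z[1])))    # logistic of the energy difference
print(p[0], sig)                          # identical: 2-class softmax is logistic
```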
25,227
Under the 0-1 loss function, the Bayesian estimator is the mode of the posterior distribution
You need to be a bit careful with this kind of problem because the definition of the zero-one loss function will depend on whether you are dealing with a discrete or continuous parameter. For a discrete parameter you can define the zero-one loss as an indicator function and this works fine. For a continuous parameter you can't do this, because if you integrate a discrete indicator over a continuous probability density function you will always get zero (so the expected loss would be zero regardless of the parameter estimator). In the latter case you need to define the zero-one loss function either by allowing some "tolerance" around the exact value, or by using the Dirac delta function. Below I show the derivation of the posterior mode estimator in both the discrete and continuous cases, using the Dirac function in the latter. I also unify these cases by using Lebesgue-Stieltjes integration. Discrete case: Suppose that the unknown parameter $\theta$ is a discrete random variable, and let $\hat{\theta}$ denote the estimator of this parameter. Then the zero-one loss function is defined as: $$L(\hat{\theta} , \theta) = \mathbb{I}(\hat{\theta} \neq \theta).$$ This gives expected loss: $$\begin{equation} \begin{aligned} \bar{L}(\hat{\theta} | X) \equiv \mathbb{E}(L(\hat{\theta}, \theta ) | X) &= \sum_{\theta \in \Theta} \mathbb{I}(\hat{\theta} \neq \theta) \pi (\theta | X ) \\[8pt] &= 1 - \sum_{\theta \in \Theta} \mathbb{I}(\hat{\theta} =\theta) \pi (\theta | X ) \\[8pt] &= 1 - \pi (\hat{\theta} | X). \end{aligned} \end{equation}$$ Minimising the expected loss is equivalent to maximising the posterior probability $\pi (\hat{\theta} | X)$, which occurs when $\hat{\theta}$ is the posterior mode. Continuous case: Suppose that the unknown parameter $\theta$ is a continuous random variable, and let $\hat{\theta}$ denote the estimator of this parameter. 
Then the zero-one loss function is defined as: $$L(\hat{\theta} , \theta) = 1 - \delta (\hat{\theta} - \theta),$$ where $\delta$ denotes the Dirac delta function. This gives expected loss: $$\begin{equation} \begin{aligned} \bar{L}(\hat{\theta} | X) \equiv \mathbb{E}(L(\hat{\theta}, \theta ) | X) &= \int_{\Theta} (1- \delta (\hat{\theta} - \theta)) \pi (\theta | X ) \ d \theta \\[8pt] &= 1 - \int_{\Theta} \delta (\hat{\theta} - \theta) \pi (\theta | X ) \ d \theta \\[8pt] &= 1 - \pi (\hat{\theta} | X). \end{aligned} \end{equation}$$ Minimising the expected loss is equivalent to maximising the posterior density $\pi (\hat{\theta} | X)$, which occurs when $\hat{\theta}$ is the posterior mode. Note here that the Dirac delta function is not strictly a real function; it is actually a distribution on the real line. Unification with Lebesgue-Stieltjes integration: We can unify these two cases by treating the loss function as a distribution for $\theta$ with the distribution function: $$H(\hat{\theta}-\theta) = \mathbb{I}(\hat{\theta} \geqslant \theta).$$ We can then write the expected loss as: $$\begin{equation} \begin{aligned} \bar{L}(\hat{\theta} | X) \equiv \mathbb{E}(L(\hat{\theta}, \theta ) | X) &= \int_{\Theta} \pi (\theta | X ) \ d H(\hat{\theta}-\theta) \\[8pt] &= 1 - \pi (\hat{\theta} | X). \end{aligned} \end{equation}$$ This treatment encompasses both the discrete and continuous cases. In fact, it implicitly uses the Dirac delta function, since the loss distribution in this case is the distribution function for the Dirac delta function.
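The discrete case can be checked directly: for a made-up posterior, the expected zero-one loss of each candidate estimator equals $1 - \pi(\hat{\theta} \mid X)$, so the minimiser is the posterior mode:

```python
import numpy as np

posterior = np.array([0.05, 0.10, 0.50, 0.25, 0.10])  # pi(theta | X), hypothetical
theta = np.arange(5)

# expected 0-1 loss of estimator t: sum of posterior mass where theta != t
expected_loss = np.array([(posterior * (theta != t)).sum() for t in theta])
print(expected_loss)                    # equals 1 - posterior
print(theta[np.argmin(expected_loss)])  # the posterior mode, theta = 2
```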
25,228
Confused with Residual Sum of Squares and Total Sum of Squares
You have the total sum of squares $\displaystyle \sum_i ({y}_i-\bar{y})^2$, which you can write as $\displaystyle \sum_i ({y}_i-\hat{y}_i+\hat{y}_i-\bar{y})^2 $, i.e. as $\displaystyle \sum_i ({y}_i-\hat{y}_i)^2+2\sum_i ({y}_i-\hat{y}_i)(\hat{y}_i-\bar{y}) +\sum_i(\hat{y}_i-\bar{y})^2$, where the first summation term is the residual sum of squares, the second is zero for a least-squares fit with an intercept (if not, the residuals are correlated with the fitted values, suggesting there are better values of $\hat{y}_i$), and the third is the explained sum of squares. Since these are sums of squares they are non-negative, so the residual sum of squares can be no greater than the total sum of squares.
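The decomposition can be checked on simulated data. The sketch below (synthetic data, OLS fit via `np.polyfit`) confirms TSS = RSS + ESS, and hence RSS ≤ TSS:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2 * x + 1 + rng.normal(size=100)

b1, b0 = np.polyfit(x, y, 1)          # least-squares line with intercept
yhat = b0 + b1 * x

tss = ((y - y.mean())**2).sum()
rss = ((y - yhat)**2).sum()
ess = ((yhat - y.mean())**2).sum()
print(tss, rss + ess)   # equal: the cross term vanishes for an OLS fit
```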
25,229
Confused with Residual Sum of Squares and Total Sum of Squares
This chart was very helpful for me. [chart image and source link not preserved]
25,230
Marginalization of conditional probability
By definition of conditional probability* we have that: $$P(E=e|A=a)=\frac{P(E=e,A=a)}{P(A=a)}=\frac{\sum_{c}P(E=e,C=c,A=a)}{P(A=a)}$$ In the last step I used marginalization over $c$. Then, again using the definition of conditional probability, this is equal to: $$\sum_{c}P(E=e,C=c|A=a)$$. *Definition of conditional probability: $$P(x_1,...,x_n|y_1,...,y_m)=\frac{P(x_1,...,x_n,y_1,...,y_m)}{P(y_1,...,y_m)}$$
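A quick numeric sanity check of this identity, using an arbitrary joint table $P(E, C, A)$ (values are random, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
joint = rng.random((2, 3, 2))    # unnormalized P(E=e, C=c, A=a)
joint /= joint.sum()

p_a = joint.sum(axis=(0, 1))     # P(A=a)

# left-hand side: P(E=1 | A=0), marginalizing C inside the joint first
lhs = joint[1, :, 0].sum() / p_a[0]

# right-hand side: sum over c of P(E=1, C=c | A=0)
rhs = sum(joint[1, c, 0] / p_a[0] for c in range(3))
print(lhs, rhs)   # equal
```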
25,231
Marginalization of conditional probability
Conditional probabilities are a probability measure meaning that they satisfy the axioms of probability, and enjoy all the properties of (unconditional) probability. The practical use of this pontification is that any rule, theorem, or formula that you have learned about probabilities are also applicable if everything is assumed to be conditioned on the occurrence of some event. For example, knowing that $$P(B^c) = 1 - P(B)$$ allows us to immediately conclude that $$P(B^c\mid A) = 1 - P(B\mid A)$$ is a valid result without going through writing out the formal definitions and completing a proof of the result. So apply this idea to the formula $$P(E) = \sum_i P(E \cap C_i)$$ where $C_1, C_2, \cdots $ is a partition of the sample space $\Omega$ into disjoint subsets (and so $(E \cap C_1)$, $(E \cap C_2), \cdots$ is a partition of $E$ into disjoint subsets). Conditioning everything on $A$ gives us $$P(E\mid A) = \sum_i P(E \cap C_i\mid A)$$ which is your formula in slightly different notation.
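The principle can be sanity-checked numerically on a small finite sample space (the outcomes, probabilities, and events below are made up): the complement rule still holds after conditioning everything on $A$:

```python
import numpy as np

rng = np.random.default_rng(3)
p = rng.random(10)
p /= p.sum()                     # probabilities on outcomes 0..9

A = np.zeros(10, dtype=bool); A[[0, 2, 4, 6, 8]] = True   # an arbitrary event
B = np.zeros(10, dtype=bool); B[[0, 1, 2, 3]] = True      # another event

def cond(event, given):
    """P(event | given) on the finite sample space."""
    return p[event & given].sum() / p[given].sum()

print(cond(~B, A), 1 - cond(B, A))   # the two sides agree
```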
25,232
What's the recommended weight initialization strategy when using the ELU activation function?
I think the initialization should be roughly $\sqrt{\frac{1.55}{n_{in}}}$. The He et al. 2015 formula was made for ReLU units. The key idea is that the variance of $f(y)$, with $y = W x + b$, should be roughly equal to the variance of $y$. Let's first go over the case of a ReLU activation, and see if we can amend it for ELU units. In the paper they show that: $$ Var[y_l] = n_l Var[w_l] \mathbb{E}[x^2_l] $$ They express the last expectation $\mathbb{E}[x^2_l]$ in terms of $Var[y_{l-1}]$. For ReLUs we have that $\mathbb{E}[x^2_l] = \frac{1}{2} Var[y_{l-1}]$, simply because ReLUs set half the values in $x$ to $0$ on average. Thus we can write $$ Var[y_l] = n_l Var[w_l] \frac{1}{2} Var[y_{l-1}] $$ We apply this to all layers, taking the product over $l$, all the way to the first layer. This gives: $$ Var[y_L] = Var[y_1] \prod_{l=2}^L \frac{1}{2} n_l Var[w_l] $$ Now this is stable only when $\frac{1}{2} n_l Var[w_l]$ is close to 1, so they set it to 1 and find $Var[w_l] = \frac{2}{n_l}$. Now for ELU units, the only thing we have to change is the expression of $\mathbb{E}[x^2_l]$ in terms of $Var[y_{l-1}]$. Sadly, this is not as straightforward for ELU units as for ReLU units, as it involves calculating $\mathbb{E}[(e^{\mathcal{N}})^2]$ over only the negative values of $\mathcal{N}$. This is not a pretty formula, and I don't even know if there is a good closed-form solution, so let's sample to get an approximation. We want $Var[y_l]$ to be roughly equal to 1 (most inputs have variance 1, batch norm makes layers variance 1, etc.). Thus we can sample from a standard normal distribution, apply the ELU function with $\alpha = 1$, square, and calculate the mean. This gives $\approx 0.645$. The inverse of this is $\approx 1.55$. Thus, following the same logic, we can set $Var[w_l] = \frac{1.55}{n_l}$ (i.e., a standard deviation of $\sqrt{1.55/n_l}$) to get a variance that doesn't increase in magnitude. 
I reckon that would be the optimal value for the ELU function. It sits between the value for the ReLU function ($1/2$, which is lower than $0.645$ because the values that ReLU maps to $0$ now get mapped to some negative value) and what you would have for any function with mean 0 (which is just 1). Take care that if $Var[y_{l-1}]$ is different, the optimal constant is also different. As this variance tends to 0, the function behaves more and more like the identity, so the constant will tend to 1. If the variance becomes really big, the value tends towards the original ReLU value, 0.5. Edit: I did the theoretical analysis of the variance of $ELU(x)$ if $x$ is normally distributed. It involves some derivations of the log-normal distribution and not-so-pretty integrals. The eventual answer for the variance is $0.5 \sigma$ (the part of the linear function) + $$ a - 2(b)^2 + (2b - 1)^2 $$ where $$ a = \frac{1}{2} e^{\frac{\sigma^2}{2}} \left(\text{erfc}\left(\frac{\sigma}{\sqrt{2}}\right) + \sqrt{\frac{1}{\sigma^2}} \sigma -1\right)\\ b = \frac{1}{2} e^{2\sigma^2} \left(\text{erfc}\left(\sqrt{2} \sigma\right) + \sqrt{\frac{1}{\sigma^2}} \sigma -1\right)\\ $$ This is not easily solvable for $\sigma$, unfortunately. You can fill in a value for $\sigma$ and recover the estimate I gave above, however, which is pretty cool.
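The sampling step described above can be reproduced in a few lines (Monte Carlo, so the constants are approximate):

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.standard_normal(10_000_000)
elu = np.where(z > 0, z, np.exp(z) - 1)   # ELU with alpha = 1

m2 = (elu**2).mean()       # estimate of E[ELU(z)^2] for z ~ N(0, 1)
print(m2, 1 / m2)          # approximately 0.645 and 1.55
```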
25,233
What's the recommended weight initialization strategy when using the ELU activation function?
Just noticed that the ELU paper states that "The weights have been initialized according to (He et al., 2015)", so this must be a good strategy, if not the optimal strategy.
25,234
Confusion about pooling layer, is it trainable or not?
In the paper you read, "a total of 12 parameters can be trained in S1 layer" refers to the number of output planes in the pooling layer, not the number of parameters in a weight matrix. Normally, what we train within a neural network model are the parameters in the weight matrices. We don't train parameters in input planes or output planes. So the students who wrote the paper didn't express themselves clearly, which made you confused about what a pooling layer really is. There are no trainable parameters in a max-pooling layer. In the forward pass, it passes the maximum value within each rectangle to the next layer. In the backward pass, it propagates the error from the next layer back to the place where the maximum value was taken, because that's where the error comes from. For example, in the forward pass, you have the image patch

1 2
3 4

and you would get 4 in the next layer. In the backward pass, you have the error -0.1, and you propagate it back to where the maximum came from:

0 0
0 -0.1

because the value 4 was taken from that location in the forward pass.
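The forward/backward behaviour described above can be sketched for a single 2x2 window (names and shapes here are for illustration only):

```python
import numpy as np

def maxpool_forward(x):
    """Max pool over one window: return the max and where it came from."""
    idx = np.unravel_index(np.argmax(x), x.shape)
    return x[idx], idx

def maxpool_backward(grad, idx, shape):
    """Route the upstream gradient back to the position of the max."""
    dx = np.zeros(shape)
    dx[idx] = grad
    return dx

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
out, idx = maxpool_forward(patch)               # out = 4.0, from position (1, 1)
dx = maxpool_backward(-0.1, idx, patch.shape)
print(out)
print(dx)                                       # [[0, 0], [0, -0.1]]
```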
Confusion about pooling layer, is it trainable or not?
In the paper you read a total of 12 parameters can be trained in S1 layer meant the number of output planes in the pooling layer, not the number of parameters in the weight matrix. Normally, what we
Confusion about pooling layer, is it trainable or not? In the paper you read a total of 12 parameters can be trained in S1 layer meant the number of output planes in the pooling layer, not the number of parameters in the weight matrix. Normally, what we train within a neural network model are the parameters in the weight matrix. We don't train parameters in input planes or output planes. So students who wrote the paper didn't express themselves clearly, which made you confused about what a pooling layer really is. There are no trainable parameters in a max-pooling layer. In the forward pass, it pass maximum value within each rectangle to the next layer. In the backward pass, it propagate error in the next layer to the place where the max value is taken, because that's where the error comes from. For example, in forward pass, you have a image rectangle: 1 2 3 4 and you would get: 4 in the next layer. And in backward pass, you have error: -0.1 then you propagate the error back to where you get it: 0 0 0 -0.1 because the take the number 4 from that location in the forward pass.
Confusion about pooling layer, is it trainable or not? In the paper you read a total of 12 parameters can be trained in S1 layer meant the number of output planes in the pooling layer, not the number of parameters in the weight matrix. Normally, what we
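The forward/backward behaviour described in the answer above can be sketched in a few lines of NumPy (a minimal sketch for a single 2x2 window; the variable names are mine, not from the paper):

```python
import numpy as np

# Forward pass: 2x2 max pooling passes on the maximum of the patch.
patch = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
pooled = patch.max()          # 4.0 goes to the next layer

# Backward pass: the incoming error is routed back to the argmax
# location; every other position gets zero gradient.
error = -0.1
grad = np.zeros_like(patch)
grad[np.unravel_index(patch.argmax(), patch.shape)] = error
```

Note there is indeed nothing trainable here: the only state is the argmax location remembered from the forward pass.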
25,235
Confusion about pooling layer, is it trainable or not?
There is no fixed standard model in deep learning, which is why there are many different CNN models. Sometimes the pooling can play some learning role, as in Here . I have seen many papers where they apply an activation function or add bias terms to the pooling layer. The pooling can be average pooling, max pooling, L2-norm pooling, or even some other function that reduces the size of the data. The state-of-the-art result on CIFAR-10 here used a novel pooling method called Fractional Max-Pooling.
Confusion about pooling layer, is it trainable or not?
There is no fixed standard model in the deep learning. This why there are many different CNN models. Sometimes, The pooling can play some learning role as in Here . I have seen many papers where they
Confusion about pooling layer, is it trainable or not? There is no fixed standard model in deep learning, which is why there are many different CNN models. Sometimes the pooling can play some learning role, as in Here . I have seen many papers where they apply an activation function or add bias terms to the pooling layer. The pooling can be average pooling, max pooling, L2-norm pooling, or even some other function that reduces the size of the data. The state-of-the-art result on CIFAR-10 here used a novel pooling method called Fractional Max-Pooling.
Confusion about pooling layer, is it trainable or not? There is no fixed standard model in the deep learning. This why there are many different CNN models. Sometimes, The pooling can play some learning role as in Here . I have seen many papers where they
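To make the pooling variants named above concrete, here is how they differ on the same window of values (a sketch; not taken from any particular model):

```python
import numpy as np

patch = np.array([1.0, 2.0, 3.0, 4.0])   # values inside one pooling window

max_pool = patch.max()                    # keeps the strongest activation
avg_pool = patch.mean()                   # keeps the average activation
l2_pool = np.sqrt((patch ** 2).sum())     # keeps the L2 norm of the window
```

All three reduce the window to a single number; which summary works best is an empirical question, as the answer notes.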
25,236
Confusion about pooling layer, is it trainable or not?
If the pooling operation is average pooling (see Scherer, Müller and Behnke, 2010), then it would be learnable because there is a trainable bias term: takes the average over the inputs, multiplies it with a trainable scalar $\beta$, adds a trainable bias $b$, and passes the result through the non-linearity. But many recent papers have mentioned that it has fallen out of favor compared to max pooling, which has been found to work better in practice. References Scherer, D., Müller, A., & Behnke, S. (2010, September). Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks (pp. 92-101). Springer Berlin Heidelberg.
Confusion about pooling layer, is it trainable or not?
If the pooling operation is average pooling (see Scherer, Müller and Behnke, 2010), then it would be learnable because there is trainable bias term: takes the average over the inputs, multiplies it
Confusion about pooling layer, is it trainable or not? If the pooling operation is average pooling (see Scherer, Müller and Behnke, 2010), then it would be learnable because there is a trainable bias term: takes the average over the inputs, multiplies it with a trainable scalar $\beta$, adds a trainable bias $b$, and passes the result through the non-linearity. But many recent papers have mentioned that it has fallen out of favor compared to max pooling, which has been found to work better in practice. References Scherer, D., Müller, A., & Behnke, S. (2010, September). Evaluation of pooling operations in convolutional architectures for object recognition. In International Conference on Artificial Neural Networks (pp. 92-101). Springer Berlin Heidelberg.
Confusion about pooling layer, is it trainable or not? If the pooling operation is average pooling (see Scherer, Müller and Behnke, 2010), then it would be learnable because there is trainable bias term: takes the average over the inputs, multiplies it
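The learnable average-pooling unit quoted above can be written out directly. This is a sketch with made-up parameter values; `sigmoid` stands in for whatever non-linearity the network uses:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def learnable_avg_pool(window, beta, b):
    """Average the inputs, scale by trainable beta, add trainable bias b,
    then pass the result through the non-linearity."""
    return sigmoid(beta * window.mean() + b)

window = np.array([1.0, 2.0, 3.0, 4.0])
# mean = 2.5, so beta*mean + b = 0.5*2.5 - 1.25 = 0, and sigmoid(0) = 0.5
out = learnable_avg_pool(window, beta=0.5, b=-1.25)
```

Here `beta` and `b` would be updated by backpropagation like any other weights, which is exactly what makes this variant of pooling trainable.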
25,237
Why Are Measures of Dispersion Less Intuitive Than Centrality?
I share your feeling that variance is slightly less intuitive. More importantly, variance as a measure is optimized for certain distributions and has less worth for asymmetric distributions. Mean absolute difference from the mean is not much more intuitive in my view, because it requires one to choose the mean as the measure of central tendency. I prefer Gini's mean difference --- the mean absolute difference over all pairs of observations. It is intuitive, robust, and efficient. On efficiency, if the data come from a Gaussian distribution, Gini's mean difference with an appropriate rescaling factor applied to it is 0.98 times as efficient as the sample standard deviation. There is an efficient computing formula for Gini's mean difference once the data are sorted. R code is below. n <- length(x) w <- 4 * ((1:n) - (n - 1)/2)/n/(n - 1) sum(w * sort(x - mean(x)))
Why Are Measures of Dispersion Less Intuitive Than Centrality?
I share your feeling that variance is slightly less intuitive. More importantly, variance as a measure is optimized for certain distributions and has less worth for asymmetric distributions. Mean ab
Why Are Measures of Dispersion Less Intuitive Than Centrality? I share your feeling that variance is slightly less intuitive. More importantly, variance as a measure is optimized for certain distributions and has less worth for asymmetric distributions. Mean absolute difference from the mean is not much more intuitive in my view, because it requires one to choose the mean as the measure of central tendency. I prefer Gini's mean difference --- the mean absolute difference over all pairs of observations. It is intuitive, robust, and efficient. On efficiency, if the data come from a Gaussian distribution, Gini's mean difference with an appropriate rescaling factor applied to it is 0.98 times as efficient as the sample standard deviation. There is an efficient computing formula for Gini's mean difference once the data are sorted. R code is below. n <- length(x) w <- 4 * ((1:n) - (n - 1)/2)/n/(n - 1) sum(w * sort(x - mean(x)))
Why Are Measures of Dispersion Less Intuitive Than Centrality? I share your feeling that variance is slightly less intuitive. More importantly, variance as a measure is optimized for certain distributions and has less worth for asymmetric distributions. Mean ab
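A Python translation of the idea in the answer above (a sketch, not from the original): it checks the efficient sorted-data formula against the brute-force definition, the mean of |x_i - x_j| over all distinct pairs.

```python
import numpy as np

def gini_mean_difference(x):
    """Gini's mean difference via the efficient sorted-data formula."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    w = 4 * (np.arange(1, n + 1) - (n - 1) / 2) / n / (n - 1)
    return np.sum(w * np.sort(x - x.mean()))

def gini_brute_force(x):
    """Mean absolute difference over all n*(n-1)/2 distinct pairs."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    diffs = np.abs(x[:, None] - x[None, :])
    return diffs.sum() / (n * (n - 1))   # each pair is counted twice

x = np.array([3.0, 1.5, 2.1, 5.0, 0.2])
```

The centering by `x.mean()` before sorting is what lets the weights use the offset `(n - 1)/2` rather than `(n + 1)/2`: any constant shift in the weights multiplies a sum of centered values, which is zero.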
25,238
Why Are Measures of Dispersion Less Intuitive Than Centrality?
Here are some of my thoughts. It doesn't address every angle you could look at your question from; in fact, there is a lot it doesn't address (the question does feel a bit broad). Why is it hard for laypeople to understand the mathematical calculation of Variance? Variance is essentially how spread out things are. This is easy enough to understand, but the way it is calculated may seem counter-intuitive to a layperson. The issue is that the differences from the mean are squared (then averaged), and then square-rooted to get the Standard Deviation. We understand why this method is necessary - the squaring is to make the values positive, and then they are square-rooted to get back the original units. However, a layperson is likely to be confused about why the numbers are squared and square-rooted. This looks like it cancels itself out (it doesn't), so it seems pointless/strange. What is more intuitive to them is finding the spread by simply averaging the absolute differences between the mean and each point (called Mean Absolute Deviation). This method does not require squaring and square-rooting, so it is far more intuitive. Note that just because Mean Absolute Deviation is more straightforward does not mean it is 'better'. The debate over whether to use squares or absolute values has been going on for a century and has involved many prominent statisticians, so a random person like me can't just show up here and say one is better. (Averaging squares to find variance is of course more popular.) In a nutshell: The squaring used to find variance seems less intuitive to laypeople, who would find averaging the absolute differences to be more straightforward. However, I don't think people have a problem with understanding the idea of spread itself.
Why Are Measures of Dispersion Less Intuitive Than Centrality?
Here are some of my thoughts. It doesn't address every angle you could look at your question from, in fact, there is a lot it doesn't address (the question does feel a bit broad). Why is it hard for l
Why Are Measures of Dispersion Less Intuitive Than Centrality? Here are some of my thoughts. It doesn't address every angle you could look at your question from; in fact, there is a lot it doesn't address (the question does feel a bit broad). Why is it hard for laypeople to understand the mathematical calculation of Variance? Variance is essentially how spread out things are. This is easy enough to understand, but the way it is calculated may seem counter-intuitive to a layperson. The issue is that the differences from the mean are squared (then averaged), and then square-rooted to get the Standard Deviation. We understand why this method is necessary - the squaring is to make the values positive, and then they are square-rooted to get back the original units. However, a layperson is likely to be confused about why the numbers are squared and square-rooted. This looks like it cancels itself out (it doesn't), so it seems pointless/strange. What is more intuitive to them is finding the spread by simply averaging the absolute differences between the mean and each point (called Mean Absolute Deviation). This method does not require squaring and square-rooting, so it is far more intuitive. Note that just because Mean Absolute Deviation is more straightforward does not mean it is 'better'. The debate over whether to use squares or absolute values has been going on for a century and has involved many prominent statisticians, so a random person like me can't just show up here and say one is better. (Averaging squares to find variance is of course more popular.) In a nutshell: The squaring used to find variance seems less intuitive to laypeople, who would find averaging the absolute differences to be more straightforward. However, I don't think people have a problem with understanding the idea of spread itself.
Why Are Measures of Dispersion Less Intuitive Than Centrality? Here are some of my thoughts. It doesn't address every angle you could look at your question from, in fact, there is a lot it doesn't address (the question does feel a bit broad). Why is it hard for l
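The two routes to a spread measure contrasted in the answer above, side by side on a tiny made-up dataset (a sketch):

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
mean = x.mean()                              # 5.0

# Square route: average the squared deviations, then square-root.
variance = np.mean((x - mean) ** 2)          # 4.0
std_dev = np.sqrt(variance)                  # 2.0

# Absolute route: just average the absolute deviations.
mad = np.mean(np.abs(x - mean))              # 1.5
```

Note the two routes give different numbers (2.0 vs 1.5) even on the same data, which is part of why "the spread" feels less canonical than "the center".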
25,239
Why Are Measures of Dispersion Less Intuitive Than Centrality?
Here is my opinion on your question. I will start by questioning an abovementioned answer and then try to make my point. Question to a previous hypothesis: Is it really the squares that make dispersion measures such as the Mean Squared Deviation hard to understand? I agree the square makes it harder by bringing mathematical complexity, but if the answer were only the squares, the Mean Absolute Deviation would be as simple to understand as measures of centrality. Opinion: I think that what makes it hard for us to understand measures of dispersion is that dispersion itself is 2-dimensional information. Trying to summarize 2-dimensional information in one metric implies a partial loss of information, which in consequence causes confusion. Example: An example that can help to explain the concept above is the following. Let's take 2 different sets of data: Follows a Gaussian distribution Follows an unknown and asymmetric distribution Let's also assume the dispersion in terms of Standard Deviation is 1.0. My mind tends to interpret the dispersion of set 1 much more clearly than that of set 2. In this specific case, the reason for my better understanding is that knowing the 2-dimensional shape of the distribution in advance allows me to understand the dispersion measure in terms of a probability around the centralized Gaussian mean. In other words, the Gaussian distribution gave me the 2-dimensional hint I needed to better translate from the measure of dispersion. Conclusion: In sum, there is no tangible way to capture in one deviation measure all there is in 2-dimensional information. What I usually do to understand dispersion without looking directly at the distribution itself is to combine many measures that describe a certain distribution. They set up the context for my mind to get a better grasp on the dispersion measure itself. If I could make use of graphs, certainly box plots are really useful for visualizing it.
Great discussion that made me think a lot about the issue. I would be glad to hear your opinion.
Why Are Measures of Dispersion Less Intuitive Than Centrality?
Here it goes my opinion on your question. I will start by questioning an abovementioned answer to then try to make my point. Question to previous hypothesis: Is it really the squares makes dispersion
Why Are Measures of Dispersion Less Intuitive Than Centrality? Here is my opinion on your question. I will start by questioning an abovementioned answer and then try to make my point. Question to a previous hypothesis: Is it really the squares that make dispersion measures such as the Mean Squared Deviation hard to understand? I agree the square makes it harder by bringing mathematical complexity, but if the answer were only the squares, the Mean Absolute Deviation would be as simple to understand as measures of centrality. Opinion: I think that what makes it hard for us to understand measures of dispersion is that dispersion itself is 2-dimensional information. Trying to summarize 2-dimensional information in one metric implies a partial loss of information, which in consequence causes confusion. Example: An example that can help to explain the concept above is the following. Let's take 2 different sets of data: Follows a Gaussian distribution Follows an unknown and asymmetric distribution Let's also assume the dispersion in terms of Standard Deviation is 1.0. My mind tends to interpret the dispersion of set 1 much more clearly than that of set 2. In this specific case, the reason for my better understanding is that knowing the 2-dimensional shape of the distribution in advance allows me to understand the dispersion measure in terms of a probability around the centralized Gaussian mean. In other words, the Gaussian distribution gave me the 2-dimensional hint I needed to better translate from the measure of dispersion. Conclusion: In sum, there is no tangible way to capture in one deviation measure all there is in 2-dimensional information. What I usually do to understand dispersion without looking directly at the distribution itself is to combine many measures that describe a certain distribution. They set up the context for my mind to get a better grasp on the dispersion measure itself.
If I could make use of graphs, certainly box plots are really useful for visualizing it. Great discussion that made me think a lot about the issue. I would be glad to hear your opinion.
Why Are Measures of Dispersion Less Intuitive Than Centrality? Here it goes my opinion on your question. I will start by questioning an abovementioned answer to then try to make my point. Question to previous hypothesis: Is it really the squares makes dispersion
25,240
Why Are Measures of Dispersion Less Intuitive Than Centrality?
I think that a simple reason that people have a harder time with variability (whether variance, standard deviation, MAD, or whatever) is that you cannot really understand variability until after you understand the idea of center. This is because the measures of variability are all measured based on distance from the center. Concepts like mean and median are parallel concepts, you could learn either one first and some people may have a better understanding of one and other people will understand the other better. But spread is measured from the center (for some definition of center), so cannot really be understood first.
Why Are Measures of Dispersion Less Intuitive Than Centrality?
I think that a simple reason that people have a harder time with variability (whether variance, standard deviation, MAD, or whatever) is that you cannot really understand variability until after you u
Why Are Measures of Dispersion Less Intuitive Than Centrality? I think that a simple reason that people have a harder time with variability (whether variance, standard deviation, MAD, or whatever) is that you cannot really understand variability until after you understand the idea of center. This is because the measures of variability are all measured based on distance from the center. Concepts like mean and median are parallel concepts, you could learn either one first and some people may have a better understanding of one and other people will understand the other better. But spread is measured from the center (for some definition of center), so cannot really be understood first.
Why Are Measures of Dispersion Less Intuitive Than Centrality? I think that a simple reason that people have a harder time with variability (whether variance, standard deviation, MAD, or whatever) is that you cannot really understand variability until after you u
25,241
Cross-entropy cost function in neural network
Here's how I would express the cross-entropy loss: $$\mathcal{L}(X, Y) = -\frac{1}{n} \sum_{i=1}^n y^{(i)} \ln a(x^{(i)}) + \left(1 - y^{(i)}\right) \ln \left(1 - a(x^{(i)})\right) $$ Here, $X = \left\{x^{(1)},\dots,x^{(n)}\right\}$ is the set of input examples in the training dataset, and $Y=\left\{y^{(1)},\dots,y^{(n)} \right\}$ is the corresponding set of labels for those input examples. The $a(x)$ represents the output of the neural network given input $x$. Each of the $y^{(i)}$ is either 0 or 1, and the output activation $a(x)$ is typically restricted to the open interval (0, 1) by using a logistic sigmoid. For example, for a one-layer network (which is equivalent to logistic regression), the activation would be given by $$a(x) = \frac{1}{1 + e^{-Wx-b}}$$ where $W$ is a weight matrix and $b$ is a bias vector. For multiple layers, you can expand the activation function to something like $$a(x) = \frac{1}{1 + e^{-Wz(x)-b}} \\ z(x) = \frac{1}{1 + e^{-Vx-c}}$$ where $V$ and $c$ are the weight matrix and bias for the first layer, and $z(x)$ is the activation of the hidden layer in the network. I've used the (i) superscript to denote examples because I found it to be quite effective in Andrew Ng's machine learning course; sometimes people express examples as columns or rows in a matrix, but the idea remains the same.
Cross-entropy cost function in neural network
Here's how I would express the cross-entropy loss: $$\mathcal{L}(X, Y) = -\frac{1}{n} \sum_{i=1}^n y^{(i)} \ln a(x^{(i)}) + \left(1 - y^{(i)}\right) \ln \left(1 - a(x^{(i)})\right) $$ Here, $X = \left
Cross-entropy cost function in neural network Here's how I would express the cross-entropy loss: $$\mathcal{L}(X, Y) = -\frac{1}{n} \sum_{i=1}^n y^{(i)} \ln a(x^{(i)}) + \left(1 - y^{(i)}\right) \ln \left(1 - a(x^{(i)})\right) $$ Here, $X = \left\{x^{(1)},\dots,x^{(n)}\right\}$ is the set of input examples in the training dataset, and $Y=\left\{y^{(1)},\dots,y^{(n)} \right\}$ is the corresponding set of labels for those input examples. The $a(x)$ represents the output of the neural network given input $x$. Each of the $y^{(i)}$ is either 0 or 1, and the output activation $a(x)$ is typically restricted to the open interval (0, 1) by using a logistic sigmoid. For example, for a one-layer network (which is equivalent to logistic regression), the activation would be given by $$a(x) = \frac{1}{1 + e^{-Wx-b}}$$ where $W$ is a weight matrix and $b$ is a bias vector. For multiple layers, you can expand the activation function to something like $$a(x) = \frac{1}{1 + e^{-Wz(x)-b}} \\ z(x) = \frac{1}{1 + e^{-Vx-c}}$$ where $V$ and $c$ are the weight matrix and bias for the first layer, and $z(x)$ is the activation of the hidden layer in the network. I've used the (i) superscript to denote examples because I found it to be quite effective in Andrew Ng's machine learning course; sometimes people express examples as columns or rows in a matrix, but the idea remains the same.
Cross-entropy cost function in neural network Here's how I would express the cross-entropy loss: $$\mathcal{L}(X, Y) = -\frac{1}{n} \sum_{i=1}^n y^{(i)} \ln a(x^{(i)}) + \left(1 - y^{(i)}\right) \ln \left(1 - a(x^{(i)})\right) $$ Here, $X = \left
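A minimal sketch of the loss above for the one-layer (logistic regression) case, with made-up toy data; names like `forward` are mine, not from the answer:

```python
import numpy as np

def cross_entropy_loss(a, y):
    """Mean binary cross-entropy; a = sigmoid outputs in (0,1), y in {0,1}."""
    return -np.mean(y * np.log(a) + (1 - y) * np.log(1 - a))

def forward(X, W, b):
    """One-layer network: logistic sigmoid of an affine map."""
    return 1.0 / (1.0 + np.exp(-(X @ W + b)))

X = np.array([[0.0], [1.0], [2.0], [3.0]])   # four 1-d training examples
y = np.array([0.0, 0.0, 1.0, 1.0])           # their labels
W = np.array([2.0])                           # weight "matrix" (1 feature)
b = -3.0                                      # bias
loss = cross_entropy_loss(forward(X, W, b), y)
```

Because the outputs stay strictly inside (0, 1), both logarithms are finite and the per-example terms can simply be averaged over the $n$ examples.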
25,242
Cross-entropy cost function in neural network
What exactly are we summing over? The tutorial is actually pretty explicit: ... $n$ is the total number of items of training data, the sum is over all training inputs... The original single neuron cost function given in the tutorial (Eqn. 57) also has an $x$ subscript under the $\Sigma$ which is supposed to hint at this. For the single neuron case there's nothing else to sum over besides training examples, since we already summed over all the input weights when computing $a$: $$ a = \sum_{j} w_jx_j. $$ Later on in the same tutorial, Nielsen gives an expression for the cost function for a multi-layer, multi-neuron network (Eqn. 63): $$ C = -\frac{1}{n}\sum_{x}\sum_{j}[ y_j \ln a^{L}_{j} + (1 - y_j) \ln (1 - a^{L}_{j})]. $$ In this case the sum runs over both training examples ($x$'s) and individual neurons in the output layer ($j$'s).
Cross-entropy cost function in neural network
What exactly are we summing over? The tutorial is actually pretty explicit: ... $n$ is the total number of items of training data, the sum is over all training inputs... The original single neuron
Cross-entropy cost function in neural network What exactly are we summing over? The tutorial is actually pretty explicit: ... $n$ is the total number of items of training data, the sum is over all training inputs... The original single neuron cost function given in the tutorial (Eqn. 57) also has an $x$ subscript under the $\Sigma$ which is supposed to hint at this. For the single neuron case there's nothing else to sum over besides training examples, since we already summed over all the input weights when computing $a$: $$ a = \sum_{j} w_jx_j. $$ Later on in the same tutorial, Nielsen gives an expression for the cost function for a multi-layer, multi-neuron network (Eqn. 63): $$ C = -\frac{1}{n}\sum_{x}\sum_{j}[ y_j \ln a^{L}_{j} + (1 - y_j) \ln (1 - a^{L}_{j})]. $$ In this case the sum runs over both training examples ($x$'s) and individual neurons in the output layer ($j$'s).
Cross-entropy cost function in neural network What exactly are we summing over? The tutorial is actually pretty explicit: ... $n$ is the total number of items of training data, the sum is over all training inputs... The original single neuron
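Eqn. 63's double sum can be written out explicitly for a toy case (a sketch; `A[i, j]` plays the role of $a^L_j$ for example $i$, with arbitrary made-up activations):

```python
import numpy as np

def cost(A, Y):
    """Nielsen's Eqn. 63: sum over output neurons, average over examples."""
    n = A.shape[0]                        # number of training examples
    return -np.sum(Y * np.log(A) + (1 - Y) * np.log(1 - A)) / n

A = np.array([[0.9, 0.2],                 # output activations in (0, 1)
              [0.1, 0.8]])
Y = np.array([[1.0, 0.0],                 # one-hot style targets
              [0.0, 1.0]])
C = cost(A, Y)
```

The single `np.sum` over the 2-d array covers both the sum over examples ($x$) and the sum over output neurons ($j$); only the example dimension is then averaged out by $1/n$.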
25,243
Expected magnitude of a vector from a multivariate normal
The sum of squares of $p$ independent standard normal distributions is a chi-squared distribution with $p$ degrees of freedom. The magnitude is the square root of that random variable. It is sometimes referred to as the chi distribution. (See this Wikipedia article.) The common variance $\sigma^2$ is a simple scale factor. Incorporating some of the comments into this answer: The mean of the chi-distribution with $p$ degrees of freedom is $$ \mu=\sqrt{2}\,\,\frac{\Gamma((p+1)/2)}{\Gamma(p/2)} $$ Special cases as noted: For $p=1$, the folded normal distribution has mean $\frac{\sqrt{2}}{\Gamma(1/2)}=\sqrt{\frac{2}{\pi}}$. For $p=2$, the distribution is also known as the Rayleigh distribution (with scale parameter 1), and its mean is $\sqrt{2}\frac{\Gamma(3/2)}{\Gamma(1)}=\sqrt{2}\frac{\sqrt{\pi}}{2} = \sqrt{\frac{\pi}{2}}$. For $p=3$, the distribution is known as the Maxwell distribution with parameter 1; its mean is $\sqrt{\frac{8}{\pi}}$. When the common variance $\sigma^2$ is not 1, the means must be multiplied by $\sigma$.
Expected magnitude of a vector from a multivariate normal
The sum of squares of $p$ independent standard normal distributions is a chi-squared distribution with $p$ degrees of freedom. The magnitude is the square root of that random variable. It is sometimes
Expected magnitude of a vector from a multivariate normal The sum of squares of $p$ independent standard normal distributions is a chi-squared distribution with $p$ degrees of freedom. The magnitude is the square root of that random variable. It is sometimes referred to as the chi distribution. (See this Wikipedia article.) The common variance $\sigma^2$ is a simple scale factor. Incorporating some of the comments into this answer: The mean of the chi-distribution with $p$ degrees of freedom is $$ \mu=\sqrt{2}\,\,\frac{\Gamma((p+1)/2)}{\Gamma(p/2)} $$ Special cases as noted: For $p=1$, the folded normal distribution has mean $\frac{\sqrt{2}}{\Gamma(1/2)}=\sqrt{\frac{2}{\pi}}$. For $p=2$, the distribution is also known as the Rayleigh distribution (with scale parameter 1), and its mean is $\sqrt{2}\frac{\Gamma(3/2)}{\Gamma(1)}=\sqrt{2}\frac{\sqrt{\pi}}{2} = \sqrt{\frac{\pi}{2}}$. For $p=3$, the distribution is known as the Maxwell distribution with parameter 1; its mean is $\sqrt{\frac{8}{\pi}}$. When the common variance $\sigma^2$ is not 1, the means must be multiplied by $\sigma$.
Expected magnitude of a vector from a multivariate normal The sum of squares of $p$ independent standard normal distributions is a chi-squared distribution with $p$ degrees of freedom. The magnitude is the square root of that random variable. It is sometimes
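The mean formula and the three listed special cases can be checked numerically (a sketch using the standard library's `math.gamma`):

```python
import math

def chi_mean(p):
    """Mean of the chi distribution with p degrees of freedom (sigma = 1)."""
    return math.sqrt(2) * math.gamma((p + 1) / 2) / math.gamma(p / 2)

m1 = chi_mean(1)   # folded normal: sqrt(2/pi)
m2 = chi_mean(2)   # Rayleigh(1):   sqrt(pi/2)
m3 = chi_mean(3)   # Maxwell(1):    sqrt(8/pi)
```

For a common variance $\sigma^2 \ne 1$, each of these values would simply be multiplied by $\sigma$, as the answer notes.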
25,244
Expected magnitude of a vector from a multivariate normal
The answer by user3697176 gives all the needed information, but nonetheless, here is a slightly different view of the problem. If $X_i \sim N(0,\sigma^2)$, then $Y = \sum_{i=1}^n X_i^2$ has a Gamma distribution with parameters $\left(\frac n2, \frac{1}{2\sigma^2}\right)$. Now, if $W \sim \Gamma(t,\lambda)$, then $$f_W(w) = \frac{\lambda(\lambda w)^{t-1}}{\Gamma(t)}\exp(-\lambda w)\mathbf 1_{\{w\colon w > 0\}}$$ which of course enjoys the property that the area under the curve is $1$. This helps us find $E[\sqrt{W}]$ without actually explicitly evaluating an integral. We have that \begin{align} E[\sqrt{W}] &= \int_0^\infty \sqrt{w}\cdot \frac{\lambda(\lambda w)^{t-1}}{\Gamma(t)}\exp(-\lambda w)\, \mathrm dw\\ &= \frac{1}{\sqrt{\lambda}}\cdot\frac{\Gamma(t+\frac 12)}{\Gamma(t)} \int_0^\infty \frac{\lambda(\lambda w)^{t+\frac 12 -1}}{\Gamma(t+\frac 12)} \exp(-\lambda w)\, \mathrm dw\\ &= \frac{1}{\sqrt{\lambda}}\cdot\frac{\Gamma(t+\frac 12)}{\Gamma(t)}. \end{align} Applying this to $Y$, we get that $$E\left[\sqrt{X_1^2+X_2^2+\cdots+X_n^2}\right] = \sqrt{2} \frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sigma.$$ Those Gamma functions can be simplified further and we will always get a $\Gamma(1/2) = \sqrt{\pi}$ in the denominator or the numerator according as $n$ is odd or even.
Expected magnitude of a vector from a multivariate normal
The answer by user3697176 gives all the needed information, but nonetheless, here is a slightly different view of the problem. If $X_i \sim N(0,\sigma^2)$, then $Y = \sum_{i=1}^n X_i^2$ has a Gamma di
Expected magnitude of a vector from a multivariate normal The answer by user3697176 gives all the needed information, but nonetheless, here is a slightly different view of the problem. If $X_i \sim N(0,\sigma^2)$, then $Y = \sum_{i=1}^n X_i^2$ has a Gamma distribution with parameters $\left(\frac n2, \frac{1}{2\sigma^2}\right)$. Now, if $W \sim \Gamma(t,\lambda)$, then $$f_W(w) = \frac{\lambda(\lambda w)^{t-1}}{\Gamma(t)}\exp(-\lambda w)\mathbf 1_{\{w\colon w > 0\}}$$ which of course enjoys the property that the area under the curve is $1$. This helps us find $E[\sqrt{W}]$ without actually explicitly evaluating an integral. We have that \begin{align} E[\sqrt{W}] &= \int_0^\infty \sqrt{w}\cdot \frac{\lambda(\lambda w)^{t-1}}{\Gamma(t)}\exp(-\lambda w)\, \mathrm dw\\ &= \frac{1}{\sqrt{\lambda}}\cdot\frac{\Gamma(t+\frac 12)}{\Gamma(t)} \int_0^\infty \frac{\lambda(\lambda w)^{t+\frac 12 -1}}{\Gamma(t+\frac 12)} \exp(-\lambda w)\, \mathrm dw\\ &= \frac{1}{\sqrt{\lambda}}\cdot\frac{\Gamma(t+\frac 12)}{\Gamma(t)}. \end{align} Applying this to $Y$, we get that $$E\left[\sqrt{X_1^2+X_2^2+\cdots+X_n^2}\right] = \sqrt{2} \frac{\Gamma\left(\frac{n+1}{2}\right)}{\Gamma\left(\frac{n}{2}\right)}\sigma.$$ Those Gamma functions can be simplified further and we will always get a $\Gamma(1/2) = \sqrt{\pi}$ in the denominator or the numerator according as $n$ is odd or even.
Expected magnitude of a vector from a multivariate normal The answer by user3697176 gives all the needed information, but nonetheless, here is a slightly different view of the problem. If $X_i \sim N(0,\sigma^2)$, then $Y = \sum_{i=1}^n X_i^2$ has a Gamma di
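The closed-form result above can be sanity-checked against a Monte Carlo estimate (a sketch with a fixed seed, so agreement is only to sampling accuracy; `n` and `sigma` are arbitrary choices):

```python
import math
import random

n, sigma = 3, 2.0

# Closed form: E||X|| = sqrt(2) * Gamma((n+1)/2) / Gamma(n/2) * sigma
exact = math.sqrt(2) * math.gamma((n + 1) / 2) / math.gamma(n / 2) * sigma

# Monte Carlo: average magnitude of n iid N(0, sigma^2) coordinates
rng = random.Random(0)
trials = 100_000
total = 0.0
for _ in range(trials):
    total += math.sqrt(sum(rng.gauss(0, sigma) ** 2 for _ in range(n)))
mc = total / trials
```

For $n = 3$ the closed form reduces to the Maxwell mean, $\sqrt{8/\pi}\,\sigma$, matching the special case in the previous answer.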
25,245
How to calculate the likelihood function
The likelihood function of a sample is the joint density of the random variables involved, but viewed as a function of the unknown parameters given a specific sample of realizations from these random variables. In your case, it appears that the assumption here is that the lifetime of these electronic components each follows an exponential distribution with identical rate parameter $\theta$ (i.e. each has this marginal distribution), and so the marginal PDF is: $$f_{X_i}(x_i\mid \theta) = \theta e^{-\theta x_i}, \;\; i=1,2,3$$ Also, it appears that the life of each component is fully independent of the life of the others. In such a case the joint density function is the product of the three densities, $$f_{X1,X2,X3}(x_1,x_2,x_3\mid \theta) = \theta e^{-\theta x_1} \cdot \theta e^{-\theta x_2}\cdot \theta e^{-\theta x_3} = \theta^3\cdot \exp{\left\{-\theta \sum_{i=1}^3x_i\right\}}$$ To turn this into the likelihood function of the sample, we view it as a function of $\theta$ given a specific sample of $x_i$'s: $$ L(\theta \mid \{x_1,x_2,x_3\}) = \theta^3\cdot \exp{\left\{-\theta \sum_{i=1}^3x_i\right\}}$$ where only the left-hand side has changed, to indicate what is considered the variable of the function. In your case the available sample is the three observed lifetimes $\{x_1 = 3, x_2 =1.5, x_3 = 2.1\}$, and so $\sum_{i=1}^3x_i = 6.6$. Then the likelihood is $$L(\theta \mid \{x_1 = 3, x_2 =1.5, x_3 = 2.1\}) = \theta^3\cdot \exp{\left\{-6.6\theta \right\}}$$ In other words, in the likelihood you were given, the specific sample available has already been inserted into it. This is not usually done, i.e. we usually "stop" at the theoretical representation of the likelihood for general $x_i$'s; we then derive the conditions for its maximization with respect to $\theta$, and then we plug the specific numerical sample of $x$-values into the maximization conditions, in order to obtain a specific estimate for $\theta$. 
Admittedly though, looking at the likelihood like this, may make more clear the fact that what matters here for inference (for the specific distributional assumption), is the sum of the realizations, and not their individual values: the above likelihood is not "sample-specific" but rather "sum-of-realizations-specific": if we are given any other $n=3$ sample for which the sum of its elements is again $6.6$, we will obtain the same estimate for $\theta$ (this is essentially what it means to say that $\sum x$ is a "sufficient" statistic -it contains all information that the sample can provide for inference, under the specific distributional assumption).
How to calculate the likelihood function
The likelihood function of a sample, is the joint density of the random variables involved but viewed as a function of the unknown parameters given a specific sample of realizations from these random
How to calculate the likelihood function The likelihood function of a sample is the joint density of the random variables involved, but viewed as a function of the unknown parameters given a specific sample of realizations from these random variables. In your case, it appears that the assumption here is that the lifetime of these electronic components each follows an exponential distribution with identical rate parameter $\theta$ (i.e. each has this marginal distribution), and so the marginal PDF is: $$f_{X_i}(x_i\mid \theta) = \theta e^{-\theta x_i}, \;\; i=1,2,3$$ Also, it appears that the life of each component is fully independent of the life of the others. In such a case the joint density function is the product of the three densities, $$f_{X1,X2,X3}(x_1,x_2,x_3\mid \theta) = \theta e^{-\theta x_1} \cdot \theta e^{-\theta x_2}\cdot \theta e^{-\theta x_3} = \theta^3\cdot \exp{\left\{-\theta \sum_{i=1}^3x_i\right\}}$$ To turn this into the likelihood function of the sample, we view it as a function of $\theta$ given a specific sample of $x_i$'s: $$ L(\theta \mid \{x_1,x_2,x_3\}) = \theta^3\cdot \exp{\left\{-\theta \sum_{i=1}^3x_i\right\}}$$ where only the left-hand side has changed, to indicate what is considered the variable of the function. In your case the available sample is the three observed lifetimes $\{x_1 = 3, x_2 =1.5, x_3 = 2.1\}$, and so $\sum_{i=1}^3x_i = 6.6$. Then the likelihood is $$L(\theta \mid \{x_1 = 3, x_2 =1.5, x_3 = 2.1\}) = \theta^3\cdot \exp{\left\{-6.6\theta \right\}}$$ In other words, in the likelihood you were given, the specific sample available has already been inserted into it. This is not usually done, i.e. we usually "stop" at the theoretical representation of the likelihood for general $x_i$'s; we then derive the conditions for its maximization with respect to $\theta$, and then we plug the specific numerical sample of $x$-values into the maximization conditions, in order to obtain a specific estimate for $\theta$. 
Admittedly though, looking at the likelihood like this, may make more clear the fact that what matters here for inference (for the specific distributional assumption), is the sum of the realizations, and not their individual values: the above likelihood is not "sample-specific" but rather "sum-of-realizations-specific": if we are given any other $n=3$ sample for which the sum of its elements is again $6.6$, we will obtain the same estimate for $\theta$ (this is essentially what it means to say that $\sum x$ is a "sufficient" statistic -it contains all information that the sample can provide for inference, under the specific distributional assumption).
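As a quick check of the final maximization step, the log-likelihood can be written down and its stationary point computed; a minimal sketch using the lifetimes from the question:

```python
import math

x = [3.0, 1.5, 2.1]          # the observed lifetimes from the question
n, s = len(x), sum(x)        # n = 3, s = 6.6

def loglik(theta):
    # log L(theta) = n*log(theta) - theta * sum(x)
    return n * math.log(theta) - theta * s

# Setting d/dtheta log L = n/theta - s = 0 gives the MLE theta_hat = n/s
theta_hat = n / s
print(round(theta_hat, 4))   # 0.4545
```

A quick way to sanity-check the estimate is to verify that the log-likelihood is lower at nearby values of $\theta$, which confirms `theta_hat` is indeed a maximum.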
25,246
What is the meaning of regularization path in LASSO or related sparsity problems?
Say you have a model with $p$ predictor variables: $x_1, x_2, \ldots x_p$. Set $\lambda$ to an initial value, and estimate your coefficients $\beta_1, \beta_2, \ldots \beta_p$. These coefficients can be thought of as a point in $p$-dimensional space.* Repeat the procedure for your next value of $\lambda$, and get another set of estimates. These form another point in $p$-dimensional space. Do this for all your $\lambda$ values, and you will get a sequence of such points. This sequence is the regularization path. * There's also the intercept term $\beta_0$ so all this technically takes place in $(p+1)$-dimensional space, but never mind that. Anyway most elastic net/lasso programs will standardise the variables before fitting the model, so $\beta_0$ will always be 0.
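The sequence of points can be computed directly; a minimal sketch using scikit-learn's `lasso_path` (the toy data, coefficient values, and grid size here are purely illustrative):

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Toy data: 50 samples, 5 predictors, sparse true coefficients
rng = np.random.default_rng(42)
X = rng.standard_normal((50, 5))
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 1.0]) + 0.5 * rng.standard_normal(50)

# alphas: the grid of lambda values (largest first);
# coefs: one point in 5-dimensional space per lambda value
alphas, coefs, _ = lasso_path(X, y, n_alphas=20)
print(coefs.shape)  # (5, 20): the regularization path, one column per lambda
```

At the largest lambda every coefficient is zero; as lambda shrinks, the point traces out the path toward the unregularized least-squares solution.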
25,247
What is the meaning of regularization path in LASSO or related sparsity problems?
A graphical explanation of the Lasso solution can be found on pages 69-73 of the text "Elements of Statistical Learning" (online version here).
25,248
Discrete analog of CDF: "cumulative mass function"?
The proper terminology is Cumulative Distribution Function (CDF). The CDF is defined as $$F_X(x) = \mathrm{P}\{X \leq x\}.$$ With this definition, the nature of the random variable $X$ is irrelevant: continuous, discrete, or hybrids all have the same definition. As you note, for a discrete random variable the CDF has a very different appearance than for a continuous random variable. In the first case, it is a step function; in the second it is a continuous function.
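For a concrete discrete case, here is a small sketch (a fair six-sided die is just an illustrative choice) showing that $F_X$ is a step function: flat between support points, jumping by the probability mass at each one:

```python
from fractions import Fraction

# PMF of a fair six-sided die
pmf = {k: Fraction(1, 6) for k in range(1, 7)}

def cdf(x):
    # F_X(x) = P{X <= x}: sum the mass at all support points <= x
    return sum(p for k, p in pmf.items() if k <= x)

print(cdf(0))    # 0   (below the support)
print(cdf(3.5))  # 1/2 (flat between the jumps at 3 and 4)
print(cdf(6))    # 1
```

The same definition applied to a continuous variable would instead integrate a density, giving a continuous $F_X$.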
25,249
Discrete analog of CDF: "cumulative mass function"?
I think "cumulative mass function" is correct, but it hasn't been widely adopted just yet. It makes sense to me as a more specific cumulative distribution function, a sibling to probability mass functions, similar to how cumulative density functions relate to probability density functions.
25,250
What is it all about Machine Learning in real practice?
Machine learning (ML) in practice depends on what the goal of doing ML is. In some situations, solid pre-processing and applying a suite of out-of-the-box ML methods might be good enough. However, even in these situations, it is important to understand how the methods work in order to be able to troubleshoot when things go wrong. ML in practice can be much more than this, though, and MNIST is a good example of why. It's deceptively easy to get 'good' performance on the MNIST dataset. For example, according to Yann Le Cun's website on MNIST performance, K nearest neighbours (K-NN) with the Euclidean distance metric (L2) also has an error rate of 3%, the same as your out-of-the-box random forest. L2 K-NN is about as simple as an ML algorithm gets. On the other hand, Yann, Yoshua, Leon & Patrick's best first shot at this dataset, LeNet-4, has an error rate of 0.7%. Since 0.7% is less than a fourth of 3%, if you put this system into practice reading handwritten digits, the naive algorithm requires roughly four times as much human effort to fix its errors. The convolutional neural network that Yann and colleagues used is matched to the task, but I wouldn't call this 'feature engineering' so much as making an effort to understand the data and encode that understanding into the learning algorithm. So, what are the lessons? It is easy to reach the naive performance baseline using an out-of-the-box method and good preprocessing. You should always do this, so that you know where the baseline is and whether or not this performance level is good enough for your requirements. Beware though: out-of-the-box ML methods are often 'brittle', i.e. surprisingly sensitive to the pre-processing. Once you've trained all the out-of-the-box methods, it's almost always a good idea to try bagging them. Hard problems require either domain-specific knowledge, or a lot more data, or both, to solve. Feature engineering means using domain-specific knowledge to help the ML algorithm. 
However, if you have enough data, an algorithm (or approach) that can take advantage of that data to learn complex features, and an expert applying this algorithm, then you can sometimes forego this knowledge (e.g. the Kaggle Merck challenge). Also, sometimes domain experts are wrong about what good features are, so more data and ML expertise are always helpful. Finally, consider error rate, not accuracy: an ML method with 99% accuracy makes half the errors of one with 98% accuracy, and sometimes this is important.
25,251
What is it all about Machine Learning in real practice?
I think that the examples you find on blogs or websites are examples where it is known that the common methods work well (even if, of course, they can be improved). My specialization is in feature engineering, and I can tell you that often the standard algorithms do not work well at all. (I do not have domain knowledge of the field, but I often work with people who do.) Here is a real problem I worked on for 6 months: given a matrix X with 100 samples and 10000 variables representing the genetic values of patients, and an output y of size 100 x 1 that represents the density of the bones, can you tell me which genes influence the density of the bones? Now I am working on another problem. I have a manufacturing production dataset with 2000 samples and 12000 variables. My boss would like to extract from this dataset no more than 30 variables in an unsupervised way. I have tried some algorithms, but I cannot choose fewer than 600 variables because they are very highly correlated with each other. (I am still working on this...) Another important thing to consider is the speed of the various algorithms. In many situations you cannot wait 20 minutes for a result. For example, you need to know when to use NIPALS and when to use SVD to compute PCA. I hope this gives you an idea of the problems that are common in ML.
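On the NIPALS-vs-SVD point: NIPALS extracts one component at a time, so it can be much cheaper than a full SVD when only a few components of a large matrix are needed. A minimal sketch (the data, iteration cap, and tolerance are illustrative) comparing the first loading vector from both routes:

```python
import numpy as np

def nipals_first_pc(X, n_iter=500, tol=1e-10):
    """First principal-component loading by NIPALS power iteration.

    Cheap when you need only a few PCs, since it never forms a full SVD.
    """
    Xc = X - X.mean(axis=0)           # centre the data
    t = Xc[:, 0].copy()               # initial score vector
    for _ in range(n_iter):
        p = Xc.T @ t / (t @ t)        # loadings from scores
        p /= np.linalg.norm(p)
        t_new = Xc @ p                # scores from loadings
        if np.linalg.norm(t_new - t) < tol:
            t = t_new
            break
        t = t_new
    return p

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 30)) @ rng.standard_normal((30, 30))
p_nipals = nipals_first_pc(X)

# Compare against the SVD-based first loading (the sign may differ)
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
print(abs(p_nipals @ Vt[0]))  # close to 1
```

With 12000 variables and only a handful of components wanted, this kind of iterative extraction is why the NIPALS/SVD choice matters for running time.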
25,252
Is cross-validation still valid when the sample size is small?
I don't think there is much confusion in your thoughts: you're putting your finger on one very important problem of classifier validation, namely that not only classifier training but also classifier validation has certain sample size needs. Well, seeing the edit: there may be some confusion after all... What the "Elements" tell you is that in practice the most likely cause of such an observation is a leak between training and testing, e.g. because the "test" data was used to optimize the model (which is a training task). That section of the Elements is concerned with the optimistic bias caused by this. But there is also variance uncertainty, and even when all splitting is done correctly you can observe extreme outcomes. IIRC the variance problem is not discussed in great detail in the Elements (there's more to it than what the Elements discuss in section 7.10.1), so I'll give you a start here: Yes, it can and does happen that you accidentally have a predictor that predicts this particular small data set (train & test set) perfectly. You may even just get a splitting that accidentally leads to seemingly perfect results while the resubstitution error would be > 0. This can happen even with correct (and thus unbiased) cross validation, because the results are also subject to variance. IMHO it is a problem that people do not take this variance uncertainty into account (in contrast, bias is often discussed at great length; I've hardly seen any paper discussing the variance uncertainty of their results, although in my field, with usually < 100 and frequently even < 20 patients in one study, it is the predominant source of uncertainty). It is not that difficult to do a few basic sanity checks that would avoid most of these issues. There are two points here: With too few training cases (relative to model complexity and the number of variates), models become unstable. Their predictive power can be all over the place. 
On average it isn't that great, but it can accidentally be truly good. You can measure the influence of model instability on the predictions in a very easy way using the results of an iterated/repeated $k$-fold cross-validation: in each iteration, each case is predicted exactly once. As the case stays the same, any variation in these predictions is caused by the instability of the surrogate models, i.e. the reaction of the model to exchanging a few training cases. See e.g. Beleites, C. & Salzer, R.: Assessing and improving the stability of chemometric models in small sample size situations, Anal Bioanal Chem, 390, 1261-1271 (2008). DOI: 10.1007/s00216-007-1818-6 IMHO checking whether the surrogate models are stable is a sanity check that should always be done in small-sample situations, particularly as it comes at nearly zero cost: it just needs a slightly different aggregation of the cross-validation results (and $k$-fold cross-validation should be iterated anyway, unless it is shown that the models are stable). Like you say: With too few test cases, your observed successes and failures may be all over the place. If you calculate proportions like error rate or hit rate, they will also be all over the place. This is known as these proportions being subject to high variance. E.g., if the model truly has a 50% hit rate, the probability of observing 3 correct out of 3 predictions is $0.5^3 = 12.5 \%$ (binomial distribution). However, it is possible to calculate confidence intervals for proportions, and these take into account how many cases were tested. There is a whole lot of literature about how to calculate them, and which approximations work well or not at all in which situations. 
For the extremely small sample size of 3 test cases in my example (using R's binom.confint):

    binom.confint(x=3, n=3, prior.shape1=1, prior.shape2=1)
    #           method x n mean     lower     upper
    # 1  agresti-coull 3 3  1.0 0.3825284 1.0559745
    # 2     asymptotic 3 3  1.0 1.0000000 1.0000000
    # 3          bayes 3 3  0.8 0.4728708 1.0000000
    # 4        cloglog 3 3  1.0 0.2924018 1.0000000
    # 5          exact 3 3  1.0 0.2924018 1.0000000
    # 6          logit 3 3  1.0 0.2924018 1.0000000
    # 7         probit 3 3  1.0 0.2924018 1.0000000
    # 8        profile 3 3  1.0 0.4043869 1.0000000   # generates warnings
    # 9            lrt 3 3  1.0 0.5271642 1.0000000
    # 10     prop.test 3 3  1.0 0.3099881 0.9682443
    # 11        wilson 3 3  1.0 0.4385030 1.0000000

You'll notice that there is quite some variation, particularly in the lower bound. This alone is an indicator that the test sample size is so small that hardly anything can be concluded from the test results. In practice it hardly matters whether the confidence interval spans the range from "guessing" to "perfect" or from "worse than guessing" to "perfect". Conclusion 1: think beforehand how precise the performance results need to be in order to allow a useful interpretation; from that, you can roughly calculate the needed (test) sample size. Conclusion 2: calculate confidence intervals for your performance estimates. For model comparisons on the basis of correct/wrong predictions, don't even think of doing that with less than several hundred test cases for each classifier. Have a look at McNemar's test (for paired situations, i.e. when you can test the same cases with both classifiers). If you cannot do the comparison paired, look for "comparison of proportions"; you'll need even more cases, see the paper I link below for examples. You may be interested in our paper about these problems: Beleites, C. et al.: Sample size planning for classification models, Anal Chim Acta, 760, 25-33 (2013). DOI: 10.1016/j.aca.2012.11.007; arXiv: 1211.1323 Second update, about randomly selecting features: the bagging done for random forests regularly uses this strategy. 
Outside that context I think it is seldom used, but it is a valid possibility.
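The Wilson row of the binom.confint listing above can be reproduced by hand; a minimal stdlib sketch (z = 1.96 is the usual 95% normal approximation):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (illustrative, 95%)."""
    phat = successes / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    half = z * math.sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 3 correct out of 3 test cases: the observed hit rate is 100%, but the
# interval still reaches down to ~44%, i.e. barely better than guessing
lo, hi = wilson_ci(3, 3)
print(round(lo, 4), round(hi, 4))  # roughly 0.4385 and 1.0
```

This makes the conclusion above concrete: with 3 test cases, even a perfect score is compatible with a true hit rate not far above chance.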
25,253
Is cross-validation still valid when the sample size is small?
I figured out what is going on: Hastie et al. were (of course) right, and my intuition was wrong. Here is the demonstration of my logic flaw. Let's say we have 10 samples; the first five are class 1 and the last five are class 2. Let's generate 200 random features. Several of them are likely to predict the class perfectly; I made them bold on the following plot: If we do t-tests between classes using each of these three features, we get three significant p-values: 0.0047, 0.011, and 0.0039. These, of course, are false positives due to cherry-picking. If we do a Hotelling's test using all 200 features, we get a non-significant p=0.61, as expected. Now let's try to cross-validate. I do 5-fold stratified cross-validation, holding out 1 sample from each class to classify. The classifier will be LDA on all 200 features. I repeat this procedure 100 times with different random features, and the mean number of correctly decoded samples I get is 4.9 out of 10, i.e. chance level! Well, LDA with 10 samples and 200 features overfits badly. Let's select only the features with perfect class separation on the training set in each cross-validation fold. The resulting mean accuracy is 5.1, still chance level. Why? Because it turns out that the number of features I am using on each fold is larger than the number of "perfect" features on the whole dataset. There are features that look perfect on the training set but have zero predictive power on the test set, and they screw up the classification. This is exactly the point that I did not appreciate before. Finally, one can use only the features with perfect class separation on the whole dataset. Then the mean accuracy is 9.3, but this is cheating! Of course we are not allowed to use any knowledge about the whole dataset when doing cross-validation.
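The chance-level result with all 200 features is easy to reproduce; this sketch follows the setup described above (the seed, fold scheme, and repeat count are arbitrary choices):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
y = np.array([0] * 5 + [1] * 5)          # 10 samples, 2 balanced classes

accs = []
for _ in range(100):                     # 100 draws of random features
    X = rng.standard_normal((10, 200))   # 200 pure-noise predictors
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    accs.append(cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv).mean())

mean_acc = float(np.mean(accs))
print(mean_acc)  # close to 0.5, i.e. chance level
```

Despite many individual features separating the classes perfectly by luck, correctly nested cross-validation stays at chance because the held-out samples carry no real signal.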
25,254
The concept of 'proven statistically'
What the news people are talking about is anyone's guess and varies with the newscast. Perhaps most common is that they are giving a one-sentence summary of research that requires several pages.

However, your last paragraph is mistaken. Statistically, each family does NOT have 2.4 children. The mean is 2.4 children. This is entirely possible. If you take a random sample of American families (tricky to do, but possible) then you would get an estimate of the mean. However, if you took a census of families, then, if the census really got every family (it doesn't) or if the people it got are representative of the people it didn't get with regard to number of children, then you would have proven the fact. However, not only does the census miss people, the people it misses are different in many ways from the people it gets. The Census Bureau therefore tries to figure out how they are different; thus, again, giving an estimate of the number of kids per family.

But there are things you can prove; if you wanted to know, say, the average number of years that each professor in your department had been teaching, you could get accurate data and come up with an exact mean.

Your penultimate paragraph is also problematic, as statistical tests are done precisely to prove hypotheses; more precisely, they are done (in the frequentist framework, anyway) to reject a null hypothesis at a given level of significance.
25,255
The concept of 'proven statistically'
I think - as with so many things - it's a combination of a widespread cultural misunderstanding and journalistic attempts at punchy shorthand that turn out to sometimes mislead. "Cell phones cause cancer!" sells more ads than some explanation about investigating a possible link.

Of course, conclusions based on statistical inference aren't proof in any kind of hard sense. They rely on assumptions, and even then conclusions are (at best) probabilistic (as we get, say, with Bayesian inference); with frequentist inference you have to add in the usual error of misinterpreting p-values as the probability that the null is true. That's without even considering issues like publication or reporting bias.

You see similar errors just as much with science reporting more generally, and it's just as frustrating.

I don't like the phrase 'statistically proven' myself, as I think it gives the wrong impression. While statistics done well is a powerful tool, the things statistics actually tells us can be surprisingly subtle, and the appropriate discussion of the meaning of what is learned, and the accompanying qualifications placed on the conclusions, are often unsuited to the hype and punchiness of a headline or a hurried few paragraphs squeezed in between the usual celebrity gossip. Indeed, even in the academic journals where those sorts of qualifications would seem essential, they are often left aside; instead there appears some formulaic pronouncement (differing from research area to research area) that is regarded as 'anointing' the result.

I think there is room for carefully explaining the reasoning that goes from the results of inference (whether point and interval estimation, hypothesis testing, decision-theoretic calculations, or even exploratory construction of a few visual comparisons) to the conclusions they lead to. That reasoning is where the real heart of the matter lies (including where the gaps in reasoning would be laid bare, were they explicit), and we rarely see it laid out. Besides that, we can keep sounding a note of caution.
25,256
The concept of 'proven statistically'
Empirical knowledge is always probabilistic -- never clearly true or false, but always somewhere in between. Statistical "proof" is a matter of collecting enough data to reduce the probability that a hypothesis is wrong to less than some accepted threshold. And the threshold for "truth" or "correctness" differs from one academic discipline to the next. Sociologists are satisfied with a 95% probability of being right, and sometimes settle for less; quantum physicists demand 99.99999% or better.
25,257
Algebraic Geometry for Statistics
Here is a list of the standard references:

- Drton, Sturmfels, and Sullivant, Lectures on Algebraic Statistics
- Pachter and Sturmfels, Algebraic Statistics for Computational Biology
- Pistone, Riccomagno, and Wynn, Algebraic Statistics: Computational Commutative Algebra in Statistics
- Sullivant, Algebraic Statistics
- Zwiernik, Semialgebraic Statistics and Latent Tree Models
- Riccomagno, A Short History of Algebraic Statistics

Here is a list of related references, not directly addressing algebraic statistics, although providing background in the methodology used for the subject:

- Lauritzen, Graphical Models
- Lauritzen, Concrete Abstract Algebra
- Pearl, Causality: Models, Reasoning, and Inference
- Cox, Little, and O'Shea, Ideals, Varieties, and Algorithms: An Introduction to Computational Algebraic Geometry and Commutative Algebra
- Garrity, Belshoff, Boos, et al., Algebraic Geometry: A Problem-Solving Approach
- Sturmfels, Solving Systems of Polynomial Equations
- Uhler, Geometry of Maximum Likelihood Estimation in Graphical Models

Web pages of courses about the topic, past and present:

- Professor Sturmfels's course at Berkeley
- Course at the TU Berlin
- Seminar at the Freie Universität, Berlin
- Conference at the University of Genoa
- Berkeley Algebraic Statistics Seminar

These lists are by no means comprehensive.
25,258
Algebraic Geometry for Statistics
Sumio Watanabe, Algebraic Geometry and Statistical Learning Theory, Cambridge University Press, Cambridge, UK, 2009.

Sure to be influential, this book lays the foundations for the use of algebraic geometry in statistical learning theory. Many widely used statistical models and learning machines applied to information science have a parameter space that is singular: mixture models, neural networks, HMMs, Bayesian networks, and stochastic context-free grammars are major examples. Algebraic geometry and singularity theory provide the necessary tools for studying such non-smooth models. Four main formulas are established:

1. the log likelihood function can be given a common standard form using resolution of singularities, even applied to more complex models;
2. the asymptotic behaviour of the marginal likelihood, or 'the evidence', is derived based on zeta function theory;
3. new methods are derived to estimate the generalization errors in Bayes and Gibbs estimations from training errors;
4. the generalization errors of maximum likelihood and a posteriori methods are clarified by empirical process theory on algebraic varieties.
25,259
Probability that a continuous random variable assumes a fixed point [duplicate]
You may be falling into the trap of regarding 'five minutes from now' as lasting some finite period of time (which would have a nonzero probability). "Five minutes from now" in the continuous-variable sense is truly instantaneous.

Imagine that the arrival of the next train is uniformly distributed between 8:00 and 8:15. Further imagine we define the arrival of a train as occurring at the instant the front of the train passes a particular point on the station (perhaps the midpoint of the platform if there's no better landmark). Consider the following sequence of probabilities:

a) the probability a train arrives between 8:05 and 8:10
b) the probability a train arrives between 8:05 and 8:06
c) the probability a train arrives between 8:05:00 and 8:05:01
d) the probability a train arrives between 8:05:00 and 8:05:00.01 (i.e. in the space of one hundredth of a second)
e) the probability a train arrives between 8:05 and one billionth of a second later
f) the probability a train arrives between 8:05 and one quadrillionth of a second later
... and so on

The probability that it arrives precisely at 8:05 is the limiting value of a sequence of probabilities like that. The probability is smaller than every $\epsilon>0$.
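Numerically, with the arrival time uniform on the 15-minute window, each probability in the sequence is just the interval length divided by 900 seconds. A small sketch (my own illustration; the interval lengths mirror steps (a) through (f)):

```python
# Arrival uniform on [8:00, 8:15]: P(arrival in an interval of length L seconds) = L / 900
interval_lengths = [300, 60, 1, 0.01, 1e-9, 1e-15]  # seconds, as in (a)-(f) above
probs = [length / 900 for length in interval_lengths]
for length, p in zip(interval_lengths, probs):
    print(f"interval of {length:g} s -> probability {p:.3g}")
# As the interval shrinks toward the single instant 8:05, the probability tends to 0.
```

The sequence of probabilities is strictly decreasing and can be made smaller than any positive number by taking a short enough interval, which is exactly the limiting argument above.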
25,260
Probability that a continuous random variable assumes a fixed point [duplicate]
What if the train does arrive exactly 5 minutes from now, how could it occur if it had probability 0?

A probabilistic statement is not a statement about the possibility or feasibility of an event. It only reflects our attempt to quantify our uncertainty about it happening. So when a phenomenon is continuous (or is modeled as one), our tools and current state of knowledge do not permit us to make a probabilistic statement about it taking a specific value. We can only make such a statement about a range of values.

Of course the usual trick here is to discretize the support, to consider "small" intervals of values rather than single values. Since continuous random variables bring great benefits and flexibility compared to discrete random variables, this has been found to be a rather small price to pay, perhaps as small as the intervals we are forced to consider.
25,261
Probability that a continuous random variable assumes a fixed point [duplicate]
To give you some intuition for the above, try the following (thought) experiment: draw a real line around zero with a ruler. Now take a sharp dart and let it fall from above randomly on the line (let's assume you will always hit the line and only the lateral positioning matters, for the sake of the argument). However many times you let the dart fall randomly on the line, you will never hit the point zero. Why? Think about what the point zero is; think about what its width is. And after you recognise that its width is 0, do you still think you can hit it? Will you be able to hit point 1, or -2? Or any other point you pick on the line, for that matter?

To get back to maths, this is the difference between the physical world and a mathematical concept such as the real numbers (represented by the real line in my example). Probability theory has a considerably more complicated definition of probability than you will see in your lecture. To quantify the probability of events and any combination of their outcomes, you need a probability measure. Both the Borel measure and the Lebesgue measure are defined for an interval $[a, b]$ on the real line as: $$\mu([a,b])=b-a$$ From this definition you can see what happens with the probability if you reduce the interval to a single number (setting $a = b$).

The bottom line is that, based on our current definition of probability theory (dating back to Kolmogorov), the fact that an event has probability 0 does not mean it cannot occur. And as far as your example with the train goes, if you have an infinitely precise watch, your train will never arrive exactly on time.
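The same conclusion can be written as a short limit computation (my own addition, using the interval measure just defined, for a dart landing uniformly on $[a,b]$):

```latex
% The probability of hitting a single point x_0 is the limit of the
% probabilities of shrinking intervals around it:
\[
  P(X = x_0)
  = \lim_{\epsilon \to 0^+} P\bigl(x_0 \le X \le x_0 + \epsilon\bigr)
  = \lim_{\epsilon \to 0^+} \frac{\mu([x_0,\, x_0+\epsilon])}{\mu([a,b])}
  = \lim_{\epsilon \to 0^+} \frac{\epsilon}{b-a}
  = 0.
\]
```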
25,262
Probability that a continuous random variable assumes a fixed point [duplicate]
A probability distribution has to have an area of unity. If the measure is continuous then there is an infinite number of values that it can take (i.e. an infinite number of values along the x-axis of the distribution). The only way that the total area of the probability distribution can be finite is for the probability at each of the infinitely many values to be zero: in effect, one divided by infinity.

In 'real life' there can be no measures that take an infinite number of values (by several different philosophical arguments that don't matter much here), so no value need take a probability of exactly zero. A useful practical argument is based on the finite precision of real-world measurements: if you use a stopwatch that measures to one tenth of a second, the train will have one tenth of a second in which to arrive in 'exactly' five minutes.
25,263
Probability that a continuous random variable assumes a fixed point [duplicate]
Other people have answered why the probability is zero (if you approximate time as being continuous, which it is effectively not, but anyway...), so I will just echo it briefly. To answer the last question that the OP asked---"how could it occur if it had probability 0?"---lots and lots of things can occur even though they have probability zero. All that a set $A$ having probability zero means is that, in the space of possible things that could happen, the set $A$ takes up no space. That is all. It is not more meaningful than this.

I am writing this to hopefully address something else that the OP said in the comments:

You say "you will never hit the point zero", but what can you say of the point that I hit in my first dart throw? Let $x$ be the point that I hit. Before throwing my dart, you would have said "you will never hit the point $x$", but I've just hit it. Now what?

This is a very good question and one that, when I began to learn about probability, I struggled with. Here is the answer: it isn't equivalent to the question that you originally asked! What you have done is bring time into the analysis, and that means that the underlying probability structure changes to become much more intricate.

Here is what you need to know. A probability space $(\Omega, A, \mu)$ consists of three things: an underlying space $\Omega$, such as $\mathbb{R}$ or $\mathbb{Z}$; a collection $A$ of events (subsets of the space), such as the $\sigma$-algebra generated by the half-open intervals on $\mathbb{R}$; and a measure $\mu$ that satisfies $\mu(\Omega) = 1$. Your original problem lives in the space $([a,b], \text{the $\sigma$-algebra generated by the half-open intervals on $[a,b]$}, \nu)$, where $\nu$ is normalized Lebesgue measure (this means that $\nu( [c,d) ) = \frac{d - c}{b - a}$). In this space, the probability that you hit any single point $x \in [a,b]$ is zero, for the reasons discussed above---I think we have this cleared up.

But now, when you say things like the quoted passage above, you are defining something called a filtration, which we will write as $\mathcal{F} = \{\mathcal{F}_t\}_{t \geq 0}$. A filtration in general is a collection of sub-$\sigma$-algebras of $A$ that satisfy $\mathcal{F}_t \subseteq \mathcal{F}_s$ for all $t < s$. In your case, we can build the filtration from the sets $$ H_t = \{x \in [a,b]: \text{dart hit $x$ at some time $t' < t$} \}. $$ Now, in this new structure, guess what---you're right! You have hit it, and after your first throw, the probability of having hit that point, conditional on $\mathcal{F}_1$, is 1.
Probability that a continuous random variable assumes a fixed point [duplicate]
Other people have answered why the probability is zero (if you approximate time as being continuous, which it is effectively not, but anyway...) so I will just echo it briefly. To answer the last ques
Probability that a continuous random variable assumes a fixed point [duplicate]
Other people have answered why the probability is zero (if you approximate time as being continuous, which it effectively is not, but anyway...), so I will just echo it briefly. To answer the last question that the OP asked---"how could it occur if it had probability 0?"---lots and lots of things can occur if they have probability zero. All a set of probability zero $A$ means is that, in the space of possible things that could happen, the set $A$ takes up no space. That is all. It is not more meaningful than this. I am writing this to hopefully address something else that the OP said in the comments: You say "you will never hit the point zero", but what can you say of the point that I hit in my first dart throw? Let $x$ be the point that I hit. Before throwing my dart, you would have said "you will never hit the point $x$", but I've just hit it. Now what? This is a very good question and one that, when I began to learn about probability, I struggled with. Here is the answer: it isn't equivalent to the question that you originally asked! What you have done is bring time into the analysis, and that means that the underlying probability structure changes to become much more intricate. Here is what you need to know. A probability space $(\Omega, A, \mu)$ consists of three things: an underlying space $\Omega$, such as $\mathbb{R}$ or $\mathbb{Z}$; a set $A$ of all possible events on this space, such as the collection generated by the half-open intervals on $\mathbb{R}$; and a measure $\mu$ that satisfies $\mu(\Omega) = 1$. Your original problem lives in the space $([a,b], \text{the half-open intervals on } [a,b], \nu)$, where $\nu$ is normalized Lebesgue measure (this means that $\nu( [c,d) ) = \frac{d - c}{b - a}$). In this space, the probability that you hit any single point $x \in [a,b]$ is zero, for the reasons discussed above---I think we have this cleared up.
But now, when you say things like the quoted passage above, you are defining something called a filtration, which we will write as $\mathcal{F} = \{\mathcal{F}_t\}_{t \geq 0}$. A filtration in general is a collection of subsets of $A$ that satisfies $\mathcal{F}_t \subseteq \mathcal{F}_s$ for all $t < s$. In your case, we can define the filtration $$ \mathcal{F}_t = \{x \in [a,b]: \text{dart hit $x$ at time $t' < t$} \}. $$ Now, in this new subset of your outcome space, guess what---you're right! You have hit it and, after your first throw, your probability of having hit that point when restricted to the filtration $\mathcal{F}_1$ is 1.
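To make the measure-zero point concrete, here is a small simulation sketch (in Python; not part of the original answer, and floating-point numbers are of course discrete, which only strengthens the point): a pre-specified point is essentially never hit, while any interval of positive width is hit at exactly the expected rate.

```python
import numpy as np

rng = np.random.default_rng(0)
throws = rng.uniform(0.0, 1.0, size=1_000_000)  # a million dart throws on [0, 1)

target = 0.5
# exact pre-specified point: a measure-zero event
point_hits = int(np.sum(throws == target))
# an interval of width 0.02 around the same point: positive measure
interval_hits = float(np.mean(np.abs(throws - target) < 0.01))
```

Every individual throw does land on some exact value---but that value was not specified in advance, which is precisely the distinction the filtration argument formalizes.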
In what settings would confidence intervals not get better as sample size increases?
Note the qualification "in an observational setting". Checking the context from which you've taken the quote (the subthread of comments that it is in), it looks like the intent is "in the real world" rather than in simulations, and probably doesn't include a controlled experiment ... and in that case, the likely intent is a consequence of the fact that the assumptions under which the intervals are derived don't actually quite hold. There are numerous things that can introduce bias - which are of small effect compared to variability in small samples - but which generally don't shrink as sample size increases, while the standard errors do. Since our calculations don't incorporate the bias, as intervals shrink (as $1/\sqrt n$), any unchanging bias, even if it's pretty small, looms larger, leaving our intervals less and less likely to include the true value. Here's an illustration - one which perhaps exaggerates bias - to indicate what I think is meant about CI coverage probability shrinking as sample size increases: (Noting here that 'center of interval' refers to the center of the population of intervals, not the middle of any given sample's interval; the distance between the black and the red vertical lines is the bias, which is here being taken to be the same irrespective of sample size.) Of course in any particular sample, the interval will be random - it will be wider or narrower and shifted left or right relative to the diagram, so that at any sample size it has some coverage probability between 0 and 1, but any amount of bias will make the coverage shrink toward zero as $n$ increases. Here's an example with 100 confidence intervals at each sample size using simulated data (plotted with transparency, so the color is more solid where more intervals cover it).
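Since the figures don't reproduce here, the effect is easy to verify numerically. A minimal simulation sketch (in Python rather than R; the bias of 0.2 and the particular sample sizes are arbitrary choices for illustration, not taken from the original):

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.0
bias = 0.2    # small, constant bias that the CI calculation ignores
reps = 2000

def coverage(n):
    """Fraction of nominal-95% intervals for the mean that cover the true value."""
    hits = 0
    for _ in range(reps):
        # the measurement process is biased: it centers on true_value + bias
        x = rng.normal(true_value + bias, 1.0, size=n)
        half_width = 1.96 * x.std(ddof=1) / np.sqrt(n)
        hits += (x.mean() - half_width) <= true_value <= (x.mean() + half_width)
    return hits / reps

covs = [coverage(n) for n in (10, 100, 1000)]
```

Coverage is still high at $n=10$, where the bias is small relative to the interval width, but collapses toward zero by $n=1000$, because the fixed bias eventually dwarfs the shrinking standard error.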
In what settings would confidence intervals not get better as sample size increases?
Sweet irony. Before that paragraph, the same person says "No wonder there is such widespread confusion". "Confidence intervals in an observational setting": what does that even mean? It appears to me that this is once again a confusion between estimation and hypothesis testing. Now I know the CI width should approach 1 with increasing sample size. No, it depends on the context. In principle, the width should converge to $0$. The coverage should be close to the nominal value for a large number of Monte Carlo simulations. The coverage does not depend on the sample size, unless some of the assumptions under which the CI was constructed are flawed (which maybe is what the OP meant to imply. "All the models are wrong", yeah.). The reference is a comment in a post of a personal blog. I would not worry too much about the validity of this sort of reference. The blog, owned by Larry Wasserman, tends to be very well written on the other hand. This reminded me of the xkcd comic: http://xkcd.com/386/
Adding random effect influences coefficient estimates
"I have always been taught that random effects only influence the variance (error), and that fixed effects only influence the mean." As you have discovered, this is only true for balanced, complete (i.e., no missing data) datasets with no continuous predictors. In other words, for the kinds of data/models discussed in classical ANOVA texts. Under these ideal circumstances, the fixed effects and random effects can be estimated independently of one another. When these conditions do not hold (as they very very often do not in the "real world"), the fixed and random effects are not independent. As an interesting aside, this is why "modern" mixed models are estimated using iterative optimization methods, rather than being exactly solved with a bit of matrix algebra as in the classical mixed ANOVA case: in order to estimate the fixed effects, we have to know the random effects, but in order to estimate the random effects, we have to know the fixed effects! More relevant to the present question, this also means that when data are unbalanced/incomplete and/or there are continuous predictors in the model, adjusting the random-effects structure of the mixed model can alter the estimates of the fixed part of the model, and vice versa. Edit 2016-07-05. From the comments: "Could you elaborate or provide a citation on why continuous predictors will influence the estimates of the fixed part of the model?" The estimates for the fixed part of the model will depend on the estimates for the random part of the model -- that is, the estimated variance components -- if (but not only if) the variance of the predictors differs across clusters, which will almost certainly be true if any of the predictors are continuous (at least in "real world" data -- in theory it would be possible for this not to be true, e.g., in a constructed dataset).
Adding random effect influences coefficient estimates
At the first level, I think what you are ignoring is shrinkage toward the population values; "the per-subject slopes and intercepts from the mixed-effects model are closer to the population estimates than are the within-subject least squares estimates." [ref. 1]. The following link will probably also be of help (What are the proper descriptives to look at for my mixed-models? - see Mike Lawrence's answer). Furthermore, I think you are marginally unlucky in your toy example, because your perfectly balanced design causes you to get exactly the same estimates in the case of no missing values. Try the following code, which runs the same process but with no missing values and an unbalanced design:

require(nlme)
set.seed(128)
n <- 100
k <- 5
cat <- as.factor(sample(1:k, n*k, replace = TRUE))  # this should be a bit unbalanced
cat_i <- 1:k  # intercept per category
x <- rep(1:n, k)
sigma <- 0.2
alpha <- 0.001
y <- cat_i[cat] + alpha * x + rnorm(n*k, 0, sigma)
m1 <- lm(y ~ x)
m3 <- lme(y ~ x, random = ~ 1|cat, na.action = na.omit)
round(digits = 7, fixef(m3)) == round(digits = 7, coef(m1))  # Not this time, lad.
#(Intercept)           x
#      FALSE       FALSE

Now, because your design is not perfectly balanced, you don't get the same coefficient estimates. Actually, if you play along with your missing-value pattern in a silly way (so, for instance, y[c(1:10, 100 + 1:10, 200 + 1:10, 300 + 1:10, 400 + 1:10)] <- NA), so that your design is still perfectly balanced, you'll get the same coefficients again:

require(nlme)
set.seed(128)
n <- 100
k <- 5
cat <- as.factor(rep(1:k, each = n))
cat_i <- 1:k  # intercept per category
x <- rep(1:n, k)
sigma <- 0.2
alpha <- 0.001
y <- cat_i[cat] + alpha * x + rnorm(n*k, 0, sigma)
plot(x, y)
# simulate missing data in a perfectly balanced way
y[c(1:10, 100 + 1:10, 200 + 1:10, 300 + 1:10, 400 + 1:10)] <- NA
m1 <- lm(y ~ x)
m3 <- lme(y ~ x, random = ~ 1|cat, na.action = na.omit)
round(digits = 7, fixef(m3)) == round(digits = 7, coef(m1))  # Look what happened now...
#(Intercept)           x
#       TRUE        TRUE

You were marginally misled by the perfect design of your original experiment. When you inserted the NAs in a non-balanced way, you changed the pattern of how much "strength" the individual subjects could borrow from each other. In short, the differences you see are due to shrinkage effects, and more specifically arise because you distorted your original perfectly balanced design with non-perfectly-balanced missing values. Ref 1: Douglas Bates, lme4: Mixed-effects modeling with R, pages 71-72
Is it technically "valid" to fit a logistic regression with a dependent variable that is a proportion?
What you propose is sometimes called a fractional logit. It certainly has its merits, as long as you remember to use robust standard errors. In 2010 I gave a talk at the German Stata Users' meeting comparing among other things beta regression and fractional logit. The slides can be found here: http://www.maartenbuis.nl/presentations/berlin10.pdf
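For the curious, the estimator is simple enough to sketch from scratch. The following hand-rolled illustration (in Python with numpy; the simulated data, the beta-distributed noise, and the sample size are all made up for the demo, not taken from the slides): it solves the Bernoulli quasi-likelihood score equations with a fractional outcome, then forms the robust (sandwich) standard errors emphasized above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])

# simulate proportions in (0, 1) whose conditional mean follows a logit curve
true_beta = np.array([0.5, 1.0])
mu = 1 / (1 + np.exp(-X @ true_beta))
phi = 10.0                          # precision of the (assumed) beta-distributed noise
y = rng.beta(mu * phi, (1 - mu) * phi)

# quasi-ML "fractional logit": Bernoulli score equations, y allowed in (0, 1)
beta = np.zeros(2)
for _ in range(100):                # Newton-Raphson (equivalently IRLS)
    p = 1 / (1 + np.exp(-X @ beta))
    step = np.linalg.solve(X.T @ (X * (p * (1 - p))[:, None]), X.T @ (y - p))
    beta += step
    if np.max(np.abs(step)) < 1e-10:
        break

# robust (sandwich) standard errors
p = 1 / (1 + np.exp(-X @ beta))
bread = np.linalg.inv(X.T @ (X * (p * (1 - p))[:, None]))
meat = X.T @ (X * ((y - p) ** 2)[:, None])
robust_se = np.sqrt(np.diag(bread @ meat @ bread))
```

The sandwich correction matters because the Bernoulli variance function is deliberately misspecified for a continuous proportion: the point estimates are consistent as long as the logit mean is right, but the naive standard errors are not trustworthy.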
Is it technically "valid" to fit a logistic regression with a dependent variable that is a proportion?
Models of this kind are often defined and used as one kind of generalized linear model. For one concise review, see http://www.stata-journal.com/article.html?article=st0147 The argument is that the binomial is a reasonable family even for continuous proportions as the variance will also approach 0 as the mean approaches either 0 or 1. Whether particular programs or functions in particular software accommodate them is a different matter. To say that "R will throw a warning but still produce a result" conveys little information. Which package are you referring to? Is it really the only relevant package? In any case, as the article just referenced indicates, this model is well supported in Stata, for example. That still leaves scope for detailed discussion of the relative merits of a logit model for continuous proportions and beta regression.
How do you use simple exponential smoothing in R?
This will do it (ses is in the forecast package):

library(forecast)
ses(d[1:40], h = 30, alpha = 0.1, initial = "simple")

with:
h being the number of periods to forecast.
alpha being the level smoothing parameter.
initial being the method for selecting initial state values (see ?ses).
How do you use simple exponential smoothing in R?
You can also use the HoltWinters function, which is available in base R (setting beta and gamma to FALSE turns off the trend and seasonal components, reducing Holt-Winters to simple exponential smoothing):

mod1 <- HoltWinters(d[1:40], alpha = 0.1, beta = FALSE, gamma = FALSE)

To obtain the predictions for the next 30 periods, use

predict(mod1, n.ahead = 30)
Trying to compute Gini index on StackOverflow reputation distribution?
Here is how you can calculate it with SQL:

with balances as (
  select '2018-01-01' as date, balance from unnest([1,2,3,4,5]) as balance          -- Gini coef: 0.2666666666666667
  union all
  select '2018-01-02' as date, balance from unnest([3,3,3,3]) as balance            -- Gini coef: 0.0
  union all
  select '2018-01-03' as date, balance from unnest([4,5,1,8,6,45,67,1,4,11]) as balance -- Gini coef: 0.625
),
ranked_balances as (
  select date, balance,
         row_number() over (partition by date order by balance desc) as rank
  from balances
)
SELECT
  date,
  -- (1 - 2B), https://en.wikipedia.org/wiki/Gini_coefficient
  1 - 2 * sum((balance * (rank - 1) + balance / 2)) / count(*) / sum(balance) AS gini
FROM ranked_balances
GROUP BY date
ORDER BY date ASC
-- verify here http://shlegeris.com/gini

Explanation is here: https://medium.com/@medvedev1088/calculating-gini-coefficient-in-bigquery-3bc162c82168
Trying to compute Gini index on StackOverflow reputation distribution?
I can't read the SQL code very easily, but if it helps, if I were going to calculate the Gini coefficient, this is what I would do (in plain English).

1. Figure out the $n$ of $x$ (i.e. the number of people with rep on SO).
2. Sort $x$ from lowest to highest.
3. Sum each $x$ multiplied by its order in the rank (i.e. if there are 10 people, the rep of the person with the lowest rep gets multiplied by 1 and the rep of the person with the highest rep gets multiplied by 10).
4. Take that value, divide it by the product of $n$ and the sum of $x$ (i.e. $n \times \sum$ rep), and then multiply the result by 2.
5. Take that result and subtract $1 + (1/n)$ from it.

Voila! I took those steps from the remarkably straightforward code in the R function (in the ineq package) for calculating the Gini coefficient. For the record, here's that code:

> ineq::Gini
function (x)
{
    n <- length(x)
    x <- sort(x)
    G <- sum(x * 1:n)
    G <- 2 * G/(n * sum(x))
    G - 1 - (1/n)
}
<environment: namespace:ineq>

It looks somewhat similar to your SQL code, but like I said, I can't really read that very easily!
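Those steps translate almost line-for-line into other languages. A Python sketch of the same rank formula (not part of the original answer; the example vectors are arbitrary small checks):

```python
def gini(values):
    """Gini coefficient via the rank formula (same computation as ineq::Gini)."""
    x = sorted(values)                                   # step 2: sort ascending
    n = len(x)                                           # step 1: sample size
    g = sum(i * xi for i, xi in enumerate(x, start=1))   # step 3: rank-weighted sum
    g = 2 * g / (n * sum(x))                             # step 4
    return g - 1 - 1 / n                                 # step 5: subtract 1 + 1/n

g_unequal = gini([4, 5, 1, 8, 6, 45, 67, 1, 4, 11])  # 0.625
g_equal = gini([3, 3, 3, 3])                         # 0.0 -- perfect equality
```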
Trying to compute Gini index on StackOverflow reputation distribution?
There are, I believe, four equivalent formulations of the Gini index. To me, the most natural one is a U-statistic: $$ G = \frac{1}{2\mu n(n-1)}\sum_{i\neq j} |x_i - x_j|, $$ where $\mu$ is the mean of the $x$'s. You can double-check your computations with this formula. Obviously, the result must be non-negative. From what I know about Gini indices, the reputation distribution on CV should have a Gini index above 0.9; whether 0.98 makes a lot of sense or not, I can't say though.
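A quick numeric sanity check (a Python sketch, not from the original answer): with the $\frac{1}{2\mu n(n-1)}$ normalization, the mean-absolute-difference form gives $1/3$ for the vector $[1,2,3,4,5]$; multiplying by $(n-1)/n$ recovers the $0.2\overline{6}$ produced by the rank-based sample formula in ineq::Gini, since the two common sample conventions differ by exactly that factor.

```python
from itertools import product

def gini_mad(values):
    # U-statistic form: half the mean absolute difference over ordered pairs
    # (the i == j terms contribute zero, so summing over all pairs is harmless),
    # divided by the mean
    n = len(values)
    mu = sum(values) / n
    total = sum(abs(a - b) for a, b in product(values, repeat=2))
    return total / (2 * mu * n * (n - 1))

x = [1, 2, 3, 4, 5]
g = gini_mad(x)                       # 1/3 under this convention
g_n2 = g * (len(x) - 1) / len(x)      # n^2-normalized convention: 0.2666...
```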
Trying to compute Gini index on StackOverflow reputation distribution?
Adding to @smillig's answer, based on the provided equation:

SELECT something AS x INTO #t FROM sometable

SELECT *, ROW_NUMBER() OVER (ORDER BY x) AS i INTO #tt FROM #t

SELECT 2.0*SUM(x*i)/(COUNT(x)*SUM(x)) - 1.0 - (1.0/COUNT(x)) AS gini FROM #tt

This gave me, on my test set: 0.45503253636587840, which is the same as R's ineq library's Gini(x).
Wilcoxon-Mann-Whitney critical values in R
I think that the answer here might be that you're comparing apples and oranges. Let $F(x)$ denote the cdf of the Mann-Whitney $U$ statistic. qwilcox is the quantile function $Q(\alpha)$ of $U$. By definition, it is therefore $$Q(\alpha)=\inf \{x\in \mathbb{N}: F(x)\geq \alpha\},\qquad \alpha\in(0,1).$$ Because $U$ is discrete, there is usually no $x$ such that $F(x)=\alpha$, so typically $F(Q(\alpha))>\alpha$. Now, consider the critical value $C(\alpha)$ for the test. In this case, you want $F(C(\alpha))\leq \alpha$, since otherwise you will have a test with a type I error rate that is larger than the nominal one. This is usually considered to be undesirable; conservative tests tend to be preferred. Hence, $$C(\alpha)=\sup \{x\in \mathbb{N}: F(x)\leq \alpha\},\qquad \alpha\in(0,1).$$ Unless there is an $x$ such that $F(x)=\alpha$, we therefore have $C(\alpha)=Q(\alpha)-1$. The reason for the discrepancy is that qwilcox has been designed to compute quantiles and not critical values!
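The relation $C(\alpha)=Q(\alpha)-1$ is easy to check by brute force for small samples. Here is a Python sketch (the names are mine; it enumerates rank assignments rather than calling qwilcox) that builds the exact null distribution of $U$ for sample sizes 3 and 4 and evaluates both definitions at $\alpha=0.05$:

```python
from itertools import combinations
from math import comb

def u_cdf(m, n):
    # exact cdf of the Mann-Whitney U statistic under H0,
    # by enumerating all rank assignments (feasible for small m, n)
    counts = {}
    for ranks in combinations(range(1, m + n + 1), m):
        u = sum(ranks) - m * (m + 1) // 2
        counts[u] = counts.get(u, 0) + 1
    total = comb(m + n, m)
    cdf, acc = {}, 0
    for u in range(m * n + 1):
        acc += counts.get(u, 0)
        cdf[u] = acc / total
    return cdf

def quantile(cdf, alpha):        # Q(alpha) = inf {x : F(x) >= alpha}
    return min(x for x, F in cdf.items() if F >= alpha)

def critical_value(cdf, alpha):  # C(alpha) = sup {x : F(x) <= alpha}
    return max(x for x, F in cdf.items() if F <= alpha)

cdf = u_cdf(3, 4)
# F(0) = 1/35 ~ 0.029 and F(1) = 2/35 ~ 0.057, so at alpha = 0.05
# the quantile is 1 while the conservative critical value is 0
print(quantile(cdf, 0.05), critical_value(cdf, 0.05))
```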
25,277
Wilcoxon-Mann-Whitney critical values in R
Remember that the rank sum test statistic is discrete, so you need to use a critical value such that the tail probability is $\geq$ the specified $\alpha$. For some sample sizes a tail probability exactly equal to $\alpha$ cannot be achieved, and that is my guess as to why you need the +1.
25,278
Is the Poisson distribution stable and are there inversion formulas for the MGF?
Linear combinations of Poisson random variables As you've calculated, the moment-generating function of the Poisson distribution with rate $\lambda$ is $$ m_X(t) = \mathbb E e^{t X} = e^{\lambda (e^t - 1)} \>. $$ Now, let's focus on a linear combination of independent Poisson random variables $X$ and $Y$. Let $Z = a X + b Y$. Then, $$ m_Z(t) = \mathbb Ee^{tZ} = \mathbb E e^{t (a X + b Y)} = \mathbb E e^{t(aX)} \mathbb E e^{t (bY)} = m_X(at) m_Y(bt) \>. $$ So, if $X$ has rate $\lambda_x$ and $Y$ has rate $\lambda_y$, we get $$ m_Z(t) = \exp({\lambda_x (e^{at} - 1)}) \exp({\lambda_y (e^{bt} - 1)}) = \exp(\lambda_x e^{at} + \lambda_y e^{bt} - (\lambda_x + \lambda_y))\>, $$ and this cannot, in general, be written in the form $\exp(\lambda(e^t - 1))$ for some $\lambda$ unless $a = b = 1$. Inversion of moment-generating functions If the moment generating function exists in a neighborhood of zero, then it also exists as a complex-valued function in an infinite strip around zero. This allows inversion by contour integration to come into play in many cases. Indeed, the Laplace transform $\mathcal L(s) = \mathbb E e^{-s T}$ of a nonnegative random variable $T$ is a common tool in stochastic-process theory, particularly for analyzing stopping times. Note that $\mathcal L(s) = m_T(-s)$ for real valued $s$. You should prove as an exercise that the Laplace transform always exists for $s \geq 0$ for nonnegative random variables. Inversion can then be accomplished either via the Bromwich integral or the Post inversion formula. A probabilistic interpretation of the latter can be found as an exercise in several classical probability texts. Though not directly related, you may be interested in the following note as well. J. H. Curtiss (1942), A note on the theory of moment generating functions, Ann. Math. Stat., vol. 13, no. 4, pp. 430–433. 
The associated theory is more commonly developed for characteristic functions since these are fully general: They exist for all distributions without support or moment restrictions.
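Both claims about the Poisson MGF are easy to verify numerically. The following Python sketch (the helper names are mine) checks that the product of two Poisson MGFs is again a Poisson MGF with the summed rates, while $m_X(2t)$, the MGF of $2X$, corresponds to no single Poisson rate:

```python
import math

def poisson_mgf(lam, t):
    # m_X(t) = exp(lam * (e^t - 1))
    return math.exp(lam * (math.exp(t) - 1.0))

lx, ly, t = 2.0, 3.0, 0.7

# sum of independent Poissons: m_X(t) * m_Y(t) is the Poisson(lx + ly) MGF
assert math.isclose(poisson_mgf(lx, t) * poisson_mgf(ly, t),
                    poisson_mgf(lx + ly, t))

# scaling: m_{2X}(t) = m_X(2t).  If 2X were Poisson with rate lam, then
# lam = log m_X(2t) / (e^t - 1) would not depend on t; in fact it equals
# lx * (e^t + 1), which varies with t, so 2X is not Poisson.
def implied_rate(t):
    return math.log(poisson_mgf(lx, 2.0 * t)) / (math.exp(t) - 1.0)

print(implied_rate(0.5), implied_rate(1.0))  # different values
```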
25,279
Is the Poisson distribution stable and are there inversion formulas for the MGF?
Poisson distributions are stable by sum. They are trivially not stable by linear combination because you can end up with noninteger values. For example, if $X$ is Poisson, $X/2$ is trivially not Poisson. I am not aware of inversion formulas for MGF (but @cardinal seems to be).
25,280
Where is the shared variance between all IVs in a linear multiple regression equation?
To understand what that diagram could mean, we have to define some things. Let's say that Venn diagram displays the overlapping (or shared) variance amongst 4 different variables, and that we want to predict the level of $Wiki$ by recourse to our knowledge of $Digg$, $Forum$, and $Blog$. That is, we want to be able to reduce the uncertainty (i.e., variance) in $Wiki$ from the null variance down to the residual variance. How well can that be done? That is the question that a Venn diagram is answering for you. Each circle represents a set of points, and thereby, an amount of variance. For the most part, we are interested in the variance in $Wiki$, but the figure also displays the variances in the predictors. There are a few things to notice about our figure. First, each variable has the same amount of variance--they are all the same size (although not everyone will use Venn diagrams quite so literally). Also, there is the same amount of overlap, etc., etc. A more important thing to notice is that there is a good deal of overlap amongst the predictor variables. This means that they are correlated. This situation is very common when dealing with secondary (i.e., archival) data, observational research, or real-world prediction scenarios. On the other hand, if this were a designed experiment, it would probably imply poor design or execution. To continue with this example for a little bit longer, we can see that our predictive ability will be moderate; most of the variability in $Wiki$ remains as residual variability after all the variables have been used (eyeballing the diagram, I would guess $R^2\approx.35$). Another thing to note is that, once $Digg$ and $Blog$ have been entered into the model, $Forum$ accounts for none of the variability in $Wiki$. 
Now, after having fit a model with multiple predictors, people often want to test those predictors to see if they are related to the response variable (although it's not clear this is as important as people seem to believe it is). Our problem is that to test these predictors, we must partition the Sum of Squares, and since our predictors are correlated, there are SS that could be attributed to more than one predictor. In fact, in the asterisked region, the SS could be attributed to any of the three predictors. This means that there is no unique partition of the SS, and thus no unique test. How this issue is handled depends on the type of SS that the researcher uses and other judgments made by the researcher. Since many software applications return type III SS by default, many people throw away the information contained in the overlapping regions without realizing they have made a judgment call. I explain these issues, the different types of SS, and go into some detail here. The question, as stated, specifically asks about where all of this shows up in the betas / regression equation. The answer is that it does not. Some information about that is contained in my answer here (although you'll have to read between the lines a little bit).
25,281
Where is the shared variance between all IVs in a linear multiple regression equation?
Peter Kennedy has a nice description of Ballentine/Venn diagrams for regression in his book and JSE article, including cases where they can lead you astray. The gist is that the starred area variation is thrown away only for estimating and testing the slope coefficients. That variation is added back in for the purpose of predicting and calculating $R^2$.
25,282
Where is the shared variance between all IVs in a linear multiple regression equation?
I realize this is a (very) dated thread, but since one of my colleagues asked me this very same question this week and finding nothing on the Web that I could point him to, I thought I would add my two cents "for posterity" here. I am not convinced that the answers provided to date answer the OP's question. I am going to simplify the problem to involve only two independent variables; it is very straightforward to extend it to more than two. Consider the following scenario: two independent variables (X1 and X2), a dependent variable (Y), 1000 observations, the two independent variables are highly correlated with each other (r=.99), and each independent variable is correlated with the dependent variable (r=.60). Without loss of generality, standardize all variables to a mean of zero and a standard deviation of one, so the intercept term will be zero in each of the regressions. Running a simple linear regression of Y on X1 will produce an r-squared of .36 and a b1 value of 0.6. Similarly, running a simple linear regression of Y on X2 will produce an r-squared of .36 and a b1 value of 0.6. Running a multiple regression of Y on X1 and X2 will produce an r-squared of just a wee bit higher than .36, and both b1 and b2 take on the value of 0.3. Thus, the shared variation in Y is captured in BOTH b1 and b2 (equally). I think the OP may have made a false (but totally understandable) assumption: namely, that as X1 and X2 come closer and closer to being perfectly correlated, their b-values in the multiple regression equation come closer and closer to ZERO. That is not the case. In fact, when X1 and X2 come closer and closer to being perfectly correlated, their b-values in the multiple regression come closer and closer to HALF of the b-value in the simple linear regression of either one of them. However, as X1 and X2 come closer and closer to being perfectly correlated, the STANDARD ERROR of b1 and b2 moves closer and closer to infinity, so the t-values converge on zero.
So, the t-values will converge on zero (i.e., no UNIQUE linear relationship between either X1 and Y or X2 and Y), but the b-values converge to half the value of the b-values in the simple linear regression. So, the answer to the OP's question is that, as the correlation between X1 and X2 approaches unity, EACH of the partial slope coefficients approaches contributing equally to the prediction of the Y value, even though neither independent variable offers any UNIQUE explanation of the dependent variable. If you wish to check this empirically, generate a fabricated dataset (...I used a SAS macro named Corr2Data.sas ...) which has the characteristics described above. Check out the b values, the standard errors, and the t-values: you will find that they are exactly as described here. HTH // Phil
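The numbers above can also be reproduced without generating any data, since for standardized variables the slopes follow directly from the normal equations. A quick Python sketch (the function name is my own):

```python
def std_betas(ry1, ry2, r12):
    # standardized slopes in a two-predictor regression, solved from the
    # normal equations: b1 + r12*b2 = ry1 and r12*b1 + b2 = ry2
    b1 = (ry1 - r12 * ry2) / (1.0 - r12 ** 2)
    b2 = (ry2 - r12 * ry1) / (1.0 - r12 ** 2)
    return b1, b2

# the scenario above: r(X1,Y) = r(X2,Y) = 0.6 and r(X1,X2) = 0.99
b1, b2 = std_betas(0.6, 0.6, 0.99)
r2 = b1 * 0.6 + b2 * 0.6
print(b1, b2, r2)  # roughly 0.3015, 0.3015, and an R-squared just above 0.36
```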
25,283
Finding suitable rules for new data using arules
The key is the is.subset function in the same package. Here is the code ...

basket <- Groceries[2]
# find all rules where the lhs is a subset of the current basket
rulesMatchLHS <- is.subset(rules@lhs, basket)
# and the rhs is NOT a subset of the current basket (so that some items are left as potential recommendations)
suitableRules <- rulesMatchLHS & !(is.subset(rules@rhs, basket))
# here they are
inspect(rules[suitableRules])
# now extract the matching rhs ...
recommendations <- strsplit(LIST(rules[suitableRules]@rhs)[[1]], split = " ")
recommendations <- lapply(recommendations, function(x){paste(x, collapse = " ")})
recommendations <- as.character(recommendations)
# ... and remove all items which are already in the basket
recommendations <- recommendations[!sapply(recommendations, function(x){basket %in% x})]
print(recommendations)

and the generated output ...

> inspect(rules[suitableRules])
  lhs         rhs          support   confidence lift
1 {}       => {whole milk} 0.2555420 0.2555420  1.000000
2 {yogurt} => {whole milk} 0.0560301 0.4018964  1.572722
> print(recommendations)
[1] "whole milk"
25,284
Precision and recall for clustering?
There's a wikipedia page on precision and recall that is quite clear on the definitions. In the paper to which you refer, they say: To evaluate the clustering results, precision, recall, and F-measure were calculated over pairs of points. For each pair of points that share at least one cluster in the overlapping clustering results, these measures try to estimate whether the prediction of this pair as being in the same cluster was correct with respect to the underlying true categories in the data. Precision is calculated as the fraction of pairs correctly put in the same cluster, recall is the fraction of actual pairs that were identified, and F-measure is the harmonic mean of precision and recall. The only thing that is potentially tricky is that a given point may appear in multiple clusters. The authors appear to look at all pairs of points, say (x,y), and ask whether one of the clusters that contains point x also contains point y. A true positive (tp) is when this is both the case in truth and for the inferred clusters. A false positive (fp) would be when this is not the case in truth but is the case for the inferred clusters. A false negative (fn) would be when this is the case in truth but not for the inferred clusters. Then precision = tp / (tp + fp) and recall = tp / (tp + fn).
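For the common special case where every point belongs to exactly one cluster, the pairwise counts reduce to a simple loop over pairs. A Python sketch (the function name is mine; the overlapping case in the paper would instead compare sets of cluster memberships):

```python
from itertools import combinations

def pairwise_precision_recall(truth, pred):
    # count pairs by whether they share a true class and/or a predicted cluster
    tp = fp = fn = 0
    for i, j in combinations(range(len(truth)), 2):
        same_true = truth[i] == truth[j]
        same_pred = pred[i] == pred[j]
        if same_pred and same_true:
            tp += 1
        elif same_pred:
            fp += 1
        elif same_true:
            fn += 1
    return tp / (tp + fp), tp / (tp + fn)

# a clustering that is perfect up to label names scores 1.0 on both
print(pairwise_precision_recall(['a', 'a', 'b', 'b'], [1, 1, 2, 2]))
```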
25,285
Precision and recall for clustering?
If you want to use precision and recall for clustering, which isn't always used to evaluate the clustering, here is a useful link with a good example of how to find them for clustering: http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html I will work through the relevant part of the linked example for finding TP, TN, FP, FN; adding details where necessary. As stated in the linked site: A true positive (TP) decision assigns two similar documents to the same cluster, a true negative (TN) decision assigns two dissimilar documents to different clusters. There are two types of errors we can commit. A (FP) decision assigns two dissimilar documents to the same cluster. A (FN) decision assigns two similar documents to different clusters. Consider the following example. Here we have three actual groups: x's, o's, and diamonds which we have tried to cluster into cluster 1, cluster 2, and cluster 3. Some mistakes were made. For example, cluster 2 has an x, four o's, and a diamond included in it. Now to quantify the TP's, FP's, TN's, FN's. We will consider all pairs of documents, of which there are $N(N-1)/2=136$, since we have $N=17$ documents. Now for TP+FP (all positives), we need to find out all pairs of x's, o's, and diamonds (not necessarily matching types) that exist in the same cluster. ${6 \choose 2}$ pairs of anything in cluster 1, etc. This gives us $TP+FP = {6 \choose 2} + {6 \choose 2} + {5 \choose 2} = 40$ total positives. True positives are only the pairs that are of the same type. For example, the number of pairs of x's in cluster 1 is ${5 \choose 2}$. This gives us $TP = {5 \choose 2}+{4 \choose 2}+{3 \choose 2}+{2 \choose 2}=20$. This leaves $40-20=20$ FP's. Now for the total number of negatives, which is not in the link I provided. The total negatives plus positives must equal the total number of pairs, and thus $Pairs - totalPositives = totalNegatives$. So there are $136 - 40 = 96$ negatives in total. 
The number of FN's can be found by looking at pairs that should be grouped together but are not. I will do the x's first. Cluster 1 has 5 x's, each paired to three mismatched x's in other clusters ($3*5=15$); cluster 2 has 1 x that is paired to two mismatched x's in cluster 3 that have not been accounted for ($2*1=2$). The o's are similar. Cluster 1 has one o, which is paired to 4 mismatched o's in cluster 2 ($1*4=4$). Now for the diamonds. Cluster 2 has one diamond, which is paired to 3 mismatched diamonds in cluster 3 ($1*3$). Add them up: $FN = 3*5+2*1+1*4+1*3=24$. Since the total number of negatives is 96, there must be $96-24=72$ TN's. The final confusion matrix is (as per the website): And as Karl said precision and recall are:
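(Those final formulas are precision $= TP/(TP+FP)$ and recall $= TP/(TP+FN)$.) The pair counting above can be reproduced mechanically; here is a short Python sketch, with the cluster contents hard-coded from the example (cluster 1: 5 x's and 1 o; cluster 2: 1 x, 4 o's, 1 diamond; cluster 3: 2 x's, 3 diamonds):

```python
from itertools import combinations

# Each document is (cluster_id, true_label); layout taken from the example.
docs = [(1, "x")] * 5 + [(1, "o")] + \
       [(2, "x")] + [(2, "o")] * 4 + [(2, "d")] + \
       [(3, "x")] * 2 + [(3, "d")] * 3

tp = fp = fn = tn = 0
for (c1, l1), (c2, l2) in combinations(docs, 2):
    if c1 == c2 and l1 == l2:
        tp += 1          # same cluster, same type
    elif c1 == c2:
        fp += 1          # same cluster, different type
    elif l1 == l2:
        fn += 1          # different cluster, same type
    else:
        tn += 1          # different cluster, different type

precision = tp / (tp + fp)
recall = tp / (tp + fn)
print(tp, fp, fn, tn)    # 20 20 24 72, matching the counts derived above
```

The four counts agree with the hand calculation, and precision and recall follow as $20/40 = 0.5$ and $20/44 \approx 0.455$.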
25,286
Precision and recall for clustering?
True Positive (TP) Assignment: when similar members are assigned to the same community. This is a correct decision. True Negative (TN) Assignment: when dissimilar members are assigned to different communities. This is a correct decision. False Negative (FN) Assignment: when similar members are assigned to different communities. This is an incorrect decision. False Positive (FP) Assignment: when dissimilar members are assigned to the same community. This is an incorrect decision. Reference: Social Media Mining, by Reza Zafarani, Mohammad Ali Abbasi, and Huan Liu.
25,287
What are the mean and variance for the Gamma distribution?
If the shape parameter is $k>0$ and the scale is $\theta>0$, one parameterization has density function $$p(x) = x^{k-1} \frac{ e^{-x/\theta} }{\theta^{k} \Gamma(k)}$$ where the argument, $x$, is non-negative. A random variable with this density has mean $k \theta$ and variance $k \theta^{2}$ (this parameterization is the one used on the wikipedia page about the gamma distribution). An alternative parameterization uses $\vartheta = 1/\theta$ as the rate parameter (inverse scale parameter) and has density $$p(x) = x^{k-1} \frac{ \vartheta^{k} e^{-x \vartheta} }{\Gamma(k)}$$ Under this choice, the mean is $k/\vartheta$ and the variance is $k/\vartheta^{2}$.
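A quick Monte-Carlo sanity check of these moments, using only the Python standard library (the shape and scale values are arbitrary; `random.gammavariate(alpha, beta)` uses the shape/scale convention):

```python
import random

random.seed(0)
k, theta = 3.0, 2.0            # shape and scale: mean should be 6, variance 12
n = 200_000
xs = [random.gammavariate(k, theta) for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / (n - 1)

print(mean)   # ≈ k * theta = 6
print(var)    # ≈ k * theta**2 = 12
```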
25,288
What are the mean and variance for the Gamma distribution?
Actually, in addition to what Macro said, there is a third form for the gamma distribution, with a shape parameter $v$ and a mean parameter $\mu$: $ p(x\mid \mu,v) = \text{constant} \times x^{\frac{v-2}{2}} e^{-\frac{xv}{2\mu}}. $ If $x \sim G(\mu,v)$ then $ E(x) = \mu$ and $\operatorname{var}(x) =\dfrac{2\mu^2}{v}$.
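Reading off the exponents, this third form is the usual shape–scale gamma with shape $k = v/2$ and scale $\theta = 2\mu/v$, so its moments can be checked the same way (arbitrary parameter values, standard library only):

```python
import random

random.seed(1)
mu, v = 4.0, 10.0
k, theta = v / 2, 2 * mu / v   # map (mu, v) to the shape/scale parameterization
n = 200_000
xs = [random.gammavariate(k, theta) for _ in range(n)]

mean = sum(xs) / n
var = sum((x - mean) ** 2 for x in xs) / (n - 1)

print(mean)   # ≈ mu = 4
print(var)    # ≈ 2 * mu**2 / v = 3.2
```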
25,289
Addition of multivariate gaussians
Method 1: characteristic functions Referring to (say) the Wikipedia article on the multivariate normal distribution and using the 1D technique to compute sums in the article on sums of normal distributions, we find the log of its characteristic function is $$i t'\mu - \tfrac{1}{2} t' \Sigma t.$$ The cf of a sum is the product of the cfs, so the logarithms add. This tells us the cf of the sum of two independent MVN distributions (indexed by 1 and 2) has a logarithm equal to $$i t' (\mu_1 + \mu_2) - \tfrac{1}{2} t' (\Sigma_1 + \Sigma_2) t.$$ Because the cf uniquely determines the distribution, we can immediately read off that the sum is MVN with mean $\mu_1 + \mu_2$ and covariance $\Sigma_1 + \Sigma_2$. Method 2: Linear combinations View the pair of MVN distributions as being a single MVN with mean $(\mu_1, \mu_2)$ and covariance $\Sigma_1 \oplus \Sigma_2$. In block matrix form this is $$\Sigma_1 \oplus \Sigma_2 = \pmatrix{\Sigma_1 & 0 \\ 0 & \Sigma_2}$$ where the zeros represent square matrices of zeros (indicating all covariances between any component of distribution 1 and any component of distribution 2 are zero). The sum is given by a linear transformation and therefore is MVN. The covariance again works out to $\Sigma_1 + \Sigma_2$. (See p. 2 #4 in course notes by the late Dr. E.B. Moser, LSU EXST 7037. Edit Jan 2017: alas, the university appears to have removed them from its Web site. A copy of the original PDF file is available on archive.org.)
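Either derivation is easy to check numerically; here is a sketch with NumPy (the means and covariance matrices below are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
mu1, mu2 = np.array([1.0, -2.0]), np.array([0.5, 3.0])
S1 = np.array([[2.0, 0.6], [0.6, 1.0]])
S2 = np.array([[1.0, -0.3], [-0.3, 0.5]])

# Sample two independent bivariate normals and add them.
n = 200_000
x = rng.multivariate_normal(mu1, S1, size=n)
y = rng.multivariate_normal(mu2, S2, size=n)
s = x + y

print(s.mean(axis=0))   # ≈ mu1 + mu2 = [1.5, 1.0]
print(np.cov(s.T))      # ≈ S1 + S2 = [[3.0, 0.3], [0.3, 1.5]]
```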
25,290
Weight a rating system to favor items rated highly by more people over items rated highly by fewer people?
One way you can combat this is to use proportions in each category, which does not require you to put numbers in for each category (you can leave it as 80% rated as "strongly likes"). However, proportions do suffer from the small-number-of-ratings issue. This shows up in your example: the photo with one +5 rating would get a higher average score (and proportion) than one with the 99 +5 and 1 +2 ratings. This doesn't fit well with my intuition (and, I suspect, most people's). One way to get around this small sample size issue is to use a Bayesian technique known as "Laplace's rule of succession" (searching this term may be useful). It simply involves adding 1 "observation" to each category before calculating the probabilities. If you wanted to take an average for a numerical value, I would suggest a weighted average where the weights are the probabilities calculated by the rule of succession. For the mathematical form, let $n_{sd},n_{d},n_{l},n_{sl}$ denote the number of responses of "strongly dislike", "dislike", "like", and "strongly like" respectively (in the two examples, $n_{sl}=1,n_{sd}=n_{d}=n_{l}=0$ and $n_{sl}=99,n_{l}=1,n_{sd}=n_{d}=0$). You then calculate the probability (or weight) for strongly like as $$Pr(\text{"Strongly Like"}) = \frac{n_{sl}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}$$ For the two examples you give, this yields probabilities of "strongly like" of $\frac{1+1}{1+0+0+0+4}=\frac{2}{5}$ and $\frac{99+1}{99+1+0+0+4}=\frac{100}{104}$, which I think agree more closely with "common sense". Removing the added constants gives $\frac{1}{1}$ and $\frac{99}{100}$, which makes the first outcome seem higher than it should be (at least to me anyway). 
The respective scores are just given by the weighted average, which I have written below as: $$Score= 5\frac{n_{sl}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}+2\frac{n_{l}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4} - 2\frac{n_{d}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4} -5\frac{n_{sd}+1}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}$$ Or more succinctly as $$Score=\frac{5 n_{sl}+ 2 n_{l} - 2 n_{d} - 5 n_{sd}}{n_{sd}+n_{d}+n_{l}+n_{sl}+4}$$ This gives scores in the two examples of $\frac{5}{5}=1$ and $\frac{497}{104}\approx 4.8$. I think this shows an appropriate difference between the two cases. This may have been a bit "mathsy", so let me know if you need more explanation.
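A tiny Python version of this score (the function name is my own):

```python
def score(n_sd, n_d, n_l, n_sl):
    """Weighted average of {-5, -2, +2, +5} with Laplace's rule of succession."""
    total = n_sd + n_d + n_l + n_sl + 4   # +4: one pseudo-observation per category
    return (5 * n_sl + 2 * n_l - 2 * n_d - 5 * n_sd) / total

print(score(0, 0, 0, 1))    # 1.0     (one lone "strongly like")
print(score(0, 0, 1, 99))   # ≈ 4.78  (99 "strongly like" and one "like")
```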
25,291
Weight a rating system to favor items rated highly by more people over items rated highly by fewer people?
I'd take a graphical approach. The x-axis could be average rating and the y-axis could be number of ratings. I used to do this with sports statistics to compare the contribution of young phenoms with that of veteran stars. The nearer a point is to the upper right corner, the closer to the ideal. Of course, deciding on the "best" item would still be a subjective decision, but this would provide some structure. If you want to plot average rating against another variable, then you could set up number of ratings as the third variable using bubble size, in a bubble plot--e.g., in Excel or SAS.
25,292
Intervention analysis with multi-dimensional time-series
The ARIMA model with a dummy variable for an intervention is a special case of a linear model with ARIMA errors. You can do the same here but with a richer linear model including factors for the beverage type and geographical zones. In R, the model can be estimated using arima() with the regression variables included via the xreg argument. Unfortunately, you will have to code the factors using dummy variables, but otherwise it is relatively straightforward.
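As a sketch of what "coding the factors using dummy variables" involves, here is one row of such a regression matrix built in Python (the beverage and zone names and the policy start date are invented for illustration; in R the full matrix would be passed to arima() via xreg):

```python
beverages = ["wine", "beer", "spirits"]
zones = ["north", "south"]

def design_row(bev, zone, t, policy_start=24):
    """One row of the exogenous-regressor matrix for observation (bev, zone, t)."""
    # Drop the first level of each factor to avoid collinearity with the intercept.
    row = [1.0]                                    # intercept
    row += [1.0 if bev == b else 0.0 for b in beverages[1:]]
    row += [1.0 if zone == z else 0.0 for z in zones[1:]]
    row.append(1.0 if t >= policy_start else 0.0)  # intervention step dummy
    return row

print(design_row("beer", "south", 30))  # [1.0, 1.0, 0.0, 1.0, 1.0]
```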
25,293
Intervention analysis with multi-dimensional time-series
If you wanted to model the sales of drinks types as a vector [sales of wine at t, sales of beer at t, sales of spirits at t], you might want to look at Vector Autoregression (VAR) models. You probably want the VARX variety that have a vector of exogenous variables like region and the policy intervention dummy, alongside the wine, beer and spirits sequences. They are fairly straightforward to fit and you'd get impulse response functions to express the impact of exogenous shocks, which might also be of interest. There's comprehensive discussion in Lütkepohl's book on multivariate time series. Finally, I'm certainly no economist but it seems to me that you might also think about ratios of these drinks types as well as levels. People probably operate under a booze budget constraint - I know I do - which would couple the levels and (anti-)correlate the errors.
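For intuition about what fitting a VAR(1) amounts to, here is a minimal simulate-and-estimate sketch in NumPy (the coefficient matrix is invented; for real work use a dedicated VAR/VARX implementation as suggested above):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[0.5, 0.1, 0.0],
              [0.0, 0.4, 0.2],
              [0.1, 0.0, 0.3]])   # true VAR(1) coefficient matrix (stable)

# Simulate y_t = A y_{t-1} + noise for three "drinks" series.
T = 5_000
y = np.zeros((T, 3))
for t in range(1, T):
    y[t] = A @ y[t - 1] + rng.normal(size=3)

# OLS estimate: regress y_t on y_{t-1}, equation by equation.
X, Y = y[:-1], y[1:]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T
print(np.round(A_hat, 2))   # ≈ A
```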
25,294
Intervention analysis with multi-dimensional time-series
Each time series should be evaluated separately, with the ultimate idea of collecting, i.e. grouping, similar series into groups or sections as having similar/common structure. Since time series data can be intervened with by unknown deterministic structure at unspecified points in time, one is advised to do Intervention Detection to find where the intervention actually had an effect. If you know a law went into effect at a particular point in time (de jure), that may in fact (de facto) not be the date when the intervention actually happened. Systems can respond in advance of a known effect date, or even after the date due to non-compliance or non-response. Specifying the date of the intervention can therefore lead to Model Specification Bias. I suggest that you google "Intervention Detection" or "Outlier Detection". A good book on this would be by Prof. Wei of Temple University, published by Addison-Wesley; I believe the title is "Time Series Analysis". One further comment: an Intervention Variable might appear as a Pulse, a Level/Step Shift, a Seasonal Pulse, or a Local Time Trend. In response to expanding the discussion about Local Time Trends: if you have a series that exhibits 1,2,3,4,5,7,9,11,13,15,16,17,18,19,... there has been a change in trend at period 5 and at period 10. For me a main question in time series is the detection of level shifts, e.g. 1,2,3,4,5,8,9,10,... or another example of a level shift, 1,1,1,1,2,2,2,2, and/or the detection of time trend breaks. Just as a Pulse is a difference of a Step, a Step is a difference of a Trend. We have extended the theory of Intervention Detection to the 4th dimension, i.e. Trend Point Change. In terms of openness, I have been able to implement such Intervention Detection schemes in conjunction with both ARIMA and Transfer Function Models. I am one of the senior time-series statisticians who have collaborated in the development of AUTOBOX, which incorporates these features. I am unaware of anyone else who has programmed this exciting innovation. Perhaps someone else can comment on an R package that might do that, but I don't think so.
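AUTOBOX's procedures are proprietary, but the flavour of level-shift detection can be conveyed with a toy sketch: pick the breakpoint that minimizes the combined squared error of a two-segment constant fit (my own minimal illustration, not AUTOBOX's algorithm):

```python
def level_shift(series):
    """Index after which a single level shift best explains the data
    (two-segment constant fit by least squares)."""
    def sse(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs)
    n = len(series)
    return min(range(1, n), key=lambda k: sse(series[:k]) + sse(series[k:]))

data = [1, 1, 1, 1, 2, 2, 2, 2]   # the level-shift example from the text
print(level_shift(data))           # 4: the shift occurs after the 4th point
```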
25,295
Given a 10D MCMC chain, how can I determine its posterior mode(s) in R?
Have you considered using a nearest neighbour approach? E.g., build a list of the k nearest neighbours for each of the 100'000 points and then consider the data point with the smallest distance to its kth neighbour a mode. In other words: find the point with the 'smallest bubble' containing k other points around it. I'm not sure how robust this is, and the choice of k obviously influences the results.
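A brute-force sketch of this idea (my own illustrative implementation; it is O(n²), so it is only practical for far fewer than 100'000 points):

```python
import random

def knn_mode(points, k=10):
    """Return the point whose distance to its k-th nearest neighbour is smallest."""
    def kth_dist(p):
        d = sorted(sum((a - b) ** 2 for a, b in zip(p, q))
                   for q in points if q is not p)
        return d[k - 1]
    return min(points, key=kth_dist)

random.seed(0)
# Dense cloud centred at (5, 5) -- the mode -- plus uniform background noise.
pts = [(random.gauss(5, 0.3), random.gauss(5, 0.3)) for _ in range(200)] \
    + [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(50)]
mode = knn_mode(pts, k=15)
print(mode)   # a point near (5, 5)
```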
25,296
Given a 10D MCMC chain, how can I determine its posterior mode(s) in R?
This is only a partial answer. I recently used figtree for multidimensional kernel density estimates. It's a C package and I got it to work fairly easily. However, I only used it to estimate the density at particular points, not calculate summary statistics.
25,297
Given a 10D MCMC chain, how can I determine its posterior mode(s) in R?
If you keep the log likelihoods, you can just select the one with the highest value. Also, if your interest is primarily the mode, just doing an optimization to find the point with the highest log likelihood would suffice.
25,298
Given a 10D MCMC chain, how can I determine its posterior mode(s) in R?
Have you considered 'PRIM / bump hunting'? (See e.g. Section 9.3 of 'The Elements of Statistical Learning' by Hastie, Tibshirani and Friedman, or ask your favourite search engine.) Not sure whether that's implemented in R though. [As far as I understood, you are trying to find the mode of the probability density from which your 100'000 rows are drawn, so your problem would be partially solved by finding an appropriate density estimation method.]
How to find the covariance matrix of a polygon?
Let's do some analysis first. Suppose within the polygon $\mathcal{P}$ its probability density is proportional to a function $p(x,y).$ Then the constant of proportionality is the inverse of the integral of $p$ over the polygon, $$\mu_{0,0}(\mathcal{P})=\iint_{\mathcal P} p(x,y) \mathrm{d}x\,\mathrm{d}y.$$

The barycenter of the polygon is the point of average coordinates, computed as their first moments. The first of these is $$\mu_{1,0}(\mathcal{P})=\frac{1}{\mu_{0,0}(\mathcal{P})} \iint_{\mathcal P} x\,p(x,y)\mathrm{d}x\,\mathrm{d}y,$$ and $\mu_{0,1}(\mathcal{P})$ is defined analogously with $y$ in place of $x.$

The inertial tensor can be represented as the symmetric array of second moments computed after translating the polygon to put its barycenter at the origin: that is, the matrix of central second moments $$\mu^\prime_{k,l}(\mathcal{P}) = \frac{1}{\mu_{0,0}(\mathcal{P})} \iint_{\mathcal P} \left(x - \mu_{1,0}(\mathcal{P})\right)^k\,\left(y - \mu_{0,1}(\mathcal{P})\right)^l\,p(x,y)\mathrm{d}x\,\mathrm{d}y$$ where $(k,l)$ ranges over $(2,0),$ $(1,1),$ and $(0,2).$ The tensor itself--aka covariance matrix--is $$I(\mathcal{P}) = \pmatrix{\mu^\prime_{2,0}(\mathcal{P}) & \mu^\prime_{1,1}(\mathcal{P}) \\ \mu^\prime_{1,1}(\mathcal{P}) & \mu^\prime_{0,2}(\mathcal{P})}.$$

A PCA of $I(\mathcal{P})$ yields the principal axes of $\mathcal{P}:$ these are the unit eigenvectors scaled by their eigenvalues. Next, let's work out how to do the calculations.
Because the polygon is presented as a sequence of vertices describing its oriented boundary $\partial\mathcal P,$ it is natural to invoke Green's Theorem: $$\iint_{\mathcal{P}} \mathrm{d}\omega = \oint_{\partial\mathcal{P}}\omega$$ where $\omega = M(x,y)\mathrm{d}x + N(x,y)\mathrm{d}y$ is a one-form defined in a neighborhood of $\mathcal{P}$ and $$\mathrm{d}\omega = \left(\frac{\partial}{\partial x}N(x,y) - \frac{\partial}{\partial y}M(x,y)\right)\mathrm{d}x\,\mathrm{d}y.$$

For instance, with $\mathrm{d}\omega = x^k y^l \mathrm{d}x\mathrm{d}y$ and constant (i.e., uniform) density $p,$ we may (by inspection) select one of the many solutions, such as $$\omega(x,y) = \frac{-1}{l+1}x^k y^{l+1}\mathrm{d}x.$$

The point of this is that the contour integral follows the line segments determined by the sequence of vertices. Any line segment from vertex $\mathbf{u}$ to vertex $\mathbf{v}$ can be parameterized by a real variable $t$ in the form $$t \to \mathbf{u} + t\mathbf{w}$$ where $\mathbf{w} \propto \mathbf{v}-\mathbf{u}$ is the unit vector in the direction from $\mathbf{u}$ to $\mathbf{v}.$ The values of $t$ therefore range from $0$ to $|\mathbf{v}-\mathbf{u}|.$ Under this parameterization $x$ and $y$ are linear functions of $t,$ and $\mathrm{d}x$ and $\mathrm{d}y$ are linear functions of $\mathrm{d}t.$ Thus the integrand of the contour integral over each edge becomes a polynomial function of $t,$ which is easily evaluated for small $k$ and $l.$

Implementing this analysis is as straightforward as coding its components. At the lowest level we will need a function to integrate a polynomial one-form over a line segment. Higher level functions will aggregate these to compute the raw and central moments to obtain the barycenter and inertial tensor, and finally we can operate on that tensor to find the principal axes (which are its scaled eigenvectors). The R code below performs this work.
It makes no pretensions of efficiency: it is intended only to illustrate the practical application of the foregoing analysis. Each function is straightforward and the naming conventions parallel those of the analysis. Included in the code is a procedure to generate valid closed, simply connected, non-self-intersecting polygons (by randomly deforming points along a circle and including the starting vertex as its final point in order to create a closed loop). Following this are a few statements to plot the polygon, display its vertices, adjoin the barycenter, and plot the principal axes in red (largest) and blue (smallest), creating a polygon-centric positively-oriented coordinate system.

#
# Integrate a monomial one-form x^k*y^l*dx along the line segment given as an
# origin, unit direction vector, and distance.
#
lintegrate <- function(k, l, origin, normal, distance) {
  # Binomial theorem expansion of (u + tw)^k
  expand <- function(k, u, w) {
    i <- seq_len(k+1)-1
    u^i * w^rev(i) * choose(k,i)
  }
  # Construction of the product of two polynomials times a constant.
  omega <- normal[1] * convolve(rev(expand(k, origin[1], normal[1])),
                                expand(l, origin[2], normal[2]),
                                type="open")
  # Integrate the resulting polynomial from 0 to `distance`.
  sum(omega * distance^seq_along(omega) / seq_along(omega))
}
#
# Integrate monomials along a piecewise linear path given as a sequence of
# (x,y) vertices.
#
cintegrate <- function(xy, k, l) {
  n <- dim(xy)[1]-1 # Number of edges
  sum(sapply(1:n, function(i) {
    dv <- xy[i+1,] - xy[i,] # The direction vector
    lambda <- sum(dv * dv)
    if (isTRUE(all.equal(lambda, 0.0))) {
      0.0
    } else {
      lambda <- sqrt(lambda) # Length of the direction vector
      -lintegrate(k, l+1, xy[i,], dv/lambda, lambda) / (l+1)
    }
  }))
}
#
# Compute moments of inertia.
#
inertia <- function(xy) {
  mass <- cintegrate(xy, 0, 0)
  barycenter <- c(cintegrate(xy, 1, 0), cintegrate(xy, 0, 1)) / mass
  uv <- t(t(xy) - barycenter) # Recenter the polygon to obtain central moments
  i <- matrix(0.0, 2, 2)
  i[1,1] <- cintegrate(uv, 2, 0)
  i[1,2] <- i[2,1] <- cintegrate(uv, 1, 1)
  i[2,2] <- cintegrate(uv, 0, 2)
  list(Mass=mass, Barycenter=barycenter, Inertia=i / mass)
}
#
# Find principal axes of an inertial tensor.
#
principal.axes <- function(i.xy) {
  obj <- eigen(i.xy)
  t(t(obj$vectors) * obj$values)
}
#
# Construct a polygon.
#
circle <- t(sapply(seq(0, 2*pi, length.out=11), function(a) c(cos(a), sin(a))))
set.seed(17)
radii <- (1 + rgamma(dim(circle)[1]-1, 3, 3))
radii <- c(radii, radii[1]) # Closes the loop
xy <- circle * radii
#
# Compute principal axes.
#
i.xy <- inertia(xy)
axes <- principal.axes(i.xy$Inertia)
sign <- sign(det(axes))
#
# Plot barycenter and principal axes.
#
plot(xy, bty="n", xaxt="n", yaxt="n", asp=1, xlab="x", ylab="y",
     main="A random polygon\nand its principal axes", cex.main=0.75)
polygon(xy, col="#e0e0e080")
arrows(rep(i.xy$Barycenter[1], 2), rep(i.xy$Barycenter[2], 2),
       -axes[1,] + i.xy$Barycenter[1],      # The -signs make the first axis ..
       -axes[2,]*sign + i.xy$Barycenter[2], # .. point to the right or down.
       length=0.1, angle=15, col=c("#e02020", "#4040c0"), lwd=2)
points(matrix(i.xy$Barycenter, 1, 2), pch=21, bg="#404040")
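Results of the Green's-theorem computation can be sanity-checked against brute-force Monte Carlo. The following Python sketch (function name mine) rejection-samples uniform points in a polygon via the even-odd ray-crossing test and returns their sample covariance, which should agree with the inertial tensor up to sampling error:

```python
import numpy as np

def polygon_covariance_mc(vertices, n=200_000, seed=0):
    """Estimate the covariance matrix of the uniform distribution on a
    polygon by rejection sampling from its bounding box, using the
    even-odd (ray crossing) rule for the point-in-polygon test."""
    v = np.asarray(vertices, dtype=float)
    rng = np.random.default_rng(seed)
    lo, hi = v.min(axis=0), v.max(axis=0)
    pts = rng.uniform(lo, hi, size=(n, 2))
    inside = np.zeros(n, dtype=bool)
    m = len(v)
    for i in range(m):
        x1, y1 = v[i]
        x2, y2 = v[(i + 1) % m]
        # An edge toggles 'inside' where a horizontal ray from the point
        # crosses it (horizontal edges never satisfy `crosses`).
        crosses = (y1 <= pts[:, 1]) != (y2 <= pts[:, 1])
        with np.errstate(divide="ignore", invalid="ignore"):
            xint = x1 + (pts[:, 1] - y1) * (x2 - x1) / (y2 - y1)
        inside ^= crosses & (pts[:, 0] < xint)
    return np.cov(pts[inside].T)

# Sanity check on a unit square centered at the origin, whose covariance
# is known to be I/12 (the variance of Uniform(-1/2, 1/2) is 1/12).
square = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
cov = polygon_covariance_mc(square)
```

This works for nonconvex simple polygons too, at the cost of wasting draws that land in the bounding box but outside the polygon.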
Edit: Didn't notice that whuber had already answered. I'll leave this up as an example of another (perhaps less elegant) approach to the problem.

The covariance matrix

Let $(X,Y)$ be a random point from the uniform distribution on a polygon $P$ with area $A$. The covariance matrix is: $$C = \begin{bmatrix} C_{XX} & C_{XY} \\ C_{XY} & C_{YY} \end{bmatrix}$$ where $C_{XX} = E[X^2]$ is the variance of $X$, $C_{YY} = E[Y^2]$ is the variance of $Y$, and $C_{XY} = E[XY]$ is the covariance between $X$ and $Y$. This assumes zero mean, since the polygon's center of mass is located at the origin. The uniform distribution assigns constant probability density $\frac{1}{A}$ to every point in $P$, so: $$C_{XX} = \frac{1}{A} \underset{P}{\iint} x^2 dV \quad C_{YY} = \frac{1}{A} \underset{P}{\iint} y^2 dV \quad C_{XY} = \frac{1}{A} \underset{P}{\iint} x y dV \tag{1}$$

Triangulation

Instead of trying to directly integrate over a complicated region like $P$, we can simplify the problem by partitioning $P$ into $n$ triangular subregions: $$P = T_1 \cup \cdots \cup T_n$$ In your example, one possible partitioning triangulates the polygon into such subregions (figure omitted). There are various ways to produce a triangulation (see here). For example, you could compute the Delaunay triangulation of the vertices, then discard edges that fall outside $P$ (since it may be nonconvex as in the example). Integrals over $P$ can then be split into sums of integrals over the triangles: $$C_{XX} = \frac{1}{A} \sum_{i=1}^n \underset{T_i}{\iint} x^2 dV \quad C_{YY} = \frac{1}{A} \sum_{i=1}^n \underset{T_i}{\iint} y^2 dV \quad C_{XY} = \frac{1}{A} \sum_{i=1}^n \underset{T_i}{\iint} x y dV \tag{2}$$ A triangle has nice, simple boundaries so these integrals are easier to evaluate.

Integrating over triangles

There are various ways to integrate over triangles. In this case, I used a trick that involves mapping a triangle to the unit square. Transforming to barycentric coordinates might be a better option.
Here are solutions for the integrals above, for an arbitrary triangle $T$ of area $A_T$ defined by vertices $(x_1,y_1), (x_2,y_2), (x_3,y_3)$. Let: $$v_x = \left[ \begin{smallmatrix} x_1 \\ x_2 \\ x_3 \end{smallmatrix} \right] \quad v_y = \left[ \begin{smallmatrix} y_1 \\ y_2 \\ y_3 \end{smallmatrix} \right] \quad \vec{1} = \left[ \begin{smallmatrix} 1 \\ 1 \\ 1 \end{smallmatrix} \right] \quad L = \left[ \begin{smallmatrix} 1 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{smallmatrix} \right]$$ Then: $$\underset{T}{\iint} x^2 dV = \frac{A_T}{6} \text{Tr}(v_x v_x^T L) \quad \underset{T}{\iint} y^2 dV = \frac{A_T}{6} \text{Tr}(v_y v_y^T L) \quad \underset{T}{\iint} x y dV = \frac{A_T}{12} (\vec{1}^T v_x v_y^T \vec{1} + v_x^T v_y) \tag{3}$$

Putting everything together

Let $v_x^i$ and $v_y^i$ contain the x/y coordinates of the vertices for each triangle $T_i$, as above, let $A_i$ be the area of $T_i$, and note that the polygon's area is $A = \sum_i A_i$. Plug $(3)$ into $(2)$ for each triangle, keeping each triangle's area as a weight (the areas do not cancel: e.g. for a centered unit square split into two triangles, dropping them would give $C_{XX} = 1/6$ instead of the correct $1/12$). This gives the solution: $$C_{XX} = \frac{1}{6A} \sum_{i=1}^n A_i\,\text{Tr} \big( v_x^i (v_x^i)^T L \big) \quad C_{YY} = \frac{1}{6A} \sum_{i=1}^n A_i\,\text{Tr} \big( v_y^i (v_y^i)^T L \big) \quad C_{XY} = \frac{1}{12A} \sum_{i=1}^n A_i \big( \vec{1}^T v_x^i (v_y^i)^T \vec{1} + (v_x^i)^T v_y^i \big) \tag{4}$$

Principal axes

The principal axes are given by the eigenvectors of the covariance matrix $C$, just as in PCA. Unlike PCA, we have an analytic expression for $C$, rather than having to estimate it from sampled data points. Note that the vertices themselves are not a representative sample from the uniform distribution on $P$, so one can't simply take the sample covariance matrix of the vertices. But $C$ is a relatively simple function of the vertices, as seen in $(4)$.
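The triangulation approach is easy to implement directly. Here is a Python sketch (function name mine) that uses a fan triangulation from vertex 0 — valid for convex polygons; a nonconvex one needs a proper triangulation as described above — with each triangle's integral weighted by its own area $A_i$ and the sum normalized by the total area:

```python
import numpy as np

# The lower-triangular matrix L from the per-triangle formulas:
# Tr(v v^T L) = sum over i <= j of v_i * v_j.
L = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1]], dtype=float)

def polygon_covariance(vertices):
    """Covariance of the uniform distribution on a polygon whose center of
    mass is at the origin, via a fan triangulation from vertex 0."""
    v = np.asarray(vertices, dtype=float)
    C = np.zeros((2, 2))
    area = 0.0
    for i in range(1, len(v) - 1):
        p1, p2, p3 = v[0], v[i], v[i + 1]
        # Triangle area from the cross product of two edge vectors.
        a = 0.5 * abs((p2[0] - p1[0]) * (p3[1] - p1[1])
                      - (p2[1] - p1[1]) * (p3[0] - p1[0]))
        vx = np.array([p1[0], p2[0], p3[0]])
        vy = np.array([p1[1], p2[1], p3[1]])
        # Per-triangle second moments about the origin.
        C[0, 0] += a / 6.0 * np.trace(np.outer(vx, vx) @ L)
        C[1, 1] += a / 6.0 * np.trace(np.outer(vy, vy) @ L)
        C[0, 1] += a / 12.0 * (vx.sum() * vy.sum() + vx @ vy)
        area += a
    C[1, 0] = C[0, 1]
    return C / area

# Check on a unit square centered at the origin: the covariance of the
# uniform distribution there is I/12.
square = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
C = polygon_covariance(square)
```

The principal axes then follow from `np.linalg.eigh(C)`, exactly as described under "Principal axes".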