10,301
From a statistical perspective, can one infer causality using propensity scores with an observational study?
Only a prospective randomized trial can determine causality. In observational studies, there will always be the chance of an unmeasured or unknown covariate, which makes ascribing causality impossible. However, observational studies can provide evidence of a strong association between x and y, and are therefore useful for hypothesis generation. These hypotheses then need to be confirmed with a randomized trial.
10,302
From a statistical perspective, can one infer causality using propensity scores with an observational study?
The question seems to involve two things that really ought to be considered separately.

First is whether one can infer causality from an observational study at all. On that you might contrast the views of, say, Pearl (2009), who argues yes, so long as you can model the process properly, versus the view of @propofol, who will find many allies in experimental disciplines and who may share some of the thoughts expressed in a rather obscure but nonetheless good essay by Gerber et al. (2004).

Second, assuming that you do think causality can be inferred from observational data, you might wonder whether propensity score methods are useful in doing so. Propensity score methods include various conditioning strategies as well as inverse propensity weighting. A nice review is given by Lunceford and Davidian (2004). These methods have good properties, but certain assumptions (most importantly, "conditional independence") are required for them to be consistent.

A little wrinkle, though: propensity score matching and weighting are also used in the analysis of randomized experiments, for example when there is an interest in computing "indirect effects", or when there are problems of potentially non-random attrition or dropout (in which case what you have resembles an observational study).

References

Gerber A, et al. 2004. "The illusion of learning from observational research." In Shapiro I, et al., Problems and Methods in the Study of Politics, Cambridge University Press.

Lunceford JK, Davidian M. 2004. "Stratification and weighting via the propensity score in estimation of causal treatment effects: a comparative study." Statistics in Medicine 23(19):2937–2960.

Pearl J. 2009. Causality (2nd ed.), Cambridge University Press.
10,303
From a statistical perspective, can one infer causality using propensity scores with an observational study?
Conventional wisdom states that only randomized controlled trials ("real" experiments) can identify causality. However, it is not as simple as that.

One reason that randomization may not be enough is that in "small" samples the law of large numbers is not "strong enough" to ensure that each and every difference is balanced. The question is: what is "too small", and where does "big enough" start? Saint-Mont (2015) argues that "big enough" may well start in the thousands (n > 1000)! After all, the point is to balance differences between groups, to control for differences. So even in experiments, great care should be taken to balance differences between groups. According to the calculations of Saint-Mont (2015), in smaller samples one may be considerably better off with matched (manually balanced) samples.

As to probability: of course, probability is never able to give a conclusive answer, unless the probability is extreme (zero or one). However, in science we frequently find ourselves confronted with situations where we are unable to provide a conclusive answer because the subject matter is difficult. Hence the need for probability, which is nothing more than a way to express our uncertainty in a statement. As such, it is akin to logic; see Briggs (2016). So probability will help us, but it will not give conclusive answers, no certainty. It is, however, of great use in expressing uncertainty.

Note also that causality is not primarily a statistical question. Suppose two means differ "significantly". Does that mean the grouping variable is the cause of the difference in the measured variable? No (not necessarily). No matter which particular statistic one uses (propensity scores, p-values, Bayes factors and so on), such methods are practically never enough to back up causal claims.
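As an illustration only (a toy simulation of my own, not Saint-Mont's actual calculation), one can check how slowly randomization balances a single binary covariate between two groups:

```python
import random

def mean_imbalance(n_per_group, trials=2000, seed=1):
    """Average absolute difference between two randomly assigned groups
    in the share of a 50/50 binary covariate (a crude proxy for balance)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        covariate = [rng.random() < 0.5 for _ in range(2 * n_per_group)]
        rng.shuffle(covariate)                  # random assignment to groups
        g1 = covariate[:n_per_group]
        g2 = covariate[n_per_group:]
        total += abs(sum(g1) - sum(g2)) / n_per_group
    return total / trials

# Randomization balances covariates only on average; with small groups
# sizeable imbalances remain, and they shrink only roughly like 1/sqrt(n).
assert mean_imbalance(25) > mean_imbalance(2500)
```

With 25 subjects per arm the typical imbalance is on the order of ten percentage points, while with 2500 per arm it is around one, which is the flavor of the "big enough may start in the thousands" argument.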
10,304
Markov Process that depends on present state and past state
Technically, both of the processes you describe are Markov chains. The difference is that the first is a first-order Markov chain, whereas the second is a second-order Markov chain. And yes, you can transform a second-order Markov chain into a first-order Markov chain by a suitable change in the state-space definition. Let me explain via an example.

Suppose that we want to model the weather as a stochastic process, and suppose that on any given day the weather can be rainy, sunny or cloudy. Let $W_t$ be the weather on any particular day, and let us denote the possible states by the symbols $R$ (for rainy), $S$ (for sunny) and $C$ (for cloudy).

First-order Markov chain: $P(W_t = w | W_{t-1}, W_{t-2}, W_{t-3}, \ldots) = P(W_t = w | W_{t-1})$

Second-order Markov chain: $P(W_t = w | W_{t-1}, W_{t-2}, W_{t-3}, \ldots) = P(W_t = w | W_{t-1}, W_{t-2})$

The second-order Markov chain can be transformed into a first-order Markov chain by re-defining the state space as follows. Define $Z_{t-1,t}$ as the weather on two consecutive days. In other words, the state space can take one of the following values: $RR$, $RC$, $RS$, $CR$, $CC$, $CS$, $SR$, $SC$ and $SS$. With this re-defined state space we have:

$P(Z_{t-1,t} = z_{t-1,t} | Z_{t-2,t-1}, Z_{t-3,t-2}, \ldots) = P(Z_{t-1,t} = z_{t-1,t} | Z_{t-2,t-1})$

The above is clearly a first-order Markov chain on the re-defined state space. The one difference from the second-order Markov chain is that the re-defined chain needs to be specified with two initial starting states, i.e., the chain must be started with some assumption about the weather on day 1 and on day 2.
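To make the re-definition concrete, here is a small Python sketch (the transition probabilities are invented purely for illustration; the thread itself contains no code):

```python
from itertools import product

states = ["R", "S", "C"]          # rainy, sunny, cloudy

def p_next(a, b, w):
    """Made-up second-order rule P(W_t = w | W_{t-2} = a, W_{t-1} = b)."""
    if w == b:                    # weather tends to persist
        return 0.6
    if a == b:                    # two days ago adds no new information
        return 0.2
    return 0.3 if w == a else 0.1 # the day-before-yesterday state gets a boost

# Lift to a first-order chain on pair states Z_{t-1,t} = (W_{t-1}, W_t):
# (a, b) -> (c, d) is possible only when b == c, and then has probability
# P(W_t = d | W_{t-2} = a, W_{t-1} = b).
pair_states = list(product(states, states))
P = {}
for (a, b) in pair_states:
    P[(a, b)] = {(c, d): (p_next(a, b, d) if b == c else 0.0)
                 for (c, d) in pair_states}

for z in pair_states:             # every row is a probability distribution
    assert abs(sum(P[z].values()) - 1.0) < 1e-12
assert P[("R", "S")][("C", "C")] == 0.0   # impossible: pair states don't overlap
assert P[("R", "S")][("S", "R")] == 0.3   # history R boosts a return to R
assert P[("C", "S")][("S", "R")] == 0.1   # same current state, different history
```

The last two assertions show why the lifting is needed: the same current weather $S$ leads to different transition probabilities depending on the day before, which a first-order chain on the original three states could not express.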
10,305
Markov Process that depends on present state and past state
The definition of a Markov process says the next step depends on the current state only and on no past states. That is the Markov property, and it defines a first-order MC, which is very tractable mathematically and quite easy to present and explain. Of course you can also have an $n^{th}$-order MC (where the next state depends on the current state and the past $n-1$ states) as well as variable-order MCs (where the length of the memory is not fixed but depends on the previous states). $n^{th}$-order MCs retain an explicit formulation for the stationary distribution, but, as you pointed out, the size of the state matrix grows with $n$, such that an unrestricted $n^{th}$-order MC with $k$ states has $O(k^{2n})$ entries in its state matrix. You may want to have a look at recent papers such as "Higher-order multivariate Markov chains and their applications", as this field is advancing quite fast.
10,306
Exact two sample proportions binomial test in R (and some strange p-values)
If you are looking for an 'exact' test for two binomial proportions, I believe you are looking for Fisher's exact test. In R it is applied like so:

> fisher.test(matrix(c(17, 25-17, 8, 20-8), ncol=2))

        Fisher's Exact Test for Count Data

data:  matrix(c(17, 25 - 17, 8, 20 - 8), ncol = 2)
p-value = 0.07671
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
  0.7990888 13.0020065
sample estimates:
odds ratio
  3.101466

The fisher.test function accepts a matrix object of the 'successes' and 'failures' of the two binomial proportions. As you can see, however, the two-sided test is still not significant, sorry to say. That said, Fisher's exact test is typically only applied when a cell count is low (typically 5 or less, though some say 10), so your initial use of prop.test is more appropriate.

Regarding your binom.test calls, you are misunderstanding what they do. When you run binom.test(x=17, n=25, p=8/20) you are testing whether the proportion 17/25 differs significantly from a population where the probability of success is 8/20. Likewise, binom.test(x=8, n=20, p=17/25) fixes the probability of success at 17/25, which is why the two p-values differ. In neither case are you comparing the two proportions to each other.
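For readers outside R, the same computations can be reproduced from first principles. The following stdlib-only Python sketch (my own, not from the thread) mirrors what fisher.test and the two binom.test calls compute:

```python
from math import comb

def fisher_two_sided(a, b, c, d):
    """Fisher's exact test for the 2x2 table [[a, b], [c, d]]: sum the
    hypergeometric probabilities of all tables with the same margins
    that are no more likely than the observed one."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def hyper(x):
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)
    p_obs = hyper(a)
    return sum(hyper(x)
               for x in range(max(0, c1 - r2), min(r1, c1) + 1)
               if hyper(x) <= p_obs * (1 + 1e-7))

# The table from the R call above: 17 wins / 8 losses vs 8 wins / 12 losses.
p = fisher_two_sided(17, 8, 8, 12)
assert abs(p - 0.07671) < 2e-3          # agrees with R's fisher.test output

def binom_two_sided(x, n, p0):
    """Exact two-sided binomial test against a fixed success probability."""
    def pmf(k):
        return comb(n, k) * p0**k * (1 - p0) ** (n - k)
    p_obs = pmf(x)
    return sum(pmf(k) for k in range(n + 1) if pmf(k) <= p_obs * (1 + 1e-7))

# The two one-sample calls test different null hypotheses, so their
# p-values differ -- neither compares the two proportions to each other.
assert binom_two_sided(17, 25, 8 / 20) != binom_two_sided(8, 20, 17 / 25)
```

The two-sided p-value here follows the common convention (also R's) of summing all outcomes whose probability under the null does not exceed that of the observed outcome.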
10,307
Exact two sample proportions binomial test in R (and some strange p-values)
There is a difference between comparing two samples and comparing one sample to a known hypothesis. Suppose someone flips a coin 100 times and gets heads 55 times, and the hypothesis is a fair coin; contrast that with two people each flipping a coin of unknown fairness, one getting heads 55 times and the other 45 times. In the former case you are simply trying to identify whether the flipper appears to be flipping a fair coin. In the latter, you are asking whether they are flipping coins of the same fairness. You can see how testing each player against a known probability (45 vs. 50 and 55 vs. 50) differs from comparing them to each other (45 vs. 55).
10,308
Exact two sample proportions binomial test in R (and some strange p-values)
The syntax of binom.test is: your successes within a number of trials, compared to a population point estimate. Although you entered it as p=8/20, the calculation proceeds as if that were a God-given, absolute-truth 0.4 with zero variance around it. It is as if you were comparing player A's 17 wins out of 25 to player B's hypothetical 8 billion wins out of 20 billion games. However, prop.test compares the proportion 17/25, with all its potential variance, to the proportion 8/20, with all of its own variance. In other words, the variance around 0.68 (the estimate from 17/25) and the variance around 0.4 may bleed into one another, with a resultant p = 0.06.
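A minimal Python sketch (mine, not from the thread) of that contrast: an exact one-sample test against a fixed 0.4, versus a two-sample z-test in which both proportions carry sampling variance (prop.test additionally applies a continuity correction, omitted here for brevity):

```python
from math import comb, erfc, sqrt

def binom_two_sided(x, n, p0):
    """Exact one-sample test: treats p0 as a known, variance-free constant."""
    def pmf(k):
        return comb(n, k) * p0**k * (1 - p0) ** (n - k)
    p_obs = pmf(x)
    return sum(pmf(k) for k in range(n + 1) if pmf(k) <= p_obs * (1 + 1e-7))

p_fixed = binom_two_sided(17, 25, 8 / 20)

# Two-sample z-test of the difference in proportions: the pooled-variance
# standard error reflects the uncertainty in BOTH 17/25 and 8/20.
p1, p2, n1, n2 = 17 / 25, 8 / 20, 25, 20
pooled = (17 + 8) / (n1 + n2)
z = (p1 - p2) / sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
p_both = erfc(abs(z) / sqrt(2))   # two-sided normal p-value

# Treating 0.4 as exact makes the evidence look much stronger than it is.
assert p_fixed < p_both
```

The two-sample p-value lands in the neighborhood of the 0.06 mentioned above, while the one-sample test, with one source of variance assumed away, is far smaller.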
10,309
Exact two sample proportions binomial test in R (and some strange p-values)
First, I would suggest that you want to use a continuity correction, since you are approximating a discrete distribution with a continuous (chi-square) distribution.

Second, it is important to be clear on how the "experiment", if you will, was conducted. Were the numbers of games that each person played determined in advance (or, in the vernacular of the industry, fixed by design)? If so, and further assuming each player's results are independent of the other's, you are dealing with the product of two binomial distributions. If instead the numbers of games were free to vary (say, based on the number of games each player was able to complete in a fixed time frame), then you are dealing with a multinomial or Poisson distribution. In the second case the chi-square test (or, equivalently, a z-test of the difference in proportions) is appropriate, but in the former case it is not.

In the first case, you really need to calculate the exact product-binomial probability of every possible pair of outcomes, and sum these probabilities over all pairs that are equally or less likely than the pair of outcomes that was observed (it is simply the product of the two binomials because each player's results are independent of the other player's results). Recognize first that the central purpose of any hypothesis test is to calculate just how "rare" or unusual the specific outcome you have observed is, compared to all other possible outcomes. This is done by computing the probability of the outcome you have observed, given that the null hypothesis is true, summed together with the probabilities of all other possible outcomes of equal or lower probability. It bears repeating that what we mean by "how rare" is "how low is the probability of observing the outcome obtained, compared to all other possible outcomes?"

Well, the probability of the specific outcome we have observed is 0.0679 * 0.0793 = 0.005115. Now consider a specific alternative outcome: it is certainly possible that player A could have won 7 of his 20 games and player B could have won 13 of his 25 games. The probability of this outcome is 0.004959. Note that this is LOWER than the probability of our observed outcome, so it should be included in the p-value. But look again: if you decide which outcomes to include in your sum based on whether the difference in proportions exceeds the difference in proportions in our observed outcome, this probability will be excluded! Why? Because the difference in proportions for this specific outcome is less than the difference in proportions for our observed outcome. But that is not the proper focus: we must be concerned with the probability of this specific outcome and whether it is equal to or less than the probability of the outcome we have observed!

A good formal explanation of this can be found here: http://data.princeton.edu/wws509/notes/c5.pdf Note specifically the statement on page 9 that "If the row margin is fixed and sampling scheme is binomial then we must use the product binomial model, because we can not estimate the joint distribution for the two variables without further information."
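The enumeration described above can be sketched as follows (stdlib Python; using the pooled win probability as the null value is my assumption, since the answer does not state which null it used for its constants):

```python
from math import comb

def pmf(x, n, p):
    """Binomial probability of x successes in n trials."""
    return comb(n, x) * p**x * (1 - p) ** (n - x)

# Observed data from the thread: 17 wins in 25 games vs 8 wins in 20 games,
# with both sample sizes fixed by design.
p0 = (17 + 8) / (25 + 20)                 # pooled null win probability
p_obs = pmf(17, 25, p0) * pmf(8, 20, p0)  # product-binomial probability

# Exact p-value: sum the product-binomial probability of every outcome
# pair (a, b) that is no more likely than the observed pair.
p_value = sum(pmf(a, 25, p0) * pmf(b, 20, p0)
              for a in range(26)
              for b in range(21)
              if pmf(a, 25, p0) * pmf(b, 20, p0) <= p_obs * (1 + 1e-7))

assert p_obs <= p_value < 1
```

The ordering criterion here is exactly the one the answer argues for: outcomes enter the sum by their probability under the null, not by their difference in proportions.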
10,310
Is the W statistic output by wilcox.test() in R the same as the U statistic?
Wilcoxon is generally credited with being the original inventor of the test*, though Mann and Whitney's approach was a great stride forward, and they extended the cases for which the statistic was tabulated. My preference is to refer to the test as the Wilcoxon-Mann-Whitney, to recognize both contributions (Mann-Whitney-Wilcoxon is also seen; I don't mind that either). * However, the actual picture is a little more cloudy, with several other authors also coming up with the same or similar statistics about this time or earlier, or in some cases making contributions that are closely connected to the test. At least some of the credit should go elsewhere. The Wilcoxon test and the Mann-Whitney U test are equivalent (and the help states that they are) in that they always reject the same cases under the same circumstances; at most their test statistics will only differ by a shift (and in some cases, just possibly a sign change). The Wilcoxon test is defined in more than one way in the literature (and that ambiguity dates back to the original tabulation of the test statistic, more on than in a moment), so one must take care with which Wilcoxon test is being discussed. The two most common forms of definition are discussed in this pair of posts: Wilcoxon rank sum test in R Different ways to calculate the test statistic for the Wilcoxon rank sum test To address what, specifically, happens in R: The statistic used by wilcox.test in R is defined in the help (?wilcox.test), and the question of the relationship to the Mann-Whitney U statistic is explained there: The literature is not unanimous about the definitions of the Wilcoxon rank sum and Mann-Whitney tests The two most common definitions correspond to the sum of the ranks of the first sample with the minimum value subtracted or not: R subtracts and S-PLUS does not, giving a value which is larger by m(m+1)/2 for a first sample of size m. 
(It seems Wilcoxon's original paper used the unadjusted sum of the ranks but subsequent tables subtracted the minimum.) R's value can also be computed as the number of all pairs (x[i], y[j]) for which y[j] is not greater than x[i], the most common definition of the Mann-Whitney test. This last sentence completely answers that aspect of your question - the version of W that R puts out* is also the value of U. * The sum of the ranks in sample 1, minus the smallest value it can take (i.e. minus $\frac{n_1(n_1+1)}{2}$).
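That identity is easy to check directly. Here is a small illustration (in Python rather than R, using the midrank convention for ties, with tied pairs counted as 1/2 in U) that computes the statistic both ways:

```python
# Compute the rank-sum form of W (the form R's wilcox.test reports) and the
# pair-counting form of the Mann-Whitney U, and check that they coincide.

def rank_sum_W(x, y):
    """Sum of midranks of x in the pooled sample, minus n1*(n1+1)/2."""
    pooled = sorted(x + y)
    midrank = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        midrank[pooled[i]] = (i + 1 + j) / 2.0  # average of ranks i+1 .. j
        i = j
    n1 = len(x)
    return sum(midrank[v] for v in x) - n1 * (n1 + 1) / 2.0

def pair_count_U(x, y):
    """Number of pairs (x_i, y_j) with y_j < x_i, ties counted as 1/2."""
    return sum(1.0 if yj < xi else 0.5 if yj == xi else 0.0
               for xi in x for yj in y)

x = [1.1, 2.5, 4.0, 4.0]
y = [0.7, 2.5, 3.9]
assert rank_sum_W(x, y) == pair_count_U(x, y) == 8.5
```

With ties present, the help's phrase "not greater than" glosses over the usual convention of counting each tied pair as 1/2, which is what makes the two forms agree with midranks.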
10,311
Is the W statistic output by wilcox.test() in R the same as the U statistic?
Both the Wilcoxon rank sum test and the Mann-Whitney test are the non-parametric equivalents of the independent t-test. In some cases the version of W that R gives is also the value of U, but not in all cases. When you use: wilcox.test(df$var1 ~ df$var2, paired=FALSE) the given W is the same as U, so you may report it as the Mann-Whitney U statistic. However, when you use: wilcox.test(df$var1 ~ df$var2, paired=TRUE), you are actually performing a Wilcoxon signed rank test. The Wilcoxon signed rank test is the equivalent of the dependent t-test. Source: "Discovering Statistics Using R" by Andy Field (2013)
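To make the distinction concrete, here is a minimal sketch (in Python rather than R, and ignoring ties among the absolute differences) of the signed-rank statistic, which operates on within-pair differences rather than on two pooled independent samples:

```python
# Wilcoxon signed-rank V: rank the absolute within-pair differences,
# then sum the ranks belonging to positive differences.
def signed_rank_V(x, y):
    d = [a - b for a, b in zip(x, y) if a != b]  # zero differences dropped
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0] * len(d)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return sum(r for r, di in zip(ranks, d) if di > 0)

# All differences positive -> V is the maximum possible, n*(n+1)/2:
assert signed_rank_V([2, 3, 5], [1, 1, 1]) == 6
```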
10,312
Is the W statistic output by wilcox.test() in R the same as the U statistic?
Note however, that the code: wilcox.test(df$var1 ~ df$var2, paired=FALSE) (using '~') will produce a different W statistic than a: wilcox.test(df$var1, df$var2, paired=FALSE) (using ',')
10,313
Do negative probabilities/probability amplitudes have applications outside quantum mechanics?
Yes. I like the article Søren shared very much, and together with the references in that article I would recommend Muckenheim, W. et al. (1986). A Review of Extended Probabilities. Phys. Rep. 133 (6) 337-401. It's a physics paper for sure, but the applications there are not all related to quantum physics. My personal favorite application relates to de Finetti's Theorem (also Bayesian in flavor): if we don't mind negative probabilities then it turns out that all exchangeable sequences (even finite, perhaps negatively correlated ones) are a (signed) mixture of IID sequences. Of course, this itself has applications in quantum mechanics, in particular, that Fermi-Dirac statistics yield the same type of (signed) mixture representation that Bose-Einstein statistics do. My second personal favorite application (outside of physics proper) relates to infinitely divisible (ID) distributions, which classically include the normal, gamma, Poisson, ... the list continues. It isn't too hard to show that ID distributions must have unbounded support, which immediately kills distributions like the binomial or uniform (discrete+continuous) distributions. But if we permit negative probabilities then these problems disappear and the binomial, uniform (discrete+continuous), and a whole bunch of other distributions then become infinitely divisible - in this extended sense, please bear in mind. ID distributions relate to statistics in that they are limiting distributions in generalized central limit theorems. By the way, the first application is whispered folklore among probabilists and the infinite divisibility stuff is proved here, an informal electronic copy being here. Presumably there is a bunch of material on arXiv, too, though I haven't checked there in quite some time. As a final remark, whuber is absolutely right that it isn't really legal to call anything a probability that doesn't lie in $[0,1]$, at the very least, not for the time being. 
Given that "negative probabilities" have been around for so long I don't see this changing in the near future, not without some kind of colossal breakthrough.
10,314
Do negative probabilities/probability amplitudes have applications outside quantum mechanics?
QM does not use negative or imaginary probabilities: if it did, they would no longer be probabilities! What can be (and usually is) a complex value is the quantum mechanical wave function $\psi$. From it the probability amplitude (which is a bona fide probability density) can be constructed; it is variously written $\langle\psi|\psi\rangle$ or $\|\psi\|^2$. When $\psi$ has (complex) scalar values, $\|\psi\|^2 = \psi^* \psi$. In every case these values are nonnegative real numbers. For details, see the section on "Postulates of Quantum Mechanics" in the Wikipedia article.
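A tiny numerical illustration of that construction (using Python's built-in complex type; the two amplitudes below are just made-up values):

```python
import math

# A normalized two-state vector of complex amplitudes psi.
psi = [complex(1, 1) / 2, complex(0, 1) / math.sqrt(2)]

# The probabilities psi* psi are nonnegative real numbers summing to 1,
# even though the amplitudes themselves are complex.
probs = [(a.conjugate() * a).real for a in psi]
assert all(p >= 0 for p in probs)
assert abs(sum(probs) - 1.0) < 1e-12
```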
10,315
Do negative probabilities/probability amplitudes have applications outside quantum mechanics?
I'm of the opinion that "What's the application of this theory?" is a question that students of a theory should have to answer. Professor McGonagall spends all her time teaching and researching; it's up to her students to go find a use for the stuff in the world. (At least that's a kind-of defensible position, and the view I'll take just now.) So perhaps the question should be: first, understand the algebra of quantum interactions (von Neumann algebra); then, look for things in the world which behave this way. Instead of "Who else has already done this work?" That said, one example that's tantalised me for a few years is V Danilov & A Lambert-Mogiliansky's use of von Neumann algebra in decision theory. Explicitly it is not about "quantum mechanics in the brain", but rather that "interfering (mental) states" might be a more accurate explanation of consumer behaviour than the usual picture.
10,316
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
Some very good books: "Statistics for Experimenters: Design, Innovation, and Discovery, 2nd Edition" by Box, Hunter & Hunter. This is formally an introductory text (more for chemistry & engineering people) but extremely good on the applied side. "Data Analysis Using Regression and Multilevel/Hierarchical Models" by Andrew Gelman & Jennifer Hill. Very good on application of regression modelling. "The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition" (Springer Series in Statistics) 2nd (2009) Corrected Edition by Trevor Hastie, Robert Tibshirani & Jerome Friedman. More theoretical than the first two in my list, but also extremely good on the whys and ifs of applications. -- PDF Released Version "An Introduction to Statistical Learning" (Springer Series in Statistics) 6th (2015) by Gareth James, Daniela Witten, Trevor Hastie and Robert Tibshirani -- PDF Released Version Working your way through these books should give a very good basis for applications.
10,317
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
Harrell (2001), Regression Modelling Strategies is distinguished by: covering modelling from start to finish—so data reduction, imputation of missing values, & model validation are among the topics included; an emphasis on explaining how to employ different methods at different stages; and thoroughly worked-out examples (& S-Plus/R code) taking up much of the book.
10,318
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
In addition to those, Introductory Econometrics: A Modern Approach by Wooldridge has pretty much everything you could ever want to know about regression, at an advanced undergraduate level. Edit: if you're dealing with categorical outcomes, Hastie et al is indispensable. Also, Categorical Data Analysis by Agresti is a good classical approach, as opposed to Hastie et al's machine learning approach.
10,319
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
Bayesian Data Analysis third edition (2013) by Gelman et al. The level is mixed but the treatment I find so good that something valuable can be got from most chapters. If you're interested in principled application of methods I'd recommend this book.
10,320
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
I've gotten a lot of use out of Sheskin's Handbook of Parametric and Nonparametric Statistical Procedures. It's a broad survey of hypothesis testing methods, with good introductions to the theory and tons of notes about the subtleties of each. You can see the TOC at the publisher's site (linked above).
10,321
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
Regression Modeling Strategies by Frank Harrell is a great book if you already know some basics. It is heavily focused on applications (lots of examples with code), specifying models, model diagnostics, dealing with common pitfalls, and avoiding problematic methods.
10,322
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
The UW Stat PhD program's top-level regression methods sequence uses Wakefield's "Bayesian and Frequentist Regression Methods" which is a particularly good choice for folks like you who've seen lots of mathematical statistics. It gives a lot more perspective than most books on even the simplest applied methods since it leverages so much math.
10,323
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
I used "Engineering Statistics" by Montgomery and Runger. It's pretty good (especially if you have a strong math background). I'd also highly recommend checking out CalTech's online Machine Learning course. It's great for an introduction to ML Concepts (if that's part of your data analysis). https://work.caltech.edu/telecourse.html.
10,324
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
I wrote the book Nonlinear Regression Modeling for Engineering Applications: Modeling, Model Validation, and Enabling Design of Experiments, Wiley, New York, NY, September, 2016. ISBN 9781118597965, Rhinehart, R. R. because I sensed such a need. The book is 361 pages and has a companion web site with Excel/VBA open-code solutions for many of the techniques. Visit www.r3eda.com.
10,325
Do you have recommendations for books to self-teach Applied Statistics at the graduate level?
I used College Statistics Made Easy by Sean Connolly. It is aimed at a first / second course in statistics. The material is very, very easy to follow. I tried a few books and none compare to this.
10,326
How can I generate data with a prespecified correlation matrix?
It appears that you're asking how to generate data with a particular correlation matrix. A useful fact is that if you have a random vector ${\bf x}$ with covariance matrix $\Sigma$, then the random vector ${\bf Ax}$ has mean ${\bf A} E({\bf x})$ and covariance matrix $ \Omega = {\bf A} \Sigma {\bf A}^{T} $. So, if you start with data that has mean zero, multiplying by ${\bf A}$ will not change that, so your first requirement is easily satisfied. Let's say you start with (mean zero) uncorrelated data (i.e. the covariance matrix is diagonal) - since we're talking about the correlation matrix, let's just take $\Sigma = I$. You can transform this to data with a given covariance matrix by choosing ${\bf A}$ to be the Cholesky square root of $\Omega$ - then ${\bf Ax}$ would have the desired covariance matrix $\Omega$. In your example, you appear to want something like this: $$ \Omega = \left( \begin{array}{ccc} 1 & .8 & 0 \\ .8 & 1 & .8 \\ 0 & .8 & 1 \\ \end{array} \right) $$ Unfortunately that matrix is not positive definite, so it cannot be a covariance matrix - you can check this by seeing that the determinant is negative. Perhaps, instead, $$ \Omega = \left( \begin{array}{ccc} 1 & .8 & .3 \\ .8 & 1 & .8 \\ .3 & .8 & 1 \\ \end{array} \right) \ \ \ \ {\rm or} \ \ \ \Omega = \left( \begin{array}{ccc} 1 & 2/3 & 0 \\ 2/3 & 1 & 2/3 \\ 0 & 2/3 & 1 \\ \end{array} \right)$$ would suffice. In both MATLAB (which appears to be what you're using) and R, the chol() function computes the Cholesky factorization; note that it returns the upper-triangular factor, so transpose it to get ${\bf A}$. 
In this example, for the two $\Omega$s listed above the proper matrix multiples(respectively) would be $$ {\bf A} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ .8 & .6 & 0 \\ .3 & .933 & .1972 \\ \end{array} \right) \ \ \ \ {\rm or} \ \ \ {\bf A} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 2/3 & .7453 & 0 \\ 0 & .8944 & .4472 \\ \end{array} \right)$$ The R code used to arrive at this was: x = matrix(0,3,3) x[1,]=c(1,.8,.3) x[2,]=c(.8,1,.8) x[3,]=c(.3,.8,1) t(chol(x)) [,1] [,2] [,3] [1,] 1.0 0.0000000 0.0000000 [2,] 0.8 0.6000000 0.0000000 [3,] 0.3 0.9333333 0.1972027 x[1,]=c(1,2/3,0) x[2,]=c(2/3,1,2/3) x[3,]=c(0,2/3,1) t(chol(x)) [,1] [,2] [,3] [1,] 1.0000000 0.0000000 0.0000000 [2,] 0.6666667 0.7453560 0.0000000 [3,] 0.0000000 0.8944272 0.4472136
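The same factor-then-multiply recipe can be sketched outside R as well. Here is a minimal pure-Python version (for illustration only - in practice you would call R's chol() or numpy.linalg.cholesky); the cholesky helper is a hand-rolled implementation, not a library function:

```python
import math

def cholesky(M):
    """Lower-triangular L with L L^T = M; M must be symmetric positive definite."""
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                # sqrt of a negative number here means M was not positive definite
                L[i][j] = math.sqrt(M[i][i] - s)
            else:
                L[i][j] = (M[i][j] - s) / L[j][j]
    return L

# the first (positive definite) Omega from the answer
omega = [[1.0, 0.8, 0.3],
         [0.8, 1.0, 0.8],
         [0.3, 0.8, 1.0]]
A = cholesky(omega)

# multiplying uncorrelated mean-zero data by A yields data with covariance Omega;
# sanity check: A A^T reproduces Omega
AAT = [[sum(A[i][k] * A[j][k] for k in range(3)) for j in range(3)] for i in range(3)]
```

The third row of A comes out as roughly (0.3, 0.9333, 0.1972), matching the t(chol(x)) output above.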
How can I generate data with a prespecified correlation matrix?
If you're using R, you can also use the mvrnorm function from the MASS package, assuming you want normally distributed variables. The implementation is similar to Macro's description above, but uses the eigenvectors of the correlation matrix instead of the Cholesky decomposition, and scaling with a singular value decomposition (if the empirical option is set to TRUE). If $X$ is a matrix with entries drawn from a normal distribution, $\Sigma$ is a positive definite correlation matrix with eigenvector matrix $\gamma$, and $\lambda$ is a diagonal matrix with the square roots of the eigenvalues of $\Sigma$ along the diagonal, then $X' = \gamma\lambda X^{T}$ is a normally distributed matrix with correlation matrix $\Sigma$ and the same column means as $X$. Note that the correlation matrix has to be positive definite; if yours isn't, converting it with the nearPD function from the Matrix package in R can be useful.
How can I generate data with a prespecified correlation matrix?
An alternative solution without Cholesky factorization is the following. Let $\Sigma_y$ be the desired covariance matrix and suppose you have data $x$ with $\Sigma_x = I$. Suppose $\Sigma_y$ is positive definite, with $\Lambda$ the diagonal matrix of its eigenvalues and $V$ the matrix whose columns are the eigenvectors. You can write $\Sigma_y = V \Lambda V^T = ( V \sqrt{\Lambda} ) (\sqrt{\Lambda}^T V^T ) = A A^T$. Then $y = Ax$ generates the desired data.
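A small numeric sketch of this factorization, in pure Python for illustration: for a 2x2 correlation matrix with off-diagonal $\rho$ the eigendecomposition is available in closed form (the value of $\rho$ below is arbitrary), so we can build $A = V\sqrt{\Lambda}$ by hand and check that $AA^T$ recovers $\Sigma_y$:

```python
import math

rho = 0.6  # illustrative off-diagonal correlation
# for [[1, rho], [rho, 1]] the eigenpairs are known in closed form:
lam = [1 + rho, 1 - rho]                        # eigenvalues
V = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]     # columns are unit eigenvectors

# A = V * sqrt(Lambda): scale each eigenvector column by sqrt of its eigenvalue
A = [[V[i][j] * math.sqrt(lam[j]) for j in range(2)] for i in range(2)]

# check that A A^T reproduces the target covariance matrix
AAT = [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
```

Multiplying uncorrelated unit-variance data by this A would then give correlation $\rho$ between the two components.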
How can I generate data with a prespecified correlation matrix?
An alternative solution without cholesky factorization is the following. Let $\Sigma_y$ the desired covariance matrix and suppose you have data $x$ with $\Sigma_x = I$. Suppose $\Sigma_y$ is positive
How can I generate data with a prespecified correlation matrix? An alternative solution without cholesky factorization is the following. Let $\Sigma_y$ the desired covariance matrix and suppose you have data $x$ with $\Sigma_x = I$. Suppose $\Sigma_y$ is positive definite with $\Lambda$ the diagonal matrix of the eigenvalues and $V$ the matrix of column eigenvectors . You can write $\Sigma_y = V \Lambda V^T = ( V \sqrt{\Lambda} ) (\sqrt{\Lambda}^T V^T ) = A A^T$. $y=Ax$ generate the desired data.
How can I generate data with a prespecified correlation matrix? An alternative solution without cholesky factorization is the following. Let $\Sigma_y$ the desired covariance matrix and suppose you have data $x$ with $\Sigma_x = I$. Suppose $\Sigma_y$ is positive
10,329
How does negative sampling work in word2vec?
The issue

There are some issues with learning the word vectors using a "standard" neural network. In this approach, the word vectors are learned while the network learns to predict the next word given a window of words (the input of the network). Predicting the next word is like predicting a class. That is, such a network is just a "standard" multinomial (multi-class) classifier. And this network must have as many output neurons as there are classes. When classes are actual words, the number of neurons is, well, huge. A "standard" neural network is usually trained with a cross-entropy cost function, which requires the values of the output neurons to represent probabilities - which means that the output "scores" computed by the network for each class have to be normalized, converted into actual probabilities for each class. This normalization step is achieved by means of the softmax function. Softmax is very costly when applied to a huge output layer.

The (a) solution

In order to deal with this issue - that is, the expensive computation of the softmax - Word2Vec uses a technique called noise-contrastive estimation. This technique was introduced by [A] (reformulated by [B]), then used in [C], [D], [E] to learn word embeddings from unlabelled natural language text. The basic idea is to convert a multinomial classification problem (as is the problem of predicting the next word) into a binary classification problem. That is, instead of using softmax to estimate a true probability distribution of the output word, a binary logistic regression (binary classification) is used instead. For each training sample, the enhanced (optimized) classifier is fed a true pair (a center word and another word that appears in its context) and a number $k$ of randomly corrupted pairs (consisting of the center word and a randomly chosen word from the vocabulary). By learning to distinguish the true pairs from the corrupted ones, the classifier will ultimately learn the word vectors.

This is important: instead of predicting the next word (the "standard" training technique), the optimized classifier simply predicts whether a pair of words is good or bad. Word2Vec slightly customizes the process and calls it negative sampling. In Word2Vec, the words for the negative samples (used for the corrupted pairs) are drawn from a specially designed distribution which favours less frequent words to be drawn more often.

References
[A] (2005) - Contrastive estimation: Training log-linear models on unlabeled data
[B] (2010) - Noise-contrastive estimation: A new estimation principle for unnormalized statistical models
[C] (2008) - A unified architecture for natural language processing: Deep neural networks with multitask learning
[D] (2012) - A fast and simple algorithm for training neural probabilistic language models
[E] (2013) - Learning word embeddings efficiently with noise-contrastive estimation

The answer is based on some older notes of mine - I hope they were correct :)
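A minimal pure-Python sketch of the two ingredients above - the binary logistic loss over one true pair plus $k$ corrupted pairs, and the smoothed noise distribution (word2vec raises unigram counts to the 3/4 power, which boosts rare words relative to their raw frequency). The vectors and word counts here are made-up toy values:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def neg_sampling_loss(center, context, negatives):
    """Binary classification: push the true pair's score up, corrupted pairs' scores down."""
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    loss = -math.log(sigmoid(dot(center, context)))    # true pair, label 1
    for neg in negatives:                               # k corrupted pairs, label 0
        loss -= math.log(sigmoid(-dot(center, neg)))
    return loss

# a well-aligned true pair should incur a lower loss than a corrupted one
loss_true = neg_sampling_loss([1.0, 0.0], [1.0, 0.0], [[-1.0, 0.0]])
loss_corrupt = neg_sampling_loss([1.0, 0.0], [-1.0, 0.0], [[1.0, 0.0]])

# noise distribution for drawing negatives: unigram counts ** 0.75, renormalized
counts = {"the": 1000, "cat": 50, "axolotl": 2}
z = sum(c ** 0.75 for c in counts.values())
noise = {w: c ** 0.75 / z for w, c in counts.items()}
```

In the toy counts, the rare word "axolotl" gets a larger share under the smoothed noise distribution than under the raw unigram distribution, which is exactly the "favours less frequent words" effect described above.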
Intuition for cumulative hazard function (survival analysis)
Combining proportions dying as you do is not giving you cumulative hazard. The hazard rate in continuous time is the conditional probability that during a very short interval an event will happen: $$h(t) = \lim_{\Delta t \rightarrow 0} \frac {P(t<T \le t + \Delta t | T >t)} {\Delta t}$$ Cumulative hazard is obtained by integrating the (instantaneous) hazard rate over ages/time. It's like summing up probabilities, but since $\Delta t$ is very small, these probabilities are also small numbers (e.g. the hazard rate of dying may be around 0.004 at ages around 30). The hazard rate is conditional on not having experienced the event before $t$, so for a population its accumulated sum may exceed 1. You may look up a human mortality life table - although that is a discrete-time formulation - and try to accumulate $m_x$. If you use R, here's a little example of approximating these functions from the number of deaths in each 1-year age interval (dividing by those still at risk at the start of each interval; this is the standard discrete hazard and also avoids a division by zero at the last age):

dx <- c(3184L, 268L, 145L, 81L, 64L, 81L, 101L, 50L, 72L, 76L, 50L, 62L, 65L, 95L,
        86L, 120L, 86L, 110L, 144L, 147L, 206L, 244L, 175L, 227L, 182L, 227L, 205L, 196L,
        202L, 154L, 218L, 279L, 193L, 223L, 227L, 300L, 226L, 256L, 259L, 282L, 303L, 373L,
        412L, 297L, 436L, 402L, 356L, 485L, 495L, 597L, 645L, 535L, 646L, 851L, 689L, 823L,
        927L, 878L, 1036L, 1070L, 971L, 1225L, 1298L, 1539L, 1544L, 1673L, 1700L, 1909L,
        2253L, 2388L, 2578L, 2353L, 2824L, 2909L, 2994L, 2970L, 2929L, 3401L, 3267L, 3411L,
        3532L, 3090L, 3163L, 3060L, 2870L, 2650L, 2405L, 2143L, 1872L, 1601L, 1340L, 1095L,
        872L, 677L, 512L, 376L, 268L, 186L, 125L, 81L, 51L, 31L, 18L, 11L, 6L, 3L, 2L)
x <- 0:(length(dx)-1)          # age vector
lx <- rev(cumsum(rev(dx)))     # number still alive at the start of each age
hx <- dx/lx                    # discrete hazard
plot(x, hx, t="l", xlab="age", ylab="h(t)", main="h(t)", log="y")
plot(x, cumsum(hx), t="l", xlab="age", ylab="H(t)", main="H(t)")

Hope this helps.
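The same accumulation can be shown with a tiny made-up cohort in any language; here is a pure-Python sketch (the death counts are illustrative, not real data). Note that the final cumulative hazard exceeds 1, which a probability never could:

```python
# deaths in each 1-year interval for a made-up cohort of 100 individuals
dx = [5, 2, 3, 10, 30, 50]

alive = sum(dx)          # everyone is alive at the start
h, H = [], []            # discrete hazard and cumulative hazard per interval
cum = 0.0
for d in dx:
    rate = d / alive     # P(die in this interval | alive at its start)
    cum += rate          # accumulate into H(t)
    h.append(rate)
    H.append(cum)
    alive -= d           # survivors carried into the next interval
```

In the last interval everyone still alive dies, so the hazard there is exactly 1, and H(t) climbs above 1 even though each conditional probability is at most 1.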
Intuition for cumulative hazard function (survival analysis)
The book "An Introduction to Survival Analysis Using Stata" (2nd Edition) by Mario Cleves has a good chapter on that topic. You can find the chapter on Google Books, p. 13-15, but I would advise reading the whole of chapter 2. Here is the short form: "it measures the total amount of risk that has been accumulated up to time t" (p. 8); count-data interpretation: "it gives the number of times we would expect (mathematically) to observe failures [or other events] over a given period, if only the failure event were repeatable" (p. 13).
Intuition for cumulative hazard function (survival analysis)
I'd HAZARD a guess that it's noteworthy owing to its use in diagnostic plots: (1) In the Cox proportional hazards model $h(x)=\mathrm{e}^{\beta^\mathrm{T} z}h_0(x)$, where $\beta$ and $z$ are the coefficient and covariate vectors respectively, & $h_0(x)$ is the baseline hazard function; & so (by integrating both sides with respect to $x$ & taking logarithms) $\log H(x)=\beta^\mathrm{T} z + \log H_0(x)$. If you plot the estimate $\log \hat{H}(x)$ against $x$, different covariate patterns follow parallel curves, provided the proportional-hazards assumption is correct. (2) In the Weibull model $h(x)=\frac{\alpha}{\theta}\left(\frac{x}{\theta}\right)^{\alpha-1}$, where $\theta$ & $\alpha$ are the scale & shape parameters respectively; & so $\log H(x) = \alpha \log x - \alpha \log \theta$. If you plot the estimate $\log \hat{H}(x)$ against $\log x$, you get a straight line with slope $\hat{\alpha}$ & intercept $-\hat{\alpha}\log\hat{\theta}$, provided the Weibull assumption is correct. And of course a slope near 1 suggests an exponential model might fit. An intuitive interpretation of $H(x)$ is the expected number of deaths of an individual up to time $x$ if the individual were to be resurrected after each death (without resetting time to zero).
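Point (2) can be checked numerically: integrate the Weibull hazard to get $H(x)$, then confirm that $\log H$ is linear in $\log x$ with slope $\alpha$. A Python sketch (the values of $\alpha$ and $\theta$ are arbitrary illustrative choices, and the trapezoid rule stands in for a proper cumulative-hazard estimator):

```python
import math

alpha, theta = 1.7, 2.0  # illustrative Weibull shape and scale

def hazard(x):
    # h(x) = (alpha/theta) * (x/theta)^(alpha-1)
    return (alpha / theta) * (x / theta) ** (alpha - 1)

def cum_hazard(x, steps=20000):
    """H(x) = integral of h from 0 to x, by the trapezoid rule."""
    dx = x / steps
    total = 0.0
    for i in range(steps):
        a, b = i * dx, (i + 1) * dx
        total += 0.5 * (hazard(a) + hazard(b)) * dx
    return total

H1, H2 = cum_hazard(1.0), cum_hazard(2.0)
# slope of log H against log x recovers the shape parameter alpha
slope = (math.log(H2) - math.log(H1)) / (math.log(2.0) - math.log(1.0))
```

With $x = \theta$ the closed form gives $H(\theta) = 1$, which the numerical integral reproduces, and the fitted slope comes out at $\alpha$.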
Intuition for cumulative hazard function (survival analysis)
To paraphrase what @Scortchi is saying, I would emphasize that the cumulative hazard function does not have a nice interpretation, and as such I would not try to use it as a way to interpret results; telling a non-statistical researcher that the cumulative hazards are different will most likely result in an "mm-hm" answer and then they'll never ask about the subject again, and not in a good way. However, the cumulative hazard function turns out to be very useful mathematically, for example as a general way to link the hazard function and the survival function. So it's important to know what the cumulative hazard is and how it can be used in various statistical methods. But in general, I don't think it's particularly useful to think about real data in terms of cumulative hazards.
What is the difference between the Wilcoxon Rank Sum Test and the Wilcoxon Signed Rank Test?
You should use the signed rank test when the data are paired. You'll find many definitions of pairing, but at heart the criterion is something that makes pairs of values at least somewhat positively dependent, while unpaired values are not dependent. Often the dependence-pairing occurs because they're observations on the same unit (repeated measures), but it doesn't have to be on the same unit, just in some way tending to be associated (while measuring the same kind of thing), to be considered as 'paired'. You should use the rank-sum test when the data are not paired. That's basically all there is to it. Note that having the same $n$ doesn't mean the data are paired, and having different $n$ doesn't mean that there isn't pairing (it may be that a few pairs lost an observation for some reason). Pairing comes from consideration of what was sampled. The effect of using a paired test when the data are paired is that it generally gives more power to detect the changes you're interested in. If the association leads to strong dependence*, then the gain in power may be substantial. * specifically, but speaking somewhat loosely, if the effect size is large compared to the typical size of the pair-differences, but small compared to the typical size of the unpaired-differences, you may pick up the difference with a paired test at a quite small sample size but with an unpaired test only at a much larger sample size. However, when the data are not paired, it may be (at least slightly) counterproductive to treat the data as paired. That said, the cost - in lost power - may in many circumstances be quite small - a power study I did in response to this question seems to suggest that on average the power loss in typical small-sample situations (say for n of the order of 10 to 30 in each sample, after adjusting for differences in significance level) may be surprisingly small, essentially negligible. 
[If you're somehow really uncertain whether the data are paired or not, the loss in treating unpaired data as paired is usually relatively minor, while the gains may be substantial if they are paired. This suggests that if you really don't know, and have a way of figuring out what is paired with what assuming they were paired -- such as the values being in the same row in a table -- it may in practice make sense to act as if the data were paired, to be safe -- though some people may get quite exercised over you doing that.]
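To make the distinction concrete, here is a pure-Python sketch of the two statistics (in practice you would use a library such as scipy.stats; the helper names and toy data below are made up for illustration). The rank-sum statistic pools the two independent samples and sums one sample's ranks; the signed-rank statistic ranks the absolute within-pair differences and sums the ranks of the positive ones:

```python
def midranks(values):
    """1-based ranks, with ties given their average (mid) rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1                      # extend over a run of tied values
        avg = (i + j) / 2 + 1           # average of the tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def rank_sum_W(x, y):
    """Unpaired (rank-sum): sum of the ranks of sample x in the pooled data."""
    r = midranks(list(x) + list(y))
    return sum(r[:len(x)])

def signed_rank_Wplus(before, after):
    """Paired (signed-rank): rank |differences|, sum ranks of positive differences."""
    d = [b - a for a, b in zip(before, after)]
    d = [v for v in d if v != 0]        # zero differences are dropped
    r = midranks([abs(v) for v in d])
    return sum(ri for ri, v in zip(r, d) if v > 0)
```

For example, with two independent samples [5, 6, 7] and [8, 9] the rank-sum statistic for the first sample is 6, while for five paired before/after measurements [5, 6, 7, 8, 9] and [6, 8, 7, 9, 12] the positive-rank sum is 10 (one zero difference dropped, one tie mid-ranked).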
What is the difference between the Wilcoxon Rank Sum Test and the Wilcoxon Signed Rank Test?
I'm not a researcher - I'm a statistics major, though. I'll first lay out the requirements for the Wilcoxon Signed Rank Test (WSRT). The WSRT requires that the populations be paired: for example, the same group of people are tested on two different occasions or things and MEASURED on the effects of each, and we then compare the two things or occasions. The WSRT requires the data to be quantitative. Quantitative data is data that is measured along a scale, which is why I highlighted the word measured in the first point. Had the participants been asked to rank their responses, you would be dealing with qualitative data, and you would then have to use the sign test to test your hypothesis. [There are other requirements for the WSRT, but the ones I've listed are sufficient to differentiate the two tests.] Now the Wilcoxon Rank Sum Test (WRST): the main requirement is that the samples be drawn from independent populations. For example, you might want to test whether exam paper 1 is harder than exam paper 2, and to do this you will have two groups of students, and the groups need not be the same size. In this example the two groups are independent; if you had asked the same group to write both papers, then you would use the WSRT to test your hypothesis. The other requirement is that the data need not be fully quantitative, i.e. you can also perform the test on ordinal (ranked) data.
10,336
Converting (normalizing) very small likelihood values to probability
Subtract the maximum logarithm from all logs. Throw away all results that are so negative they will underflow the exponential. (Their likelihoods are, for all practical purposes, zero.) Indeed, if you want a relative precision of $\epsilon$ (such as $\epsilon = 10^{-d}$ for $d$ digits of precision) and you have $n$ likelihoods, throw away any result less than the logarithm of $\epsilon/n$. Then proceed as usual to exponentiate the resulting values and divide each one by the sum of all the exponentials.

For those who like formulas, let the logarithms be $\lambda_1, \lambda_2, \ldots, \lambda_n$ with $\lambda_n = \max(\lambda_i)$. For logarithms to the base $b\gt 1$, define $$\alpha_i = \cases{ b^{\lambda_i - \lambda_n}, \lambda_i - \lambda_n \ge \log(\epsilon)-\log(n) \\ 0\quad \text{otherwise}.}$$ The normalized likelihoods equal $\alpha_i / \sum_{j=1}^n \alpha_j$, $i = 1, 2, \ldots, n.$

This works because replacing all of the otherwise underflowing $\alpha_i$ by zero makes a total error of at most $(n-1)\epsilon/n\lt \epsilon$ whereas, because $\alpha_n=b^{\lambda_n-\lambda_n}=b^0=1$ and all $\alpha_i$ are non-negative, the denominator $A = \sum_j \alpha_j \ge 1$, whence the total relative error due to the zero-replacement rule is strictly smaller than $\left((n-1)\epsilon/n \right) / A \lt \epsilon$, as desired.

To avoid too much rounding error, compute the sum starting with the smallest values of the $\alpha_i$. This will be done automatically when the $\lambda_i$ are first sorted in increasing order. This is a consideration only for very large $n$.

BTW, this prescription assumed the base of the logs is greater than $1$. For bases $b$ less than $1$, first negate all the logs and proceed as if the base were equal to $1/b$.
Example Let there be three values with logarithms (natural logs, say) equal to $-269647.432,$ $-231444.981,$ and $-231444.699.$ The last is the largest; subtracting it from each value gives $-38202.733,$ $-0.282,$ and $0.$ Suppose you would like precision comparable to IEEE doubles (about 16 decimal digits), so that $\epsilon=10^{-16}$ and $n=3$. (You can't actually achieve this precision, because $-0.282$ is given only to three significant figures, but that's ok: we're only throwing away values that are guaranteed not to affect the better of the precision you want and the precision you actually have.) Compute $\log(\epsilon/n)$ = $\log(10^{-16}) - \log(3)$ = $-37.93997.$ The first of the three differences, $-38202.733,$ is less than this, so throw it away, leaving just $-0.282$ and $0.$ Exponentiating them gives $\exp(-0.282) = 0.754$ and $\exp(0)=1$ (of course). The normalized values are--in order--$0$ for the one you threw away, $0.754 / (1 + 0.754) = 0.430$, and $1/(1+0.754)=0.570$.
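As a sketch, here is the same recipe in Python (assuming natural logs and double precision; the function name is my own). It reproduces the worked example above:

```python
import math

def normalize_log_likelihoods(logs, eps=1e-16):
    """Normalize log-likelihoods into probabilities without underflow.

    Subtract the maximum log; zero out anything below log(eps/n), since
    such terms cannot affect the result at relative precision eps; then
    exponentiate and divide by the sum.
    """
    n = len(logs)
    m = max(logs)
    cutoff = math.log(eps / n)
    alphas = [math.exp(l - m) if l - m >= cutoff else 0.0 for l in logs]
    total = sum(sorted(alphas))  # add smallest-first to limit rounding error
    return [a / total for a in alphas]

# The worked example: natural logs of three likelihoods.
p = normalize_log_likelihoods([-269647.432, -231444.981, -231444.699])
print([round(x, 3) for x in p])  # [0.0, 0.43, 0.57]
```

Note that naively calling `math.exp` on the raw logs would underflow every term to zero, making the normalization 0/0; subtracting the maximum first is what makes the computation well defined.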
10,337
Probability of not drawing a word from a bag of letters in Scrabble
This is a (long!) comment on the nice work @vqv has posted in this thread. It aims to obtain a definitive answer. He has done the hard work of simplifying the dictionary. All that remains is to exploit it to the fullest. His results suggest that a brute-force solution is feasible. After all, including a wildcard, there are at most $27^7 = 10,460,353,203$ words one can make with 7 characters, and it looks like less than 1/10000 of them--say, around a million--will fail to include some valid word.

The first step is to augment the minimal dictionary with a wildcard character, "?". 22 of the letters appear in two-letter words (all but c, q, v, z). Adjoin a wildcard to those 22 letters and add these to the dictionary: {a?, b?, d?, ..., y?} are now in. Similarly we can inspect the minimal three-letter words, causing some additional words to appear in the dictionary. Finally, we add "??" to the dictionary. After removing repetitions that result, it contains 342 minimal words.

An elegant way to proceed--one that uses a very small amount of encoding indeed--is to view this problem as an algebraic one. A word, considered as an unordered set of letters, is just a monomial. For example, "spats" is the monomial $a p s^2 t$. The dictionary therefore is a collection of monomials. It looks like $$\{a^2, a b, a d, ..., o z \psi, w x \psi, \psi^2\}$$ (where, to avoid confusion, I have written $\psi$ for the wildcard character).

A rack contains a valid word if and only if that word divides the rack. A more abstract, but extremely powerful, way to say this is that the dictionary generates an ideal $I$ in the polynomial ring $R = \mathbb{Z}[a, b, \ldots, z, \psi]$ and that the racks with valid words become zero in the quotient ring $R/I$, whereas racks without valid words remain nonzero in the quotient. If we form the sum of all racks in $R$ and compute it in this quotient ring, then the number of racks without words equals the number of distinct monomials in the quotient.

Furthermore, the sum of all racks in $R$ is straightforward to express. Let $\alpha = a + b + \cdots + z + \psi$ be the sum of all letters in the alphabet. $\alpha^7$ contains one monomial for each rack. (As an added bonus, its coefficients count the number of ways each rack can be formed, allowing us to compute its probability if we like.)

As a simple example (to see how this works), suppose (a) we don't use wildcards and (b) all letters from "a" through "x" are considered words. Then the only possible racks from which words cannot be formed must consist entirely of y's and z's. We compute $\alpha=(a+b+c+\cdots+x+y+z)^7$ modulo the ideal generated by $\{a,b,c, \ldots, x\}$ one step at a time, thus: $$\eqalign{ \alpha^0 &= 1 \cr \alpha^1 &= a+b+c+\cdots+x+y+z \equiv y+z \mod I \cr \alpha^2 &\equiv (y+z)(a+b+\cdots+y+z) \equiv (y+z)^2 \mod I \cr \cdots &\cr \alpha^7 &\equiv (y+z)^6(a+b+\cdots+y+z) \equiv (y+z)^7 \mod I \text{.} }$$

We can read off the chance of getting a non-word rack from the final answer, $y^7 + 7 y^6 z + 21 y^5 z^2 + 35 y^4 z^3 + 35 y^3 z^4 + 21 y^2 z^5 + 7 y z^6 + z^7$: each coefficient counts the ways in which the corresponding rack can be drawn. For example, there are 21 (out of $26^7$ possible) ways to draw 2 y's and 5 z's because the coefficient of $y^2 z^5$ equals 21. From elementary calculations, it is obvious this is the correct answer.

The whole point is that this procedure works regardless of the contents of the dictionary. Notice how reducing the power modulo the ideal at each stage reduces the computation: that's the shortcut revealed by this approach. (End of example.) Polynomial algebra systems implement these calculations.
For instance, here is Mathematica code:

    alphabet = a + b + c + d + e + f + g + h + i + j + k + l + m + n + o +
       p + q + r + s + t + u + v + w + x + y + z + \[Psi];
    dictionary = {a^2, a b, a d, a e, ..., w z \[Psi], \[Psi]^2};
    next[pp_] := PolynomialMod[pp alphabet, dictionary];
    nonwords = Nest[next, 1, 7];
    Length[nonwords]

(The dictionary can be constructed in a straightforward manner from @vqv's min.dict; I put a line here showing that it is short enough to be specified directly if you like.) The output--which takes ten minutes of computation--is 577958. (NB In an earlier version of this message I had made a tiny mistake in preparing the dictionary and obtained 577940. I have edited the text to reflect what I hope are now the correct results!) A little less than the million or so I expected, but of the same order of magnitude.

To compute the chance of obtaining such a rack, we need to account for the number of ways in which the rack can be drawn. As we saw in the example, this equals its coefficient in $\alpha^7$. The chance of drawing some such rack is the sum of all these coefficients, easily found by setting all the letters equal to 1:

    nonwords /. (# -> 1) & /@ (List @@ alphabet)

The answer equals 1066056120, giving a chance of 10.1914% of drawing a rack from which no valid word can be formed (if all letters are equally likely). When the probabilities of the letters vary, just replace each letter with its chance of being drawn:

    tiles = {9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1, 6, 4, 6, 4,
       2, 2, 1, 2, 1, 2};
    chances = tiles / (Plus @@ tiles);
    nonwords /. (Transpose[{List @@ alphabet, chances}] /. {a_, b_} -> a -> b)

The output is 1.079877553303%, the exact answer (albeit using an approximate model, drawing with replacement).
Looking back, it took four lines to enter the data (alphabet, dictionary, and alphabet frequencies) and only three lines to do the work: describe how to take the next power of $\alpha$ modulo $I$, take the 7th power recursively, and substitute the probabilities for the letters.
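For readers without Mathematica, the "multiply by $\alpha$ and reduce modulo the ideal" step can be sketched in pure Python (my own illustration, not the author's code). To keep it tiny, it uses the simple example from the answer, where every single letter 'a' through 'x' counts as a word, so only all-y/z racks survive:

```python
from collections import Counter

# Toy dictionary: every single letter 'a'..'x' is a "word".
alphabet = "abcdefghijklmnopqrstuvwxyz"
dictionary = [Counter(w) for w in alphabet[:24]]

def contains_a_word(mono):
    # A dictionary monomial "divides" the rack monomial when the rack
    # has at least as many of each letter as the word requires.
    return any(all(mono[ch] >= k for ch, k in w.items()) for w in dictionary)

# Compute alpha^7 mod I: repeatedly multiply by the sum of all letters,
# dropping any monomial divisible by a dictionary word (zero in R/I).
poly = {(): 1}  # monomial (sorted tuple of letters) -> coefficient
for _ in range(7):
    out = {}
    for mono, coef in poly.items():
        for ch in alphabet:
            m = tuple(sorted(mono + (ch,)))
            if contains_a_word(Counter(m)):
                continue
            out[m] = out.get(m, 0) + coef
    poly = out

print(len(poly), sum(poly.values()))  # 8 128
```

The output matches $(y+z)^7$: 8 distinct non-word racks, with coefficients summing to $2^7 = 128$ ordered draws. Running the same loop over the real 342-word minimal dictionary is the Python analogue of the `Nest`/`PolynomialMod` call above, though it would be far slower than Mathematica's polynomial arithmetic.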
10,338
Probability of not drawing a word from a bag of letters in Scrabble
It is very hard to draw a rack that does not contain any valid word in Scrabble and its variants. Below is an R program I wrote to estimate the probability that the initial 7-tile rack does not contain a valid word. It uses a Monte Carlo approach and the Words With Friends lexicon (I couldn't find the official Scrabble lexicon in an easy format). Each trial consists of drawing a 7-tile rack, and then checking if the rack contains a valid word.

Minimal words

You don't have to scan the entire lexicon to check if the rack contains a valid word. You just need to scan a minimal lexicon consisting of minimal words. A word is minimal if it contains no other word as a subset. For example 'em' is a minimal word; 'empty' is not. The point of this is that if a rack contains word x then it must also contain any subset of x. In other words: a rack contains no words iff it contains no minimal words. Luckily, most words in the lexicon are not minimal, so they can be eliminated. You can also merge permutation-equivalent words. I was able to reduce the Words With Friends lexicon from 172,820 to 201 minimal words.

Wildcards can be easily handled by treating racks and words as distributions over the letters. We check if a rack contains a word by subtracting one distribution from the other. This gives us the number of each letter missing from the rack. If the sum of those numbers is $\leq$ the number of wildcards, then the word is in the rack.

The only problem with the Monte Carlo approach is that the event we are interested in is very rare, so it takes many, many trials to get an estimate with a small enough standard error. I ran my program (pasted at the bottom) with $N=100,000$ trials and got an estimated probability of 0.004 that the initial rack does not contain a valid word. The estimated standard error of that estimate is 0.0002. It took just a couple minutes to run on my Mac Pro, including downloading the lexicon.
I'd be interested in seeing if someone can come up with an efficient exact algorithm. A naive approach based on inclusion-exclusion seems like it could involve a combinatorial explosion.

Inclusion-exclusion

I think this is a bad solution, but here is an incomplete sketch anyway. In principle you can write a program to do the calculation, but the specification would be tortuous. The probability we wish to calculate is $$ P(k\text{-tile rack does not contain a word}) = 1 - P(k\text{-tile rack contains a word}) . $$

The event inside the probability on the right side is a union of events: $$ P(k\text{-tile rack contains a word}) = P\left(\cup_{x \in M} \{ k\text{-tile rack contains }x \} \right), $$ where $M$ is a minimal lexicon.

We can expand it using the inclusion-exclusion formula. It involves considering all possible intersections of the events above. Let $\mathcal{P}(M)$ denote the power set of $M$, i.e. the set of all possible subsets of $M$. Then $$ \begin{align} &P(k\text{-tile rack contains a word}) \\ &= P\left(\cup_{x \in M} \{ k\text{-tile rack contains }x \} \right) \\ &= \sum_{j=1}^{|M|} (-1)^{j-1} \sum_{S \in \mathcal{P}(M) : |S| = j} P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} \right) \end{align} $$

The last thing to specify is how to calculate the probability on the last line above. It involves a multivariate hypergeometric. $$\cap_{x \in S} \{ k\text{-tile rack contains }x \}$$ is the event that the rack contains every word in $S$. This is a pain to deal with because of wildcards. We'll have to consider, by conditioning, each of the following cases: the rack contains no wildcards, the rack contains 1 wildcard, the rack contains 2 wildcards, ...
Then $$ \begin{align} &P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} \right) \\ &= \sum_{w=0}^{n_{*}} P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} | k\text{-tile rack contains } w \text{ wildcards} \right) \\ &\quad \times P(k\text{-tile rack contains } w \text{ wildcards}) . \end{align} $$

I'm going to stop here, because the expansions are tortuous to write out and not at all enlightening. It's easier to write a computer program to do it. But by now you should see that the inclusion-exclusion approach is intractable. It involves $2^{|M|}$ terms, each of which is also very complicated. For the lexicon I considered above $2^{|M|} \approx 3.2 \times 10^{60}$.

Scanning all possible racks

I think this is computationally easier, because there are fewer possible racks than possible subsets of minimal words. We successively reduce the set of possible $k$-tile racks until we get the set of racks which contain no words. For Scrabble (or Words With Friends) the number of possible 7-tile racks is in the tens of billions. Counting the number of those that do not contain a possible word should be doable with a few dozen lines of R code. But I think you should be able to do better than just enumerating all possible racks. For instance, 'aa' is a minimal word. That immediately eliminates all racks containing more than one 'a'. You can repeat with other words. Memory shouldn't be an issue for modern computers. A 7-tile Scrabble rack requires fewer than 7 bytes of storage. At worst we would use a few gigabytes to store all possible racks, but I don't think that's a good idea either. Someone may want to think more about this.

Monte Carlo R program

    #
    # scrabble.R
    #
    # Created by Vincent Vu on 2011-01-07.
    # Copyright 2011 Vincent Vu. All rights reserved.
    #

    # The Words With Friends lexicon
    # http://code.google.com/p/dotnetperls-controls/downloads/detail?name=enable1.txt&can=2&q=
    url <- 'http://dotnetperls-controls.googlecode.com/files/enable1.txt'
    lexicon <- scan(url, what=character())

    # Words With Friends
    letters <- c(unlist(strsplit('abcdefghijklmnopqrstuvwxyz', NULL)), '?')
    tiles <- c(9, 2, 2, 5, 13, 2, 3, 4, 8, 1, 1, 4, 2, 5, 8, 2, 1, 6, 5, 7,
               4, 2, 2, 1, 2, 1, 2)
    names(tiles) <- letters

    # Scrabble
    # tiles <- c(9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1, 6, 4, 6,
    #            4, 2, 2, 1, 2, 1, 2)

    # Reduce to permutation equivalent words
    sort.letters.in.words <- function(x) {
      sapply(lapply(strsplit(x, NULL), sort), paste, collapse='')
    }
    min.dict <- unique(sort.letters.in.words(lexicon))
    min.dict.length <- nchar(min.dict)

    # Find all minimal words of length k by elimination
    # This is held constant across iterations:
    #   All words in min.dict contain no other words of length k or smaller
    k <- 1
    while(k < max(min.dict.length)) {
      # List all k-letter words in min.dict
      k.letter.words <- min.dict[min.dict.length == k]

      # Find words in min.dict of length > k that contain a k-letter word
      for(w in k.letter.words) {
        # Create a regexp pattern
        p <- paste('.*',
                   paste(unlist(strsplit(w, NULL)), '.*', sep='', collapse=''),
                   sep='')

        # Eliminate words of length > k that are not minimal
        eliminate <- grepl(p, min.dict) & min.dict.length > k
        min.dict <- min.dict[!eliminate]
        min.dict.length <- min.dict.length[!eliminate]
      }
      k <- k + 1
    }

    # Converts a word into a letter distribution
    letter.dist <- function(w, l=letters) {
      d <- lapply(strsplit(w, NULL), factor, levels=l)
      names(d) <- w
      d <- lapply(d, table)
      return(d)
    }

    # Sample N racks of k tiles
    N <- 1e5
    k <- 7
    rack <- replicate(N, paste(sample(names(tiles), size=k, prob=tiles),
                               collapse=''))

    contains.word <- function(rack.dist, lex.dist) {
      # For each word in the lexicon, subtract the rack distribution from
      # the letter distribution of the word. Positive results correspond
      # to the number of each letter that the rack is missing.
      y <- sweep(lex.dist, 1, rack.dist)

      # If the total number of missing letters is smaller than the number
      # of wildcards in the rack, then the rack contains that word
      any(colSums(pmax(y, 0)) <= rack.dist[names(rack.dist) == '?'])
    }

    # Convert rack and min.dict into letter distributions
    min.dict.dist <- letter.dist(min.dict)
    min.dict.dist <- do.call(cbind, min.dict.dist)
    rack.dist <- letter.dist(rack, l=letters)

    # Determine if each rack contains a valid word
    x <- sapply(rack.dist, contains.word, lex.dist=min.dict.dist)

    message("Estimate (and SE) of probability of no words based on ", N, " trials:")
    message(signif(1-mean(x)), " (", signif(sd(x) / sqrt(N)), ")")
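The wildcard containment test at the heart of `contains.word` can also be sketched in a few lines of Python (my own illustration; the function name and example racks are made up): subtract the rack's letter counts from the word's, and accept if the total shortfall is covered by the rack's wildcards.

```python
from collections import Counter

def rack_contains(rack, word):
    """Does the rack hold the word, counting '?' tiles as wildcards?

    Mirrors the distribution check above: the summed per-letter shortfall
    must not exceed the number of wildcards in the rack.
    """
    have = Counter(rack)
    missing = sum(max(n - have[ch], 0) for ch, n in Counter(word).items())
    return missing <= have['?']

print(rack_contains("aeinst?", "tisane"))  # True: every letter is on the rack
print(rack_contains("bcdfg??", "aa"))      # True: both a's come from wildcards
print(rack_contains("bcdfgh?", "aa"))      # False: one wildcard can't cover two a's
```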
Probability of not drawing a word from a bag of letters in Scrabble
It is very hard to draw a rack that does not contain any valid word in Scrabble and its variants. Below is an R program I wrote to estimate the probability that the initial 7-tile rack does not contai
Probability of not drawing a word from a bag of letters in Scrabble It is very hard to draw a rack that does not contain any valid word in Scrabble and its variants. Below is an R program I wrote to estimate the probability that the initial 7-tile rack does not contain a valid word. It uses a monte carlo approach and the Words With Friends lexicon (I couldn’t find the official Scrabble lexicon in an easy format). Each trial consists of drawing a 7-tile rack, and then checking if the rack contains a valid word. Minimal words You don’t have to scan the entire lexicon to check if the rack contains a valid word. You just need to scan a minimal lexicon consisting of minimal words. A word is minimal if it contains no other word as a subset. For example 'em’ is a minimal word; 'empty’ is not. The point of this is that if a rack contains word x then it must also contain any subset of x. In other words: a rack contains no words iff it contains no minimal words. Luckily, most words in the lexicon are not minimal, so they can be eliminated. You can also merge permutation equivalent words. I was able to reduce the Words With Friends lexicon from 172,820 to 201 minimal words. Wildcards can be easily handled by treating racks and words as distributions over the letters. We check if a rack contains a word by subtracting one distribution from the other. This gives us the number of each letter missing from the rack. If the sum of those number is $\leq$ the number of wildcards, then the word is in the rack. The only problem with the monte carlo approach is that the event that we are interested in is very rare. So it should take many, many trials to get an estimate with a small enough standard error. I ran my program (pasted at the bottom) with $N=100,000$ trials and got an estimated probability of 0.004 that the initial rack does not contain a valid word. The estimated standard error of that estimate is 0.0002. 
It took just a couple minutes to run on my Mac Pro, including downloading the lexicon. I’d be interested in seeing if someone can come up with an efficient exact algorithm. A naive approach based on inclusion-exclusion seems like it could involve a combinatorial explosion. Inclusion-exclusion I think this is a bad solution, but here is an incomplete sketch anyway. In principle you can write a program to do the calculation, but the specification would be tortuous. The probability we wish to calculate is $$ P(k\text{-tile rack does not contain a word}) = 1 - P(k\text{-tile rack contains a word}) . $$ The event inside the probability on the right side is a union of events: $$ P(k\text{-tile rack contains a word}) = P\left(\cup_{x \in M} \{ k\text{-tile rack contains }x \} \right), $$ where $M$ is a minimal lexicon. We can expand it using the inclusion-exclusion formula. It involves considering all possible intersections of the events above. Let $\mathcal{P}(M)$ denote the power set of $M$, i.e. the set of all possible subsets of $M$. Then $$ \begin{align} &P(k\text{-tile rack contains a word}) \\ &= P\left(\cup_{x \in M} \{ k\text{-tile rack contains }x \} \right) \\ &= \sum_{j=1}^{|M|} (-1)^{j-1} \sum_{S \in \mathcal{P}(M) : |S| = j} P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} \right) \end{align} $$ The last thing to specify is how to calculate the probability on the last line above. It involves a multivariate hypergeometric. $$\cap_{x \in S} \{ k\text{-tile rack contains }x \}$$ is the event that the rack contains every word in $S$. This is a pain to deal with because of wildcards. We'll have to consider, by conditioning, each of the following cases: the rack contains no wildcards, the rack contains 1 wildcard, the rack contains 2 wildcards, ... 
Then $$ \begin{align} &P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} \right) \\ &= \sum_{w=0}^{n_{*}} P\left( \cap_{x \in S} \{ k\text{-tile rack contains }x \} | k\text{-tile rack contains } w \text{ wildcards} \right) \\ &\quad \times P(k\text{-tile rack contains } w \text{ wildcards}) . \end{align} $$ I'm going to stop here, because the expansions are tortuous to write out and not at all enlightening. It's easier to write a computer program to do it. But by now you should see that the inclusion-exclusion approach is intractable. It involves $2^{|M|}$ terms, each of which is also very complicated. For the lexicon I considered above $2^{|M|} \approx 3.2 \times 10^{60}$. Scanning all possible racks I think this is computationally easier, because there are fewer possible racks than possible subsets of minimal words. We successively reduce the set of possible $k$-tile racks until we get the set of racks which contain no words. For Scrabble (or Words With Friends) the number of possible 7-tile racks is in the tens of billions. Counting the number of those that do not contain a possible word should be doable with a few dozen lines of R code. But I think you should be able to do better than just enumerating all possible racks. For instance, 'aa' is a minimal word. That immediately eliminates all racks containing more than one 'a'. You can repeat with other words. Memory shouldn't be an issue for modern computers. A 7-tile Scrabble rack requires fewer than 7 bytes of storage. At worst we would use a few gigabytes to store all possible racks, but I don't think that's a good idea either. Someone may want to think more about this. Monte Carlo R program

# scrabble.R
#
# Created by Vincent Vu on 2011-01-07.
# Copyright 2011 Vincent Vu. All rights reserved.
#
# The Words With Friends lexicon
# http://code.google.com/p/dotnetperls-controls/downloads/detail?name=enable1.txt&can=2&q=
url <- 'http://dotnetperls-controls.googlecode.com/files/enable1.txt'
lexicon <- scan(url, what=character())

# Words With Friends
letters <- c(unlist(strsplit('abcdefghijklmnopqrstuvwxyz', NULL)), '?')
tiles <- c(9, 2, 2, 5, 13, 2, 3, 4, 8, 1, 1, 4, 2, 5, 8, 2, 1, 6, 5, 7, 4,
           2, 2, 1, 2, 1, 2)
names(tiles) <- letters

# Scrabble
# tiles <- c(9, 2, 2, 4, 12, 2, 3, 2, 9, 1, 1, 4, 2, 6, 8, 2, 1, 6, 4, 6, 4,
#            2, 2, 1, 2, 1, 2)

# Reduce to permutation equivalent words
sort.letters.in.words <- function(x) {
  sapply(lapply(strsplit(x, NULL), sort), paste, collapse='')
}

min.dict <- unique(sort.letters.in.words(lexicon))
min.dict.length <- nchar(min.dict)

# Find all minimal words of length k by elimination
# This is held constant across iterations:
#   All words in min.dict contain no other words of length k or smaller
k <- 1
while(k < max(min.dict.length)) {
  # List all k-letter words in min.dict
  k.letter.words <- min.dict[min.dict.length == k]

  # Find words in min.dict of length > k that contain a k-letter word
  for(w in k.letter.words) {
    # Create a regexp pattern that matches any word containing w's letters
    makepattern <- function(x) {
      paste('.*', paste(unlist(strsplit(x, NULL)), '.*', sep='', collapse=''),
            sep='')
    }
    p <- makepattern(w)

    # Eliminate words of length > k that are not minimal
    eliminate <- grepl(p, min.dict) & min.dict.length > k
    min.dict <- min.dict[!eliminate]
    min.dict.length <- min.dict.length[!eliminate]
  }
  k <- k + 1
}

# Converts a word into a letter distribution
letter.dist <- function(w, l=letters) {
  d <- lapply(strsplit(w, NULL), factor, levels=l)
  names(d) <- w
  d <- lapply(d, table)
  return(d)
}

# Sample N racks of k tiles
N <- 1e5
k <- 7
rack <- replicate(N, paste(sample(names(tiles), size=k, prob=tiles),
                           collapse=''))

contains.word <- function(rack.dist, lex.dist) {
  # For each word in the lexicon, subtract the rack distribution from the
  # letter distribution of the word. Positive results correspond to the
  # number of each letter that the rack is missing.
  y <- sweep(lex.dist, 1, rack.dist)

  # If the total number of missing letters is smaller than the number of
  # wildcards in the rack, then the rack contains that word
  any(colSums(pmax(y, 0)) <= rack.dist[names(rack.dist) == '?'])
}

# Convert rack and min.dict into letter distributions
min.dict.dist <- letter.dist(min.dict)
min.dict.dist <- do.call(cbind, min.dict.dist)
rack.dist <- letter.dist(rack, l=letters)

# Determine if each rack contains a valid word
x <- sapply(rack.dist, contains.word, lex.dist=min.dict.dist)

message("Estimate (and SE) of probability of no words based on ", N, " trials:")
message(signif(1 - mean(x)), " (", signif(sd(x) / sqrt(N)), ")")
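The "scanning all possible racks" idea sketched earlier can be checked exactly on a toy problem. Here is a hypothetical Python example (the tiny bag and two-word lexicon are invented for illustration) that enumerates every equally likely k-tile draw and counts the wordless ones:

```python
from collections import Counter
from itertools import combinations

# Invented toy bag and lexicon, small enough to enumerate exhaustively.
tiles = ['a'] * 3 + ['b'] * 2 + ['c']
lexicon = {'ab', 'ca'}

def contains_word(rack, word):
    # An empty Counter difference means the rack covers every needed letter.
    return not (Counter(word) - Counter(rack))

k = 2
# Draw distinct tile *positions*, so every draw is equally likely.
draws = list(combinations(range(len(tiles)), k))
racks = [tuple(tiles[i] for i in d) for d in draws]
wordless = sum(1 for r in racks
               if not any(contains_word(r, w) for w in lexicon))
p_no_word = wordless / len(racks)  # exact: 6 of the 15 draws are wordless
```

The same enumeration-over-positions trick scales to the real bag, though at tens of billions of 7-tile racks one would want the pruning tricks described above rather than brute force.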
10,339
Probability of not drawing a word from a bag of letters in Scrabble
Srikant is right: a Monte Carlo study is the way to go. There are two reasons. First, the answer depends strongly on the structure of the dictionary. Two extremes are (1) the dictionary contains every possible single-letter word. In this case, the chance of not making a word in a draw of $1$ or more letters is zero. (2) The dictionary contains only words formed out of a single letter (e.g., "a", "aa", "aaa", etc.). The chance of not making a word in a draw of $k$ letters is easily determined and obviously is nonzero. Any definite closed-form answer would have to incorporate the entire dictionary structure and would be a truly awful and long formula. The second reason is that MC indeed is feasible: you just have to do it right. The preceding paragraph provides a clue: don't just generate words at random and look them up; instead, analyze the dictionary first and exploit its structure. One way is to represent the words in the dictionary as a tree. The tree is rooted at the empty symbol and branches on each letter all the way down; its leaves are (of course) the words themselves. However, we can also insert all nontrivial permutations of every word into the tree, too (up to $k!-1$ of them for each word). This can be done efficiently because one does not have to store all those permutations; only the edges in the tree need to be added. The leaves remain the same. In fact, this can be simplified further by insisting that the tree be followed in alphabetical order. In other words, to determine whether a multiset of $k$ characters is in the dictionary, first arrange the elements into sorted order, then look for this sorted "word" in a tree constructed from the sorted representatives of the words in the original dictionary. This will actually be smaller than the original tree because it merges all sets of words that are sort-equivalent, such as {stop, post, pots, opts, spot}.
In fact, in an English dictionary this class of words would never be reached anyway because "so" would be found first. Let's see this in action. The sorted multiset is "opst"; the "o" would branch to all words containing only the letters {o, p, ..., z}, the "p" would branch to all words containing only {o, p, ..., z} and at most one "o", and finally the "s" would branch to the leaf "so"! (I have assumed that none of the plausible candidates "o", "op", "po", "ops", or "pos" are in the dictionary.) A modification is needed to handle wildcards: I'll let the programmer types among you think about that. It won't increase the dictionary size (it should decrease it, in fact); it will slightly slow down the tree traversal, but without changing it in any fundamental way. In any dictionary that contains a single-letter word, like English ("a", "i"), there is no complication: the presence of a wildcard means you can form a word! (This hints that the original question might not be as interesting as it sounds.) The upshot is that a single dictionary lookup requires (a) sorting a $k$-letter multiset and (b) traversing no more than $k$ edges of a tree. The running time is $O(k \log(k))$. If you cleverly generate random multisets in sorted order (I can think of several efficient ways to do this), the running time reduces to $O(k)$. Multiply this by the number of iterations to get the total running time. I bet you could conduct this study with a real Scrabble set and a million iterations in a matter of seconds.
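A minimal sketch of the sorted-representative tree just described, in Python rather than pseudocode (the function names are mine): the dictionary stores each word by its sorted letters, and a lookup walks the sorted rack, either consuming or skipping each tile.

```python
def build_trie(words):
    # Nodes are plain dicts keyed by letter; '$' marks a complete sorted word.
    root = {}
    for w in words:
        node = root
        for ch in sorted(w):
            node = node.setdefault(ch, {})
        node['$'] = True
    return root

def rack_has_word(rack, trie):
    tiles = sorted(rack)

    def walk(i, node):
        if '$' in node:
            return True
        # Try consuming each remaining tile, in sorted order; skipping a
        # tile is just consuming a later one instead.
        return any(tiles[j] in node and walk(j + 1, node[tiles[j]])
                   for j in range(i, len(tiles)))

    return walk(0, trie)
```

Note that sort-equivalent words like {stop, post, pots, opts, spot} collapse to a single path here, exactly as the answer predicts; wildcard handling (branching on every outgoing edge when a '?' is consumed) is left out for brevity.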
10,340
Probability of not drawing a word from a bag of letters in Scrabble
Monte Carlo Approach The quick and dirty approach is to do a Monte Carlo study. Draw $k$ tiles $m$ times and for each draw of $k$ tiles see if you can form a word. Denote the number of times you could form a word by $m_w$. The desired probability would be: $$1 - \frac{m_w}{m}$$ Direct Approach Let the number of words in the dictionary be given by $S$. Let $t_s$ be the number of ways in which we can form the $s^\mbox{th}$ word. Let the number of letters needed by the $s^\mbox{th}$ word be denoted by ${m_a, m_b, ..., m_z}$ (i.e., the $s^\mbox{th}$ word needs $m_a$ number of 'a' letters etc). Let $n$ be the total number of tiles, with $n_a, n_b, ..., n_z$ tiles of each letter. Denote the number of possible $k$-tile draws by $N$. $$N = \binom{n}{k}$$ and $$t_s = \binom{n_a}{m_a} \binom{n_b}{m_b} ... \binom{n_z}{m_z}$$ (Including the impact of wildcard tiles is a bit trickier. I will defer that issue for now.) Thus, the desired probability is: $$1 - \frac{\sum_s{t_s}}{N}$$
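The direct formulas are straightforward to evaluate for a toy bag. Here is a hypothetical Python example (bag and lexicon invented for illustration); note that $\sum_s t_s$ counts a rack once per word it contains, so the expression is exact only when no single rack can contain two distinct words:

```python
from collections import Counter
from math import comb, prod

# Invented toy bag: n_a = 3, n_b = 2, n_c = 1 tiles, so n = 6.
bag = Counter({'a': 3, 'b': 2, 'c': 1})
n, k = sum(bag.values()), 2
lexicon = ['ab', 'ca']

def t_s(word):
    # t_s = C(n_a, m_a) * C(n_b, m_b) * ... over the letters the word needs.
    return prod(comb(bag[ch], m) for ch, m in Counter(word).items())

N = comb(n, k)                                    # all k-tile draws
p_no_word = 1 - sum(t_s(w) for w in lexicon) / N  # 1 - 9/15
```

With $k$ equal to the word length, as here, no draw can contain both words, so the result agrees with exhaustive enumeration.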
10,341
Representing interaction effects in directed acyclic graphs
Pearl's theory of causality is completely non-parametric. Interactions are not made explicit because of that, neither in the graph nor in the structural equations it represents. However, causal effects can vary (wildly) by assumption. If an effect is identified and you estimate it from data non-parametrically, you obtain a complete distribution of causal effects (instead of, say, a single parameter). Accordingly, you can evaluate the causal effect of tobacco exposure conditional on asbestos exposure non-parametrically to see whether it changes, without committing to any functional form. Let's have a look at the structural equations in your case, which correspond to your "DAG" stripped of the red arrow: Mesothelioma = $f_{1}$(Tobacco, Asbestos, $\epsilon_{m}$) Tobacco = $f_{2}$($\epsilon_{t}$) Asbestos = $f_{3}$($\epsilon_{a}$) where the $\epsilon$ are assumed to be independent because of missing dashed arrows between them. We have left the respective functions f() and the distributions of the errors unspecified, except for saying that the latter are independent. Nonetheless, we can apply Pearl's theory and immediately state that the causal effects of both tobacco and asbestos exposure on mesothelioma are identified. This means that if we had infinitely many observations from this process, we could exactly measure the effect of setting the exposures to different levels by simply seeing the incidences of mesothelioma in individuals with different levels of exposure. So we could infer causality without doing an actual experiment. This is because there exist no back-door paths from the exposure variables to the outcome variable. 
So you would get P(mesothelioma | do(Tobacco = t)) = P(mesothelioma | Tobacco = t) The same logic holds for the causal effect of asbestos, which allows you to simply evaluate: P(mesothelioma | Tobacco = t, Asbestos = a) - P(mesothelioma | Tobacco = t', Asbestos = a) in comparison to P(mesothelioma | Tobacco = t, Asbestos = a') - P(mesothelioma | Tobacco = t', Asbestos = a') for all relevant values of t and a in order to estimate the interaction effects. In your concrete example, let's assume that the outcome variable is a Bernoulli variable - you can either have mesothelioma or not - and that a person has been exposed to a very high asbestos level a. Then, it is very likely that he will suffer from mesothelioma; accordingly, the effect of increasing tobacco exposure will be very low. On the other hand, if asbestos levels a' are very low, increasing tobacco exposure will have a greater effect. This would constitute an interaction between the effects of tobacco and asbestos. Of course, non-parametric estimation can be extremely demanding and noisy with finite data and lots of different t and a values, so you might think about assuming some structure in f(). But basically you can do it without that.
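To make the Bernoulli saturation argument concrete, here is a small simulation of one entirely invented structural equation $f_1$ in Python: once the asbestos contribution pushes the risk to its ceiling of 1, the tobacco contrast vanishes, which is exactly the kind of interaction the nonparametric contrasts above would detect.

```python
import random

random.seed(0)

def p_meso(n, t, a):
    # Invented f1: risk grows in both exposures but saturates at 1.
    risk = min(1.0, 0.05 + 0.10 * t + 0.50 * a)
    return sum(random.random() < risk for _ in range(n)) / n

n = 200_000
# Effect of tobacco (t = 1 vs t = 0) at low vs very high asbestos:
effect_low_a  = p_meso(n, 1, 0.0) - p_meso(n, 0, 0.0)  # around 0.10
effect_high_a = p_meso(n, 1, 2.0) - p_meso(n, 0, 2.0)  # 0: risk is capped
```

No product term was written into $f_1$; the interaction emerges from the nonlinearity alone, which is why the nonparametric contrasts do not need a functional-form commitment to find it.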
10,342
Representing interaction effects in directed acyclic graphs
The simple answer is that you already do. Conventional DAGs do not only represent main effects but rather the combination of main effects and interactions. Once you have drawn your DAG, you already assume that any variables pointing to the same outcome can modify the effect of the others pointing to the same outcome. It is a modeling assumption, separate from the DAG, which presumes the lack of an interaction. In addition, interaction can occur without including an explicit interaction term in your model. If you include main effects only in a model for the risk ratio of Y with respect to treatment T and covariate Q, the estimate of the risk difference will differ depending on the level of Q. In order to accommodate all these possibilities nonparametrically, DAGs make only the weakest assumptions on the functional form of the relationships among the variables, and assuming no interaction is a stronger assumption than allowing for an interaction. This again is to say that DAGs already allow for interaction without any adjustment. See Vanderweele (2009) for a discussion of interaction that uses conventional DAGs but allows for interaction. Bollen & Paxton (1998) and Muthén & Asparouhov (2015) both demonstrate interactions in path models with latent variables, but these interactions explicitly refer to product terms in a parametric model rather than to interactions broadly. I have also seen diagrams similar to yours where the causal arrow points to a path, but strictly speaking a path is not a unique quantity that a variable can have a causal effect on (even though that may be how we want to interpret our models); it simply represents the presence of a causal effect, not its magnitude. Bollen, K. A., & Paxton, P. (1998). Interactions of latent variables in structural equation models. Structural Equation Modeling: A Multidisciplinary Journal, 5(3), 267-293. Asparouhov, T. & Muthén, B.
(2020): Bayesian estimation of single and multilevel models with latent variable interactions, Structural Equation Modeling: A Multidisciplinary Journal VanderWeele, T. J. (2009). On the distinction between interaction and effect modification. Epidemiology, 20(6), 863-871.
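The point that interaction can appear without a product term is easy to see numerically. Below is a tiny Python illustration with invented coefficients: a main-effects-only model on the log-risk (risk-ratio) scale still yields risk differences for T that depend on Q.

```python
import math

# Invented main-effects model on the log-risk scale:
#   log P(Y = 1) = b0 + bT*T + bQ*Q   (no product term anywhere)
b0, bT, bQ = math.log(0.02), math.log(2.0), math.log(3.0)

def risk(t, q):
    return math.exp(b0 + bT * t + bQ * q)

# The risk *ratio* for T is constant (2.0), yet the risk *difference* is not:
rd_at_q0 = risk(1, 0) - risk(0, 0)  # 0.04 - 0.02 = 0.02
rd_at_q1 = risk(1, 1) - risk(0, 1)  # 0.12 - 0.06 = 0.06
```

So whether "interaction" is present depends on the scale of the effect measure, which is the distinction VanderWeele (2009) draws out.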
10,343
Representing interaction effects in directed acyclic graphs
A new method of representing interactions by creating dedicated nodes was proposed and termed "IDAG" since this question was asked. In my understanding, the example sentence from the question, "asbestos exposure causes a change in the direct causal effect of tobacco smoke exposure on risk of mesothelioma", would be represented by drawing an arrow from Asbestos into a dedicated interaction node, roughly Asbestos → Δ(Tobacco → Mesothelioma), i.e. a node standing for the change in the effect of tobacco on mesothelioma. The limitation of this approach is that you would need to represent all other effects on Tobacco and Mesothelioma separately. For details see: Anton Nilsson and others, A directed acyclic graph for interactions, International Journal of Epidemiology, Volume 50, Issue 2, April 2021, Pages 613–619, https://doi.org/10.1093/ije/dyaa211
10,344
Representing interaction effects in directed acyclic graphs
If you want to estimate the non-separable non-linear structural equations directly, there is a growing econometrics literature on this. You do, of course, need to make some assumptions in order to ensure statistical identification (even if you have built a defensible case for causal identification using graphical criteria and causal calculus), but these are not as restrictive as in the linear or parametric nonlinear case. Note that nonparametric quantile regression is equivalent to a non-separable model under certain conditions, so that gives you a fairly feasible implementation option. Breunig, C. (2020). Specification Testing in Nonparametric Instrumental Quantile Regression. Econometric Theory. https://doi.org/10.1017/S0266466619000288 Dunker, F. (April 16, 2020). Nonparametric instrumental variable regression and quantile regression with full independence. Repository arXiv. https://arxiv.org/pdf/1511.03977.pdf Chernozhukov, V., Fernández-Val, I., Newey, W., Stouli, S., & Vella, F. (2020). Semiparametric Estimation of Structural Functions in Nonseparable Triangular Models. Quantitative Economics, 11, 503-633. https://qeconomics.org/ojs/index.php/qe/article/viewFile/1328/1320 Babii, A., & Florens, S.P. (Jan. 30, 2020). Are unobservables separable? Repository arXiv. http://arxiv.org/pdf/1705.01654 Su, L., Tu, Y., & Ullah, A. (2015). Testing Additive Separability of Error Term in Nonparametric Structural Models. Econometric Reviews, 34(6-10), 1057-1088. https://doi.org/10.1080/07474938.2014.956621 Lu, X., & White, H. (2014). Testing for separability in structural equations. Journal of Econometrics, 182(1), 14–26. https://doi.org/10.1016/j.jeconom.2014.04.005
10,345
Square of normal distribution with specific variance
To close this one: $$ X\sim N(0,\sigma^2/4) \Rightarrow \frac {X^2}{\sigma^2/4}\sim \chi^2_1 \Rightarrow X^2 = \frac {\sigma^2}{4}\chi^2_1 = Q\sim \text{Gamma}(1/2, \sigma^2/2)$$ with $$E(Q) = \frac {\sigma^2}{4},\;\; \text{Var}(Q) = \frac {\sigma^4}{8}$$ RESPONSE TO QUESTION IN THE COMMENT If $$X\sim N(\mu,\sigma^2/4)$$ then $X/(\sigma/2)\sim N(2\mu/\sigma,\,1)$ and so $$\frac {X^2}{\sigma^2/4} \sim \chi^2_{1,NC}\left(\lambda=\frac{4\mu^2}{\sigma^2}\right),$$ where $\chi^2_{1,NC}(\lambda)$ represents a Non-Central Chi-square with one degree of freedom, and $\lambda$ — the square of the standardized mean — is the non-centrality parameter. Then $$X^2 =\frac{\sigma^2}{4} \chi^2_{1,NC}(\lambda)$$ can be treated as a version of the Generalized Chi-square.
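As a quick sanity check of the stated moments (this simulation is illustrative and not part of the original answer; the choice sigma = 2 is arbitrary), one can simulate $X\sim N(0,\sigma^2/4)$ and compare the sample moments of $Q=X^2$ with $\sigma^2/4$ and $\sigma^4/8$:

```python
import random

# Simulate X ~ N(0, sigma^2/4) and check E(X^2) = sigma^2/4, Var(X^2) = sigma^4/8.
random.seed(0)
sigma = 2.0
n = 100_000
q = [random.gauss(0.0, sigma / 2) ** 2 for _ in range(n)]  # sd of X is sigma/2

mean_q = sum(q) / n
var_q = sum((v - mean_q) ** 2 for v in q) / (n - 1)
print(mean_q, var_q)  # close to sigma^2/4 = 1.0 and sigma^4/8 = 2.0
```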
10,346
Finding the PDF given the CDF
As user28 said in comments above, the pdf is the first derivative of the cdf for a continuous random variable, and the first difference of the cdf for a discrete random variable. Wherever the cdf has a jump discontinuity, the distribution has an atom (a point mass) there, so it is not purely continuous at that point. Dirac delta "functions" can be used to represent these atoms.
10,347
Finding the PDF given the CDF
Let $F(x)$ denote the cdf; then you can always approximate the pdf of a continuous random variable by calculating $$ \frac{F(x_2) - F(x_1)}{x_2 - x_1},$$ where $x_1$ and $x_2$ are on either side of the point where you want to know the pdf and the distance $|x_2 - x_1|$ is small.
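A minimal sketch of this finite-difference approximation, using the standard normal as an example (the distribution, the evaluation point, and the step size h are illustrative choices, not from the original answer):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pdf_from_cdf(F, x, h=1e-5):
    """Central-difference approximation (F(x + h) - F(x - h)) / (2h)."""
    return (F(x + h) - F(x - h)) / (2.0 * h)

x = 0.7
approx = pdf_from_cdf(normal_cdf, x)
exact = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
print(approx, exact)  # the two values agree to several decimal places
```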
10,348
Finding the PDF given the CDF
Differentiating the CDF does not always help. Consider the CDF F(x) = (1/4) + ((4x - x*x) / 8) for 0 <= x < 2. Differentiating it you'll get ((2 - x) / 4); substituting 0 in it gives the value (1/2), which is clearly wrong, as P(X = 0) is clearly (1/4), the size of the jump in F at 0. Instead what you should do is calculate the difference between F(x) and lim F(x - h) as h tends to 0 from the positive side of x.
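The same example can be sketched numerically (the function F below encodes the CDF from this answer, assuming F(x) = 0 for x < 0 and F(x) = 1 for x >= 2; the step h is an illustrative choice):

```python
def F(x):
    """The mixed CDF from the example: a jump of size 1/4 at x = 0."""
    if x < 0:
        return 0.0
    if x < 2:
        return 0.25 + (4 * x - x * x) / 8.0
    return 1.0

h = 1e-9
jump_at_0 = F(0.0) - F(0.0 - h)              # P(X = 0): the size of the jump
right_derivative_at_0 = (F(h) - F(0.0)) / h  # density just to the right of 0

print(jump_at_0, right_derivative_at_0)  # 0.25 and roughly 0.5
```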
10,349
Interpreting plot of residuals vs. fitted values from Poisson regression
This is the appearance you expect of such a plot when the dependent variable is discrete. Each curvilinear trace of points on the plot corresponds to a fixed value $k$ of the dependent variable $y$. Every case where $y=k$ has a prediction $\hat{y}$; its residual--by definition--equals $k-\hat{y}$. The plot of $k-\hat{y}$ versus $\hat{y}$ is obviously a line with slope $-1$. In Poisson regression, the x-axis is shown on a log scale: it is $\log(\hat{y})$. The curves now bend down exponentially. As $k$ varies, these curves rise by integral amounts. Exponentiating them gives a set of quasi-parallel curves. (To prove this, the plot will be explicitly constructed below, separately coloring the points by the values of $y$.) We can reproduce the plot in question quite closely by means of a similar but arbitrary model (using small random coefficients):

# Create random data for a random model.
set.seed(17)
n <- 2^12                          # Number of cases
k <- 12                            # Number of variables
beta <- rnorm(k, sd=0.2)           # Model coefficients
x <- matrix(rnorm(n*k), ncol=k)    # Independent values
y <- rpois(n, lambda=exp(-0.5 + x %*% beta + 0.1*rnorm(n)))

# Wrap the data into a data frame, create a formula, and run the model.
df <- data.frame(cbind(y,x))
s.formula <- apply(matrix(1:k, nrow=1), 1, function(i) paste("V", i+1, sep=""))
s.formula <- paste("y ~", paste(s.formula, collapse="+"))
modl <- glm(as.formula(s.formula), family=poisson, data=df)

# Construct a residual vs. prediction plot.
b <- coefficients(modl)
y.hat <- x %*% b[-1] + b[1]        # *Logs* of the predicted values
y.res <- y - exp(y.hat)            # Residuals
colors <- 1:(max(y)+1)             # One color for each possible value of y
plot(y.hat, y.res, col=colors[y+1], main="Residuals v. Fitted")
10,350
Interpreting plot of residuals vs. fitted values from Poisson regression
Sometimes stripes like these in residual plots represent points with (almost) identical observed values that get different predictions. Look at your target values: how many unique values are there? If my suggestion is correct there should be 9 unique values in your training data set.
10,351
Interpreting plot of residuals vs. fitted values from Poisson regression
This pattern is characteristic of an incorrect match of the family and/or link. If you have overdispersed data then perhaps you should consider the negative binomial (count) or gamma (continuous) distributions. Also, when using generalized linear models you should be plotting your residuals against the transformed linear predictor, not the raw predictors. For the Poisson family the transformation is to take 2 times the square root of the predicted values and plot your residuals against that. Furthermore, the residuals should not be exclusively Pearson residuals; try deviance residuals and studentized residuals as well.
10,352
Evaluating logistic regression and interpretation of Hosmer-Lemeshow Goodness of Fit
There are several issues to address. $R^2$ measures by themselves never measure goodness of fit; they measure mainly predictive discrimination. Goodness of fit only comes from comparing $R^2$ with the $R^2$ from a richer model. The Hosmer-Lemeshow test is for overall calibration error, not for any particular lack of fit such as quadratic effects. It does not properly take overfitting into account, depends arbitrarily on the choice of bins and the method of computing quantiles, and often has power that is too low. For these reasons the Hosmer-Lemeshow test is no longer recommended. Hosmer et al. have a better one-d.f. omnibus test of fit, implemented in the R rms package residuals.lrm function. For your case, goodness of fit can be assessed by jointly testing (in a "chunk" test) the contribution of all the square and interaction terms. But I recommend specifying the model to make it more likely to fit up front (especially with regard to relaxing linearity assumptions using regression splines) and using the bootstrap to estimate overfitting and to get an overfitting-corrected high-resolution smooth calibration curve to check absolute accuracy. These are done using the R rms package. On the last point, I prefer the philosophy that models be flexible (as limited by the sample size, anyway) and that we concentrate more on "fit" than "lack of fit".
10,353
Evaluating logistic regression and interpretation of Hosmer-Lemeshow Goodness of Fit
From Wikipedia: The test assesses whether or not the observed event rates match expected event rates in subgroups of the model population. The Hosmer–Lemeshow test specifically identifies subgroups as the deciles of fitted risk values. Models for which expected and observed event rates in subgroups are similar are called well calibrated. Its meaning: after building the model and scoring the predicted y, you cross-check whether the predicted event rates across the 10 deciles are similar to the actual event rates. So the hypotheses will be $H_0$: Actual and predicted event rates are similar across the 10 deciles; $H_1$: they are not the same. Hence if the p-value is less than .05, the model is not well calibrated and you need to refine it. I hope this answers some of your query.
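For illustration, here is a sketch of a Hosmer–Lemeshow-style statistic over deciles of fitted risk (the function name, the variance-based denominator, and the toy data are my own illustrative choices; published implementations differ in the exact grouping and denominator conventions):

```python
import random

def hosmer_lemeshow(y, p, groups=10):
    """Chi-square-style calibration statistic over `groups` bins of fitted risk.

    Cases are sorted by predicted probability p and split into roughly
    equal-sized groups; the observed event count in each group is compared
    with the sum of predicted probabilities (the expected count).
    """
    pairs = sorted(zip(p, y))
    n = len(pairs)
    stat = 0.0
    for g in range(groups):
        chunk = pairs[g * n // groups:(g + 1) * n // groups]
        expected = sum(pi for pi, _ in chunk)           # expected event count
        observed = sum(yi for _, yi in chunk)           # actual event count
        variance = sum(pi * (1 - pi) for pi, _ in chunk)
        if variance > 0:
            stat += (observed - expected) ** 2 / variance
    return stat  # refer to a chi-square with groups - 2 df

# Well-calibrated predictions (y drawn from p itself) give a modest value;
# badly miscalibrated ones blow the statistic up.
random.seed(1)
p = [random.uniform(0.05, 0.95) for _ in range(2000)]
y = [1 if random.random() < pi else 0 for pi in p]
hl_good = hosmer_lemeshow(y, p)
hl_bad = hosmer_lemeshow(y, [1 - pi for pi in p])
print(hl_good < hl_bad)  # True
```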
10,354
Evaluating logistic regression and interpretation of Hosmer-Lemeshow Goodness of Fit
This is rather moot following @FrankHarrell's answer, but a fan of the H–L test would infer from that result that despite your inclusion of quadratic terms & some† 2nd-order interactions, the model still showed significant lack of fit, & that perhaps an even more complex model would be appropriate. You're testing the fit of precisely the model you specified, not of the simpler 1st-order model. † It's not a full 2nd-order model—there are three interactions to go.
10,355
Why is the expectation maximization algorithm used?
The question is legit and I had the same confusion when I first learnt the EM algorithm. In general terms, the EM algorithm defines an iterative process that allows to maximize the likelihood function of a parametric model in the case in which some variables of the model are (or are treated as) "latent" or unknown. In theory, for the same purpose, you can use a minimization algorithm to numerically find the maximum of the likelihood function for all parameters. However in real situations this minimization would be:
much more computationally intensive
less robust
A very common application of the EM method is fitting a mixture model. In this case, considering the variable that assigns each sample to one of the components as a "latent" variable greatly simplifies the problem. Let's look at an example. We have $N$ samples $s = \{s_i\}$ extracted from a mixture of 2 normal distributions. To find the parameters without EM we should minimize (absorbing the normalizing constants into the weights $a_k$): $$-\log \mathcal{L}(s,\theta) = -\sum_{i=1}^N \log\Big[ a_1 \exp\Big( -\frac{(s_i-\mu_1)^2}{2\sigma_1^2}\Big) + a_2 \exp\Big(-\frac{(s_i-\mu_2)^2}{2\sigma_2^2}\Big) \Big]$$ On the contrary, using the EM algorithm, we first "assign" each sample to a component (E step) and then fit (or maximize the likelihood of) each component separately (M step). In this example the M-step is simply a weighted mean to find $\mu_k$ and $\sigma_k$. Iterating over these two steps is a simpler and more robust way to minimize the negative log-likelihood.
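The E and M steps above can be sketched in a few lines (a minimal illustration, not a production implementation; the true means, starting values, sample sizes, and iteration count are all arbitrary illustrative choices):

```python
import math
import random

# Two well-separated normal components; weights a, means mu and variances var
# are all re-estimated from a rough starting guess.
random.seed(2)
data = ([random.gauss(-2.0, 1.0) for _ in range(400)]
        + [random.gauss(3.0, 1.0) for _ in range(400)])

def normal_pdf(x, m, v):
    return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)

a = [0.5, 0.5]
mu = [-1.0, 1.0]
var = [1.0, 1.0]

for _ in range(50):
    # E step: responsibility of each component for each sample.
    resp = []
    for x in data:
        w = [a[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)]
        s = sum(w)
        resp.append([wk / s for wk in w])
    # M step: each component is fitted separately by a weighted mean/variance.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        a[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk

print(sorted(mu))  # close to the true means -2 and 3
```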
10,356
Why is the expectation maximization algorithm used?
EM is not needed instead of some numerical technique, because EM is a numerical method as well. So it's not a substitute for Newton-Raphson. EM is for the specific case when you have missing values in your data matrix. Consider a sample $X = (X_{1},...,X_{n})$ which has conditional density $f_{X|\Theta}(x|\theta)$. Then the log-likelihood of this is $$l(\theta;X) = \log f_{X|\Theta}(X|\theta)$$ Now suppose that you do not have a complete data set, such that $X$ is made up of observed data $Y$ and missing (or latent) variables $Z$, so that $X=(Y,Z)$. Then the log-likelihood for the observed data is $$l_{obs}(\theta,Y)=\log \int f_{X|\Theta}(Y,z|\theta)\nu_{z}(dz)$$ In general you cannot compute this integral directly and you will not get a closed-form solution for $l_{obs}(\theta,Y)$. For this purpose you use the EM method, which iterates over two steps. At the $(i+1)^{th}$ iteration, the expectation step computes $$Q(\theta|\theta^{(i)}) = \mathrm{E}_{\theta^{(i)}}[l(\theta;X)|Y]$$ where $\theta^{(i)}$ is the estimate of $\Theta$ from the $i^{th}$ step. Then the maximization step maximizes $Q(\theta|\theta^{(i)})$ with respect to $\theta$ and sets $\theta^{(i+1)} = \arg\max_{\theta} Q(\theta|\theta^{(i)})$. You then repeat these steps until the method converges to some value, which will be your estimate. If you need more information on the method, its properties, proofs or applications just give a look at the corresponding Wiki article.
10,357
Why is the expectation maximization algorithm used?
EM is used because it's often infeasible or impossible to directly calculate the parameters of a model that maximizes the probability of a dataset given that model.
10,358
Why is the Fisher Information matrix positive semidefinite?
Check this out: http://en.wikipedia.org/wiki/Fisher_information#Matrix_form From the definition, we have $$ I_{ij} = \mathrm{E}_\theta \left[ \left(\partial_i \log f_{X\mid\Theta}(X\mid\theta)\right) \left(\partial_j \log f_{X\mid\Theta}(X\mid\theta)\right)\right] \, , $$ for $i,j=1,\dots,k$, in which $\partial_i=\partial /\partial \theta_i$. Your expression for $I_{ij}$ follows from this one under regularity conditions. For a nonnull vector $u = (u_1,\dots,u_k)^\top\in\mathbb{R}^k$, it follows from the linearity of the expectation that $$ \sum_{i,j=1}^k u_i I_{ij} u_j = \sum_{i,j=1}^k \left( u_i \mathrm{E}_\theta \left[ \left(\partial_i \log f_{X\mid\Theta}(X\mid\theta)\right) \left(\partial_j \log f_{X\mid\Theta}(X\mid\theta)\right)\right] u_j \right) \\ = \mathrm{E}_\theta \left[ \left(\sum_{i=1}^k u_i \partial_i \log f_{X\mid\Theta}(X\mid\theta)\right) \left(\sum_{j=1}^k u_j \partial_j \log f_{X\mid\Theta} (X\mid\theta)\right)\right] \\ = \mathrm{E}_\theta \left[ \left(\sum_{i=1}^k u_i \partial_i \log f_{X\mid\Theta}(X\mid\theta)\right)^2 \right] \geq 0 \, . $$ If this component-wise notation is too ugly, note that the Fisher Information matrix $H=(I_{ij})$ can be written as $H = \mathrm{E}_\theta\left[S S^\top\right]$, in which the scores vector $S$ is defined as $$ S = \left( \partial_1 \log f_{X\mid\Theta}(X\mid\theta), \dots, \partial_k \log f_{X\mid\Theta}(X\mid\theta) \right)^\top \, . $$ Hence, we have the one-liner $$ u^\top H u = u^\top \mathrm{E}_\theta[S S^\top] u = \mathrm{E}_\theta[u^\top S S^\top u] = \mathrm{E}_\theta\left[|| S^\top u ||^2\right] \geq 0. $$
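The one-liner can also be checked empirically (an illustrative sketch, not part of the proof; I assume the $N(\mu, \sigma^2)$ model here because its score vector has a simple closed form, and the sample size and seed are arbitrary):

```python
import random

# Score vector of the N(mu, s2) model (s2 = variance):
#   S = ( (x - mu)/s2 ,  -1/(2 s2) + (x - mu)^2 / (2 s2^2) )
random.seed(3)
mu, s2 = 1.0, 4.0
xs = [random.gauss(mu, s2 ** 0.5) for _ in range(50_000)]
scores = [((x - mu) / s2,
           -1.0 / (2 * s2) + (x - mu) ** 2 / (2 * s2 ** 2)) for x in xs]

quads = []
for _ in range(5):
    u = (random.uniform(-1, 1), random.uniform(-1, 1))
    # u^T H_hat u, computed directly as the mean of (S^T u)^2:
    quad = sum((s0 * u[0] + s1 * u[1]) ** 2 for s0, s1 in scores) / len(scores)
    quads.append(quad)

print(all(q >= 0 for q in quads))  # True: each is a mean of squares
```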
10,359
Why is the Fisher Information matrix positive semidefinite?
WARNING: not a general answer! If $f(X|\theta)$ corresponds to a full-rank exponential family, then the negative Hessian of the log-likelihood is the covariance matrix of the sufficient statistic. Covariance matrices are always positive semi-definite. Since the Fisher information is a convex combination of positive semi-definite matrices, it must also be positive semi-definite.
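A small numerical illustration of this for the Bernoulli family (my choice of example; the parameter value is arbitrary): in natural form $f(x|\theta) = \exp(\theta x - A(\theta))$ with $A(\theta) = \log(1+e^\theta)$, the negative Hessian is $A''(\theta)$, which should equal the variance $p(1-p)$ of the sufficient statistic $T(x)=x$.

```python
import math

# Bernoulli in natural-parameter form: A(theta) = log(1 + e^theta).
theta = 0.7
p = 1.0 / (1.0 + math.exp(-theta))      # mean parameter

# A''(theta) via a central finite difference
A = lambda t: math.log(1.0 + math.exp(t))
h = 1e-4
neg_hessian = (A(theta + h) - 2 * A(theta) + A(theta - h)) / h**2

var_T = p * (1.0 - p)                   # variance of the sufficient statistic
print(abs(neg_hessian - var_T) < 1e-5)  # the two quantities agree
```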
10,360
Why is there an asymmetry between the training step and evaluation step?
It's funny that the most upvoted answer doesn't really answer the question :) so I thought it would be nice to back this up with a bit more theory - mostly taken from "Data Mining: Practical Machine Learning Tools and Techniques" and Tom Mitchell's "Machine Learning". Introduction. So we have a classifier and a limited dataset, and a certain amount of data must go into the training set while the rest is used for testing (if necessary, a third subset is used for validation). The dilemma we face is this: to find a good classifier, the "training subset" should be as big as possible, but to get a good error estimate the "test subset" should be as big as possible - and both subsets are taken from the same pool. It's obvious that the training set should be bigger than the test set - that is, the split should not be 1:1 (the main goal is to train, not to test) - but it's not clear where the split should be. Holdout procedure. The procedure of splitting the "superset" into subsets is called the holdout method. Note that you may easily get unlucky: examples of a certain class could be missing (or overrepresented) in one of the subsets. This can be addressed via random sampling, which guarantees that each class is properly represented in all data subsets - a procedure called stratified holdout; stratified random sampling with a repeated training-testing-validation process on top of it is called repeated stratified holdout. In a single (nonrepeated) holdout procedure, you might consider swapping the roles of the testing and training data and averaging the two results, but this is only plausible with a 1:1 split between training and test sets, which is not acceptable (see Introduction). But this gives an idea, and an improved method (called cross-validation) is used instead - see below! Cross-validation. In cross-validation, you decide on a fixed number of folds (partitions of the data).
If we use three folds, the data is split into three equal partitions and we use 2/3 for training and 1/3 for testing and repeat the procedure three times so that, in the end, every instance has been used exactly once for testing. This is called threefold cross-validation, and if stratification is adopted as well (which is often true) it is called stratified threefold cross-validation. But, lo and behold, the standard way is not the 2/3:1/3 split. Quoting "Data Mining: Practical Machine Learning Tools and Techniques", The standard way [...] is to use stratified 10-fold cross-validation. The data is divided randomly into 10 parts in which the class is represented in approximately the same proportions as in the full dataset. Each part is held out in turn and the learning scheme trained on the remaining nine-tenths; then its error rate is calculated on the holdout set. Thus the learning procedure is executed a total of 10 times on different training sets (each of which have a lot in common). Finally, the 10 error estimates are averaged to yield an overall error estimate. Why 10? Because "..Extensive tests on numerous datasets, with different learning techniques, have shown that 10 is about the right number of folds to get the best estimate of error, and there is also some theoretical evidence that backs this up.." I haven't found which extensive tests and theoretical evidence they meant, but this one seems like a good start for digging more - if you wish. They basically just say Although these arguments are by no means conclusive, and debate continues to rage in machine learning and data mining circles about what is the best scheme for evaluation, 10-fold cross-validation has become the standard method in practical terms. [...] Moreover, there is nothing magic about the exact number 10: 5-fold or 20-fold cross-validation is likely to be almost as good. Bootstrap, and - finally! - the answer to the original question.
But we still haven't arrived at the answer as to why the 2/3:1/3 split is often recommended. My take is that it's inherited from the bootstrap method, which is based on sampling with replacement. Previously, we put a sample from the "grand set" into exactly one of the subsets. Bootstrapping is different: a sample can easily appear in both the training and the test set. Let's look into one particular scenario where we take a dataset D1 of n instances and sample it n times with replacement to get another dataset D2 of n instances. Now watch closely. Because some elements in D2 will (almost certainly) be repeated, there must be some instances in the original dataset that have not been picked: we will use these as test instances. What is the chance that a particular instance wasn't picked for D2? The probability of being picked on each draw is 1/n, so the opposite is (1 - 1/n). When we multiply these probabilities together, we get (1 - 1/n)^n, which approaches e^-1, about 0.368. This means our test set will be about 1/3 and the training set will be about 2/3. I guess this is the reason why the 1/3:2/3 split is recommended: this ratio is taken from the bootstrap estimation method. Wrapping it up. I want to finish off with a quote from the data mining book (which I cannot prove but assume correct) where they generally recommend to prefer 10-fold cross-validation: The bootstrap procedure may be the best way of estimating error for very small datasets. However, like leave-one-out cross-validation, it has disadvantages that can be illustrated by considering a special, artificial situation [...] a completely random dataset with two classes. The true error rate is 50% for any prediction rule. But a scheme that memorized the training set would give a perfect resubstitution score of 100%, so that the error on the training instances is 0, and the 0.632 bootstrap will mix this in with a weight of 0.368 to give an overall error rate of only 31.6% (0.632 × 50% + 0.368 × 0%), which is misleadingly optimistic.
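The (1 - 1/n)^n argument above is easy to check empirically (a small sketch I'm adding; n and the seed are arbitrary): sample n indices with replacement and count the fraction never picked.

```python
import random

# Sampling n items with replacement from a set of n: the fraction never
# picked approaches (1 - 1/n)^n -> e^-1, i.e. roughly 1/3 ends up
# "out of bag" and can serve as the test set.
random.seed(42)
n = 10_000
d2 = {random.randrange(n) for _ in range(n)}   # distinct indices picked for D2
oob_fraction = (n - len(d2)) / n               # never-picked fraction
print(oob_fraction)                            # should be near exp(-1) ~ 0.368
```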
10,361
Why is there an asymmetry between the training step and evaluation step?
Consider a finite set of m records. If you use all the records as a training set, you could perfectly fit all the points with a polynomial of degree m-1: y = a0 + a1*x + a2*x^2 + ... + a_{m-1}*x^{m-1} Now if you have some new record that was not used in the training set, and the values of its input vector X are different from any vector X used in the training set, what can you tell about the accuracy of the prediction y? I suggest you go through an example with a 1- or 2-dimensional input vector X (in order to visualize the overfitting polynomial) and check how big the prediction error is for some pair (X, y) whose X values are just a little different from the values in the training set. I don't know if this explanation is theoretic enough, but hopefully it helps. I tried to explain the problem on a regression model, as I consider it more intuitively understandable than others (SVM, neural networks...). When you build a model, you should split the data into at least a training set and a test set (some split the data into training, evaluation, and cross-validation sets). Usually 70% of the data is used for the training set and 30% for evaluation; then, when you build the model, you have to check the training error and the test error. If both errors are big, your model is too simple (the model has high bias). On the other hand, if your training error is very small but there is a big difference between the training and test errors, your model is too complex (the model has high variance). The best way to choose the right compromise is to plot the training and test errors for models of various complexity and then choose the one where the test error is minimal (see the picture below).
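A minimal sketch of the interpolating-polynomial example above (the sine function, noise level, and evaluation point are my own illustrative choices): degree m-1 through m points gives essentially zero training error, but the prediction at a nearby new x can be far off.

```python
import numpy as np

# 8 noisy training points; a degree-7 polynomial interpolates them exactly.
rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.1, x_train.size)

coeffs = np.polyfit(x_train, y_train, deg=7)
train_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# Evaluate on a new x between two training values.
x_new = 0.93
pred = np.polyval(coeffs, x_new)
true = np.sin(2 * np.pi * x_new)
print(train_err)           # essentially zero
print(abs(pred - true))    # far larger than the training error
```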
10,362
Why is there an asymmetry between the training step and evaluation step?
This is the problem of generalization—that is, how well our hypothesis will correctly classify future examples that are not part of the training set. Please see this fantastic example of what happens when your model fits only the data you have and not new data: the Titius-Bode law
10,363
Why is there an asymmetry between the training step and evaluation step?
So far @andreiser gave a brilliant answer to the second part of OP's question regarding the training/testing data split, and @niko explained how to avoid overfitting, but nobody has gotten to the merit of the question: why using different data for training and evaluation helps us avoid overfitting. Our data is split into training instances, validation instances, and test (evaluation) instances. Now we have a model, let's call it $\mathfrak{M}$. We fit it using the training instances and check its accuracy using the validation instances. We may even do cross validation. But why on earth would we check it again using the test instances? The problem is that in practice, we try many different models, $\mathfrak{M}_1, ..., \mathfrak{M}_n$, with different parameters. This is where overfitting occurs. We selectively choose the model that performs the best on the validation instances. But our goal is to have a model that performs well in general. This is why we have the test instances - unlike the validation instances, test instances aren't involved in choosing the model. It is important to realise what the different roles of the validation and test instances are. Training instances - used to fit the models. Validation instances - used to choose a model. Test (evaluation) instances - used to measure a model's accuracy on new data. See page 222 of The Elements of Statistical Learning: Data Mining, Inference, and Prediction for more details.
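The three-way split described above can be sketched in a few lines (the 60/20/20 proportions and dummy data are purely illustrative):

```python
import random

# Three-way split: training to fit each candidate model, validation to
# choose among them, test to report the chosen model's accuracy once.
random.seed(0)
data = list(range(100))
random.shuffle(data)

train = data[:60]         # fit each candidate M_1, ..., M_n here
validation = data[60:80]  # pick the M_i with the best validation score
test = data[80:]          # report that single model's accuracy at the end

print(len(train), len(validation), len(test))
```

The key property is that the three subsets are disjoint, so the test instances play no role in either fitting or model selection.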
10,364
Why do we model noise in linear regression but not logistic regression?
Short answer: we do, just implicitly. A possibly more enlightening way of looking at things is the following. In Ordinary Least Squares, we can consider that we do not model the errors or noise as $N(0,\sigma^2)$ distributed, but we model the observations as $N(x\beta,\sigma^2)$ distributed. (Of course, this is precisely the same thing, just looking at it in two different ways.) Now the analogous statement for logistic regression becomes clear: here, we model the observations as Bernoulli distributed with parameter $p(x)=\frac{1}{1+e^{-x\beta}}$. We can flip this last way of thinking around if we want: we can indeed say that we are modeling the errors in logistic regression. Namely, we are modeling them as "the difference between a Bernoulli distributed variable with parameter $p(x)$ and $p(x)$ itself". This is just very unwieldy, and this distribution does not have a name, plus the error here depends on our independent variables $x$ (in contrast to the homoskedasticity assumption in OLS, where the error is independent of $x$), so this way of looking at things is just not used as often.
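To make the "we model the observations, not the errors" view concrete, here's a small simulation (the coefficient vector, sample size, and seed are illustrative): the observations are Bernoulli with $p(x) = 1/(1+e^{-x\beta})$, and the implicit "error" $y - p(x)$ has mean zero but an $x$-dependent variance $p(x)(1-p(x))$.

```python
import numpy as np

# Simulate logistic-regression data: y ~ Bernoulli(p(x)).
rng = np.random.default_rng(1)
beta = np.array([0.5, -1.0])
X = rng.normal(size=(50_000, 2))

p = 1.0 / (1.0 + np.exp(-X @ beta))   # modeled success probability
y = rng.binomial(1, p)                # the observations themselves are random

# The implicit "error" y - p(x): mean ~0, but variance p(1-p) depends on x,
# unlike the homoskedastic Gaussian error of OLS.
resid = y - p
print(abs(resid.mean()) < 0.01)
```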
10,365
Why do we model noise in linear regression but not logistic regression?
To supplement Stephan's answer, similar to how in linear regression the target $y$ is generated by a ''systematic'' component involving $x$ and an independent ''noise'' component, in logistic regression (and softmax regression more generally) you can actually also think of the target $y$ as computed by the following operation involving $x$ and some noise $\epsilon$: $$ y = \arg \max_{i \in \{0, 1\}} (\alpha_i + \epsilon_i)$$ where $\alpha_0 = 0, \alpha_1 = \theta^T x$, and $\epsilon_0, \epsilon_1$ are independent "noise" variables following the $\text{Gumbel}(0,1)$ distribution; you can check that this way $y$ follows a Bernoulli distribution with $\mathbb{P}(y=1|x)= 1/(1+e^{-\theta^T x})$ as desired. This way of sampling from a categorical (in this case Bernoulli) distribution is widely known as the Gumbel-max trick in machine learning: https://lips.cs.princeton.edu/the-gumbel-max-trick-for-discrete-distributions/ (The basic idea comes from the reparameterization trick. There's also a closely related Gumbel-softmax trick that essentially makes the above $\arg \max$ operation of Gumbel-max differentiable.)
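You can verify the Gumbel-max construction numerically (the value standing in for $\theta^T x$ and the sample size are arbitrary): the empirical frequency of $\arg\max_i(\alpha_i + \epsilon_i) = 1$ should match the sigmoid probability.

```python
import numpy as np

# Monte Carlo check: with alpha_0 = 0, alpha_1 = theta^T x, and independent
# Gumbel(0,1) noise, argmax_i (alpha_i + eps_i) is Bernoulli(sigmoid(theta^T x)).
rng = np.random.default_rng(0)
alpha1 = 0.8                     # stand-in for theta^T x
n = 200_000

g0 = rng.gumbel(size=n)          # equivalently -log(-log(U)), U ~ Uniform(0,1)
g1 = rng.gumbel(size=n)
y = (alpha1 + g1 > 0.0 + g0).astype(int)   # argmax over {0, 1}

p_hat = y.mean()
p_true = 1.0 / (1.0 + np.exp(-alpha1))
print(abs(p_hat - p_true) < 0.01)          # empirical frequency matches sigmoid
```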
10,366
Hidden Markov Model vs Recurrent Neural Network
Summary Hidden Markov Models (HMMs) are much simpler than Recurrent Neural Networks (RNNs), and rely on strong assumptions which may not always be true. If the assumptions are true then you may see better performance from an HMM since it is less finicky to get working. An RNN may perform better if you have a very large dataset, since the extra complexity can take better advantage of the information in your data. This can be true even if the HMM's assumptions are true in your case. Finally, don't be restricted to only these two models for your sequence task; sometimes simpler regressions (e.g. ARIMA) can win out, and sometimes other complicated approaches such as Convolutional Neural Networks might be the best. (Yes, CNNs can be applied to some kinds of sequence data just like RNNs.) As always, the best way to know which model is best is to make the models and measure performance on a held-out test set. Strong Assumptions of HMMs State transitions only depend on the current state, not on anything in the past. This assumption does not hold in a lot of the areas I am familiar with. For example, pretend you are trying to predict for every minute of the day whether a person was awake or asleep from movement data. The chance of someone transitioning from asleep to awake increases the longer the person has been in the asleep state. An RNN could theoretically learn this relationship and exploit it for higher predictive accuracy. You can try to get around this, for example by including the previous state as a feature, or defining composite states, but the added complexity does not always increase an HMM's predictive accuracy, and it definitely doesn't help computation times. You must pre-define the total number of states. Returning to the sleep example, it may appear as if there are only two states we care about. However, even if we only care about predicting awake vs. asleep, our model may benefit from figuring out extra states such as driving, showering, etc. (e.g.
showering usually comes right before sleeping). Again, an RNN could theoretically learn such a relationship if shown enough examples of it. Difficulties with RNNs It may seem from the above that RNNs are always superior. I should note, though, that RNNs can be difficult to get working, especially when your dataset is small or your sequences very long. I've personally had trouble getting RNNs to train on some of my data, and I have a suspicion that most published RNN methods/guidelines are tuned to text data. When trying to use RNNs on non-text data I have had to perform a wider hyperparameter search than I care to in order to get good results on my particular datasets. In some cases, I've found the best model for sequential data is actually a UNet-style (https://arxiv.org/pdf/1505.04597.pdf) Convolutional Neural Network model, since it is easier and faster to train and is able to take the full context of the signal into account.
Hidden Markov Model vs Recurrent Neural Network
Summary Hidden Markov Models (HMMs) are much simpler than Recurrent Neural Networks (RNNs), and rely on strong assumptions which may not always be true. If the assumptions are true then you may see be
Hidden Markov Model vs Recurrent Neural Network

Summary

Hidden Markov Models (HMMs) are much simpler than Recurrent Neural Networks (RNNs), and rely on strong assumptions which may not always be true. If the assumptions are true then you may see better performance from an HMM since it is less finicky to get working.

An RNN may perform better if you have a very large dataset, since the extra complexity can take better advantage of the information in your data. This can be true even if the HMM's assumptions are true in your case.

Finally, don't be restricted to only these two models for your sequence task; sometimes simpler regressions (e.g. ARIMA) can win out, and sometimes other complicated approaches such as Convolutional Neural Networks might be the best. (Yes, CNNs can be applied to some kinds of sequence data just like RNNs.) As always, the best way to know which model is best is to make the models and measure performance on a held-out test set.

Strong Assumptions of HMMs

State transitions only depend on the current state, not on anything in the past. This assumption does not hold in a lot of the areas I am familiar with. For example, pretend you are trying to predict, for every minute of the day, whether a person is awake or asleep from movement data. The chance of someone transitioning from asleep to awake increases the longer the person has been in the asleep state. An RNN could theoretically learn this relationship and exploit it for higher predictive accuracy. You can try to get around this, for example by including the previous state as a feature or defining composite states, but the added complexity does not always increase an HMM's predictive accuracy, and it definitely doesn't help computation times.

You must pre-define the total number of states. Returning to the sleep example, it may appear as if there are only two states we care about. However, even if we only care about predicting awake vs. asleep, our model may benefit from figuring out extra states such as driving, showering, etc. (e.g. showering usually comes right before sleeping). Again, an RNN could theoretically learn such a relationship if shown enough examples of it.

Difficulties with RNNs

It may seem from the above that RNNs are always superior. I should note, though, that RNNs can be difficult to get working, especially when your dataset is small or your sequences very long. I've personally had trouble getting RNNs to train on some of my data, and I have a suspicion that most published RNN methods/guidelines are tuned to text data. When trying to use RNNs on non-text data I have had to perform a wider hyperparameter search than I care to in order to get good results on my particular datasets.

In some cases, I've found the best model for sequential data is actually a UNet-style (https://arxiv.org/pdf/1505.04597.pdf) Convolutional Neural Network model, since it is easier and faster to train, and is able to take the full context of the signal into account.
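The sleep example above can be made concrete with a small simulation, sketched below under made-up hazard numbers (0.3 for falling asleep, 0.1·d for waking after d consecutive asleep steps). The empirical wake probability rises with sleep duration, which a first-order Markov chain over {awake, asleep} cannot represent:

```python
import random
from collections import defaultdict

def simulate_sleep(n_steps, seed=0):
    """Two-state process whose wake probability grows with time asleep."""
    rng = random.Random(seed)
    state, run, states = "awake", 0, []
    for _ in range(n_steps):
        states.append(state)
        if state == "awake":
            run = 0
            state = "asleep" if rng.random() < 0.3 else "awake"
        else:
            run += 1  # consecutive asleep steps so far
            p_wake = min(0.1 * run, 0.9)  # hazard rises with duration
            state = "awake" if rng.random() < p_wake else "asleep"
    return states

def wake_prob_by_duration(states, min_count=200):
    """Empirical P(awake next | asleep for exactly d consecutive steps)."""
    counts = defaultdict(lambda: [0, 0])  # d -> [wakes, totals]
    run = 0
    for prev, nxt in zip(states, states[1:]):
        if prev == "asleep":
            run += 1
            counts[run][1] += 1
            counts[run][0] += nxt == "awake"
        else:
            run = 0
    return {d: w / t for d, (w, t) in counts.items() if t >= min_count}

probs = wake_prob_by_duration(simulate_sleep(200_000))
print(probs)
```

Printing probs shows the wake probability climbing from roughly 0.1 at one step asleep toward much larger values — exactly the duration dependence that motivates composite states in an HMM or an RNN's memory.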
Hidden Markov Model vs Recurrent Neural Network
Let's first see the differences between the HMM and the RNN. From the paper A tutorial on hidden Markov models and selected applications in speech recognition we can learn that an HMM should be characterized by the following three fundamental problems:

Problem 1 (Likelihood): Given an HMM λ = (A,B) and an observation sequence O, determine the likelihood P(O|λ).

Problem 2 (Decoding): Given an observation sequence O and an HMM λ = (A,B), discover the best hidden state sequence Q.

Problem 3 (Learning): Given an observation sequence O and the set of states in the HMM, learn the HMM parameters A and B.

We can compare the HMM with the RNN from these three perspectives.

Likelihood

Likelihood in HMM (Picture A.5); language model in RNN.

In HMM we calculate the likelihood by $P(O)=\sum_Q P(O, Q) = \sum_Q P(O|Q)P(Q)$, where $Q$ ranges over all the possible hidden state sequences, and the probability is the real probability in the graph. In RNN the equivalent, as far as I know, is the inverse of the perplexity in language modeling, where $\frac{1}{p(X)} = \sqrt[T]{\prod_{t=1}^T \frac{1}{p(x^t|x^{(t-1)},...,x^{(1)})}}$; we don't sum over the hidden states and don't get the exact probability.

Decoding

In HMM the decoding task is computing $v_t(j) = \max_{i=1}^N v_{t-1}(i)a_{ij} b_j(o_t)$ and determining which sequence of variables is the underlying source of some sequence of observations using the Viterbi algorithm, and the length of the result is normally equal to that of the observation; in RNN the decoding is computing $P(y_1, ..., y_O|x_1, ..., x_T) = \prod_{o=1}^O P(y_o|y_1, ..., y_{o-1}, c_o)$, and the length of $Y$ is usually not equal to that of the observation $X$.

Decoding in HMM (Figure A.10); decoding in RNN.

Learning

The learning in HMM is much more complicated than that in RNN. In HMM it usually uses the Baum-Welch algorithm (a special case of the Expectation-Maximization algorithm), while in RNN it is usually gradient descent.

For your subquestions:

Which sequential input problems are best suited for each? When you don't have enough data use the HMM, and when you need to calculate the exact probability the HMM would also be a better fit (generative tasks modeling how the data are generated). Otherwise, you can use the RNN.

Does input dimensionality determine which is a better match? I don't think so, but it may take the HMM more time to learn if the number of hidden states is too large, since the complexity of the algorithms (forward-backward and Viterbi) is basically the square of the number of discrete states.

Are problems which require "longer memory" better suited for an LSTM RNN, while problems with cyclical input patterns (stock market, weather) more easily solved by an HMM? In HMM the current state is also affected by the previous states and observations (via the parent states), and you can try a second-order Hidden Markov Model for "longer memory". I think you can use an RNN to do almost anything.

References

Natural Language Processing with Deep Learning CS224N/Ling284
Hidden Markov Models
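To make the three problems concrete, here is a minimal pure-Python sketch of the forward recursion (Problem 1) and Viterbi decoding (Problem 2) on a made-up two-state, two-symbol HMM; all the numbers in A, B, and pi are invented for illustration:

```python
def forward_likelihood(A, B, pi, obs):
    # Problem 1 (Likelihood): P(O | lambda), the sum over all hidden paths,
    # computed efficiently with the forward recursion.
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

def viterbi(A, B, pi, obs):
    # Problem 2 (Decoding): same recursion with max instead of sum,
    # plus back-pointers to recover the best hidden state sequence.
    n = len(pi)
    v = [pi[i] * B[i][obs[0]] for i in range(n)]
    back = []
    for o in obs[1:]:
        ptr, new_v = [], []
        for j in range(n):
            best = max(range(n), key=lambda i: v[i] * A[i][j])
            ptr.append(best)
            new_v.append(v[best] * A[best][j] * B[j][o])
        back.append(ptr)
        v = new_v
    path = [max(range(n), key=lambda j: v[j])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1], max(v)

# Made-up two-state, two-symbol HMM.
A  = [[0.7, 0.3], [0.4, 0.6]]   # a_ij: transition probabilities
B  = [[0.9, 0.1], [0.2, 0.8]]   # b_j(o): emission probabilities
pi = [0.5, 0.5]                 # initial state distribution
obs = [0, 0, 1, 1]
print(forward_likelihood(A, B, pi, obs))
print(viterbi(A, B, pi, obs))
```

On a sequence this short you can verify both outputs by brute-force enumeration of all 2^4 hidden paths, which is how the forward sum and the Viterbi max are defined in the first place.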
Hidden Markov Model vs Recurrent Neural Network
I found this question because I was wondering about their similarities and differences too. I think it's very important to state that Hidden Markov Models (HMMs) do not have inputs and outputs in the strictest sense.

HMMs are so-called generative models: if you have an HMM, you can generate some observations from it as-is. This is fundamentally different from RNNs, as even if you have a trained RNN, you need to provide input to it.

A practical example where this is important is speech synthesis. The underlying hidden Markov states are phones and the emitted probability events are the acoustics. If you have a word model trained, you can generate as many different realisations of it as you want. But with RNNs, you need to provide at least some input seed to get your output.

You could argue that in HMMs you also need to provide an initial distribution, so it's similar. But if we stick with the speech synthesis example, it is not, because the initial distribution will be fixed (always starting from the first phones of the word).

With RNNs you get a deterministic output sequence for a trained model if you are using the same input seed all the time. With an HMM, you don't, because the transitions and the emissions are always sampled from a probability distribution.
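The "generate observations from it as-is" point can be sketched in a few lines; the two-state parameters below are made up:

```python
import random

def sample_hmm(A, B, pi, n, seed=None):
    # An HMM is generative: with no input at all it emits an observation
    # sequence by alternately sampling emissions and state transitions.
    rng = random.Random(seed)
    states = range(len(pi))
    state = rng.choices(states, weights=pi)[0]
    obs = []
    for _ in range(n):
        obs.append(rng.choices(range(len(B[state])), weights=B[state])[0])
        state = rng.choices(states, weights=A[state])[0]
    return obs

# Made-up toy parameters: two hidden states, two observation symbols.
A  = [[0.8, 0.2], [0.3, 0.7]]  # transitions
B  = [[0.9, 0.1], [0.1, 0.9]]  # emissions
pi = [0.5, 0.5]                # initial distribution
print(sample_hmm(A, B, pi, 10, seed=1))
```

With a fixed seed the draw is reproducible, but without one each call yields a different realisation — in contrast to a trained RNN, which maps the same input seed to the same output deterministically.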
Why do these statements not follow logically from a 95% CI for the mean?
The very meaning of question (5) depends on some undisclosed interpretation of "confidence." I searched the paper carefully and found no attempt to define "confidence" or what it might mean in this context.

The paper's explanation of its answer to question (5) is "... [it] mentions the boundaries of the CI whereas ... a CI can be used to evaluate only the procedure and not a specific interval." This is both specious and misleading. First, if you cannot evaluate the result of the procedure, then what good is the procedure in the first place? Second, the statement in the question is not about the procedure, but about the reader's "confidence" in its results.

The authors defend themselves: "Before proceeding, it is important to recall the correct definition of a CI. A CI is a numerical interval constructed around the estimate of a parameter. Such an interval does not, however, directly indicate a property of the parameter; instead, it indicates a property of the procedure, as is typical for a frequentist technique."

Their bias emerges in the last phrase: "frequentist technique" (written, perhaps, with an implicit sneer). Although this characterization is correct, it is critically incomplete. It fails to notice that a confidence interval is also a property of the experimental methods (how samples were obtained and measured) and, more importantly, of nature herself. That is the only reason why anyone would be interested in its value.

I recently had the pleasure of reading Edward Batschelet's Circular Statistics in Biology (Academic Press, 1981). Batschelet writes clearly and to the point, in a style directed at the working scientist. Here is what he says about confidence intervals:

"An estimate of a parameter without indications of deviations caused by chance fluctuations has little scientific value. ... Whereas the parameter to be estimated is a fixed number, the confidence limits are determined by the sample. They are statistics and, therefore, dependent on chance fluctuations. Different samples drawn from the same population lead to different confidence intervals." [The emphasis is in the original, at pp 84-85.]

Notice the difference in emphasis: whereas the paper in question focuses on the procedure, Batschelet focuses on the sample and specifically on what it can reveal about the parameter and how much that information can be affected by "chance fluctuations." I find this unabashedly practical, scientific approach far more constructive, illuminating, and--ultimately--useful.

A fuller characterization of confidence intervals than offered by the paper therefore would have to proceed something like this:

1. A CI is a numerical interval constructed around the estimate of a parameter.
2. Anyone agreeing with the assumptions underlying the CI construction is justified in saying they are confident that the parameter lies within the interval: this is the meaning of "confident."
3. This meaning is broadly in accord with conventional non-technical meanings of confidence because, under many replications of the experiment (whether or not they actually take place), the CI, although it will vary, is expected to contain the parameter most of the time.

In this fuller, more conventional, and more constructive sense of "confidence," the answer to question (5) is true.
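Batschelet's repeated-experiment reading of "confidence" is easy to check by simulation: fix a true mean, replicate the experiment many times, and count how often the realised interval contains it. A sketch for a normal mean with known sigma (all numbers made up):

```python
import random
from statistics import NormalDist

def ci_coverage(mu, sigma, n, reps, seed=0):
    # Fraction of replicated experiments whose 95% z-interval for the mean
    # (sigma treated as known, for simplicity) contains the fixed true mu.
    rng = random.Random(seed)
    z = NormalDist().inv_cdf(0.975)  # ~1.96
    half = z * sigma / n ** 0.5
    hits = 0
    for _ in range(reps):
        xbar = sum(rng.gauss(mu, sigma) for _ in range(n)) / n
        hits += (xbar - half <= mu <= xbar + half)
    return hits / reps

print(ci_coverage(mu=3.0, sigma=2.0, n=10, reps=20_000))
```

The intervals vary from replication to replication while the parameter stays put, yet about 95% of them cover it — the sense in which a reader may be "confident" in any one of them.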
Why do these statements not follow logically from a 95% CI for the mean?
Questions 1-2, 4: in frequentist analysis, the true mean is not a random variable, thus these probabilities are not defined, whereas in Bayesian analysis the probabilities would depend on the prior.

Question 3: For example, consider a case where we know for sure that the null hypothesis is true. It would still be possible to get these results, but rather unreasonable to say that the null hypothesis is 'unlikely' to be true. We obtained data that is unlikely to occur if the null hypothesis is true, but this does not imply that the null hypothesis is unlikely to be true.

Question 5: This is a bit questionable, as this depends on the definition of "we can be p % confident." If we define the statement to mean the thing that is inferred from p % confidence intervals, the statement is by definition correct. The typical pro-Bayesian argument states that people tend to interpret these statements intuitively to mean "the probability is p %", which would be false (compare the answers to 1-2 and 4).

Question 6: Your explanation "it implies that the true mean is changing from experiment to experiment" is exactly correct.

The article was recently discussed in Andrew Gelman's blog (http://andrewgelman.com/2014/03/15/problematic-interpretations-confidence-intervals/). For example, the issue regarding the interpretation of the statement in question 5 is discussed in the comments.
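The point in Question 3 can be illustrated numerically: build a world where the null hypothesis mu = 0 is true by construction, and data "unlikely under the null" still shows up at the test's alpha level — so observing such data cannot by itself make the null unlikely. A sketch:

```python
import random
from statistics import NormalDist

def false_alarm_rate(reps=50_000, seed=0):
    # The null hypothesis mu = 0 is TRUE by construction; count how often a
    # two-sided z-test at alpha = 0.05 still flags the data as "unlikely".
    rng = random.Random(seed)
    z_crit = NormalDist().inv_cdf(0.975)
    rejections = sum(abs(rng.gauss(0, 1)) > z_crit for _ in range(reps))
    return rejections / reps

print(false_alarm_rate())
```

Roughly 5% of samples are "unlikely under the null" even though the null is true in every single one of them.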
Why do these statements not follow logically from a 95% CI for the mean?
Without any formal definition of what it means to be "95% confident", what justification is there for labelling #5 true or false? A layman would doubtless misinterpret it as synonymous with a 95% probability of the mean's being in that interval; but some people do use it in the sense of having used an interval-generating method whose intervals contain the true mean 95% of the time, precisely to avoid talking about the probability distribution of an unknown parameter, which seems a natural enough extension of the terminology.

The similar structure of the preceding statement (#4) might have encouraged respondents to try to draw a distinction between "we can be 95% confident" and "there is a 95% probability" even if they hadn't entertained the idea before. I had expected this tricksiness to lead to #5 having the highest proportion in agreement; looking at the paper, I found out I was wrong, but noticed that at least 80% read the questionnaire in a Dutch version, which perhaps should raise questions about the pertinence of the English translation.
Why do these statements not follow logically from a 95% CI for the mean?
Here is the definition of a confidence interval, from B. S. Everitt's Dictionary of Statistics:

"A range of values, calculated from the sample observations, that are believed, with a certain probability, to contain the true parameter value. A 95% CI, for example, implies that were the estimation process repeated again and again, then 95% of the calculated intervals would be expected to contain the true parameter value. Note that the stated probability level refers to properties of the interval and not to the parameter itself, which is not considered a random variable."

A very common misconception is to confuse the meaning of a confidence interval with that of a credible interval, AKA "Bayesian confidence interval", which does make statements similar to those in the questions. I have heard that confidence intervals are often similar to credible intervals that were derived from an uninformative prior, but that was told to me anecdotally (albeit by a guy I respect a lot), and I don't have details or a cite.
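The anecdotal claim about uninformative priors can at least be verified exactly in the simplest case: for normal data with known sigma, a flat prior on mu yields a N(x̄, σ²/n) posterior, so the central 95% credible interval coincides with the classical 95% confidence interval. A sketch with made-up numbers:

```python
from statistics import NormalDist

# Toy data summary (made-up numbers): known sigma, flat prior on mu.
xbar, sigma, n = 10.4, 2.0, 25
se = sigma / n ** 0.5

# Classical 95% confidence interval for the mean.
z = NormalDist().inv_cdf(0.975)
ci = (xbar - z * se, xbar + z * se)

# Under a flat prior the posterior is N(xbar, se^2);
# take its central 95% credible interval.
post = NormalDist(mu=xbar, sigma=se)
cred = (post.inv_cdf(0.025), post.inv_cdf(0.975))

print(ci)
print(cred)
```

The two intervals agree to floating-point precision in this model, though the coincidence is not guaranteed for other likelihoods or priors.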
Why do these statements not follow logically from a 95% CI for the mean?
Regarding the intuition for the falsehood of Question 5, I obtained the following discussion on this topic from here:

It is correct to say that there is a 95% chance that the confidence interval you calculated contains the true population mean. It is not quite correct to say that there is a 95% chance that the population mean lies within the interval. What's the difference? The population mean has one value. You don't know what it is (unless you are doing simulations) but it has one value. If you repeated the experiment, that value wouldn't change (and you still wouldn't know what it is). Therefore it isn't strictly correct to ask about the probability that the population mean lies within a certain range. In contrast, the confidence interval you compute depends on the data you happened to collect. If you repeated the experiment, your confidence interval would almost certainly be different. So it is OK to ask about the probability that the interval contains the population mean.

Now to your specific questions about 5.

Why is it wrong... Is it because we might have some special information about the sample we just took that would make us think it's likely to be one of the 5% that does not contain the true mean? No; rather, I think it is because the true mean is not a random variable, but the confidence interval is a function of the data.

What does confidence mean in this context, anyway? A confidence interval enables you (confidently - if you trust your assumptions) to make a claim that the interval covers the true parameter. The interpretation reflects the uncertainty in the sampling procedure: a confidence interval of $100(1-\alpha)$% asserts that [you can be confident that], in the long run, $100(1-\alpha)$% of the realized confidence intervals cover the true parameter.

As a side note (mentioned in other answers to this question), a credible interval, a concept from Bayesian statistics, does state that the true value of the parameter has a particular probability of being in the interval given the data actually obtained. Perhaps you can obtain more background on this from Gelman's blog.
Confidence Interval for variance given one observation
Viewed through the lens of probability inequalities and connections to the multiple-observation case, this result might not seem so impossible, or, at least, it might seem more plausible. Let $\renewcommand{\Pr}{\mathbb P}\newcommand{\Ind}[1]{\mathbf 1_{(#1)}}X \sim \mathcal N(\mu,\sigma^2)$ with $\mu$ and $\sigma^2$ unknown. We can write $X = \sigma Z + \mu$ for $Z \sim \mathcal N(0,1)$. Main Claim: $[0,X^2/q_\alpha)$ is a $(1-\alpha)$ confidence interval for $\sigma^2$ where $q_\alpha$ is the $\alpha$-level quantile of a chi-squared distribution with one degree of freedom. Furthermore, since this interval has exactly $(1-\alpha)$ coverage when $\mu = 0$, it is the narrowest possible interval of the form $[0,b X^2)$ for some $b \in \mathbb R$. A reason for optimism Recall that in the $n \geq 2$ case, with $T = \sum_{i=1}^n (X_i - \bar X)^2$, the typical $(1-\alpha)$ confidence interval for $\sigma^2$ is $$ \Big(\frac{T}{q_{n-1,(1-\alpha)/2}}, \frac{T}{q_{n-1,\alpha/2}} \Big) \>, $$ where $q_{k,a}$ is the $a$-level quantile of a chi-squared with $k$ degrees of freedom. This, of course, holds for any $\mu$. While this is the most popular interval (called the equal-tailed interval for obvious reasons), it is neither the only one nor even the one of smallest width! As should be apparent, another valid selection is $$ \Big(0,\frac{T}{q_{n-1,\alpha}}\Big) \>. $$ Since $T \leq \sum_{i=1}^n X_i^2$, $$ \Big(0,\frac{\sum_{i=1}^n X_i^2}{q_{n-1,\alpha}}\Big) \>, $$ also has coverage of at least $(1-\alpha)$. Viewed in this light, we might then be optimistic that the interval in the main claim is true for $n = 1$. The main difference is that there is no zero-degree-of-freedom chi-squared distribution for the case of a single observation, so we must hope that using a one-degree-of-freedom quantile will work. 
A half step toward our destination (Exploiting the right tail) Before diving into a proof of the main claim, let's first look at a preliminary claim that is not nearly as strong or satisfying statistically, but perhaps gives some additional insight into what is going on. You can skip down to the proof of the main claim below, without much (if any) loss. In this section and the next, the proofs—while slightly subtle—are based on only elementary facts: monotonicity of probabilities, and symmetry and unimodality of the normal distribution. Auxiliary claim: $[0,X^2/z^2_\alpha)$ is a $(1-\alpha)$ confidence interval for $\sigma^2$ as long as $\alpha > 1/2$. Here $z_\alpha$ is the $\alpha$-level quantile of a standard normal. Proof. $|X| = |-X|$ and $|\sigma Z + \mu| \stackrel{d}{=} |-\sigma Z+\mu|$ by symmetry, so in what follows we can take $\mu \geq 0$ without loss of generality. Now, for $\theta \geq 0$ and $\mu \geq 0$, $$ \Pr(|X| > \theta) \geq \Pr( X > \theta) = \Pr( \sigma Z + \mu > \theta) \geq \Pr( Z > \theta/\sigma) \>, $$ and so with $\theta = z_{\alpha} \sigma$, we see that $$ \Pr(0 \leq \sigma^2 < X^2 / z^2_\alpha) \geq 1 - \alpha \>. $$ This works only for $\alpha > 1/2$, since that is what is needed for $z_\alpha > 0$. This proves the auxiliary claim. While illustrative, it is unsatisfying from a statistical perspective since it requires an absurdly large $\alpha$ to work. Proving the main claim A refinement of the above argument leads to a result that will work for an arbitrary confidence level. First, note that $$ \Pr(|X| > \theta) = \Pr(|Z + \mu/\sigma| > \theta / \sigma ) \>. $$ Set $a = \mu/\sigma \geq 0$ and $b = \theta / \sigma \geq 0$. Then, $$ \Pr(|Z + a| > b) = \Phi(a-b) + \Phi(-a-b) \>. $$ If we can show that the right-hand side increases in $a$ for every fixed $b$, then we can employ an argument similar to the previous one. 
This is at least plausible, since we'd like to believe that if the mean increases, then it becomes more probable that we see a value with a modulus that exceeds $b$. (However, we have to watch out for how quickly the mass is decreasing in the left tail!) Set $f_b(a) = \Phi(a-b) + \Phi(-a-b)$. Then $$ f'_b(a) = \varphi(a-b) - \varphi(-a-b) = \varphi(a-b) - \varphi(a+b) \>. $$ Note that $f'_b(0) = 0$ and for positive $u$, $\varphi(u)$ is decreasing in $u$. Now, for $a \in (0,2b)$, it is easy to see that $\varphi(a-b) \geq \varphi(-b) = \varphi(b)$. These facts taken together easily imply that $$ f'_b(a) \geq 0 $$ for all $a \geq 0$ and any fixed $b \geq 0$. Hence, we have shown that for $a \geq 0$ and $b \geq 0$, $$ \Pr(|Z + a| > b) \geq \Pr(|Z| > b) = 2\Phi(-b) \>. $$ Unraveling all of this, if we take $\theta = \sqrt{q_\alpha} \sigma$, we get $$ \Pr(X^2 > q_\alpha \sigma^2) \geq \Pr(Z^2 > q_\alpha) = 1 - \alpha \>, $$ which establishes the main claim. Closing remark: A careful reading of the above argument shows that it uses only the symmetric and unimodal properties of the normal distribution. Hence, the approach works analogously for obtaining confidence intervals from a single observation from any symmetric unimodal location-scale family, e.g., Cauchy or Laplace distributions.
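As a sanity check on the main claim, here is a quick simulation sketch in Python (standard library only; the parameter values are illustrative). The one-degree-of-freedom chi-squared quantile is recovered from the normal quantile via $q_\alpha = [\Phi^{-1}((1+\alpha)/2)]^2$:

```python
import random
import statistics

def chi2_1_quantile(alpha):
    # alpha-level quantile of a chi-squared with 1 df:
    # P(Z^2 <= q) = alpha  <=>  q = [Phi^{-1}((1 + alpha) / 2)]^2
    z = statistics.NormalDist().inv_cdf((1 + alpha) / 2)
    return z * z

def coverage(mu, sigma, alpha=0.05, n_sims=100_000, seed=0):
    # Empirical coverage of the interval [0, X^2 / q_alpha) for sigma^2
    rng = random.Random(seed)
    q = chi2_1_quantile(alpha)
    hits = sum(sigma**2 < rng.gauss(mu, sigma)**2 / q for _ in range(n_sims))
    return hits / n_sims

print(coverage(mu=0.0, sigma=2.0))  # ~0.95: coverage is exact at mu = 0
print(coverage(mu=5.0, sigma=2.0))  # closer to 1: conservative for mu != 0
```

Consistent with the proof, coverage sits at the nominal level when $\mu = 0$ and only increases as $|\mu|/\sigma$ grows.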
10,375
Confidence Interval for variance given one observation
Time to follow up! Here's the solution I was given: We will construct a confidence interval of the form $[0,T(X))$, where $T(\cdot)$ is some statistic. By definition this will be a confidence interval with confidence level at least 99% if $$(\forall \mu \in \mathbb R )(\forall \sigma > 0)\; \mathbb P_{\mu,\sigma^2}(\sigma^2 > T(X)) < 0.01.$$ We note that the density of the $\mathcal{N}(\mu,\sigma^2)$ distribution does not exceed $1/\sigma\sqrt{2\pi}$. Therefore, $\mathbb{P}(|X| \leq a) \leq a/\sigma$ for every $a \geq 0$. It follows that $$t \geq \mathbb P (|X|/\sigma \leq t) = \mathbb P (X^2 \leq t^2\sigma^2) = \mathbb P (\sigma^2 \geq X^2/t^2).$$ Plugging in $t = 0.01$ we obtain that the appropriate statistic is $T(X) = 10000X^2.$ The confidence interval (which is very wide) is slightly conservative in simulation, with no empirical coverage (in 100,000 simulations) lower than 99.15% as I varied the CV over many orders of magnitude. For comparison, I also simulated cardinal's confidence interval. I should note that cardinal's interval is quite a bit narrower--in the 99% case, his ends up being up to about $6300X^2$, as opposed to the $10000X^2$ in the provided solution. Empirical coverage is right at the nominal level, again over many orders of magnitude for the CV. So his interval definitely wins. I haven't had time to look carefully at the paper Max posted, but I do plan to look at that and may add some comments regarding it later (i.e., no sooner than a week). That paper claims a 99% confidence interval of $(0,4900X^2)$, which has empirical coverage slightly lower (about 98.85%) than the nominal coverage for large CVs in my brief simulations.
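For anyone who wants to reproduce the coverage check described above, here is a minimal Python sketch for the $[0, 10000X^2)$ interval (sample sizes, seed, and the grid of means are my own choices):

```python
import random

def coverage_10000(mu, sigma, n_sims=100_000, seed=1):
    # Empirical coverage of the 99% interval [0, 10000 * X^2) for sigma^2
    rng = random.Random(seed)
    hits = sum(sigma**2 < 10000 * rng.gauss(mu, sigma)**2
               for _ in range(n_sims))
    return hits / n_sims

# Vary the mean over several orders of magnitude: coverage never drops
# below the nominal 99% (it is about 99.2% in the worst case, mu = 0).
for mu in (0.0, 0.1, 10.0, 1000.0):
    print(mu, coverage_10000(mu, sigma=1.0))
```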
10,376
Confidence Interval for variance given one observation
The CI is $(0,\infty)$, presumably.
10,377
Comparing two classifier accuracy results for statistical significance with t-test
I would probably opt for McNemar's test if you only train the classifiers once. David Barber also suggests a rather neat Bayesian test that seems rather elegant to me, but isn't widely used (it is also mentioned in his book). Just to add, as Peter Flom says, the answer is almost certainly "yes" just by looking at the difference in performance and the size of the sample (I take it that the figures quoted are test set performance rather than training set performance). Incidentally, Japkowicz and Shah have a recent book out, "Evaluating Learning Algorithms: A Classification Perspective"; I haven't read it, but it looks like a useful reference for these sorts of issues.
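To make the McNemar suggestion concrete, here is a hand-rolled sketch in Python (standard library only) of the chi-squared version with continuity correction; the discordant counts below are hypothetical, and the 1-df chi-squared upper tail is computed via the complementary error function:

```python
import math

def mcnemar_test(b, c):
    """McNemar's chi-squared test (with continuity correction) on the
    discordant pairs: b = cases classifier 1 got right and classifier 2
    got wrong, c = the reverse.  Returns (statistic, p_value)."""
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Upper tail of a chi-squared with 1 df: P(X > s) = erfc(sqrt(s / 2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical discordant counts from comparing two classifiers on one test set
stat, p = mcnemar_test(b=25, c=60)
print(stat, p)  # large statistic, small p-value: the classifiers differ
```

Only the discordant pairs enter the test, which is exactly why it suits two classifiers evaluated on the same test set.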
10,378
Comparing two classifier accuracy results for statistical significance with t-test
Since accuracy, in this case, is the proportion of samples correctly classified, we can apply the test of hypothesis concerning a system of two proportions. Let $\hat p_1$ and $\hat p_2$ be the accuracies obtained from classifiers 1 and 2 respectively, and $n$ be the number of samples. The number of samples correctly classified in classifiers 1 and 2 are $x_1$ and $x_2$ respectively. $ \hat p_1 = x_1/n,\quad \hat p_2 = x_2/n$ The test statistic is given by $\displaystyle Z = \frac{\hat p_1 - \hat p_2}{\sqrt{2\hat p(1 -\hat p)/n}}\qquad$ where $\quad\hat p= (x_1+x_2)/2n$ Our intention is to prove that the global accuracy of classifier 2, i.e., $p_2$, is better than that of classifier 1, which is $p_1$. This frames our hypothesis as $H_0: p_1 = p_2\quad$ (null hypothesis stating both are equal) $H_a: p_1 < p_2\quad$ (alternative hypothesis claiming the newer one is better than the existing) The rejection region is given by $Z < -z_\alpha \quad$ (if true, reject $H_0$ and accept $H_a$) where $z_\alpha$ is obtained from a standard normal distribution that pertains to a level of significance, $\alpha$. For instance $z_{0.05} = 1.645$ for a 5% level of significance. This means that if the relation $Z < -1.645$ is true, then we could say with 95% confidence level ($1-\alpha$) that classifier 2 is more accurate than classifier 1. References: R. Johnson and J. Freund, Miller and Freund's Probability and Statistics for Engineers, 8th Ed. Prentice Hall International, 2011. (Primary source) Test of Hypothesis-Concise Formula Summary. (Adopted from [1])
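The formulas above translate directly into a few lines of Python (standard library only); the counts in the example are hypothetical:

```python
import math

def two_proportion_z(x1, x2, n):
    # Pooled two-proportion z statistic, following the formulas above:
    # Z = (p1_hat - p2_hat) / sqrt(2 * p_hat * (1 - p_hat) / n)
    p1, p2 = x1 / n, x2 / n
    p_pool = (x1 + x2) / (2 * n)
    return (p1 - p2) / math.sqrt(2 * p_pool * (1 - p_pool) / n)

# Hypothetical counts: classifier 1 gets 8500 of 10000 test samples right,
# classifier 2 gets 9000 right
z = two_proportion_z(8500, 9000, 10000)
print(z, z < -1.645)  # reject H0 at the 5% level when the comparison is True
```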
10,379
Comparing two classifier accuracy results for statistical significance with t-test
I can tell you, without even running anything, that the difference will be highly statistically significant. It passes the IOTT (interocular trauma test - it hits you between the eyes). If you do want to do a test, though, you could do it as a test of two proportions - this can be done with a two sample t-test. You might want to break "accuracy" down into its components, though; sensitivity and specificity, or false-positive and false-negative. In many applications, the cost of the different errors are quite different.
10,380
Comparing two classifier accuracy results for statistical significance with t-test
Sorry, due to my reputation I can't comment on the answer of @Ébe Isaac. If you perform a z-test, which I think is quite a good option to compare two classifiers, you have to be careful about how you use the accuracy metrics. I suggest three possible experiments on applying the z-test over accuracy values. Do the experiments with accuracy for each class. Do the experiments with balanced accuracy score. You also have to pay attention to n in the denominator of the formula. Use a test set where each class occurs the same number of times (>50). I made a Colab notebook in which I reported these experiments for Floor Estimation. colab
10,381
Comparing two classifier accuracy results for statistical significance with t-test
@Chris, it looks like you can apply this: https://abtestguide.com/calc/ Calculate the Z-score, and from the Z-score look up the p-value.
10,382
A layman understanding of the difference between back-door and front-door adjustment
Let's say you are interested in the causal effect of $D$ on $Y$. The following statements are not quite precise but I think convey the intuition behind the two approaches: Back-door adjustment: Determine which other variables $X$ (age, gender) drive both $D$ (a drug) and $Y$ (health). Then, find units with the same values for $X$ (same age, same gender), but different values for $D$, and compute the difference in $Y$. If there is a difference in $Y$ between these units, it should be due to $D$, and not due to anything else. The relevant causal graph looks like this: Front-door adjustment: This means that you need to understand precisely the mechanism by which $D$ (let's now say it's smoking) affects $Y$ (lung cancer). Let's say it all flows through variable $M$ (tar in lungs): $D$ (smoking) affects $M$ (tar), and $M$ (tar) affects $Y$; there is no direct effect. Then, to find the effect of $D$ on $Y$, compute the effect of smoking on tar, and then the effect of tar on cancer - possibly through backdoor adjustment - and multiply the effect of $D$ on $M$ with the effect of $M$ on $Y$. The relevant causal graph looks like this (where $U$ is not observed): Here, front-door adjustment works because there is no open back-door path from $D$ to $M$. The path $D \leftarrow U \rightarrow Y \leftarrow M$ is blocked. This is because the arrows "collide" in $Y$. So the $D \rightarrow M$ effect is identified. Similarly, the $M \rightarrow Y$ effect is identified because the only back-door path from $M$ to $Y$ runs over $D$, so you can adjust for it using the back-door strategy. In sum, you can identify the "submechanisms", and there is no direct effect, so you can piece together the submechanisms to estimate the overall effect. This will not work if $U$ influences $M$, because then identifying the submechanisms does not work.
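The front-door formula can be checked numerically on a toy binary model of this graph (all probability tables below are made up for illustration): computing $P(y \mid do(d))$ once from the full structural model and once via the front-door expression $\sum_m P(m\mid d)\sum_{d'}P(d')P(y\mid m,d')$ gives the same answer, even though the latter never uses $U$.

```python
from itertools import product

# Toy binary model for the graph  D <- U -> Y,  D -> M -> Y  (U unobserved).
p_u = {0: 0.5, 1: 0.5}
p_d_u = {(d, u): (0.8 if d == u else 0.2) for d in (0, 1) for u in (0, 1)}
p_m_d = {(m, d): (0.9 if m == d else 0.1) for m in (0, 1) for d in (0, 1)}
p_y1_mu = {(m, u): 0.2 + 0.5 * m + 0.2 * u for m in (0, 1) for u in (0, 1)}

# Observational joint P(d, m, y) with U marginalized out
joint = {}
for u, d, m, y in product((0, 1), repeat=4):
    py = p_y1_mu[(m, u)] if y == 1 else 1 - p_y1_mu[(m, u)]
    key = (d, m, y)
    joint[key] = joint.get(key, 0.0) + p_u[u] * p_d_u[(d, u)] * p_m_d[(m, d)] * py

def p(**kw):
    # Marginal/joint probabilities from the observational distribution only
    return sum(v for (d, m, y), v in joint.items()
               if all(dict(d=d, m=m, y=y)[k] == val for k, val in kw.items()))

def front_door(d):
    # P(Y=1 | do(D=d)) = sum_m P(m|d) * sum_d' P(d') P(Y=1|m,d')
    return sum(
        p(d=d, m=m) / p(d=d)
        * sum(p(d=dp) * p(d=dp, m=m, y=1) / p(d=dp, m=m) for dp in (0, 1))
        for m in (0, 1))

def truth(d):
    # Intervene directly in the structural model (uses the unobserved U)
    return sum(p_u[u] * p_m_d[(m, d)] * p_y1_mu[(m, u)]
               for u in (0, 1) for m in (0, 1))

for d in (0, 1):
    print(d, front_door(d), truth(d))  # the two columns agree
```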
10,383
do(x) operator meaning?
That is $do$-calculus. They explain it here: Interventions and counterfactuals are defined through a mathematical operator called $do(x)$, which simulates physical interventions by deleting certain functions from the model, replacing them with a constant $X = x$, while keeping the rest of the model unchanged. The resulting model is denoted $M_x$.
10,384
do(x) operator meaning?
A probabilistic Structural Causal Model (SCM) is defined as a tuple $M = \langle U, V, F, P(U) \rangle$ where $U$ is a set of exogenous variables, $V$ a set of endogenous variables, $F$ a set of structural equations that determine the values of each endogenous variable, and $P(U)$ a probability distribution over the domain of $U$. In an SCM we represent the effect of an intervention on a variable $X$ by a submodel $M_x = \langle U, V, F_x, P(U) \rangle$ where $F_x$ indicates that the structural equation for $X$ is replaced by the new interventional equation. For example, the atomic intervention of setting the variable $X$ to a specific value $x$ --- usually denoted by $do(X = x)$ --- consists of replacing the equation for $X$ with the equation $X = x$. To make ideas clear, imagine a nonparametric structural causal model $M$ defined by the following structural equations: $$ Z = U_z\\ X = f(Z, U_x)\\ Y = g(X,Z, U_y) $$ where the disturbances $U$ have some probability distribution $P(U)$. This induces a probability distribution over the endogenous variables $P_M(Y, Z, X)$, and in particular a conditional distribution of $Y$ given $X$, $P_M(Y|X)$. But notice $P_M(Y|X)$ is the "observational" distribution of $Y$ given $X$ in the context of model $M$. What would be the effect on the distribution of $Y$ if we intervened on $X$, setting it to $x$? This is nothing more than the probability distribution of $Y$ induced by the modified model $M_x$: $$ Z = U_z\\ X = x\\ Y = g(X, Z, U_y) $$ That is, the interventional probability of $Y$ if we set $X = x$ is given by the probability induced in submodel $M_x$, that is, $P_{M_x}(Y|X=x)$, and it's usually denoted by $P(Y|do(X = x))$. The $do(X = x)$ operator makes it clear we are computing the probability of $Y$ in a submodel where there is an intervention setting $X$ equal to $x$, which corresponds to overriding the structural equation of $X$ with the equation $X = x$. 
The goal of many analyses is to find how to express the interventional distribution $P(Y|do(X))$ in terms of the joint probability of the observational (pre-intervention) distribution. do-calculus The do-calculus is not the same thing as the $do(\cdot)$ operator. The do-calculus consists of three inference rules to help "massage" the post-intervention probability distribution and get $P(Y|do(X))$ in terms of the observational (pre-intervention) distribution. Hence, instead of doing derivations by hand, such as in this question, you can let an algorithm perform the derivations and automatically give you a nonparametric expression for identifying your causal query of interest (and the do-calculus is complete for recursive nonparametric structural causal models).
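To see the difference between conditioning and intervening concretely, here is a small Monte Carlo sketch of the SCM above, with (hypothetical) linear choices for $f$ and $g$ so that $Z$ confounds $X$ and $Y$; the functional forms and coefficients are illustrative assumptions, not part of the answer.

```python
import random

random.seed(0)

def sample(do_x=None):
    # Structural equations of M: Z = U_z, X = f(Z, U_x), Y = g(X, Z, U_y).
    # Linear f and g are illustrative choices; the SCM framework is nonparametric.
    z = random.gauss(0, 1)                # Z = U_z
    if do_x is None:
        x = 2 * z + random.gauss(0, 1)    # X = f(Z, U_x)
    else:
        x = do_x                          # submodel M_x: equation replaced by X = x
    y = x + 3 * z + random.gauss(0, 1)    # Y = g(X, Z, U_y)
    return z, x, y

n = 200_000

# Observational P_M(Y | X ~= 1): condition on X near 1 in the intact model
obs = [y for z, x, y in (sample() for _ in range(n)) if abs(x - 1) < 0.1]
e_obs = sum(obs) / len(obs)

# Interventional P(Y | do(X = 1)): sample from the mutilated submodel M_x
intv = [sample(do_x=1)[2] for _ in range(n)]
e_do = sum(intv) / len(intv)

print(f"E[Y | X = 1]     ~ {e_obs:.2f}")  # about 2.2: inflated by confounding via Z
print(f"E[Y | do(X = 1)] ~ {e_do:.2f}")   # about 1.0: the causal effect of X on Y
```

With these coefficients the observational expectation works out analytically to $1 + 3\,E[Z|X=1] = 2.2$, while under $do(X=1)$ the $X \leftarrow Z$ arrow is cut, so $E[Z]=0$ and the interventional expectation is $1$.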
do(x) operator meaning?
A probabilistic Structural Causal Model (SCM) is defined as a tuple $M = \langle U, V, F, P(U) \rangle$ where $U$ is a set of exogeneous variables, $V$ a set of endogenous variables, $F$ is a set of
do(x) operator meaning? A probabilistic Structural Causal Model (SCM) is defined as a tuple $M = \langle U, V, F, P(U) \rangle$ where $U$ is a set of exogenous variables, $V$ a set of endogenous variables, $F$ is a set of structural equations that determines the values of each endogenous variable and $P(U)$ a probability distribution over the domain of $U$. In an SCM we represent the effect of an intervention on a variable $X$ by a submodel $M_x = \langle U, V, F_x, P(U) \rangle$ where $F_x$ indicates that the structural equation for $X$ is replaced by the new interventional equation. For example, the atomic intervention of setting the variable $X$ to a specific value $x$ --- usually denoted by $do(X = x)$ --- consists of replacing the equation for $X$ with the equation $X = x$. To make ideas clear, imagine a nonparametric structural causal model $M$ defined by the following structural equations: $$ Z = U_z\\ X = f(Z, U_x)\\ Y = g(X,Z, U_y) $$ where the disturbances $U$ have some probability distribution $P(U)$. This induces a probability distribution over the endogenous variables $P_M(Y, Z, X)$, and in particular a conditional distribution of $Y$ given $X$, $P_M(Y|X)$. But notice $P_M(Y|X)$ is the "observational" distribution of $Y$ given $X$ in the context of model $M$. What would be the effect on the distribution of $Y$ if we intervened on $X$ setting it to $x$? This is nothing more than the probability distribution of $Y$ induced by the modified model $M_x$: $$ Z = U_z\\ X = x\\ Y = g(X, Z, U_y) $$ That is, the interventional probability of $Y$ if we set $X = x$ is given by the probability induced in submodel $M_x$, that is, $P_{M_x}(Y|X=x)$, and it's usually denoted by $P(Y|do(X = x))$. The $do(X = x)$ operator makes it clear we are computing the probability of $Y$ in a submodel where there is an intervention setting $X$ equal to $x$, which corresponds to overriding the structural equation of $X$ with the equation $X = x$.
The goal of many analyses is to find how to express the interventional distribution $P(Y|do(X))$ in terms of the joint probability of the observational (pre-intervention) distribution. do-calculus The do-calculus is not the same thing as the $do(\cdot)$ operator. The do-calculus consists of three inference rules to help "massage" the post-intervention probability distribution and get $P(Y|do(X))$ in terms of the observational (pre-intervention) distribution. Hence, instead of doing derivations by hand, such as in this question, you can let an algorithm perform the derivations and automatically give you a nonparametric expression for identifying your causal query of interest (and the do-calculus is complete for recursive nonparametric structural causal models).
do(x) operator meaning? A probabilistic Structural Causal Model (SCM) is defined as a tuple $M = \langle U, V, F, P(U) \rangle$ where $U$ is a set of exogeneous variables, $V$ a set of endogenous variables, $F$ is a set of
10,385
Incidental parameter problem
In FE models of the type $$y_{it} = \alpha_i + \beta X_{it} + u_{it}$$ $\alpha_i$ is the incidental parameter because, theoretically speaking, it is of secondary importance. Usually, $\beta$ is the important parameter, statistically speaking. But in essence, $\alpha_i$ is still important because it provides useful information on the individual intercept. Most panels are short, i.e., $T$ is relatively small. In order to illustrate the incidental parameter problem I will disregard $\beta$ for simplicity. So the model is now: $$y_{it} = \alpha_i + u_{it} \quad \quad u_{it}\sim \text{iid } N(0,\sigma^2)$$ Using the deviations-from-means (within) method, the estimate of each $\alpha_i$ is $\hat{\alpha}_i = \bar{y}_i$, so the residuals are $\hat{u}_{it} = y_{it}-\bar{y}_i$. Let's have a look at the estimate of $\sigma^2$: $$\hat{\sigma}^2 = \frac{1}{NT}\sum_i\sum_t (y_{it}-\bar{y}_i)^2 = \sigma^2\frac{\chi_{N(T-1)}^2}{NT} \overset p{\to} \sigma^2\frac{N(T-1)}{NT} = \sigma^2\frac{T-1}{T}$$ You can see that if $T$ is "large" then the factor $\frac{T-1}{T}$ disappears, BUT, if $T$ is small (which is the case in most panels) then the estimate of $\sigma^2$ will be inconsistent: the incidental parameters contaminate it. The reason $\beta$ is usually estimated consistently is that $N$ is usually sufficiently large and therefore delivers the required asymptotics. Note that in spatial panels, for example, the situation is the opposite: $T$ is usually considered large enough, but $N$ is fixed, so the asymptotics come from $T$. Therefore in spatial panels you need a large $T$! Hope it helps somehow.
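The $\frac{T-1}{T}$ bias is easy to verify numerically. Below is a quick simulation sketch of the within estimator of $\sigma^2$ in the intercept-only model; the values of $N$, $T$ and $\sigma^2$ are arbitrary choices for illustration.

```python
import random

random.seed(1)

N, T = 500, 3        # many individuals, short panel -- the typical FE setting
sigma2 = 4.0         # true error variance

# y_it = alpha_i + u_it with u_it ~ iid N(0, sigma^2); the alpha_i are incidental
ssr = 0.0
for i in range(N):
    alpha_i = random.gauss(0, 1)
    y = [alpha_i + random.gauss(0, sigma2 ** 0.5) for _ in range(T)]
    ybar = sum(y) / T                          # within estimate of alpha_i
    ssr += sum((yit - ybar) ** 2 for yit in y)

sigma2_hat = ssr / (N * T)
print(sigma2_hat)    # concentrates around sigma^2 * (T-1)/T = 8/3, not around 4
```

Growing $N$ does not help here: only the $\chi^2$ noise shrinks, while the plim stays at $\sigma^2 (T-1)/T$; re-running with a larger $T$ moves the estimate toward the true $\sigma^2$.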
Incidental parameter problem
In FE models of the type $$y_{it} = \alpha_i + \beta X_{it} + u_{it}$$ $\alpha$ is the incidental parameter, because theoretically speaking, it is of a secondary importance. Usually, $\beta$ is the im
Incidental parameter problem In FE models of the type $$y_{it} = \alpha_i + \beta X_{it} + u_{it}$$ $\alpha_i$ is the incidental parameter because, theoretically speaking, it is of secondary importance. Usually, $\beta$ is the important parameter, statistically speaking. But in essence, $\alpha_i$ is still important because it provides useful information on the individual intercept. Most panels are short, i.e., $T$ is relatively small. In order to illustrate the incidental parameter problem I will disregard $\beta$ for simplicity. So the model is now: $$y_{it} = \alpha_i + u_{it} \quad \quad u_{it}\sim \text{iid } N(0,\sigma^2)$$ Using the deviations-from-means (within) method, the estimate of each $\alpha_i$ is $\hat{\alpha}_i = \bar{y}_i$, so the residuals are $\hat{u}_{it} = y_{it}-\bar{y}_i$. Let's have a look at the estimate of $\sigma^2$: $$\hat{\sigma}^2 = \frac{1}{NT}\sum_i\sum_t (y_{it}-\bar{y}_i)^2 = \sigma^2\frac{\chi_{N(T-1)}^2}{NT} \overset p{\to} \sigma^2\frac{N(T-1)}{NT} = \sigma^2\frac{T-1}{T}$$ You can see that if $T$ is "large" then the factor $\frac{T-1}{T}$ disappears, BUT, if $T$ is small (which is the case in most panels) then the estimate of $\sigma^2$ will be inconsistent: the incidental parameters contaminate it. The reason $\beta$ is usually estimated consistently is that $N$ is usually sufficiently large and therefore delivers the required asymptotics. Note that in spatial panels, for example, the situation is the opposite: $T$ is usually considered large enough, but $N$ is fixed, so the asymptotics come from $T$. Therefore in spatial panels you need a large $T$! Hope it helps somehow.
Incidental parameter problem In FE models of the type $$y_{it} = \alpha_i + \beta X_{it} + u_{it}$$ $\alpha$ is the incidental parameter, because theoretically speaking, it is of a secondary importance. Usually, $\beta$ is the im
10,386
What can we learn about the human brain from artificial neural networks?
As you mentioned, most neural networks are based on general simple abstractions of the brain. Not only are they lacking in mimicking characteristics like plasticity, but they do not take into account signals and timing as real neurons do. There's a fairly recent interview that I felt was appropriate for your specific question, Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts, and I quote: But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.
What can we learn about the human brain from artificial neural networks?
As you mentioned, most neural networks are based on general simple abstractions of the brain. Not only are they lacking in mimicking characteristics like plasticity, but they do not take into account
What can we learn about the human brain from artificial neural networks? As you mentioned, most neural networks are based on general simple abstractions of the brain. Not only are they lacking in mimicking characteristics like plasticity, but they do not take into account signals and timing as real neurons do. There's a fairly recent interview that I felt was appropriate for your specific question, Machine-Learning Maestro Michael Jordan on the Delusions of Big Data and Other Huge Engineering Efforts, and I quote: But it’s true that with neuroscience, it’s going to require decades or even hundreds of years to understand the deep principles. There is progress at the very lowest levels of neuroscience. But for issues of higher cognition—how we perceive, how we remember, how we act—we have no idea how neurons are storing information, how they are computing, what the rules are, what the algorithms are, what the representations are, and the like. So we are not yet in an era in which we can be using an understanding of the brain to guide us in the construction of intelligent systems.
What can we learn about the human brain from artificial neural networks? As you mentioned, most neural networks are based on general simple abstractions of the brain. Not only are they lacking in mimicking characteristics like plasticity, but they do not take into account
10,387
What can we learn about the human brain from artificial neural networks?
Not much --- arguably nothing --- has so far been learnt about brain functioning from artificial neural networks. [Clarification: I wrote this answer thinking about neural networks used in machine learning; @MattKrause (+1) is right that neural network models of some biological neural phenomena might have been helpful in many cases.] However, this is perhaps partly due to the fact that research into artificial neural networks in machine learning was more or less stagnant until around 2006, when Geoffrey Hinton almost single-handedly rekindled the whole field, which by now attracts billions of dollars. In a 2012 lecture at Google called Brains, Sex, and Machine Learning (from 45:30), Hinton suggested that artificial neural networks can provide a hint into why [most] neurons communicate with spikes and not with analogue signals. Namely, he suggests seeing spikes as a regularization strategy similar to dropout. Dropout is a recently developed way of preventing overfitting, in which a random subset of units is dropped on any given gradient descent step (see Srivastava et al. 2014). Apparently it can work very well, and Hinton thinks that perhaps spikes (i.e. most neurons being silent at any given moment) serve a similar purpose. I work in a neuroscience research institute and I don't know anybody here who is convinced by Hinton's argument. The jury is still out (and probably will be for quite some time), but at least this is an example of something that artificial neural networks could potentially teach us about brain functioning.
What can we learn about the human brain from artificial neural networks?
Not much --- arguably nothing --- has so far been learnt about brain functioning from artificial neural networks. [Clarification: I wrote this answer thinking about neural networks used in machine lea
What can we learn about the human brain from artificial neural networks? Not much --- arguably nothing --- has so far been learnt about brain functioning from artificial neural networks. [Clarification: I wrote this answer thinking about neural networks used in machine learning; @MattKrause (+1) is right that neural network models of some biological neural phenomena might have been helpful in many cases.] However, this is perhaps partly due to the fact that research into artificial neural networks in machine learning was more or less stagnant until around 2006, when Geoffrey Hinton almost single-handedly rekindled the whole field, which by now attracts billions of dollars. In a 2012 lecture at Google called Brains, Sex, and Machine Learning (from 45:30), Hinton suggested that artificial neural networks can provide a hint into why [most] neurons communicate with spikes and not with analogue signals. Namely, he suggests seeing spikes as a regularization strategy similar to dropout. Dropout is a recently developed way of preventing overfitting, in which a random subset of units is dropped on any given gradient descent step (see Srivastava et al. 2014). Apparently it can work very well, and Hinton thinks that perhaps spikes (i.e. most neurons being silent at any given moment) serve a similar purpose. I work in a neuroscience research institute and I don't know anybody here who is convinced by Hinton's argument. The jury is still out (and probably will be for quite some time), but at least this is an example of something that artificial neural networks could potentially teach us about brain functioning.
What can we learn about the human brain from artificial neural networks? Not much --- arguably nothing --- has so far been learnt about brain functioning from artificial neural networks. [Clarification: I wrote this answer thinking about neural networks used in machine lea
10,388
What can we learn about the human brain from artificial neural networks?
It is certainly not true that the human brain only uses "a few" convolutional layers. About 1/3 of the primate brain is somehow involved in processing visual information. This diagram, from Felleman and Van Essen, is a rough outline of how visual information flows through the monkey brain, beginning in the eyes (RGC at the bottom) and ending up in the hippocampus, a memory area. Each one of these boxes is an anatomically-defined area (more or less), which contains several processing stages (actual layers, in most cases). The diagram itself is 25 years old, and if anything, we've learned that there are a few more boxes and a lot more lines. It is true that a lot of the deep learning work is more "vaguely inspired by" the brain than based on some underlying neural truth. "Deep learning" also has the added advantage of sounding a lot sexier than "iterated logistic regression." However, mathematical models of neural networks have also contributed a lot to our understanding of the brain. At one extreme, some models attempt to mimic the known biology and biophysics precisely. These typically include terms for individual ions and their flow. Some even use 3D reconstructions of real neurons to constrain their shape. If this interests you, ModelDB has a large collection of models and the associated publications. Many are implemented using the freely available NEURON software. There are larger-scale models that attempt to mimic certain behavioral or neurophysiological effects, without worrying too much about the underlying biophysics. Connectionist or Parallel-Distributed-Processing models were particularly popular in the late 1980s and 1990s, and used models similar to those you might find in a current machine learning application (e.g., no biophysics, simple activation functions and stereotyped connectivity) to explain various psychological processes.
These have fallen a little out of vogue, though one wonders if they might make a comeback now that we have more powerful computers and better training strategies. (See edit below!) Finally, there is a lot of work somewhere in the middle which includes some "phenomenology", plus some biological details (e.g., an explicitly inhibitory term with certain properties, but without fitting the exact distribution of chloride channels). A lot of current work fits into this category, e.g., work by Xiao Jing Wang (and many others....) EDIT: Since I wrote this, there's been an explosion of work comparing the (real) visual system to deep neural networks trained on object recognition tasks. There are some surprising similarities. Kernels in the first layers of a neural network are very similar to the kernels/receptive fields in primary visual cortex and subsequent layers resemble the receptive fields in higher visual areas (see work by Nikolaus Kriegeskorte, for example). Retraining neural networks can cause similar changes to extensive behavioral training (Wenliang and Seitz, 2018). DNNs and humans sometimes--but not always--make similar patterns of errors too. At the moment, it's still rather unclear whether this reflects similarity between real and artificial neural networks in general, something about images specifically[*], or the tendency for neural networks of all stripes to find patterns, even when they aren't there. Nevertheless, comparing the two has become an increasingly hot area of research and it seems likely that we'll learn something from it. * For example, the representation used in the early visual system/first layers of a CNN is an optimal sparse basis for natural images.
What can we learn about the human brain from artificial neural networks?
It is certainly not true that the human brain only uses "a few" convolutional layers. About 1/3 of the primate brain is somehow involved in processing visual information. This diagram, from Felleman a
What can we learn about the human brain from artificial neural networks? It is certainly not true that the human brain only uses "a few" convolutional layers. About 1/3 of the primate brain is somehow involved in processing visual information. This diagram, from Felleman and Van Essen, is a rough outline of how visual information flows through the monkey brain, beginning in the eyes (RGC at the bottom) and ending up in the hippocampus, a memory area. Each one of these boxes is an anatomically-defined area (more or less), which contains several processing stages (actual layers, in most cases). The diagram itself is 25 years old, and if anything, we've learned that there are a few more boxes and a lot more lines. It is true that a lot of the deep learning work is more "vaguely inspired by" the brain than based on some underlying neural truth. "Deep learning" also has the added advantage of sounding a lot sexier than "iterated logistic regression." However, mathematical models of neural networks have also contributed a lot to our understanding of the brain. At one extreme, some models attempt to mimic the known biology and biophysics precisely. These typically include terms for individual ions and their flow. Some even use 3D reconstructions of real neurons to constrain their shape. If this interests you, ModelDB has a large collection of models and the associated publications. Many are implemented using the freely available NEURON software. There are larger-scale models that attempt to mimic certain behavioral or neurophysiological effects, without worrying too much about the underlying biophysics. Connectionist or Parallel-Distributed-Processing models were particularly popular in the late 1980s and 1990s, and used models similar to those you might find in a current machine learning application (e.g., no biophysics, simple activation functions and stereotyped connectivity) to explain various psychological processes.
These have fallen a little out of vogue, though one wonders if they might make a comeback now that we have more powerful computers and better training strategies. (See edit below!) Finally, there is a lot of work somewhere in the middle which includes some "phenomenology", plus some biological details (e.g., an explicitly inhibitory term with certain properties, but without fitting the exact distribution of chloride channels). A lot of current work fits into this category, e.g., work by Xiao Jing Wang (and many others....) EDIT: Since I wrote this, there's been an explosion of work comparing the (real) visual system to deep neural networks trained on object recognition tasks. There are some surprising similarities. Kernels in the first layers of a neural network are very similar to the kernels/receptive fields in primary visual cortex and subsequent layers resemble the receptive fields in higher visual areas (see work by Nikolaus Kriegeskorte, for example). Retraining neural networks can cause similar changes to extensive behavioral training (Wenliang and Seitz, 2018). DNNs and humans sometimes--but not always--make similar patterns of errors too. At the moment, it's still rather unclear whether this reflects similarity between real and artificial neural networks in general, something about images specifically[*], or the tendency for neural networks of all stripes to find patterns, even when they aren't there. Nevertheless, comparing the two has become an increasingly hot area of research and it seems likely that we'll learn something from it. * For example, the representation used in the early visual system/first layers of a CNN is an optimal sparse basis for natural images.
What can we learn about the human brain from artificial neural networks? It is certainly not true that the human brain only uses "a few" convolutional layers. About 1/3 of the primate brain is somehow involved in processing visual information. This diagram, from Felleman a
10,389
What can we learn about the human brain from artificial neural networks?
One thing we really learned is the use of sparse activation and of rectified linear activation functions. The latter is basically one reason why we saw an explosion of activity around so-called neural networks, since using this kind of activation function dramatically decreased the training effort for those artificial computational networks we call neural networks. What we learned is why synapses and neurons are built this way and why that is preferable. This rectified linear activation (f(x) := x > a ? x : 0) results in sparse activation: only a few of the 'neurons' (weights) get activated. So, as our knowledge extends towards biological functions, we understand why this design was selected and preferred by evolution. We understand that such systems are not only sufficient but also stable in terms of error control during training, and that they conserve resources like energy and chemical/biological resources in a brain. We simply understand why the brain is what it is. Also, by training these networks and looking at their strategies, we learn about possible flows of information and the information processing involved, which helps us construct and assess hypotheses about these very subjects. For example, something I remember from a decade ago was a system trained to learn natural spoken language; the discovery was that the system showed problems resembling the behavior of babies learning to speak a language. Even the differences between learning different kinds of languages were similar enough. So by studying this approach and design, it was concluded that human information processing during language learning is similar enough to draw training recommendations and treatments for language-related problems, and it helped in understanding human difficulties and developing more efficient treatments (whatever actually made it into practice is another question).
A month ago I read an article about how 3D navigation and memory in rat brains really work; creating computational models of every finding was a great help in understanding what is really going on. So the artificial model filled in the blanks of what was observed in the biological system. It really amazed me when I learned that the neuroscientists used language that resembled that of an engineer more than that of a biologist, talking about circuits, flow of information and logical processing units. So we are learning a lot from artificial neural networks, since they present us with empirical playgrounds from which we can derive rules and assurance about why the architecture of the brain is what it is and why evolution prefers it over alternative ways. There are still lots of blanks. I only recently got into CNNs etc., but I had AI, fuzzy logic and neural networks during my university time in the early 2000s, so I had to catch up on a decade's worth of development and discovery, which left me grateful to all those scientists and practitioners of the neural network and AI field. Well done, people, really well done!
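The sparsity of the rectified activation f(x) := x > a ? x : 0 is easy to see in a few lines; the layer size and thresholds below are made-up numbers for illustration.

```python
import random

random.seed(0)

def rectified(x, a=0.0):
    # f(x) := x if x > a else 0 -- the thresholded linear unit from the text
    return x if x > a else 0.0

# Zero-mean random pre-activations for a "layer" of 1000 units
pre = [random.gauss(0, 1) for _ in range(1000)]

active_relu = sum(1 for x in pre if rectified(x) > 0)         # threshold a = 0
active_high = sum(1 for x in pre if rectified(x, a=1.0) > 0)  # higher threshold a = 1

print(active_relu)   # roughly half the units fire at a = 0 ...
print(active_high)   # ... and far fewer at a = 1: the activation pattern is sparse
```

Every unit below the threshold outputs exactly zero, so downstream computation and "resources" are only spent on the few active units, which is the economy argument made above.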
What can we learn about the human brain from artificial neural networks?
The one what we really learned is the use of sparse activation and the use of linear rectified activation functions. The later is basically one reason, why we saw an explosion in activity regarding so
What can we learn about the human brain from artificial neural networks? One thing we really learned is the use of sparse activation and of rectified linear activation functions. The latter is basically one reason why we saw an explosion of activity around so-called neural networks, since using this kind of activation function dramatically decreased the training effort for those artificial computational networks we call neural networks. What we learned is why synapses and neurons are built this way and why that is preferable. This rectified linear activation (f(x) := x > a ? x : 0) results in sparse activation: only a few of the 'neurons' (weights) get activated. So, as our knowledge extends towards biological functions, we understand why this design was selected and preferred by evolution. We understand that such systems are not only sufficient but also stable in terms of error control during training, and that they conserve resources like energy and chemical/biological resources in a brain. We simply understand why the brain is what it is. Also, by training these networks and looking at their strategies, we learn about possible flows of information and the information processing involved, which helps us construct and assess hypotheses about these very subjects. For example, something I remember from a decade ago was a system trained to learn natural spoken language; the discovery was that the system showed problems resembling the behavior of babies learning to speak a language. Even the differences between learning different kinds of languages were similar enough.
So by studying this approach and design, it was concluded that human information processing during language learning is similar enough to draw training recommendations and treatments for language-related problems, and it helped in understanding human difficulties and developing more efficient treatments (whatever actually made it into practice is another question). A month ago I read an article about how 3D navigation and memory in rat brains really work; creating computational models of every finding was a great help in understanding what is really going on. So the artificial model filled in the blanks of what was observed in the biological system. It really amazed me when I learned that the neuroscientists used language that resembled that of an engineer more than that of a biologist, talking about circuits, flow of information and logical processing units. So we are learning a lot from artificial neural networks, since they present us with empirical playgrounds from which we can derive rules and assurance about why the architecture of the brain is what it is and why evolution prefers it over alternative ways. There are still lots of blanks. I only recently got into CNNs etc., but I had AI, fuzzy logic and neural networks during my university time in the early 2000s, so I had to catch up on a decade's worth of development and discovery, which left me grateful to all those scientists and practitioners of the neural network and AI field. Well done, people, really well done!
What can we learn about the human brain from artificial neural networks? The one what we really learned is the use of sparse activation and the use of linear rectified activation functions. The later is basically one reason, why we saw an explosion in activity regarding so
10,390
Daily Time Series Analysis
Your ACF and PACF indicate that you at least have weekly seasonality, which is shown by the peaks at lags 7, 14, 21 and so forth. You may also have yearly seasonality, although it's not obvious from your time series. Your best bet, given potentially multiple seasonalities, may be a tbats model, which explicitly models multiple types of seasonality. Load the forecast package: library(forecast) Your output from str(x) indicates that x does not yet carry information about potentially having multiple seasonalities. Look at ?tbats, and compare the output of str(taylor). Assign the seasonalities: x.msts <- msts(x,seasonal.periods=c(7,365.25)) Now you can fit a tbats model. (Be patient, this may take a while.) model <- tbats(x.msts) Finally, you can forecast and plot: plot(forecast(model,h=100)) You should not use arima() or auto.arima(), since these can only handle a single type of seasonality: either weekly or yearly. Don't ask me what auto.arima() would do on your data. It may pick one of the seasonalities, or it may disregard them altogether. EDIT to answer additional questions from a comment: How can I check whether the data has a yearly seasonality or not? Can I create another series of total number of events per month and use its ACF to decide this? Calculating a model on monthly data might be a possibility. Then you could, e.g., compare AICs between models with and without seasonality. However, I'd rather use a holdout sample to assess forecasting models. Hold out the last 100 data points. Fit a model with yearly and weekly seasonality to the rest of the data (like above), then fit one with only weekly seasonality, e.g., using auto.arima() on a ts with frequency=7. Forecast using both models into the holdout period. Check which one has a lower error, using MAE, MSE or whatever is most relevant to your loss function. If there is little difference between errors, go with the simpler model; otherwise, use the one with the lower error. 
The proof of the pudding is in the eating, and the proof of the time series model is in the forecasting. To improve matters, don't use a single holdout sample (which may be misleading, given the uptick at the end of your series), but use rolling origin forecasts, which is also known as "time series cross-validation". (I very much recommend that entire free online forecasting textbook.) So Seasonal ARIMA models cannot usually handle multiple seasonalities? Is it a property of the model itself or is it just the way the functions in R are written? Standard ARIMA models handle seasonality by seasonal differencing. For seasonal monthly data, you would not model the raw time series, but the time series of differences between March 2015 and March 2014, between February 2015 and February 2014 and so forth. (To get forecasts on the original scale, you'd of course need to undifference again.) There is no immediately obvious way to extend this idea to multiple seasonalities. Of course, you can do something using ARIMAX, e.g., by including monthly dummies to model the yearly seasonality, then model residuals using weekly seasonal ARIMA. If you want to do this in R, use ts(x,frequency=7), create a matrix of monthly dummies and feed that into the xreg parameter of auto.arima(). I don't recall any publication that specifically extends ARIMA to multiple seasonalities, although I'm sure somebody has done something along the lines of my previous paragraph.
Daily Time Series Analysis
The best way to decompose seasonal data using existing R packages is ceemdan() in Rlibeemd. This technique extracts seasonality of multiple periods. The defaults work well. It uses the Hilbert-Huang transform instead of the Fourier transform. The Fourier transform has a severe drawback: it can only handle stationary, linear data, while most series of interest are neither. For example, the random walk y_t = y_{t-1} + e_t is nonstationary and frequently encountered. Other methods hold the amplitude of seasonal variation fixed, when it often varies in practice.
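As an aside, the random walk mentioned above is trivial to simulate; this toy Python snippet (not part of Rlibeemd, just an illustration) generates such a nonstationary series:

```python
import random

def random_walk(n, seed=0):
    # y_t = y_{t-1} + e_t with Gaussian innovations e_t
    rng = random.Random(seed)
    y, last = [], 0.0
    for _ in range(n):
        last += rng.gauss(0.0, 1.0)
        y.append(last)
    return y
```

The increments are stationary, but the level wanders without bound, which is what breaks the stationarity assumption behind the Fourier transform.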
Daily Time Series Analysis
The questions you raise have been dealt with in R Time Series Forecasting: Questions regarding my output. Please look carefully at my detailed answer and all the comments in the discussion, including those to the original question, as I believe they are relevant to your problem. You might actually take the data that was provided in that post and use it as a teaching moment for yourself. Use the entire discussion as a primer for what you should do.
How many lags to use in the Ljung-Box test of a time series?
Assume that we specify a simple AR(1) model, with all the usual properties,
$$y_t = \beta y_{t-1} + u_t$$
Denote the theoretical autocovariance of the error term as
$$\gamma_j \equiv E(u_tu_{t-j})$$
If we could observe the error term, then the sample autocorrelation of the error term would be defined as
$$\tilde \rho_j \equiv \frac {\tilde \gamma_j}{\tilde \gamma_0}$$
where
$$\tilde\gamma_j \equiv \frac 1n \sum_{t=j+1}^nu_tu_{t-j},\;\;\; j=0,1,2...$$
But in practice, we do not observe the error term. So the sample autocorrelation related to the error term will be estimated using the residuals from estimation, as
$$\hat\gamma_j \equiv \frac 1n \sum_{t=j+1}^n\hat u_t\hat u_{t-j},\;\;\; j=0,1,2...$$
The Box-Pierce Q-statistic (the Ljung-Box Q is just an asymptotically neutral scaled version of it) is
$$Q_{BP} = n \sum_{j=1}^p\hat\rho^2_j = \sum_{j=1}^p[\sqrt n\hat\rho_j]^2\xrightarrow{d} \;???\;\chi^2(p) $$
Our issue is exactly whether $Q_{BP}$ can be said to have asymptotically a chi-square distribution (under the null of no autocorrelation in the error term) in this model. For this to happen, each and every one of $\sqrt n \hat\rho_j$ must be asymptotically standard normal. A way to check this is to examine whether $\sqrt n \hat\rho$ has the same asymptotic distribution as $\sqrt n \tilde\rho$ (which is constructed using the true errors, and so has the desired asymptotic behavior under the null).

We have that
$$\hat u_t = y_t - \hat \beta y_{t-1} = u_t - (\hat \beta - \beta)y_{t-1}$$
where $\hat \beta$ is a consistent estimator. So
$$\hat\gamma_j \equiv \frac 1n \sum_{t=j+1}^n[u_t - (\hat \beta - \beta)y_{t-1}][u_{t-j} - (\hat \beta - \beta)y_{t-j-1}]$$
$$=\tilde \gamma _j -\frac 1n \sum_{t=j+1}^n (\hat \beta - \beta)\big[u_ty_{t-j-1} +u_{t-j}y_{t-1}\big] + \frac 1n \sum_{t=j+1}^n(\hat \beta - \beta)^2y_{t-1}y_{t-j-1}$$
The sample is assumed to be stationary and ergodic, and moments are assumed to exist up until the desired order.
Since the estimator $\hat \beta$ is consistent, this is enough for the two sums to go to zero. So we conclude
$$\hat \gamma_j \xrightarrow{p} \tilde \gamma_j$$
This implies that
$$\hat \rho_j \xrightarrow{p} \tilde \rho_j \xrightarrow{p} \rho_j$$
But this does not automatically guarantee that $\sqrt n \hat \rho_j$ converges to $\sqrt n\tilde \rho_j$ (in distribution) (note that the continuous mapping theorem does not apply here because the transformation applied to the random variables depends on $n$). In order for this to happen, we need
$$\sqrt n \hat \gamma_j \xrightarrow{d} \sqrt n \tilde \gamma_j$$
(the denominator $\gamma_0$, tilde or hat, will converge to the variance of the error term in both cases, so it is neutral to our issue).

We have
$$\sqrt n \hat \gamma_j =\sqrt n\tilde \gamma _j -\frac 1n \sum_{t=j+1}^n \sqrt n(\hat \beta - \beta)\big[u_ty_{t-j-1} +u_{t-j}y_{t-1}\big] \\+ \frac 1n \sum_{t=j+1}^n\sqrt n(\hat \beta - \beta)^2y_{t-1}y_{t-j-1}$$
So the question is: do these two sums, multiplied now by $\sqrt n$, go to zero in probability, so that we will be left with $\sqrt n \hat \gamma_j =\sqrt n\tilde \gamma _j$ asymptotically?

For the second sum we have
$$\frac 1n \sum_{t=j+1}^n\sqrt n(\hat \beta - \beta)^2y_{t-1}y_{t-j-1} = \frac 1n \sum_{t=j+1}^n\big[\sqrt n(\hat \beta - \beta)\big]\big[(\hat \beta - \beta)y_{t-1}y_{t-j-1}\big]$$
Since $\big[\sqrt n(\hat \beta - \beta)\big]$ converges to a random variable, and $\hat \beta$ is consistent, this will go to zero.

For the first sum, here too we have that $\big[\sqrt n(\hat \beta - \beta)\big]$ converges to a random variable, and so we have that
$$\frac 1n \sum_{t=j+1}^n \big[u_ty_{t-j-1} +u_{t-j}y_{t-1}\big] \xrightarrow{p} E[u_ty_{t-j-1}] + E[u_{t-j}y_{t-1}]$$
The first expected value, $E[u_ty_{t-j-1}]$, is zero by the assumptions of the standard AR(1) model. But the second expected value is not, since the dependent variable depends on past errors.
So $\sqrt n\hat \rho_j$ won't have the same asymptotic distribution as $\sqrt n\tilde \rho_j$. But the asymptotic distribution of the latter is standard normal, which is the one leading to a chi-squared distribution when squaring the random variables.

Therefore we conclude that, in a pure time series model, the Box-Pierce Q and the Ljung-Box Q statistic cannot be said to have an asymptotic chi-square distribution, so the test loses its asymptotic justification. This happens because the right-hand-side variable (here the lag of the dependent variable) by design is not strictly exogenous to the error term, and we have found that such strict exogeneity is required for the BP/LB Q-statistic to have the postulated asymptotic distribution. Here the right-hand-side variable is only "predetermined", and the Breusch-Godfrey test is then valid. (For the full set of conditions required for an asymptotically valid test, see Hayashi 2000, pp. 146-149.)
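For concreteness, the residual autocorrelations $\hat\rho_j$ and the Ljung-Box statistic defined above can be computed directly. This is a plain-Python sketch of those formulas (in practice you would use a packaged implementation):

```python
def sample_acf(u, j):
    # \hat\rho_j = \hat\gamma_j / \hat\gamma_0 computed from the residual series u
    n = len(u)
    gamma_j = sum(u[t] * u[t - j] for t in range(j, n)) / n
    gamma_0 = sum(v * v for v in u) / n
    return gamma_j / gamma_0

def ljung_box_q(u, p):
    # Q_LB = n(n+2) * sum_{j=1}^p rho_j^2 / (n - j); the Box-Pierce version
    # drops the finite-sample scaling and uses n * sum_{j=1}^p rho_j^2
    n = len(u)
    return n * (n + 2) * sum(sample_acf(u, j) ** 2 / (n - j) for j in range(1, p + 1))
```

The point of the derivation above is that, on residuals from an autoregression, this statistic does not have the chi-square reference distribution that tables assume.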
How many lags to use in the Ljung-Box test of a time series?
The answer definitely depends on: What are you actually trying to use the $Q$ test for? The common reason is: to be more or less confident about joint statistical significance of the null hypothesis of no autocorrelation up to lag $h$ (alternatively assuming that you have something close to a weak white noise) and to build a parsimonious model, having as small a number of parameters as possible.

Usually time series data has a natural seasonal pattern, so the practical rule of thumb would be to set $h$ to twice this value. Another one is the forecasting horizon, if you use the model for forecasting needs. Finally, if you find some significant departures at later lags, try to think about corrections (could this be due to some seasonal effects, or was the data not corrected for outliers?).

Rather than using a single value for h, suppose that I do the Ljung-Box test for all h<50, and then pick the h which gives the minimum p value.

It's a joint significance test, so if the choice of $h$ is data-driven, then why should I care about some small (occasional?) departures at any lag less than $h$, supposing that it is much less than $n$, of course (the power of the test you mentioned). Seeking to find a simple yet relevant model, I suggest the information criteria as described below.

My question concerns how to interpret the test if $p<0.05$ for some values of $h$ and not for other values.

So it will depend on how far from the present it happens. Disadvantages of far departures: more parameters to estimate, fewer degrees of freedom, worse predictive power of the model. Try to estimate the model including the MA and/or AR parts at the lag where the departure occurs AND additionally look at one of the information criteria (either AIC or BIC, depending on the sample size); this would bring you more insight on which model is more parsimonious. Any out-of-sample prediction exercises are also welcome here.
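The information-criterion comparison suggested above can be illustrated with the standard Gaussian least-squares form of the AIC. This is a sketch (the additive constant is dropped, and in practice you would read the AIC off your fitting routine rather than compute it by hand):

```python
import math

def aic_ls(n, rss, k):
    # AIC for a Gaussian least-squares fit, up to an additive constant:
    # n * log(RSS / n) + 2k, where k is the number of estimated parameters
    return n * math.log(rss / n) + 2 * k
```

A model that needs several extra parameters to shave off only a little residual variance comes out with the higher (worse) AIC, which is how the criterion enforces parsimony.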
How many lags to use in the Ljung-Box test of a time series?
Before you zero in on the "right" h (which appears to be more of an opinion than a hard rule), make sure the "lag" is correctly defined. http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm

Quoting the section below Issue 4 in the above link: "....The p-values shown for the Ljung-Box statistic plot are incorrect because the degrees of freedom used to calculate the p-values are lag instead of lag - (p+q). That is, the procedure being used does NOT take into account the fact that the residuals are from a fitted model. And YES, at least one R core developer knows this...."

Edit (01/23/2011): Here's an article by Burns that might help: http://lib.stat.cmu.edu/S/Spoetry/Working/ljungbox.pdf
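The degrees-of-freedom correction quoted above is easy to apply once you can evaluate the chi-square upper tail. This plain-Python sketch (the function names are mine) uses the standard recurrence Q(x; k+2) = Q(x; k) + (x/2)^(k/2) e^(-x/2) / Gamma(k/2 + 1):

```python
import math

def chi2_sf(x, df):
    # upper-tail probability of a chi-square with integer df,
    # built up from the df = 1 and df = 2 closed forms
    q = math.erfc(math.sqrt(x / 2)) if df % 2 else math.exp(-x / 2)
    k = 1 if df % 2 else 2
    while k < df:
        q += (x / 2) ** (k / 2) * math.exp(-x / 2) / math.gamma(k / 2 + 1)
        k += 2
    return q

def ljung_box_pvalue(q_stat, n_lags, p, q):
    # use n_lags - (p + q) degrees of freedom, not n_lags,
    # when the residuals come from a fitted ARMA(p, q) model
    df = n_lags - (p + q)
    if df <= 0:
        raise ValueError("need more lags than estimated ARMA parameters")
    return chi2_sf(q_stat, df)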
How many lags to use in the Ljung-Box test of a time series?
Before you zero-in on the "right" h (which appears to be more of an opinion than a hard rule), make sure the "lag" is correctly defined. http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm Quoting the s
How many lags to use in the Ljung-Box test of a time series? Before you zero-in on the "right" h (which appears to be more of an opinion than a hard rule), make sure the "lag" is correctly defined. http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm Quoting the section below Issue 4 in the above link: "....The p-values shown for the Ljung-Box statistic plot are incorrect because the degrees of freedom used to calculate the p-values are lag instead of lag - (p+q). That is, the procedure being used does NOT take into account the fact that the residuals are from a fitted model. And YES, at least one R core developer knows this...." Edit (01/23/2011): Here's an article by Burns that might help: http://lib.stat.cmu.edu/S/Spoetry/Working/ljungbox.pdf
How many lags to use in the Ljung-Box test of a time series? Before you zero-in on the "right" h (which appears to be more of an opinion than a hard rule), make sure the "lag" is correctly defined. http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm Quoting the s
10,396
How many lags to use in the Ljung-Box test of a time series?
The thread "Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey" shows that the Ljung-Box test is essentially inapplicable in the case of an autoregressive model. It also shows that Breusch-Godfrey test should be used instead. That limits the relevance of your question and the answers (although the answers may include some generally good points).
How many lags to use in the Ljung-Box test of a time series?
The thread "Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey" shows that the Ljung-Box test is essentially inapplicable in the case of an autoregressive model. It also shows that Breusch-
How many lags to use in the Ljung-Box test of a time series? The thread "Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey" shows that the Ljung-Box test is essentially inapplicable in the case of an autoregressive model. It also shows that Breusch-Godfrey test should be used instead. That limits the relevance of your question and the answers (although the answers may include some generally good points).
How many lags to use in the Ljung-Box test of a time series? The thread "Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey" shows that the Ljung-Box test is essentially inapplicable in the case of an autoregressive model. It also shows that Breusch-
10,397
How many lags to use in the Ljung-Box test of a time series?
The two most common settings are $\min(20,T-1)$ and $\ln T$ where $T$ is the length of the series, as you correctly noted. The first one is supposed to be from the authorative book by Box, Jenkins, and Reinsel. Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.. However, here's all they say about the lags on p.314: It's not a strong argument or suggestion by any means, yet people keep repeating it from one place to another. The second setting for a lag is from Tsay, R. S. Analysis of Financial Time Series. 2nd Ed. Hoboken, NJ: John Wiley & Sons, Inc., 2005, here's what he wrote on p.33: Several values of m are often used. Simulation studies suggest that the choice of m ≈ ln(T ) provides better power performance. This is a somewhat stronger argument, but there's no description of what kind of study was done. So, I wouldn't take it at a face value. He also warns about seasonality: This general rule needs modification in analysis of seasonal time series for which autocorrelations with lags at multiples of the seasonality are more important. Summarizing, if you just need to plug some lag into the test and move on, then you can use either of these setting, and that's fine, because that's what most practitioners do. We're either lazy or, more likely, don't have time for this stuff. Otherwise, you'd have to conduct your own research on the power and properties of the statistics for series that you deal with. UPDATE. Here's my answer to Richard Hardy's comment and his answer, which refers to another thread on CV started by him. You can see that the exposition in the accepted (by Richerd Hardy himself) answer in that thread is clearly based on ARMAX model, i.e. 
the model with exogenous regressors $x_t$:$$y_t = \mathbf x_t'\beta + \phi(L)y_t + u_t$$ However, OP did not indicate that he's doing ARMAX, to contrary, he explicitly mentions ARMA: After an ARMA model is fit to a time series, it is common to check the residuals via the Ljung-Box portmanteau test One of the first papers that pointed to a potential issue with LB test was Dezhbaksh, Hashem (1990). “The Inappropriate Use of Serial Correlation Tests in Dynamic Linear Models,” Review of Economics and Statistics, 72, 126–132. Here's the excerpt from the paper: As you can see, he doesn't object to using LB test for pure time series models such as ARMA. See also the discussion in the manual to a standard econometrics tool EViews: If the series represents the residuals from ARIMA estimation, the appropriate degrees of freedom should be adjusted to represent the number of autocorrelations less the number of AR and MA terms previously estimated. Note also that some care should be taken in interpreting the results of a Ljung-Box test applied to the residuals from an ARMAX specification (see Dezhbaksh, 1990, for simulation evidence on the finite sample performance of the test in this setting) Yes, you have to be careful with ARMAX models and LB test, but you can't make a blanket statement that LB test is always wrong for all autoregressive series. UPDATE 2 Alecos Papadopoulos's answer shows why Ljung-Box test requires strict exogeneity assumption. He doesn't show it in his post, but Breusch-Gpdfrey test (another alternative test) requires only weak exogeneity, which is better, of course. This what Greene, Econometrics, 7th ed. says on the differences between tests, p.923: The essential difference between the Godfrey–Breusch and the Box–Pierce tests is the use of partial correlations (controlling for X and the other variables) in the former and simple correlations in the latter. 
Under the null hypothesis, there is no autocorrelation in εt , and no correlation between $x_t$ and $\varepsilon_s$ in any event, so the two tests are asymptotically equivalent. On the other hand, because it does not condition on $x_t$ , the Box–Pierce test is less powerful than the LM test when the null hypothesis is false, as intuition might suggest.
How many lags to use in the Ljung-Box test of a time series?
The two most common settings are $\min(20,T-1)$ and $\ln T$ where $T$ is the length of the series, as you correctly noted. The first one is supposed to be from the authorative book by Box, Jenkins, an
How many lags to use in the Ljung-Box test of a time series? The two most common settings are $\min(20,T-1)$ and $\ln T$ where $T$ is the length of the series, as you correctly noted. The first one is supposed to be from the authorative book by Box, Jenkins, and Reinsel. Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994.. However, here's all they say about the lags on p.314: It's not a strong argument or suggestion by any means, yet people keep repeating it from one place to another. The second setting for a lag is from Tsay, R. S. Analysis of Financial Time Series. 2nd Ed. Hoboken, NJ: John Wiley & Sons, Inc., 2005, here's what he wrote on p.33: Several values of m are often used. Simulation studies suggest that the choice of m ≈ ln(T ) provides better power performance. This is a somewhat stronger argument, but there's no description of what kind of study was done. So, I wouldn't take it at a face value. He also warns about seasonality: This general rule needs modification in analysis of seasonal time series for which autocorrelations with lags at multiples of the seasonality are more important. Summarizing, if you just need to plug some lag into the test and move on, then you can use either of these setting, and that's fine, because that's what most practitioners do. We're either lazy or, more likely, don't have time for this stuff. Otherwise, you'd have to conduct your own research on the power and properties of the statistics for series that you deal with. UPDATE. Here's my answer to Richard Hardy's comment and his answer, which refers to another thread on CV started by him. You can see that the exposition in the accepted (by Richerd Hardy himself) answer in that thread is clearly based on ARMAX model, i.e. 
the model with exogenous regressors $x_t$:$$y_t = \mathbf x_t'\beta + \phi(L)y_t + u_t$$ However, OP did not indicate that he's doing ARMAX, to contrary, he explicitly mentions ARMA: After an ARMA model is fit to a time series, it is common to check the residuals via the Ljung-Box portmanteau test One of the first papers that pointed to a potential issue with LB test was Dezhbaksh, Hashem (1990). “The Inappropriate Use of Serial Correlation Tests in Dynamic Linear Models,” Review of Economics and Statistics, 72, 126–132. Here's the excerpt from the paper: As you can see, he doesn't object to using LB test for pure time series models such as ARMA. See also the discussion in the manual to a standard econometrics tool EViews: If the series represents the residuals from ARIMA estimation, the appropriate degrees of freedom should be adjusted to represent the number of autocorrelations less the number of AR and MA terms previously estimated. Note also that some care should be taken in interpreting the results of a Ljung-Box test applied to the residuals from an ARMAX specification (see Dezhbaksh, 1990, for simulation evidence on the finite sample performance of the test in this setting) Yes, you have to be careful with ARMAX models and LB test, but you can't make a blanket statement that LB test is always wrong for all autoregressive series. UPDATE 2 Alecos Papadopoulos's answer shows why Ljung-Box test requires strict exogeneity assumption. He doesn't show it in his post, but Breusch-Gpdfrey test (another alternative test) requires only weak exogeneity, which is better, of course. This what Greene, Econometrics, 7th ed. says on the differences between tests, p.923: The essential difference between the Godfrey–Breusch and the Box–Pierce tests is the use of partial correlations (controlling for X and the other variables) in the former and simple correlations in the latter. 
Under the null hypothesis, there is no autocorrelation in $\varepsilon_t$, and no correlation between $x_t$ and $\varepsilon_s$ in any event, so the two tests are asymptotically equivalent. On the other hand, because it does not condition on $x_t$, the Box–Pierce test is less powerful than the LM test when the null hypothesis is false, as intuition might suggest.
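Since much of the discussion above turns on the formula itself, it may help to see the statistic and both rule-of-thumb lag choices written out. Below is a minimal pure-Python sketch of $Q = n(n+2)\sum_{k=1}^{h}\hat\rho_k^2/(n-k)$ together with the $\min(20, T-1)$ and $\ln T$ rules; the function names are my own, not from any package.

```python
import math

def acf(x, k):
    """Sample autocorrelation of x at lag k."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k))
    return ck / c0

def ljung_box(x, h):
    """Ljung-Box Q = n(n+2) * sum_{k=1..h} rho_k^2 / (n-k); compare to chi2(h)."""
    n = len(x)
    return n * (n + 2) * sum(acf(x, k) ** 2 / (n - k) for k in range(1, h + 1))

def lags_box_jenkins(n):
    """The min(20, T-1) rule usually attributed to Box, Jenkins & Reinsel."""
    return min(20, n - 1)

def lags_tsay(n):
    """Tsay's m ~ ln(T) rule, rounded to the nearest integer."""
    return max(1, round(math.log(n)))
```

For $T = 100$ the first rule gives 20 lags while the second gives round(ln 100) = 5, so the two conventions can disagree substantially.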
How many lags to use in the Ljung-Box test of a time series?
Escanciano and Lobato constructed a portmanteau test with automatic, data-driven lag selection based on the Box-Pierce test and its refinements (which include the Ljung-Box test). The gist of their approach is to combine the AIC and BIC criteria, which are common in the identification and estimation of ARMA models, to select the optimal number of lags to be used. In the introduction of their paper they suggest that, intuitively, "tests conducted using the BIC criterion are able to properly control for type I error and are more powerful when serial correlation is present in the first order", while tests based on AIC are more powerful against high-order serial correlation. Their procedure thus chooses a BIC-type lag selection in the case that autocorrelations seem to be small and present only at low order, and an AIC-type lag selection otherwise. The test is implemented in the R package vrtest (see function Auto.Q).
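For intuition, here is a rough from-memory sketch of the flavor of Escanciano and Lobato's selection rule, not a substitute for the paper or for vrtest's Auto.Q: a BIC-type penalty $p\log n$ is used when all scaled autocorrelations look small, and an AIC-type penalty $2p$ otherwise. The switching condition and the constant q = 2.4 are my approximations of their definitions.

```python
import math

def auto_lag(rho, n, d, q=2.4):
    """Illustrative sketch of an Escanciano-Lobato-style automatic lag choice.
    rho[1..d] are sample autocorrelations; rho[0] is unused.
    If all scaled autocorrelations look small, apply a BIC-type penalty p*log(n);
    otherwise apply an AIC-type penalty 2*p. Pick the lag p that maximizes the
    penalized Ljung-Box statistic."""
    big = (math.sqrt(n) * max(abs(rho[j]) for j in range(1, d + 1))
           > math.sqrt(q * math.log(n)))

    def penalized(p):
        Q = n * (n + 2) * sum(rho[k] ** 2 / (n - k) for k in range(1, p + 1))
        return Q - (2 * p if big else p * math.log(n))

    return max(range(1, d + 1), key=penalized)
```

With correlation concentrated at lag 1 the rule picks a short lag; with correlation only at a high lag it stretches out, matching the AIC/BIC intuition quoted above.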
How many lags to use in the Ljung-Box test of a time series?
... $h$ should be as small as possible to preserve whatever power the LB test may have under the circumstances. As $h$ increases, the power drops. The LB test is a dreadfully weak test; you must have a lot of samples ($n$ must be roughly greater than 100) for it to be meaningful. Unfortunately, I have never seen a better test. But perhaps one exists. Anyone know of one? Paul3nt
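The claim that power drops as $h$ grows is easy to illustrate with a small Monte Carlo. The sketch below (pure Python, helper names my own; the 95% $\chi^2$ critical value is approximated with the Wilson-Hilferty formula rather than looked up) simulates an AR(1) with $\phi = 0.3$ and tracks how often the LB test rejects at several values of $h$:

```python
import math
import random

def ljung_box(x, h):
    """Ljung-Box Q statistic for lags 1..h."""
    n = len(x)
    m = sum(x) / n
    c0 = sum((v - m) ** 2 for v in x)
    q = 0.0
    for k in range(1, h + 1):
        rk = sum((x[t] - m) * (x[t + k] - m) for t in range(n - k)) / c0
        q += rk ** 2 / (n - k)
    return n * (n + 2) * q

def chi2_crit_95(df):
    """Approximate 95% chi-square critical value (Wilson-Hilferty)."""
    z = 1.6449  # standard normal 95% quantile
    return df * (1 - 2 / (9 * df) + z * math.sqrt(2 / (9 * df))) ** 3

def rejection_rates(phi=0.3, n=150, hs=(1, 5, 10, 20), reps=200, seed=7):
    """Fraction of simulated AR(1) series rejected by the LB test at each h."""
    random.seed(seed)
    hits = {h: 0 for h in hs}
    for _ in range(reps):
        x, prev = [], 0.0
        for _ in range(n):
            prev = phi * prev + random.gauss(0.0, 1.0)
            x.append(prev)
        for h in hs:
            if ljung_box(x, h) > chi2_crit_95(h):
                hits[h] += 1
    return [hits[h] / reps for h in hs]
```

With these settings the rejection frequency is high at $h = 1$ and falls off noticeably by $h = 20$, consistent with the advice to keep $h$ small.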
How many lags to use in the Ljung-Box test of a time series?
There's no correct answer to this that works in all situations, for the reasons others have given: it will depend on your data. That said, after trying to figure out how to reproduce a result from Stata in R, I can tell you that by default the Stata implementation uses $\min(\frac{n}{2}-2,\,40)$, i.e. either half the number of data points minus 2, or 40, whichever is smaller. All defaults are wrong, of course, and this one will definitely be wrong in some situations. But in many situations, it might not be a bad place to start.
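The three rules of thumb mentioned in this thread are one-liners; here they are side by side as a sketch. Whether Stata truncates or rounds $n/2$ I have not verified, so the integer division is an assumption.

```python
import math

def stata_default_lag(n):
    """Stata-style default: min(n/2 - 2, 40); integer division is an assumption."""
    return min(n // 2 - 2, 40)

def box_jenkins_lag(n):
    """The min(20, T-1) rule usually attributed to Box, Jenkins & Reinsel."""
    return min(20, n - 1)

def tsay_lag(n):
    """Tsay's m ~ ln(T) rule, rounded to the nearest integer."""
    return max(1, round(math.log(n)))

for n in (30, 100, 500):
    print(n, stata_default_lag(n), box_jenkins_lag(n), tsay_lag(n))
# 30 13 20 3
# 100 40 20 5
# 500 40 20 6
```

Note how quickly the Stata rule hits its cap of 40 and how much smaller the ln(T) rule stays, even at n = 500.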