Dataset columns:
  idx              int64   1 – 56k
  question         string  15 – 155 chars
  answer           string  2 – 29.2k chars
  question_cut     string  15 – 100 chars
  answer_cut       string  2 – 200 chars
  conversation     string  47 – 29.3k chars
  conversation_cut string  47 – 301 chars
8,301
What does interaction depth mean in GBM?
Previous answer is not correct. Stumps will have an interaction.depth of 1 (and have two leaves). But interaction.depth=2 gives three leaves. So: NumberOfLeaves = interaction.depth + 1
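The disagreement in this thread comes down to whether interaction.depth counts tree *depth* or *splits*. A minimal Python sketch of both conventions (assuming a complete binary tree for the depth reading, and best-first growth with one leaf added per split for the splits reading):

```python
# Two readings of a tree-size parameter d:
#  - as the depth of a complete binary tree        -> 2^d leaves
#  - as a number of splits (gbm's interaction.depth) -> d + 1 leaves
depths = range(1, 5)
leaves_if_depth = [2 ** d for d in depths]     # 2, 4, 8, 16
leaves_if_splits = [d + 1 for d in depths]     # 2, 3, 4, 5
print(leaves_if_depth, leaves_if_splits)
```

Note that a stump (d = 1) has two leaves under either reading; the two conventions diverge from d = 2 onward, which is exactly where the answers here disagree.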
8,302
What does interaction depth mean in GBM?
Actually, the previous answers are incorrect. Let K be the interaction.depth; then the number of nodes N and leaves L (i.e. terminal nodes) are given by: $$\begin{align*} N &= 2^{(K+1)} - 1\\ L &= 2^K \end{align*}$$ These two formulas are easily demonstrated: a tree of depth K can be seen as having K+1 levels k, ranging from 0 (the root level) to K (the leaf level). Each level has $2^k$ nodes, and the tree's total number of nodes is the sum of the number of nodes at each level. In mathematical terms: $$ N = \sum_{k=0}^K 2^k $$ which is equivalent to $$ N = 2^{(K+1)} - 1 $$ (by the formula for the sum of the terms of a geometric progression).
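The geometric-sum step in the derivation above is easy to check numerically; a quick sketch:

```python
# Check N = sum_{k=0}^{K} 2^k == 2^(K+1) - 1 and L = 2^K for a complete
# binary tree of depth K (2^k nodes at each level k).
for K in range(12):
    n_nodes = sum(2 ** k for k in range(K + 1))
    assert n_nodes == 2 ** (K + 1) - 1         # closed form of the geometric sum
    assert n_nodes == (2 ** K) + (2 ** K - 1)  # leaves + internal nodes
print("formulas hold for K = 0..11")
```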
8,303
What does interaction depth mean in GBM?
You can try

table(predict(gbm(y ~ ., data = TrainingData, distribution = "gaussian",
                  verbose = FALSE, n.trees = 1, shrinkage = 0.01,
                  bag.fraction = 1, interaction.depth = 1),
              n.trees = 1))

and see that there are only 2 unique predicted values; interaction.depth = 2 gives 3 distinct predicted values. Convince yourself.
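The R experiment above needs the gbm package and a TrainingData set, but the same point can be made library-free. Here is a hypothetical Python sketch (the data and all names are invented for illustration): fit a single regression tree by k greedy least-squares splits and count the distinct fitted values — you get k + 1, matching what table(predict(...)) shows.

```python
import random

# Toy data: y is roughly linear in x with Gaussian noise.
random.seed(1)
xs = [random.random() for _ in range(200)]
ys = [x + random.gauss(0, 0.1) for x in xs]

def best_split(leaf):
    """Best (gain, left, right) partition of the index list `leaf`, or None."""
    pts = sorted(leaf, key=lambda i: xs[i])
    if len(pts) < 2:
        return None
    def sse(part):
        m = sum(ys[i] for i in part) / len(part)
        return sum((ys[i] - m) ** 2 for i in part)
    total = sse(pts)
    best = None
    for j in range(1, len(pts)):
        gain = total - sse(pts[:j]) - sse(pts[j:])
        if best is None or gain > best[0]:
            best = (gain, pts[:j], pts[j:])
    return best

def fitted_values(k):
    """Grow k splits best-first; return the set of distinct leaf means."""
    leaves = [list(range(len(xs)))]
    for _ in range(k):
        candidates = [(best_split(leaf), i) for i, leaf in enumerate(leaves)]
        candidates = [c for c in candidates if c[0] is not None]
        (gain, left, right), i = max(candidates, key=lambda c: c[0][0])
        leaves[i:i + 1] = [left, right]       # one split: one leaf becomes two
    return {round(sum(ys[i] for i in leaf) / len(leaf), 9) for leaf in leaves}

print(len(fitted_values(1)), len(fitted_values(2)))  # 2 3
```

This mirrors the gbm behaviour only under the "interaction.depth counts splits" reading; it is a sketch, not the package's actual tree-growing code.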
8,304
Is there a Project Euler-alike for machine learning?
Though the stakes are higher than for Project Euler, as you've pointed out, Kaggle is an excellent source of data for use in your own experiments. Many of their contests require you to be signed in to access the datasets (for legal agreements and so forth), but if you don't actually finish an entry, there's no penalty that I know of. That being said, if you look for data sets that are specific to testing statistics procedures, like the ones at Princeton, you can test the data on different network architectures and compare it to plain regression, etc. as a benchmark. See also here for a comprehensive list, which includes all of the Google natural language processing data. So, Project Euler provides a great service with specific problems, but in the case of machine learning, you can use existing datasets with an architecture of your creation and compare the "answers" to conclusions that are presented online or in research papers.
8,305
Is there a Project Euler-alike for machine learning?
UCI is well-known in the machine learning community for its repository of datasets. Many journal articles include results of their techniques on some UCI datasets, so you can try them yourself and see how you do.
8,306
Is there a Project Euler-alike for machine learning?
How about http://www.ml-class.org/? It has a good introduction and some programming exercises. AFAIK Project Euler has much more sophisticated examples, but ml-class is still a good beginning. As was pointed out in the comments, this course has a new edition: http://jan2012.ml-class.org/#
8,307
Can ANOVA be significant when none of the pairwise t-tests is?
Note: There was something wrong with my original example. I stupidly got caught by R's silent argument recycling. My new example is quite similar to my old one. Hopefully everything is right now.

Here's an example I made that has the ANOVA significant at the 5% level while none of the 6 pairwise comparisons is significant, even at the 5% level. Here's the data:

g1: 10.71871 10.42931  9.46897  9.87644
g2: 10.64672  9.71863 10.04724 10.32505 10.22259 10.18082 10.76919 10.65447
g3: 10.90556 10.94722 10.78947 10.96914 10.37724 10.81035 10.79333  9.94447
g4: 10.81105 10.58746 10.96241 10.59571

Here's the ANOVA:

             Df Sum Sq Mean Sq F value Pr(>F)
as.factor(g)  3  1.341  0.4469   3.191 0.0458 *
Residuals    20  2.800  0.1400

Here are the two-sample t-test p-values (equal-variance assumption):

        g2     g3     g4
g1  0.4680 0.0543 0.0809
g2         0.0550 0.0543
g3                0.8108

With a little more fiddling with group means or individual points, the difference in significance could be made more striking (in that I could make the first p-value smaller and the lowest of the set of six t-test p-values higher).

Edit: Here's an additional example, originally generated with noise about a trend, which shows how much better you can do if you move points around a little:

g1:  7.27374 10.31746 10.54047  9.76779
g2: 10.33672 11.33857 10.53057 11.13335 10.42108  9.97780 10.45676 10.16201
g3: 10.13160 10.79660  9.64026 10.74844 10.51241 11.08612 10.58339 10.86740
g4: 10.88055 13.47504 11.87896 10.11403

The F has a p-value below 3% and none of the t's has a p-value below 8%. (For a 3-group example, with a somewhat larger p-value on the F, omit the second group.)

And here's a really simple, if more artificial, example with 3 groups:

g1: 1.0 2.1
g2: 2.15 2.3 3.0 3.7 3.85
g3: 3.9 5.0

(In this case, the largest variance is in the middle group, but because of the larger sample size there, the standard error of the group mean is still smaller.)

Multiple-comparisons t-tests

whuber suggested I consider the multiple-comparisons case. It proves to be quite interesting. The case for multiple comparisons (all conducted at the original significance level, i.e. without adjusting alpha for multiple comparisons) is somewhat harder to achieve, since playing around with larger and smaller variances or more and fewer d.f. in the different groups doesn't help in the same way as with ordinary two-sample t-tests. However, we still have the tools of manipulating the number of groups and the significance level; if we choose more groups and smaller significance levels, it again becomes relatively straightforward to identify cases. Here's one: take eight groups with $n_i=2$. Define the values in the first four groups to be (2, 2.5) and in the last four groups to be (3.5, 4), and take $\alpha=0.0025$ (say). Then we have a significant F:

> summary(aov(values~ind,gs2))
            Df Sum Sq Mean Sq F value  Pr(>F)
ind          7      9   1.286   10.29 0.00191
Residuals    8      1   0.125

Yet the smallest p-value on the pairwise comparisons is not significant at that level:

> with(gs2,pairwise.t.test(values,ind,p.adjust.method="none"))

        Pairwise comparisons using t tests with pooled SD

data:  values and ind

   g1     g2     g3     g4     g5     g6     g7
g2 1.0000 -      -      -      -      -      -
g3 1.0000 1.0000 -      -      -      -      -
g4 1.0000 1.0000 1.0000 -      -      -      -
g5 0.0028 0.0028 0.0028 0.0028 -      -      -
g6 0.0028 0.0028 0.0028 0.0028 1.0000 -      -
g7 0.0028 0.0028 0.0028 0.0028 1.0000 1.0000 -
g8 0.0028 0.0028 0.0028 0.0028 1.0000 1.0000 1.0000

P value adjustment method: none
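As a sanity check, the one-way ANOVA sums of squares and F statistic for the artificial eight-group example can be recomputed directly; a minimal pure-Python sketch (not using R):

```python
from statistics import mean

# Four groups at (2, 2.5) and four at (3.5, 4), n_i = 2, as in the answer above.
groups = [[2.0, 2.5]] * 4 + [[3.5, 4.0]] * 4
k = len(groups)                                   # 8 groups
n = sum(len(g) for g in groups)                   # 16 observations
grand = mean(x for g in groups for x in g)

ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))

print(ss_between, ss_within, round(f_stat, 2))    # 9.0 1.0 10.29
```

These match the aov() table (Sum Sq 9 and 1, F = 10.29). With MSW = 0.125 on 8 d.f., the pooled-SD t statistic between any low group and any high group is 1.5/√(0.125·(1/2 + 1/2)) ≈ 4.24, which is what produces the 0.0028 p-values in the pairwise table — significant at 0.05 but not at α = 0.0025.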
8,308
Can ANOVA be significant when none of the pairwise t-tests is?
Summary: I believe that this is possible, but very, very unlikely. The difference will be small, and if it happens, it's because an assumption has been violated (such as homoscedasticity of variance). Here's some code that seeks out such a possibility. Note that it increments the seed by 1 each time it runs, so that the seed is stored (and the search through seeds is systematic).

stopNow <- FALSE
counter <- 0
while (stopNow == FALSE) {
  counter <- counter + 1
  print(counter)
  set.seed(counter)
  x <- rep(c(0:5), 100)
  y <- rnorm(600) + x * 0.01
  df <- as.data.frame(cbind(x, y))
  df$x <- as.factor(df$x)
  fit <- lm(y ~ x, data = df)
  anovaP <- anova(fit)$"Pr(>F)"[[1]]
  minTtestP <- 1
  for (loop1 in c(0:5)) {
    for (loop2 in c(0:5)) {
      newTtestP <- t.test(df[x == loop1, ]$y, df[x == loop2, ]$y)$p.value
      minTtestP <- min(minTtestP, newTtestP)
    }
  }
  if (minTtestP > 0.05 & anovaP < 0.05) stopNow <- TRUE
  cat("\nminTtestP = ", minTtestP)
  cat("\nanovaP = ", anovaP)
  cat("\nCounter = ", counter, "\n\n")
}

Searching for a significant R2 and no significant t-tests, I have found nothing up to a seed of 18,000. Searching for a lower p-value from R2 than from the t-tests, I get a result at seed = 323, but the difference is very, very small. It's possible that tweaking the parameters (increasing the number of groups?) might help. The reason the R2 p-value can be smaller is that when the standard error is calculated for the parameters in the regression, all groups are combined, so the standard error of the difference is potentially smaller than in the t-test.

I wondered if violating homoscedasticity might help (as it were). It does. If I use

y <- (rnorm(600) + x * 0.01) * x * 5

to generate the y, then I find a suitable result at seed = 1889, where the minimum p-value from the t-tests is 0.061 and the p-value associated with R-squared is 0.046.

If I vary the group sizes (which increases the effect of the violation of homoscedasticity) by replacing the x sampling with

x <- sample(c(0:5), 100, replace=TRUE)

I get a significant result at seed = 531, with the minimum t-test p-value at 0.063 and the p-value for R2 at 0.046. (One can also stop correcting for heteroscedasticity in the t-test, by using

newTtestP <- t.test(df[x==loop1,]$y, df[x==loop2,]$y, var.equal = TRUE)$p.value

instead.)

My conclusion is that this is very unlikely to occur, and the difference is likely to be very small, unless you have violated the homoscedasticity assumption in regression. Try running your analysis with a robust/sandwich/whatever-you-want-to-call-it correction.
8,309
Can ANOVA be significant when none of the pairwise t-tests is?
It's entirely possible:

- One or more pairwise t-tests is significant but the overall F-test isn't.
- The overall F-test is significant but none of the pairwise t-tests is.

The overall F-test tests all contrasts simultaneously. As such, it must be less sensitive (have less statistical power) to individual contrasts (e.g., a pairwise test). The two tests are closely related to each other, but they are not reporting exactly the same thing. As you can see, the textbook recommendation of not doing planned comparisons unless the overall F-test is significant is not always correct. In fact, the recommendation may prevent us from finding significant differences, because the overall F-test has less power than planned comparisons for testing the specific differences.
8,310
Can ANOVA be significant when none of the pairwise t-tests is?
The smallest p-value of the t-tests depends on the maximum spread of the different group means (so only two means matter). The p-value of the ANOVA test depends on the variance of all the group means (so all the means matter). For example, the following two situations have the same maximal difference between the means, but a different between-group variance. $$\begin{array}{rclrclrclrcl} \mu_1& =& -1 ,&\mu_2 &=& 0,& \mu_3 &= &1\\ \mu_1& =& -1 ,&\mu_2 &=& 1,& \mu_3 &= &1\\ \end{array}$$ In this example the t-tests will have the same minimum p-value (the largest difference between the means is $2$ in both cases), but the ANOVA test will have different p-values (the between-group variance differs). This illustrates how the ANOVA and the t-tests compare differences in different ways. A similar situation is described in the question "How can I get a significant overall ANOVA but no significant pairwise differences with Tukey's procedure?" In the answer there, a scatter plot is made with simulations for the two smallest p-values of the pairwise comparisons, and colour coding shows the region where the ANOVA would have p-values below 0.05 or 0.1. The pairwise comparisons and the ANOVA test reject the same number of cases, but they do so in different cases. The extreme case is when half the groups have means around a single point $\mu_a$ and the other half around a single point $\mu_b$. This gives a large between-group variance, whereas the maximal spread can still be modest: the ANOVA is significant while the pairwise comparisons are not.
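A small numeric sketch of the two mean configurations above: the maximal gap (what the best pairwise t-test sees) is identical, while the variance of the group means (what drives the ANOVA F numerator) is not:

```python
from statistics import pvariance
from itertools import combinations

results = []
for mus in ([-1, 0, 1], [-1, 1, 1]):
    max_gap = max(abs(a - b) for a, b in combinations(mus, 2))
    between_var = pvariance(mus)      # population variance of the group means
    results.append((max_gap, round(between_var, 3)))

print(results)   # [(2, 0.667), (2, 0.889)]
```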
8,311
Strategy to deal with rare events logistic regression
(1) If you've "full knowledge of a population", why do you need a model to make predictions? I suspect you're implicitly considering it as a sample from a hypothetical super-population—see here & here. So should you throw away observations from your sample? No. King & Zeng don't advocate this:

[...] in fields like international relations, the number of observable 1's (such as wars) is strictly limited, so in most applications it is best to collect all available 1's or a large sample of them. The only real decision then is how many 0's to collect as well. If collecting 0's is costless, we should collect as many as we can get, since more data are always better.

The situation I think you're talking about is the example "Selecting on $Y$ in Militarized Interstate Dispute Data". K. & Z. use it to, well, prove their point: in this example, if a researcher had tried to economize by collecting all the 1's & a proportion of the 0's, their estimates would be similar to those of one who'd sampled all available 1's & 0's. How else would you illustrate that?

(2) The main issue here is the use of an improper scoring rule to assess your model's predictive performance. Suppose your model were true, so that for any individual you knew the probability of a rare event—say, being bitten by a snake in the next month. What more do you learn by stipulating an arbitrary probability cut-off & predicting that those above it will be bitten & those below it won't be? If you make the cut-off 50%, you'll likely predict that no one will get bitten. If you make it low enough, you can predict that everyone will get bitten. So what? Sensible application of a model requires discrimination—who should be given the only vial of anti-venom?—or calibration—for whom is it worth buying boots, given their cost relative to that of a snake-bite?
8,312
Strategy to deal with rare events logistic regression
On one level, I wonder how much of your model's inaccuracy simply reflects a process that is hard to predict with the variables you have. Are there other variables that might explain more? On the other hand, if you can recast your dependent variable as a count or ordinal outcome (such as casualties from conflict, or duration of conflict), you might try zero-inflated count regression or hurdle models. These can suffer from the same poor separation between 0 and 1, but conflicts correlated with your variables could pull the fit away from zero.
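As a rough illustration of what the zero-inflated idea buys you, here is a minimal pure-Python sketch (not code from the answer; the function name and the parameter values are invented for illustration) of the zero-inflated Poisson pmf, which mixes a point mass at zero with an ordinary count distribution:

```python
import math

def zip_pmf(k, lam, pi):
    """P(X = k) under a zero-inflated Poisson: with probability pi the
    process emits a structural zero, otherwise an ordinary Poisson(lam) draw."""
    poisson = math.exp(-lam) * lam ** k / math.factorial(k)
    return pi * (k == 0) + (1 - pi) * poisson

# Toy parameters: 30% structural zeros on top of Poisson(2.5) counts
lam, pi = 2.5, 0.30
total = sum(zip_pmf(k, lam, pi) for k in range(100))
print(round(zip_pmf(0, lam, pi), 4))  # 0.3575 -- far more mass at zero than exp(-2.5) ~ 0.082
print(round(total, 6))                # 1.0
```

In practice you would estimate `pi` and `lam` (and let them depend on covariates) by maximum likelihood, e.g. with the `pscl` package in R or `statsmodels` in Python, rather than hand-rolling the pmf.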
8,313
Strategy to deal with rare events logistic regression
In addition to downsampling the majority population, you can oversample the rare events as well, but be aware that oversampling the minority class may lead to overfitting, so check things carefully. This paper gives more information: Yap, Bee Wah, et al. "An Application of Oversampling, Undersampling, Bagging and Boosting in Handling Imbalanced Datasets." (pdf) I'd also like to link this question, since it discusses the same issue.
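For concreteness, here is a minimal stdlib-only sketch of the two resampling strategies mentioned above (the function names and the toy 2% event rate are mine, not taken from the paper):

```python
import random

def undersample(X, y, ratio=1.0, seed=0):
    """Keep every rare event (y == 1) and a random subset of the
    majority class sized ratio * (number of events)."""
    rng = random.Random(seed)
    events = [(x, 1) for x, lab in zip(X, y) if lab == 1]
    nonevents = [(x, 0) for x, lab in zip(X, y) if lab == 0]
    keep = rng.sample(nonevents, min(len(nonevents), int(ratio * len(events))))
    out = events + keep
    rng.shuffle(out)
    return [x for x, _ in out], [lab for _, lab in out]

def oversample(X, y, factor=5, seed=0):
    """Duplicate the rare events `factor` times in total; as the answer
    warns, this kind of replication can encourage overfitting."""
    extra = [(x, 1) for x, lab in zip(X, y) if lab == 1] * (factor - 1)
    out = list(zip(X, y)) + extra
    random.Random(seed).shuffle(out)
    return [x for x, _ in out], [lab for _, lab in out]

# 1,000 observations with a 2% event rate
X = list(range(1000))
y = [1 if i < 20 else 0 for i in X]
Xu, yu = undersample(X, y, ratio=2.0)
print(len(Xu), sum(yu))  # 60 20 -- 60 rows kept, 20 of them events
```

Note that after resampling, predicted probabilities are no longer calibrated to the original event rate and need a prior correction (as in King & Zeng) before being interpreted at face value.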
8,314
Strategy to deal with rare events logistic regression
Your question boils down to: how can I coax logistic regression into finding a better solution? But are you even sure a better solution exists? With only ten parameters, were you able to find one? I would try a more complicated model, for example by adding product terms at the input, or by adding a max-out layer on the target side (so that you essentially have multiple logistic regressors for various adaptively discovered subsets of the target 1s).
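A cheap way to try the first suggestion (product terms at the input) is to expand each feature vector with all pairwise products before refitting the logit; with ten inputs this adds 45 interaction columns. A minimal sketch (the function name is mine):

```python
from itertools import combinations

def add_products(row):
    """Augment a feature vector with all pairwise products so a plain
    logistic regression can pick up simple interactions."""
    return list(row) + [a * b for a, b in combinations(row, 2)]

print(add_products([2.0, 3.0, 5.0]))  # [2.0, 3.0, 5.0, 6.0, 10.0, 15.0]
```

With rare events and many added columns, some regularization (ridge or lasso) on the expanded design is usually advisable.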
8,315
Strategy to deal with rare events logistic regression
Great question. To my mind, the issue is whether you're trying to do inference (are you interested in what your coefficients are telling you?) or prediction. If the latter, then you can borrow models from machine learning (BART, random forests, boosted trees, etc.) that will almost certainly do a better job at prediction than logit. If you're doing inference, and you have so many data points, then try including sensible interaction terms, polynomial terms, etc. Alternatively, you could do inference from BART, as in this paper: http://artsandsciences.sc.edu/people/kernh/publications/Green%20and%20Kern%20BART.pdf I have been doing some work recently on rare events, and had no idea beforehand how much rare cases can affect the analysis. Down-sampling the 0-cases is a must. One strategy to find the ideal down-sample proportion would be:

1. Take all your 1s; say you have n1 of them.
2. Set some value z = the multiple of n1 you will draw; perhaps start at 5 and reduce towards 1.
3. Draw z*n1 of the 0 observations.
4. Estimate your model on this subset of the data, making sure that you cross-validate on the whole dataset.
5. Save the fit measures you're interested in: coefficients of interest, AUC of a ROC curve, relevant values in a confusion matrix, etc.
6. Repeat steps 2–5 for successively smaller values of z.

You will probably find that as you down-sample, the false-negative to false-positive ratio (in your test set) decreases. That is, you'll start predicting more 1s, hopefully ones that are genuinely 1s, but also many that are actually 0s. If there is a saddle point in this misclassification, that would be a good down-sample ratio. Hope this helps. JS
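The down-sampling sweep described above can be sketched as a skeleton like the following; `fit_and_score` is a hypothetical stand-in for your own model fit plus cross-validation, and the toy data and z values are invented:

```python
import random

def sweep_downsample_ratios(ones, zeros, fit_and_score, zs=(5, 4, 3, 2, 1), seed=0):
    """For each multiple z, train on all the 1s plus z * n1 randomly drawn
    0s, then score against the full data. `fit_and_score(train, test)` is
    a placeholder for your own model + cross-validation; it just has to
    return the metrics you care about."""
    rng = random.Random(seed)
    n1 = len(ones)
    results = {}
    for z in zs:
        sample0 = rng.sample(zeros, min(len(zeros), z * n1))
        results[z] = fit_and_score(ones + sample0, ones + zeros)
    return results

# Toy run: the "model" just reports the event rate of its training subset
toy = sweep_downsample_ratios(
    ones=[1] * 10,
    zeros=[0] * 200,
    fit_and_score=lambda train, test: {"train_event_rate": sum(train) / len(train)},
)
print(toy[5], toy[1])  # training event rate rises from ~0.17 toward 0.5 as z shrinks
```

In a real application you would store AUC, confusion-matrix counts, or coefficients in the returned dict and look for the saddle point in the false-negative/false-positive trade-off as z shrinks.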
8,316
Who first used/invented p-values?
Jacob Bernoulli (~1700) - John Arbuthnot (1710) - Nicolaus Bernoulli (1710s) - Abraham de Moivre (1718)

The case of Arbuthnot¹ (see the explanation in the note below) can also be read about in de Moivre's Doctrine of Chances (1718), pages 251–254, where he extends this line of thinking further. De Moivre makes two steps/advancements:

1. The normal approximation of the Bernoulli distribution, which makes it easy to calculate the probability of a result falling inside or outside a given range. In the section before the example about Arbuthnot's case, de Moivre writes about his approximation (now called the Gaussian/normal distribution) to the Bernoulli distribution. This approximation allows one to calculate a p-value easily (which Arbuthnot could not do).

2. Generalization of Arbuthnot's argument. He mentions that "this method of reasoning may also be usefully applied in some other very interesting inquiries" (which may give partial credit to de Moivre for seeing the general applicability of the argument).

According to de Moivre, Jacob Bernoulli wrote about this problem in his Ars Conjectandi. De Moivre renders the title in English as 'Assigning the limits within which, by the repetition of experiments, the probability of an event may approach indefinitely to a probability given', but Bernoulli's original text is in Latin. I do not know sufficient Latin to figure out whether Bernoulli was writing about a concept like the p-value or more like the law of large numbers. It is interesting to note that Bernoulli claims to have had these ideas for 20 years (and since the work was published in 1713, after his death in 1705, it seems to precede the date of 1710 mentioned in the comments by @Glen_b for Arbuthnot).

One source of inspiration for de Moivre was Nicolaus Bernoulli, who in 1712/1713 calculated the probability that the number of boys born is not less than 7037 and not greater than 7363, out of 14000 births in total, when the probability of a boy is 18/35. (The numbers for this problem were based on 80 years of statistics for London. He wrote about this in letters to Pierre Raymond de Montmort, published in the second edition (1713) of Montmort's Essay d'analyse sur les jeux de hazard.) The calculations, which I did not quite follow, turned out odds of 43.58 to 1. (Summing all the terms of the binomial from 7037 up to 7363 with a computer, I get 175:1, so I may have misinterpreted his work/calculation.)

1: John Arbuthnot wrote about this case in An argument for divine providence, taken from the constant regularity observed in the births of both sexes (1710). Explanation of Arbuthnot's argument: the boy:girl birth ratio is remarkably far from even. He does not calculate a p-value exactly (which is not his goal), but uses the probability of getting boys > girls 82 times in a row, $$\left(\frac{1}{2}\right)^{82} \approx \frac{1}{4.836 \times 10^{24}},$$ arguing that this number would be even smaller once you consider that one could take a smaller range, and that it happened in more places than just London and over more than 82 years. He ends up concluding that it is very unlikely, and that this must be some (divine) providence to counter the greater mortality among men, so as to finally end up with equal numbers of men and women. Arbuthnot: "then A's Chance will be near an infinitely small Quantity, at least less than any assignable Fraction. From whence it follows that it is Art, not Chance that governs."
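Both numbers quoted above are easy to recheck with a few lines of Python (a sketch; the variable names are mine, and the 175:1 figure is a direct recomputation of the binomial sum, not Bernoulli's own method):

```python
import math

# Arbuthnot: the chance of boys > girls in 82 consecutive years under a fair coin
odds_arbuthnot = 2 ** 82
print(f"1 in {odds_arbuthnot:.4e}")  # about 1 in 4.836 * 10^24

# Nicolaus Bernoulli: P(7037 <= boys <= 7363) with n = 14000 and p = 18/35,
# summed exactly in log space to avoid floating-point underflow
n, p = 14000, 18 / 35

def log_binom_pmf(k):
    return (math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
            + k * math.log(p) + (n - k) * math.log(1 - p))

inside = sum(math.exp(log_binom_pmf(k)) for k in range(7037, 7364))
odds_inside = inside / (1 - inside)
print(round(odds_inside))  # close to the 175:1 mentioned in the answer
```

The log-gamma formulation is needed because terms like $(18/35)^{7200}$ underflow ordinary doubles; summing the 327 log-space terms reproduces odds in the neighborhood of 175:1 for landing inside the interval.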
8,317
Who first used/invented p-values?
I have three supporting links/arguments that support the date ~1600-1650 for formally developed statistics and much earlier for simply the usage of probabilities. If you accept hypothesis testing as the basis, predating probability, then the Online Etymology Dictionary offers this: "hypothesis (n.) 1590s, "a particular statement;" 1650s, "a proposition, assumed and taken for granted, used as a premise," from Middle French hypothese and directly from Late Latin hypothesis, from Greek hypothesis "base, groundwork, foundation," hence in extended use "basis of an argument, supposition," literally "a placing under," from hypo- "under" (see hypo-) + thesis "a placing, proposition" (from reduplicated form of PIE root *dhe- "to set, put"). A term in logic; narrower scientific sense is from 1640s.". Wiktionary offers: "Recorded since 1596, from Middle French hypothese, from Late Latin hypothesis, from Ancient Greek ὑπόθεσις (hupóthesis, “base, basis of an argument, supposition”), literally “a placing under”, itself from ὑποτίθημι (hupotíthēmi, “I set before, suggest”), from ὑπό (hupó, “below”) + τίθημι (títhēmi, “I put, place”). Noun hypothesis (plural hypotheses) (sciences) Used loosely, a tentative conjecture explaining an observation, phenomenon or scientific problem that can be tested by further observation, investigation and/or experimentation. As a scientific term of art, see the attached quotation. Compare to theory, and quotation given there. 2005, Ronald H. Pine, http://www.csicop.org/specialarticles/show/intelligent_design_or_no_model_creationism, 15 October 2005: Far too many of us have been taught in school that a scientist, in the course of trying to figure something out, will first come up with a "hypothesis" (a guess or surmise—not necessarily even an "educated" guess). ... [But t]he word "hypothesis" should be used, in science, exclusively for a reasoned, sensible, knowledge-informed explanation for why some phenomenon exists or occurs.
An hypothesis can be as yet untested; can have already been tested; may have been falsified; may have not yet been falsified, although tested; or may have been tested in a myriad of ways countless times without being falsified; and it may come to be universally accepted by the scientific community. An understanding of the word "hypothesis," as used in science, requires a grasp of the principles underlying Occam's Razor and Karl Popper's thought in regard to "falsifiability" — including the notion that any respectable scientific hypothesis must, in principle, be "capable of" being proven wrong (if it should, in fact, just happen to be wrong), but none can ever be proved to be true. One aspect of a proper understanding of the word "hypothesis," as used in science, is that only a vanishingly small percentage of hypotheses could ever potentially become a theory.". On probability and statistics Wikipedia offers: "Data collection Sampling When full census data cannot be collected, statisticians collect sample data by developing specific experiment designs and survey samples. Statistics itself also provides tools for prediction and forecasting through statistical models. The idea of making inferences based on sampled data began around the mid-1600's in connection with estimating populations and developing precursors of life insurance. (Reference: Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1082. ISBN 1-57955-008-8). To use a sample as a guide to an entire population, it is important that it truly represents the overall population. Representative sampling assures that inferences and conclusions can safely extend from the sample to the population as a whole. A major problem lies in determining the extent that the sample chosen is actually representative. Statistics offers methods to estimate and correct for any bias within the sample and data collection procedures. 
There are also methods of experimental design for experiments that can lessen these issues at the outset of a study, strengthening its capability to discern truths about the population. Sampling theory is part of the mathematical discipline of probability theory. Probability is used in mathematical statistics to study the sampling distributions of sample statistics and, more generally, the properties of statistical procedures. The use of any statistical method is valid when the system or population under consideration satisfies the assumptions of the method. The difference in point of view between classic probability theory and sampling theory is, roughly, that probability theory starts from the given parameters of a total population to deduce probabilities that pertain to samples. Statistical inference, however, moves in the opposite direction — inductively inferring from samples to the parameters of a larger or total population. From "Wolfram, Stephen (2002). A New Kind of Science. Wolfram Media, Inc. p. 1082.": "Statistical Analysis • History. Some computations of odds for games of chance were already made in antiquity. Beginning around the 1200s increasingly elaborate results based on the combinatorial enumeration of probabilities were obtained by mystics and mathematicians, with systematically correct methods being developed in the mid-1600s and early 1700s. The idea of making inferences from sampled data arose in the mid-1600s in connection with estimating populations and developing precursors of life insurance. The method of averaging to correct for what were assumed to be random errors of observation began to be used, primarily in astronomy, in the mid-1700s, while least squares fitting and the notion of probability distributions became established around 1800. 
Probabilistic models based on random variations between individuals began to be used in biology in the mid-1800s, and many of the classical methods now used for statistical analysis were developed in the late 1800s and early 1900s in the context of agricultural research. In physics fundamentally probabilistic models were central to the introduction of statistical mechanics in the late 1800s and quantum mechanics in the early 1900s. Beginning as early as the 1700s, the foundations of statistical analysis have been vigorously debated, with a succession of fairly specific approaches being claimed as the only ones capable of drawing unbiased conclusions from data.". Other sources: The article "P values: from suggestion to superstition" by Concato and Hartigan has an introduction that explains: "This report, in mainly non-mathematical terms, defines the p value, summarizes the historical origins of the p value approach to hypothesis testing, describes various applications of p≤0.05 in the context of clinical research, and discusses the emergence of $p \leq 5\times10^{-8}$ and other values as thresholds for genomic statistical analyses." The section "Historical origins" states: "Published work on using concepts of probability for comparing data to a scientific hypothesis can be traced back for centuries. In the early 1700s, for example, the physician John Arbuthnot analyzed data on christenings in London during the years 1629–1710 and observed that the number of male births exceeded female births in each of the years studied. He reported$^{[1]}$ that if one assumes a balance of male and female births is based on chance, then the probability of observing an excess of males over 82 consecutive years is $0.5^{82} = 2\times10^{-25}$, or less than a one in a septillion (ie, one in a trillion-trillion) chance. [1]. Arbuthnott J. An argument for divine Providence, taken from the constant regularity observ'd in the births of both sexes. Phil Trans 1710;27:186–90. 
doi:10.1098/rstl.1710.0011 published 1 January 1710 We have some further discussion on our SE site regarding Fisher's method vs. Neyman-Pearson-Wald here: Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"?. An article in the Journal of Epidemiology and Biostatistics (2001) Vol. 6, No. 2, 193–204 by Senn, titled: "Opinion: Two cheers for P-values?" explains this in the introduction: "P-values have long linked medicine and statistics. John Arbuthnot and Daniel Bernoulli were both physicians, in addition to being mathematicians, and their analyses of sex ratios at birth (Arbuthnot) and inclination of the planets’ orbits (Bernoulli) provide the two most famous early examples of significance tests $^{1–4}$. If their ubiquity in medical journals is the standard by which they are judged, P-values are also extremely popular with the medical profession. On the other hand, they are subject to regular criticism from statisticians $^{5–7}$ and only reluctantly defended $^8$. For example, a dozen years ago, the prominent biostatisticians, the late Martin Gardner and Doug Altman $^9$, together with other colleagues, mounted a successful campaign to persuade the British Medical Journal to place less emphasis on P-values and more on confidence intervals. The journal Epidemiology has banned them altogether. Recently, attacks have even appeared in the popular press $^{10,11}$. P-values thus seem to be an appropriate subject for the Journal of Epidemiology and Biostatistics. This essay represents a personal view of what, if anything, may be said to defend them. I shall offer a limited defence of P-values only. ...". References 1 Hald A. A history of probability and statistics and their applications before 1750. New York: Wiley, 1990. 2 Shoesmith E, Arbuthnot, J. In: Johnson, NL, Kotz, S, editors. Leading personalities in statistical sciences. New York: Wiley, 1997:7–10. 3 Bernoulli, D. 
Sur le probleme propose pour la seconde fois par l’Acadamie Royale des Sciences de Paris. In: Speiser D, editor. Die Werke von Daniel Bernoulli, Band 3, Basle: Birkhauser Verlag, 1987:303–26. 4 Arbuthnot J. An argument for divine providence taken from the constant regularity observ’d in the births of both sexes. Phil Trans R Soc 1710;27:186–90. 5 Freeman P. The role of P-values in analysing trial results. Statist Med 1993;12:1443–52. 6 Anscombe FJ. The summarizing of clinical experiments by significance levels. Statist Med 1990;9:703–8. 7 Royall R. The effect of sample size on the meaning of significance tests. Am Stat 1986;40:313–5. 8 Senn SJ. Discussion of Freeman’s paper. Statist Med 1993;12:1453–8. 9 Gardner M, Altman D. Statistics with confidence. Br Med J 1989. 10 Matthews R. The great health hoax. Sunday Telegraph 13 September, 1998. 11 Matthews R. Flukes and flaws. Prospect 20–24, November 1998. @Martijn Weterings: "Was Pearson in 1900 the revival or did this (frequentist) concept appear earlier? How did Jacob Bernoulli think about his 'golden theorem' in a frequentist sense or in a Bayesian sense (what does the Ars Conjectandi tell and are there more sources)?" The American Statistical Association has a webpage on the History of Statistics which, along with this information, has a poster (reproduced in part below) titled "Timeline of statistics".

AD 2: Evidence of a census completed during the Han Dynasty survives.
1500s: Girolamo Cardano calculates probabilities of different dice rolls.
1600s: Edmund Halley relates death rate to age and develops mortality tables.
1700s: Thomas Jefferson directs the first U.S. Census.
1839: The American Statistical Association is formed.
1894: The term “standard deviation” is introduced by Karl Pearson.
1935: R.A. Fisher publishes Design of Experiments.
In the "History" section of Wikipedia's webpage "Law of large numbers" it explains: "The Italian mathematician Gerolamo Cardano (1501–1576) stated without proof that the accuracies of empirical statistics tend to improve with the number of trials. This was then formalized as a law of large numbers. A special form of the LLN (for a binary random variable) was first proved by Jacob Bernoulli. It took him over 20 years to develop a sufficiently rigorous mathematical proof which was published in his Ars Conjectandi (The Art of Conjecturing) in 1713. He named this his "Golden Theorem" but it became generally known as "Bernoulli's Theorem". This should not be confused with Bernoulli's principle, named after Jacob Bernoulli's nephew Daniel Bernoulli. In 1837, S.D. Poisson further described it under the name "la loi des grands nombres" ("The law of large numbers"). Thereafter, it was known under both names, but the "Law of large numbers" is most frequently used. After Bernoulli and Poisson published their efforts, other mathematicians also contributed to refinement of the law, including Chebyshev, Markov, Borel, Cantelli, Kolmogorov and Khinchin.". Question: "Was Pearson the first person to conceive of p-values?" No, probably not. In "The ASA's Statement on p-Values: Context, Process, and Purpose" (09 Jun 2016) by Wasserstein and Lazar, doi: 10.1080/00031305.2016.1154108 there's an official statement on the definition of the p-value (which is no doubt not agreed upon by all disciplines utilizing, or rejecting, p-values) which reads: "2. What is a p-Value? Informally, a p-value is the probability under a specified statistical model that a statistical summary of the data (e.g., the sample mean difference between two compared groups) would be equal to or more extreme than its observed value. 3. Principles ... 6. By itself, a p-value does not provide a good measure of evidence regarding a model or hypothesis. 
Researchers should recognize that a p-value without context or other evidence provides limited information. For example, a p-value near 0.05 taken by itself offers only weak evidence against the null hypothesis. Likewise, a relatively large p-value does not imply evidence in favor of the null hypothesis; many other hypotheses may be equally or more consistent with the observed data. For these reasons, data analysis should not end with the calculation of a p-value when other approaches are appropriate and feasible.". Rejection of the null hypothesis likely occurred long before Pearson. Wikipedia's page on early examples of null hypothesis testing states: Early choices of null hypothesis Paul Meehl has argued that the epistemological importance of the choice of null hypothesis has gone largely unacknowledged. When the null hypothesis is predicted by theory, a more precise experiment will be a more severe test of the underlying theory. When the null hypothesis defaults to "no difference" or "no effect", a more precise experiment is a less severe test of the theory that motivated performing the experiment. An examination of the origins of the latter practice may therefore be useful: 1778: Pierre Laplace compares the birthrates of boys and girls in multiple European cities. He states: "it is natural to conclude that these possibilities are very nearly in the same ratio". Thus Laplace's null hypothesis that the birthrates of boys and girls should be equal given "conventional wisdom". 1900: Karl Pearson develops the chi squared test to determine "whether a given form of frequency curve will effectively describe the samples drawn from a given population." Thus the null hypothesis is that a population is described by some distribution predicted by theory. He uses as an example the numbers of five and sixes in the Weldon dice throw data. 
1904: Karl Pearson develops the concept of "contingency" in order to determine whether outcomes are independent of a given categorical factor. Here the null hypothesis is by default that two things are unrelated (e.g. scar formation and death rates from smallpox). The null hypothesis in this case is no longer predicted by theory or conventional wisdom, but is instead the principle of indifference that led Fisher and others to dismiss the use of "inverse probabilities". Whoever is credited with first rejecting a null hypothesis, I don't think it's reasonable to label them the "discoverer" of skepticism on such weak mathematical standing.
8,318
Choosing optimal alpha in elastic net logistic regression
Clarifying what is meant by $\alpha$ and Elastic Net parameters Different terminology and parameters are used by different packages, but the meaning is generally the same: The R package Glmnet uses the following definition $\min_{\beta_0,\beta} \frac{1}{N} \sum_{i=1}^{N} w_i l(y_i,\beta_0+\beta^T x_i) + \lambda\left[(1-\alpha)||\beta||_2^2/2 + \alpha ||\beta||_1\right]$ Sklearn uses $\min_{w} \frac{1}{2N} \sum_{i=1}^{N} ||y - Xw ||^2_2 + \alpha \times l_1 \text{ratio} ||w||_1 + 0.5 \times \alpha \times (1 - l_1 \text{ratio}) \times ||w||_2^2$ There are alternative parametrizations using $a$ and $b$ as well. To avoid confusion I am going to call $\lambda$ the penalty strength parameter and $L_1 \text{ratio}$ the ratio between the $L_1$ and $L_2$ penalties, ranging from 0 (ridge) to 1 (lasso). Visualizing the impact of the parameters Consider a simulated data set where $y$ consists of a noisy sine curve and $X$ is a two-dimensional feature consisting of $X_1 = x$ and $X_2 = x^2$. Due to the correlation between $X_1$ and $X_2$ the cost function is a narrow valley. The graphics below illustrate the solution path of elastic net regression with two different $L_1$ ratio parameters, as a function of $\lambda$, the strength parameter. For both simulations: when $\lambda = 0$ the solution is the OLS solution on the bottom right, with the associated valley-shaped cost function. As $\lambda$ increases, the regularization kicks in and the solution tends to $(0,0)$. The main difference between the two simulations is the $L_1$ ratio parameter. LHS: for a small $L_1$ ratio, the regularized cost function looks a lot like ridge regression, with round contours. RHS: for a large $L_1$ ratio, the cost function looks a lot like lasso regression, with the typical diamond-shaped contours. 
For an intermediate $L_1$ ratio (not shown) the cost function is a mix of the two. Understanding the effect of the parameters The ElasticNet was introduced to counter some of the limitations of the Lasso, which are: If there are more variables $p$ than data points $n$, $p>n$, the lasso selects at most $n$ variables. Lasso fails to perform grouped selection, especially in the presence of correlated variables. It will tend to select one variable from a group and ignore the others. By combining an $L_1$ and a quadratic $L_2$ penalty we get the advantages of both: $L_1$ generates a sparse model; $L_2$ removes the limitation on the number of selected variables, encourages grouping, and stabilizes the $L_1$ regularization path. You can see this visually on the diagram above: the singularities at the vertices encourage sparsity, while the strictly convex edges encourage grouping. Here is a visualization taken from Hastie (co-inventor of the ElasticNet, with Zou) Further reading https://web.stanford.edu/~hastie/Papers/B67.2%20(2005)%20301-320%20Zou%20&%20Hastie.pdf https://www.researchgate.net/profile/Federico_Andreis/publication/321106005_Shrinkage_methods_ridge_lasso_elastic_nets/links/5a0da3d7aca2729b1f4eeabb/Shrinkage-methods-ridge-lasso-elastic-nets.pdf https://web.stanford.edu/~hastie/TALKS/enet_talk.pdf
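To make the glmnet parametrization above concrete, here is a toy computation of the penalty term in plain Python (illustrative only, not library code); $\alpha = 0$ collapses to the ridge penalty and $\alpha = 1$ to the lasso penalty:

```python
def enet_penalty(beta, lam, alpha):
    """Glmnet-style elastic net penalty:
    lam * [ (1 - alpha) * ||beta||_2^2 / 2 + alpha * ||beta||_1 ]."""
    l1 = sum(abs(b) for b in beta)
    l2_sq = sum(b * b for b in beta)
    return lam * ((1 - alpha) * l2_sq / 2 + alpha * l1)

beta = [3.0, -4.0]                               # toy coefficient vector
ridge = enet_penalty(beta, lam=1.0, alpha=0.0)   # ||beta||_2^2 / 2 = 12.5
lasso = enet_penalty(beta, lam=1.0, alpha=1.0)   # ||beta||_1 = 7.0
mixed = enet_penalty(beta, lam=2.0, alpha=0.5)   # 2 * (6.25 + 3.5) = 19.5
```

Scanning `alpha` between 0 and 1 for a fixed `lam` traces exactly the interpolation between the round ridge contours and the diamond-shaped lasso contours described above.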
8,319
Choosing optimal alpha in elastic net logistic regression
Let me add some very practical remarks despite the age of the question. As I am not an R user, I cannot let code talk, but it should be understandable nevertheless. Normally you should just pick the hyperparameters (here: $\alpha$) with the best CV score. Alternatively, you could select the best $k$ models $f_1, ..., f_k$ and form an ensemble $f(x) = \frac{1}{k}\sum_i{f_i(x)}$ by arithmetically averaging the decision function. This, of course, increases runtime complexity. Hint: sometimes geometric averaging works better, $f(x) = \sqrt[k]{\prod_{i=1}^k{f_i(x)}}$. I suppose this is because of a smoother resulting decision boundary. One advantage of resampling is that you can inspect the sequence of test scores, which here are the scores of the CV. You should not only look at the average but also at the standard deviation (it is not normally distributed, but you act as if it were). Usually you display this as, say, 65.5% (± 2.57%) for accuracy. This way you can tell whether the "small deviations" are more likely to be by chance or structural. Even better is to inspect the complete sequences. If one fold is always off for some reason, you may want to rethink the way you are doing your split (it hints at a faulty experimental design; also: did you shuffle?). In scikit-learn, GridSearchCV stores details about the fold experiments in cv_results_ (see here). With regard to $\alpha$: the higher it is, the more your elastic net will have the $L_1$ sparsity feature. You can check the weights of the resulting models; the higher $\alpha$ is, the more will be set to zero. It is a useful trick to remove the attributes with weights set to zero from your pipeline altogether (this improves runtime performance dramatically). Another trick is to use the elastic net model for feature selection and then retrain an $L_2$ variant. Usually this leads to a dramatic model performance boost, as intercorrelations between the features have been filtered out.
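The arithmetic vs. geometric averaging of the $k$ decision functions can be sketched as follows (assuming the models output strictly positive scores such as predicted probabilities; the function names are mine, not from any package):

```python
import math

def arithmetic_ensemble(scores):
    """Plain average of the k decision-function outputs for one sample."""
    return sum(scores) / len(scores)

def geometric_ensemble(scores):
    """k-th root of the product; requires strictly positive scores."""
    return math.prod(scores) ** (1 / len(scores))

scores = [0.1, 0.4]               # outputs of two hypothetical models for one sample
a = arithmetic_ensemble(scores)   # 0.25
g = geometric_ensemble(scores)    # sqrt(0.04) = 0.2
```

By the AM-GM inequality the geometric mean never exceeds the arithmetic mean, so it is more conservative: one model assigning a near-zero score drags the combined score down, which is one way to think about the smoother decision boundary mentioned above.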
8,320
Optimising for Precision-Recall curves under class imbalance
The ROC curve is insensitive to changes in class imbalance; see Fawcett (2004) "ROC Graphs: Notes and Practical Considerations for Researchers". Up-sampling the low-frequency class is a reasonable approach. There are many other ways of dealing with class imbalance. Boosting and bagging are two techniques that come to mind. This seems like a relevant recent study: Comparing Boosting and Bagging Techniques With Noisy and Imbalanced Data P.S. Neat problem; I'd love to know how it turns out.
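A minimal sketch of the up-sampling idea, replicating minority-class rows with replacement until the classes balance (toy code; dedicated tools such as imbalanced-learn offer richer schemes like SMOTE):

```python
import random

def upsample_minority(X, y, minority_label, seed=0):
    """Return a class-balanced copy of (X, y) by resampling the
    minority class with replacement until it matches the majority."""
    rng = random.Random(seed)
    minority = [(x, t) for x, t in zip(X, y) if t == minority_label]
    majority = [(x, t) for x, t in zip(X, y) if t != minority_label]
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    pairs = majority + minority + extra
    rng.shuffle(pairs)  # avoid feeding the model a sorted-by-class dataset
    return [x for x, _ in pairs], [t for _, t in pairs]

X = [[0], [1], [2], [3], [4], [5]]
y = [0, 0, 0, 0, 0, 1]            # heavily imbalanced toy labels
Xb, yb = upsample_minority(X, y, minority_label=1)
```

Apply this to the training split only; up-sampling the validation split changes the metrics themselves, as the other answers discuss.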
8,321
Optimising for Precision-Recall curves under class imbalance
A recent study "An insight into classification with imbalanced data: Empirical results and current trends on using data intrinsic characteristics" compares three methods of improved classification on unbalanced data: Data Sampling (as suggested in the question) Algorithm modification Cost sensitive learning
8,322
Optimising for Precision-Recall curves under class imbalance
I wanted to draw attention to the fact that the last two experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not a model difference; it is explained by the different distributions of the validation dataset and the properties of the particular METRICS used - precision and recall - which depend strongly on that distribution. To elaborate this point a bit more: if you took X distinct entries from your initial validation dataset and replicated the minority class for the upscaled dataset, your model would make the same predictions for those X entries, correct or incorrect, in both the upscaled and unbalanced validation datasets. The only difference is that for each false positive there will be fewer true positives in the initial dataset (hence lower precision) and more true positives in the balanced dataset (simply because there are more positive examples in the dataset overall). This is why precision and recall are said to be sensitive to skew. On the other hand, as your experiments illustrate as well, ROC does not change. This can also be observed by looking at its definition. That's why ROC is said not to be sensitive to skew. I don't yet have good answers for points 2 and 3, as I am looking for those myself :)
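The argument above can be checked numerically: replicating every positive example k times multiplies TP and FN by k while leaving FP and TN alone, so recall (TPR) and FPR are unchanged but precision shifts. A small sketch with made-up confusion-matrix counts:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall (= TPR), and FPR from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    fpr = fp / (fp + tn)
    return precision, recall, fpr

# original skewed validation set (hypothetical counts)
prec1, rec1, fpr1 = metrics(tp=30, fp=10, fn=20, tn=940)
# same classifier, every positive replicated 10x: TP and FN scale, FP and TN do not
prec2, rec2, fpr2 = metrics(tp=300, fp=10, fn=200, tn=940)
```

Here recall stays at 0.6 and FPR is identical in both settings, while precision jumps from 0.75 to roughly 0.97: the ROC point is unmoved but the PR point is not.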
Optimising for Precision-Recall curves under class imbalance
I wanted to draw attention to the fact, that the last 2 experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not model difference, it is explained
Optimising for Precision-Recall curves under class imbalance I wanted to draw attention to the fact, that the last 2 experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not model difference, it is explained by different distributions of validation dataset and the properties of particular METRICS used - precision and recall, that depend highly on that distribution. To elaborate this point a bit more, if you took X distinct entries from your initial validation dataset and replicated the minority class for the upscaled dataset, your model will make the same predictions for those X entries, correct or incorrect, in both upscaled and unbalanced validation datasets. The only difference is that for each false positive there will be less true positives in the initial dataset (hence lower precision) and more true positives in the balanced dataset (simply due to the fact that there are more positive examples in the dataset in general). This is why Precision and Recall are said to be sensitive to skew. On the other hand, as your experiments illustrate as well, ROC does not change. This can be observed by looking at its definition as well. That's why ROC is said to not be sensitive to skew. I don't yet have good answers for points 2 and 3 as am looking for those myself :)
Optimising for Precision-Recall curves under class imbalance I wanted to draw attention to the fact, that the last 2 experiments are in fact using the SAME model on ALMOST THE SAME dataset. The difference in performance is not model difference, it is explained
8,323
Optimising for Precision-Recall curves under class imbalance
Assuming the upsampled positive samples have the "same distribution" as in the "original set", a few things change as the number of positive samples increases:

1) The number of true positives (TP) increases for all thresholds and, as a result, the ratios TP/(TP+FP) and TP/(TP+FN) increase for all thresholds, so the area under the PRC increases.

2) The expected precision, also called the precision of a "dumb" model, increases from ~1/2700 (in the original set) to ~1/2 (in the case of "ideal" balance). Assuming your model performs better than the "dumb" model, the area under the curve will be more than 0.00037 in the "original set" and more than 0.5 in the ideally balanced set.

3) While training the model on the upscaled dataset, some models may "overfit" the positive samples.

In regard to ROC curves, ROC curves are known to show little effect from class distribution variations (upscaling has a very minor effect on FPR, while you can see some effect on TPR). In regard to focusing on the high-precision/low-recall region, you can optimize with respect to a cost function where false positives are penalized more than false negatives.
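Point 2 above is easy to verify: a "dumb" model that flags examples at random has expected precision equal to the positive prevalence, so balancing the classes lifts the baseline from ~0.00037 to ~0.5. A small pure-Python sketch (hypothetical class counts chosen to match the ~1/2700 prevalence):

```python
import random

random.seed(1)

def dumb_precision(n_pos, n_neg, predict_rate=0.5):
    # A "dumb" model flags each example positive with a fixed probability,
    # independently of its true label.
    tp = sum(random.random() < predict_rate for _ in range(n_pos))
    fp = sum(random.random() < predict_rate for _ in range(n_neg))
    return tp / (tp + fp)

base = dumb_precision(10, 27000)         # ~1/2700 prevalence, as in the question
balanced = dumb_precision(27000, 27000)  # "ideal" balance

print(round(base, 5), round(balanced, 5))
```

The first value sits near 10/27010 ≈ 0.00037 and the second near 0.5, which is why areas under the PR curve are not comparable across datasets with different skews.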
8,324
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
Bayesian inference in a T noise model with an appropriate prior will give a robust estimate of location and scale. The precise conditions that the likelihood and prior need to satisfy are given in the paper Bayesian robustness modelling of location and scale parameters by Andrade and O'Hagan (2011). The estimates are robust in the sense that a single observation cannot make the estimates arbitrarily large, as demonstrated in figure 2 of the paper.

When the data is normally distributed, the SD of the fitted T distribution (for fixed $\nu$) does not match the SD of the generating distribution. But this is easy to fix. Let $\sigma$ be the standard deviation of the generating distribution and let $s$ be the standard deviation of the fitted T distribution. If the data is scaled by 2, then from the form of the likelihood we know that $s$ must scale by 2. This implies that $s = \sigma f(\nu)$ for some fixed function $f$. This function can be computed numerically by simulation from a standard normal. Here is the code to do this:

library(stats)
library(stats4)
y = rnorm(100000, mean=0, sd=1)
nu = 4
nLL = function(s) -sum(stats::dt(y/s, nu, log=TRUE) - log(s))
fit = mle(nLL, start=list(s=1), method="Brent", lower=0.5, upper=2)
# the variance of a standard T is nu/(nu-2)
print(coef(fit)*sqrt(nu/(nu-2)))

For example, at $\nu=4$ I get $f(\nu)=1.18$. The desired estimator is then $\hat{\sigma} = s/f(\nu)$.
8,325
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
As you are asking a question about a very precise problem (robust estimation), I will offer you an equally precise answer. First, however, I will begin by trying to dispel an unwarranted assumption. It is not true that there is a robust Bayesian estimate of location (there are Bayesian estimators of location, but as I illustrate below they are not robust and, apparently, even the simplest robust estimator of location is not Bayesian). In my opinion, the reasons for the absence of overlap between the 'bayesian' and 'robust' paradigms in the location case go a long way towards explaining why there also are no estimators of scatter that are both robust and Bayesian.

With suitable priors on $m, s$ and $\nu$, $m$ will be an estimate of the mean of $y_i$ that will be robust against outliers.

Actually, no. The resulting estimates will only be robust in a very weak sense of the word robust. However, when we say that the median is robust to outliers we mean the word robust in a much stronger sense. That is, in robust statistics, the robustness of the median refers to the property that if you compute the median on a data-set of observations drawn from a uni-modal, continuous model and then replace fewer than half of these observations by arbitrary values, the value of the median computed on the contaminated data is close to the value you would have had, had you computed it on the original (uncontaminated) data-set. Then, it is easy to show that the estimation strategy you propose in the paragraph I quoted above is definitely not robust in the sense in which the word is typically understood for the median.

I'm wholly unfamiliar with Bayesian analysis. However, I was wondering what is wrong with the following strategy as it seems simple, effective and yet has not been considered in the other answers.

The prior is that the good part of the data is drawn from a symmetric distribution $F$ and that the rate of contamination is less than half. Then, a simple strategy would be to:

Compute the median/mad of your dataset. Then compute:
$$z_i=\frac{|x_i-\mbox{med}(x)|}{\mbox{mad}(x)}$$
Exclude the observations for which $z_i>q_{\alpha}(z|x\sim F)$ (this is the $\alpha$ quantile of the distribution of $z$ when $x\sim F$). This quantity is available for many choices of $F$ and can be bootstrapped for the others.
Run a (usual, non-robust) Bayesian analysis on the non-rejected observations.

EDIT: Thanks to the OP for providing self-contained R code to conduct a bona fide Bayesian analysis of the problem. The code below compares the Bayesian approach suggested by the O.P. to its alternative from the robust statistics literature (e.g. the fitting method proposed by Gauss for the case where the data may contain as many as $n/2-2$ outliers and the distribution of the good part of the data is Gaussian).

The central part of the data is $\mathcal{N}(1000,1)$:

n<-100
set.seed(123)
y<-rnorm(n,1000,1)

Add some amount of contaminants:

y[1:30]<-y[1:30]/100-1000
w<-rep(0,n)
w[1:30]<-1

The index w takes value 1 for the outliers. I begin with the approach suggested by the O.P.:

library("rjags")
model_string<-"model{
  for(i in 1:length(y)){
    y[i]~dt(mu,inv_s2,nu)
  }
  mu~dnorm(0,0.00001)
  inv_s2~dgamma(0.0001,0.0001)
  s<-1/sqrt(inv_s2)
  nu~dexp(1/30)
}"
model<-jags.model(textConnection(model_string),list(y=y))
mcmc_samples<-coda.samples(model,"mu",n.iter=1000)
print(summary(mcmc_samples)$statistics[1:2])
summary(mcmc_samples)

I get:

    Mean       SD
384.2283  97.0445

and:

2. Quantiles for each variable:
 2.5%    25%    50%    75%  97.5%
184.6  324.3  384.7  448.4  577.7

(quite far, thus, from the target values)

For the robust method,

z<-abs(y-median(y))/mad(y)
th<-max(abs(rnorm(length(y))))
print(c(mean(y[which(z<=th)]),sd(y[which(z<=th)])))

one gets:

1000.149 0.8827613

(very close to the target values)

The second result is much closer to the real values. But it gets worse. If we classify as outliers those observations for which the estimated $z$-score is larger than th (remember that the prior is that $F$ is Gaussian), then the Bayesian approach finds that all the observations are outliers (the robust procedure, in contrast, flags all and only the outliers as such). This also implies that if you were to run a usual (non-robust) Bayesian analysis on the data not classified as outliers by the robust procedure, you should do fine (e.g. fulfil the objectives stated in your question).

This is just an example, but it's actually fairly straightforward to show (and it can be done formally; see, for example, chapter 2 of [1]) that the parameters of a Student $t$ distribution fitted to contaminated data cannot be depended upon to reveal the outliers.

[1] Ricardo A. Maronna, Douglas R. Martin, Victor J. Yohai (2006). Robust Statistics: Theory and Methods (Wiley Series in Probability and Statistics).
Huber, P. J. (1981). Robust Statistics. New York: John Wiley and Sons.
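The three-step median/MAD rejection strategy described in this answer can be sketched in a few lines of Python (an analogue of the R snippet; here a fixed cutoff of z > 3 stands in for the quantile $q_{\alpha}$, and the data mimic the answer's 70 good points from $\mathcal{N}(1000,1)$ plus 30 gross outliers):

```python
import random
import statistics

random.seed(123)

# 70 "good" points from N(1000, 1) plus 30 gross contaminants.
y = [random.gauss(1000, 1) for _ in range(70)] + \
    [random.gauss(1000, 1) / 100 - 1000 for _ in range(30)]

med = statistics.median(y)
# MAD, rescaled by 1.4826 to be consistent with the SD under a Gaussian F.
mad = 1.4826 * statistics.median(abs(v - med) for v in y)

# Step 2: reject points whose robust z-score exceeds the cutoff.
kept = [v for v in y if abs(v - med) / mad <= 3]

# Step 3 would run an ordinary analysis on `kept`; here we just summarize it.
print(round(statistics.mean(kept), 2), round(statistics.stdev(kept), 2))
```

With fewer than half the points contaminated, the median and MAD stay anchored to the good part of the data, so all 30 contaminants land far beyond the cutoff and the summary of the retained points recovers roughly (1000, 1).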
8,326
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
In Bayesian analysis, using the inverse gamma distribution as a prior for the variance (equivalently, a gamma prior on the precision, the inverse of the variance) is a common choice, or the inverse Wishart distribution for multivariate models. Adding a prior on the variance improves robustness against outliers. There is a nice paper by Andrew Gelman, "Prior distributions for variance parameters in hierarchical models", where he discusses what good choices for the priors on the variances can be.
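As a concrete sketch of how such a prior is used (Python with made-up data; standard conjugate result, not from this answer): for a normal likelihood with known mean, a Gamma(a, b) prior on the precision $\tau$ (shape-rate convention) yields the posterior Gamma(a + n/2, b + Σ(x−μ)²/2).

```python
import random
import math

random.seed(0)

# Hypothetical data: normal with known mean 0 and true SD 2 (precision 0.25).
sigma = 2.0
x = [random.gauss(0, sigma) for _ in range(500)]

# Vague Gamma(a0, b0) prior on the precision tau, in the spirit of the
# dgamma(0.0001, 0.0001) priors used elsewhere in this thread.
a0, b0 = 0.001, 0.001

# Conjugate update for a normal likelihood with known mean 0:
a_post = a0 + len(x) / 2
b_post = b0 + sum(v * v for v in x) / 2

tau_hat = a_post / b_post                # posterior mean of the precision
print(round(1 / math.sqrt(tau_hat), 3))  # implied estimate of sigma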
8,327
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
A robust estimator for the location parameter $\mu$ of some dataset of size $N$ is obtained when one assigns a Jeffreys prior to the variance $\sigma^2$ of the normal distribution and computes the marginal for $\mu$, yielding a $t$ distribution with $N$ degrees of freedom.

Similarly, if you want a robust estimator for the standard deviation $\sigma$ of some data $D$, we can do the following. First, we suppose that the data is normally distributed when its mean and standard deviation are known. Therefore,
$$\left.D\right|_{\mu,\sigma} \sim \mathcal{N}(\mu,\sigma^2)$$
and if $D \equiv (d_1,\ldots,d_N)$ then
$$p(D|\mu,\sigma^2) = \frac{1}{(\sqrt{2\pi}\sigma)^N} \exp\left(-\frac{N}{2\sigma^2}\left((m-\mu)^2+s^2\right)\right)$$
where the sufficient statistics $m$ and $s^2$ are
$$m=\frac{1}{N}\sum_{i=1}^N d_i \qquad s^2 = \frac{1}{N}\sum_{i=1}^N d_i^2 - m^2$$
In addition, using Bayes' theorem, we have
$$p(\mu,\sigma^2|D) \propto p(D|\mu,\sigma^2)\, p(\mu,\sigma^2)$$
A convenient prior for $(\mu,\sigma^2)$ is the normal-inverse-gamma family, which covers a wide range of shapes and is conjugate to this likelihood. This means that the posterior distribution $p(\mu,\sigma^2|D)$ still belongs to the normal-inverse-gamma family, and its marginal $p(\sigma^2|D)$ is an inverse gamma distribution parameterized as
$$\left.\sigma^2\right|_{D} \sim \mathcal{IG}\left(\alpha+N/2,\,2\beta+Ns^2\right) \qquad \alpha,\beta>0$$
From this distribution, we can take the mode, which will give us an estimator for $\sigma^2$. This estimator can be made more or less tolerant to small excursions from misspecifications of the model by varying $\alpha$ and/or $\beta$. The variance of this distribution then provides some indication of the fault-tolerance of the estimate. Since the tails of the inverse gamma are semi-heavy, you get the kind of behaviour you would expect from the $t$ distribution estimate for $\mu$ that you mention.
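A quick numerical check of this posterior-mode estimator (Python sketch with made-up data). Note that inverse-gamma parameterizations differ between authors by factors of two; the sketch below uses the common shape-rate convention, in which the vague-prior-limit update is $\mathcal{IG}(\alpha+N/2,\ \beta+Ns^2/2)$, so the constants need not match the answer's notation exactly.

```python
import random

random.seed(42)

# Hypothetical data, roughly N(5, 2^2), just to exercise the update.
d = [random.gauss(5, 2) for _ in range(2000)]
N = len(d)

# Sufficient statistics as defined in the answer
m = sum(d) / N
s2 = sum(v * v for v in d) / N - m * m

# Marginal posterior of sigma^2 in the shape-rate convention, where
# IG(a, b) has density proportional to x^(-a-1) * exp(-b/x).
alpha, beta = 2.0, 2.0            # illustrative prior hyperparameters
a_post = alpha + N / 2
b_post = beta + N * s2 / 2

var_mode = b_post / (a_post + 1)  # mode of IG(a, b) is b / (a + 1)
print(round(var_mode, 3))
```

With a weak prior and 2000 observations the posterior mode sits close to the true variance of 4, as expected.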
8,328
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
I have followed the discussion from the original question. Rasmus, when you say robustness I am sure you mean robustness in the data (outliers), not misspecification of distributions. I will take the distribution of the data to be a Laplace distribution instead of a $t$-distribution; then, just as in normal regression where we model the mean, here we will model the median (which is very robust), aka median regression (as we all know).

Let the model be $Y=\beta X+\epsilon$, with $\epsilon \sim \text{Laplace}(0,\sigma^2)$. Of course our goal is to estimate the model parameters. We expect our priors to be vague in order to have an objective model. The model at hand has a posterior of the form $f(\beta,\sigma,Y,X)$. Giving $\beta$ a normal prior with large variance makes that prior vague, and a chi-squared prior with a small number of degrees of freedom, mimicking a Jeffreys (vague) prior, is given to $\sigma^2$.

With a Gibbs sampler, what happens? Normal prior + Laplace likelihood = ???? We do not know the resulting distribution. Likewise, chi-squared prior + Laplace likelihood = ??? We do not know the distribution. Fortunately for us, there is a theorem in (Aslan, 2010) that transforms a Laplace likelihood into a scale mixture of normal distributions, which then enables us to enjoy the conjugate properties of our priors. I think the whole process described is fully robust in terms of outliers. In a multivariate setting the chi-squared becomes a Wishart distribution, and we use multivariate Laplace and normal distributions.
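The link between the Laplace likelihood and the median can be checked directly: the Laplace MLE of location minimizes the sum of absolute deviations, and that minimizer is the sample median (the essence of "median regression"). A small Python sketch with simulated Laplace noise (illustrative values, not from the answer):

```python
import random
import statistics

random.seed(7)

# Laplace(mu, b) noise, sampled as the difference of two Exp(1) variables.
def laplace(mu, b):
    return mu + b * (random.expovariate(1.0) - random.expovariate(1.0))

y = [laplace(3.0, 1.0) for _ in range(2001)]

# The negative Laplace log-likelihood in the location c is (up to constants)
# the sum of absolute deviations |y_i - c|.
def l1_loss(c):
    return sum(abs(v - c) for v in y)

# Coarse grid search over [0, 6] for the L1 minimizer.
grid = [i / 100 for i in range(0, 601)]
c_star = min(grid, key=l1_loss)

print(round(statistics.median(y), 2), round(c_star, 2))
```

The grid minimizer of the L1 loss coincides (up to the grid spacing) with the sample median, which is why modeling the noise as Laplace yields a robust location estimate.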
8,329
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
Suppose that you have $K$ groups and you want to model the distribution of their sample variances, perhaps in relation to some covariates $\bf{x}$. That is, suppose that your data point for group $k \in \{1, \ldots, K\}$ is $\textrm{Var}(y_k) \in [0, \infty)$. The question here is, "What is a robust model for the likelihood of the sample variance?"

One way to approach this is to model the transformed data $\textrm{ln}[\textrm{Var}(y_k)]$ as coming from a $t$ distribution, which as you have already mentioned is a robust version of the normal distribution. If you don't feel like assuming that the transformed variance is approximately normal as $n \rightarrow \infty$, then you could choose a probability distribution with positive real support that is known to have heavy tails compared to another distribution with the same location. For example, there is a recent answer to a question on Cross Validated about whether the lognormal or gamma distribution has heavier tails, and it turns out that the lognormal distribution does (thanks to @Glen_b for that contribution). In addition, you could explore the half-Cauchy family. Similar reasoning applies if instead you are assigning a prior distribution over a scale parameter for a normal distribution.

Tangentially, the lognormal and inverse-gamma distributions are not advisable if you want to form a boundary-avoiding prior for the purposes of posterior mode approximation, because they peak sharply if you parameterize them so that the mode is near zero. See BDA3 chapter 13 for discussion. So in addition to identifying a robust model in terms of tail thickness, keep in mind that kurtosis may matter to your inference, too. I hope this helps you as much as your answer to one of my recent questions helped me.
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be?
Suppose that you have $K$ groups and you want to model the distribution of their sample variances, perhaps in relation to some covariates $\bf{x}$. That is, suppose that your data point for group $k \
What would a robust Bayesian model for estimating the scale of a roughly normal distribution be? Suppose that you have $K$ groups and you want to model the distribution of their sample variances, perhaps in relation to some covariates $\bf{x}$. That is, suppose that your data point for group $k \in {1 \ldots K}$ is $\textrm{Var}(y_k) \in [0, \infty)$. The question here is, "What is a robust model for the likelihood of the sample variance?" One way to approach this is to model the transformed data $\textrm{ln}[\textrm{Var}(y_k)]$ as coming from a $t$ distribution, which as you have already mentioned is a robust version of the normal distribution. If you don't feel like assuming that the transformed variance is approximately normal as $n \rightarrow \infty$, then you could choose a probability distribution with positive real support that is known to have heavy tails compared to another distribution with the same location. For example, there is a recent answer to a question on Cross Validated about whether the lognormal or gamma distribution has heavier tails, and it turns out that the lognormal distribution does (thanks to @Glen_b for that contribution). In addition, you could explore the half-Cauchy family. Similar reasoning applies if instead you are assigning a prior distribution over a scale parameter for a normal distribution. Tangentially, the lognormal and inverse-gamma distributions are not advisable if you want to form a boundary avoiding prior for the purposes of posterior mode approximation because they peak sharply if you parameterize them so that the mode is near zero. See BDA3 chapter 13 for discussion. So in addition to identifying a robust model in terms of tail thickness, keep in mind that kurtosis may matter to your inference, too. I hope this helps you as much as your answer to one of my recent questions helped me.
8,330
How to understand SARIMAX intuitively?
As you noted, (1) an AR model relates the value of an observation $x$ at time $t$ to the previous values, with some error: $$ x_t = \phi x_{t-1} + \varepsilon_t $$ Let's substitute in $ x_{t-1} $, and then $ x_{t-2} $: $$\begin{aligned} x_t &= \phi (\phi x_{t-2} + \varepsilon_{t-1}) + \varepsilon_t \\ &= \phi^2x_{t-2} + \phi\varepsilon_{t-1} + \varepsilon_t \\ &= \phi^3x_{t-3} + \phi^2\varepsilon_{t-2} + \phi\varepsilon_{t-1} + \varepsilon_t \end{aligned} $$ Taking that out to infinity: $$ x_t = \phi^nx_{t-n} + \phi^{n-1}\varepsilon_{t-n+1} + ... + \phi\varepsilon_{t-1}+ \varepsilon_t $$ You can write any (stationary) AR($p$) as an MA($\infty$), though of course you run into a giant pile-up of terms on top of one another with $p>1$. Having seen that, let's rephrase our definition (1) now. An AR process relates the value of an observation $x$ at time $t$ to an infinite sequence of decaying error shocks $\varepsilon$ from prior time periods (that we don't directly observe). So what an MA process is might be clearer now. (2) An MA($q$) process relates the value of an observation $x$ at time $t$ to just $q$ error shocks from prior periods (that we don't directly observe), whose coefficients are allowed to vary more than the exponential decay implicit in an AR model. As you note, it has nothing to do with the usual "moving average" concept. With some conditions on the coefficients $\theta_1...\theta_q$ of an MA($q$) process, we can actually do something very similar to what I showed for an AR process above, that is, write the MA($q$) as an AR($\infty$). So it's just as valid to restate (2) to say an MA process relates the value of an observation $x$ at time $t$ to a decaying sequence of all prior values of $x$. So an ARMA model just combines those two ideas, relating $x_t$ to both an infinite decaying sequence and a defined sequence.
ARIMA just adds in differencing to the mix, that is, you run ARMA on $x_t - x_{t-1}$ (or further differences as it may be), to remove trend, as you noted.
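The AR(1) → MA($\infty$) rewriting above can be checked by simulation; this sketch (with an assumed $\phi = 0.6$) rebuilds the series from a truncated sum of decaying error shocks:

```python
import numpy as np

rng = np.random.default_rng(42)
phi, n = 0.6, 10_000

# Simulate an AR(1): x_t = phi * x_{t-1} + eps_t
eps = rng.normal(size=n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

# Rebuild x_t from the MA(inf) form, truncated at 50 lags:
# x_t ~ sum_{j=0}^{49} phi^j * eps_{t-j}
lags = 50
weights = phi ** np.arange(lags)
x_ma = np.array([weights @ eps[t - lags + 1:t + 1][::-1] for t in range(lags, n)])

# The truncated MA representation matches the AR recursion almost exactly,
# because phi^50 is negligible.
err = np.max(np.abs(x[lags:] - x_ma))
print(err)
```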
8,331
Convolutional neural networks: Aren't the central neurons over-represented in the output?
Sparse representations are expected in hierarchical models. Possibly, what you are discovering is a problem intrinsic to the hierarchical structure of deep learning models. You will find quite a few scientific papers on "sparse representations", especially in memory research. I think you would benefit from reading about "receptive fields" in visual cortex. Not only are there ON and OFF cells in the mammal brain, but also RF cells that fire both during ON and OFF. Perhaps the edge/sparsity problem could be circumvented by updating the model to reflect current neuroscience on vision, especially in animal models.
8,332
Convolutional neural networks: Aren't the central neurons over-represented in the output?
You're right that this is an issue if the convolution operates only on the image pixels, but the problem disappears if you zero-pad the images enough: "full" padding of $k-1$ pixels per side for a $k \times k$ filter ensures that the convolution will apply the filter the same number of times to each pixel, while the common "same" padding only reduces the imbalance.
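One can count filter placements per pixel to see the effect. This sketch (illustrative sizes) shows that with no padding the corners are visited far less often than the centre, and that "full" zero-padding of $k-1$ pixels per side equalizes the counts:

```python
import numpy as np

def coverage_counts(h, w, k, pad):
    """Count how many k-by-k filter placements include each original pixel."""
    counts = np.zeros((h + 2 * pad, w + 2 * pad), dtype=int)
    H, W = counts.shape
    for i in range(H - k + 1):
        for j in range(W - k + 1):
            counts[i:i + k, j:j + k] += 1
    # Keep counts for the original (unpadded) pixels only.
    return counts[pad:H - pad, pad:W - pad]

no_pad = coverage_counts(6, 6, 3, pad=0)
same = coverage_counts(6, 6, 3, pad=1)
full = coverage_counts(6, 6, 3, pad=2)
print(no_pad[0, 0], no_pad[3, 3])  # 1 9  (corner vs centre, no padding)
print(same[0, 0], same[3, 3])      # 4 9  ("same" padding narrows the gap)
print(full.min(), full.max())      # 9 9  ("full" padding equalizes)
```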
8,333
Supervised learning, unsupervised learning and reinforcement learning: Workflow basics
This is a very nice compact introduction to the basic ideas! Reinforcement Learning I think your use case description of reinforcement learning is not exactly right. The term classify is not appropriate. A better description would be: I don't know how to act in this environment, can you find a good behavior and meanwhile I'll give you feedback. In other words, the goal is rather to control something well than to classify something well. Input: the environment, which is defined by all possible states and the possible actions in those states, and the reward function, which depends on the state and/or action. Algorithm: the agent is in a state, takes an action to transfer to another state, and gets a reward for the action in that state. Output: the agent wants to find an optimal policy which maximizes the reward.
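The input/algorithm/output loop described above can be sketched with tabular Q-learning on a toy environment. The environment, states, and hyperparameters below are invented purely for illustration:

```python
import random

random.seed(0)

# Toy environment: states 0..4 on a line, action 0 = left, action 1 = right;
# reaching state 4 yields reward 1 and ends the episode.
GOAL = 4

def step(state, action):
    nxt = max(0, min(GOAL, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else 0.0
    return nxt, reward, nxt == GOAL

# Q-learning: the agent starts with no idea how to act and learns a policy
# purely from reward feedback.
Q = [[0.0, 0.0] for _ in range(GOAL + 1)]
alpha, gamma, eps = 0.5, 0.9, 0.3
for _ in range(300):
    s, done = 0, False
    while not done:
        # epsilon-greedy: sometimes explore, otherwise act greedily
        a = random.randrange(2) if random.random() < eps else Q[s].index(max(Q[s]))
        s2, r, done = step(s, a)
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy for the non-terminal states is "always move right".
policy = [Q[s].index(max(Q[s])) for s in range(GOAL)]
print(policy)
```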
8,334
Supervised learning, unsupervised learning and reinforcement learning: Workflow basics
Disclaimer: I am no expert and I even have never done something with reinforcement learning (yet), so any feedback would be welcome... Here is an answer that adds some tiny mathematical notes to your list and some different thoughts on when to use what. I hope the enumeration is self-explanatory enough: Supervised We have data $\mathcal{D} = \{(\boldsymbol{x}_0,y_0), (\boldsymbol{x}_1,y_1), \ldots, (\boldsymbol{x}_n,y_n)\}$ We look for a model $g$ that minimises some loss/cost measure $L(y_i, g(\boldsymbol{x}_i))$ for all points $0 \leq i < l$ We evaluate the model by computing the loss/cost $L$ for the rest of the data ($l \leq i \leq n$) in order to get an idea how well the model generalises We can give examples, but we cannot give an algorithm to get from input to output Setting for classification and regression Unsupervised We have data $\mathcal{D} = \{\boldsymbol{x}_0, \boldsymbol{x}_1, \ldots, \boldsymbol{x}_n\}$ We look for a model $g$ that gives us some insight in our data. We have little to no measures to say whether we did something useful/interesting We have some data, but we have no idea where to start looking for useful/interesting stuff Setting for clustering, dimensionality reduction, finding hidden factors, generative models, etc. Reinforcement We have no data We construct a model $g$ that generates data $\boldsymbol{x}_i$ (often called actions), which can be based on measurements and/or previous actions, in an attempt to maximise some reward measure $R(\boldsymbol{x}_i)$, which is generally not known to the model (it needs to be learned as well). We evaluate by means of the reward function after it had some time to learn. We have no idea how to do something, but we can say whether it has been done right or wrong This seems especially useful for sequential decision tasks. References: Si, J., Barto, A., Powell, W. and Wunsch, D. 
(2004) Reinforcement Learning and Its Relationship to Supervised Learning, in Handbook of Learning and Approximate Dynamic Programming, John Wiley & Sons, Inc., Hoboken, NJ, USA. doi: 10.1002/9780470544785.ch2
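The supervised recipe above (fit $g$ on the first $l$ points by minimising a loss, then evaluate the loss on the rest) can be sketched as follows; the data-generating line $y = 2x + 1$ is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Data D = {(x_i, y_i)}: y = 2x + 1 + noise
n, l = 200, 150                       # n points, the first l used for fitting
x = rng.uniform(-1, 1, size=n)
y = 2 * x + 1 + rng.normal(0, 0.1, size=n)

# Fit g on points 0 <= i < l by minimising the squared loss (y - g(x))^2.
A = np.column_stack([x[:l], np.ones(l)])
coef, *_ = np.linalg.lstsq(A, y[:l], rcond=None)

# Evaluate the loss on the held-out points l <= i < n to gauge generalisation.
pred = coef[0] * x[l:] + coef[1]
test_mse = np.mean((y[l:] - pred) ** 2)
print(coef, test_mse)
```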
8,335
Bound for Arithmetic Harmonic mean inequality for matrices?
Yes, indeed there is. Please see the work by Mond and Pec̆arić. They established a mixed arithmetic-harmonic mean inequality for positive semi-definite matrices. Here is a link to the paper that contains the proof: https://www.sciencedirect.com/science/article/pii/0024379595002693 After downloading the paper, the proof is on pages 450-452, in the Main Result section. Here is a citation in case you need it: Mond, B., and Pec̆arić, J. E. (1996), “A mixed arithmetic-mean-harmonic-mean matrix inequality,” Linear Algebra and its Applications, Linear Algebra and Statistics: In Celebration of C. R. Rao’s 75th Birthday (September 10, 1995), 237–238, 449–454. https://doi.org/10.1016/0024-3795(95)00269-3. I hope this helps you. Best, =K=
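As a quick numerical sanity check of the basic matrix arithmetic-harmonic mean ordering $(A+B)/2 \succeq 2(A^{-1}+B^{-1})^{-1}$ (not the mixed inequality of the cited paper itself), one can test random positive-definite pairs:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pd(d):
    """Random symmetric positive-definite matrix."""
    M = rng.normal(size=(d, d))
    return M @ M.T + d * np.eye(d)

# Check (A + B)/2 >= 2 (A^-1 + B^-1)^-1 in the Loewner order:
# the difference should be positive semi-definite for every PD pair.
min_eigs = []
for _ in range(100):
    A, B = random_pd(4), random_pd(4)
    am = (A + B) / 2
    hm = 2 * np.linalg.inv(np.linalg.inv(A) + np.linalg.inv(B))
    min_eigs.append(np.linalg.eigvalsh(am - hm).min())

print(min(min_eigs))   # >= 0 up to floating-point error
```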
8,336
Variance on the sum of predicted values from a mixed effect model on a timeseries
In matrix notation a mixed model can be represented as y = X*beta + Z*u + epsilon where X and Z are known design matrices relating to the fixed effects and random effects observations, respectively. I would apply a simple and adequate (but not the best) transformation for correcting for auto-correlation that involves the loss of the first observation, and replacing the column vector of [y1, y2,...yn] with a smaller by one observation column vector, namely: [y2 - rho*y1, y3 - rho*y2,..., yn - rho*y(n-1)], where rho is your estimated value for serial auto-correlation. This can be performed by multiplying by a matrix T, forming T*y, where the 1st row of T is composed as follows: [ -rho, 1, 0, 0,....], the 2nd row: [ 0, -rho, 1, 0, 0, ...], etc. Similarly, the other design matrices are changed to T*X and T*Z. Also, the variance-covariance matrix of the error terms is altered as well, now with independent error terms. Now, just compute the solution with the new design matrices.
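The transformation matrix T described above can be built directly; a small sketch with an assumed rho:

```python
import numpy as np

def ar1_transform(n, rho):
    """(n-1) x n matrix T with rows [..., -rho, 1, ...]:
    (T @ y)[i] = y[i+1] - rho * y[i]."""
    T = np.zeros((n - 1, n))
    for i in range(n - 1):
        T[i, i] = -rho
        T[i, i + 1] = 1.0
    return T

rho, n = 0.7, 6
y = np.arange(1.0, n + 1)             # y1..y6 = 1..6
T = ar1_transform(n, rho)
print(T @ y)                          # [y2 - rho*y1, ..., y6 - rho*y5]
```

The same T would then pre-multiply the design matrices X and Z before refitting.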
8,337
What is the hardest statistical concept to grasp?
For some reason, people have difficulty grasping what a p-value really is.
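One concrete way to see what a p-value is: the probability, computed assuming the null hypothesis is true, of a test statistic at least as extreme as the one observed. A simulation sketch (all settings illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# H0: the mean is 0. The test statistic is the absolute sample mean.
n = 50
observed = rng.normal(0.3, 1, size=n)     # data actually drawn with mean 0.3
t_obs = abs(observed.mean())

# Approximate the p-value by simulating many datasets under H0 and asking
# how often the statistic is at least as extreme as the observed one.
sims = rng.normal(0, 1, size=(100_000, n))
p_value = np.mean(np.abs(sims.mean(axis=1)) >= t_obs)
print(p_value)
```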
8,338
What is the hardest statistical concept to grasp?
Similar to shabbychef's answer, it is difficult to understand the meaning of a confidence interval in frequentist statistics. I think the biggest obstacle is that a confidence interval doesn't answer the question that we would like to answer. We'd like to know, "what's the chance that the true value is inside this particular interval?" Instead, we can only answer, "what's the chance that a randomly chosen interval created in this way contains the true parameter?" The latter is obviously less satisfying.
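The "randomly chosen interval" reading can be demonstrated by simulation: repeat the sampling many times and check how often the procedure's interval contains the fixed true parameter (a known-sigma z interval is assumed for simplicity):

```python
import numpy as np

rng = np.random.default_rng(0)

# "95% confidence" is a property of the procedure: across many repeated
# samples, about 95% of the intervals it produces contain the true mean.
true_mu, sigma, n, reps = 10.0, 2.0, 25, 10_000
samples = rng.normal(true_mu, sigma, size=(reps, n))
means = samples.mean(axis=1)
half_width = 1.96 * sigma / np.sqrt(n)    # known-sigma z interval

covered = np.mean((means - half_width <= true_mu) & (true_mu <= means + half_width))
print(covered)   # close to 0.95
```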
8,339
What is the hardest statistical concept to grasp?
What is the meaning of "degrees of freedom"? How about df that are not whole numbers?
8,340
What is the hardest statistical concept to grasp?
Conditional probability probably leads to most mistakes in everyday experience. There are many harder concepts to grasp, of course, but people usually don't have to worry about them--this one they can't get away from & is a source of rampant misadventure.
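A standard example of how conditional probability trips people up is the base-rate problem; the disease and test numbers below are illustrative:

```python
# A disease affects 1% of people; the test is 95% sensitive and 95% specific.
# Many people guess P(disease | positive) is about 0.95 -- it is not.
p_d = 0.01
p_pos_given_d = 0.95
p_pos_given_not_d = 0.05

# Bayes' rule: P(D | +) = P(+ | D) P(D) / P(+)
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
p_d_given_pos = p_pos_given_d * p_d / p_pos
print(p_d_given_pos)   # ~0.16, not 0.95
```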
8,341
What is the hardest statistical concept to grasp?
I think that very few scientists understand this basic point: It is only possible to interpret results of statistical analyses at face value, if every step was planned in advance. Specifically: Sample size has to be picked in advance. It is not ok to keep analyzing the data as more subjects are added, stopping when the results look good. Any methods used to normalize the data or exclude outliers must also be decided in advance. It isn't ok to analyze various subsets of the data until you find results you like. And finally, of course, the statistical methods must be decided in advance. It is not ok to analyze the data via parametric and nonparametric methods, and pick the results you like. Exploratory methods can be useful to, well, explore. But then you can't turn around and run regular statistical tests and interpret the results in the usual way.
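The first point (sample size fixed in advance) can be demonstrated by simulation: testing repeatedly as subjects accumulate and stopping at the first p < 0.05 inflates the false-positive rate well beyond the nominal 5%, even when the null is true. A sketch with illustrative settings:

```python
import numpy as np

rng = np.random.default_rng(0)

def peeking_rejects(max_n=200, batch=10):
    """Test after every batch of 10 subjects (up to 200) and stop as soon
    as |z| > 1.96. The null is true: the data really have mean 0."""
    data = rng.normal(0, 1, size=max_n)
    for n in range(batch, max_n + 1, batch):
        x = data[:n]
        z = abs(x.mean()) / (x.std(ddof=1) / np.sqrt(n))
        if z > 1.96:
            return True
    return False

rate = np.mean([peeking_rejects() for _ in range(2000)])
print(rate)   # substantially above the nominal 0.05
```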
8,342
What is the hardest statistical concept to grasp?
Tongue firmly in cheek: For frequentists, the Bayesian concept of probability; for Bayesians, the frequentist concept of probability. ;o) Both have merit of course, but it can be very difficult to understand why one framework is interesting/useful/valid if your grasp of the other is too firm. Cross-validated is a good remedy as asking questions and listening to answers is a good way to learn.
8,343
What is the hardest statistical concept to grasp?
From my personal experience the concept of likelihood can also cause quite a lot of stir, especially for non-statisticians. As Wikipedia says, it is very often mixed up with the concept of probability, which is not exactly correct.
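The distinction can be made concrete with a coin-tossing example: probability fixes the parameter and varies the data, while likelihood fixes the data and varies the parameter:

```python
from math import comb

def binom_pmf(k, n, p):
    """P(k heads in n tosses | heads probability p)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Probability: fix p, ask about the data. Summing over k gives 1.
print(binom_pmf(7, 10, 0.5))

# Likelihood: fix the data (7 heads in 10 tosses), ask about p.
# L(p) = P(data | p) as a function of p; it need not integrate to 1.
grid = [i / 100 for i in range(101)]
lik = [binom_pmf(7, 10, p) for p in grid]
mle = grid[lik.index(max(lik))]
print(mle)   # maximised at p = 0.7 = 7/10
```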
8,344
What is the hardest statistical concept to grasp?
Fiducial inference. Even Fisher admitted he didn't understand what it does, and he invented it.
8,345
What is the hardest statistical concept to grasp?
What do the different distributions really represent, besides how they are used?
8,346
What is the hardest statistical concept to grasp?
I think the question is interpretable in two ways, which will give very different answers: 1) For people studying statistics, particularly at a relatively advanced level, what is the hardest concept to grasp? 2) Which statistical concept is misunderstood by the most people? For 1) I don't know the answer at all. Something from measure theory, maybe? Some type of integration? I don't know. For 2) p-value, hands down.
8,347
What is the hardest statistical concept to grasp?
Confidence interval in non-Bayesian tradition is a difficult one.
8,348
What is the hardest statistical concept to grasp?
I think people miss the boat on pretty much everything the first time around. I think what most students don't understand is that they're usually estimating parameters based on samples. They don't know the difference between a sample statistic and a population parameter. If you beat these ideas into their head, the other stuff should follow a little bit easier. I'm sure most students don't understand the crux of the CLT either.
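The sample-statistic-versus-parameter point, and the crux of the CLT, can be shown in a few lines: sample means from a skewed population scatter around the population mean, with spread $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population parameter vs sample statistic: draw from a skewed population
# (exponential, true mean 1) and look at the distribution of sample means.
pop_mean, n = 1.0, 50
sample_means = rng.exponential(pop_mean, size=(20_000, n)).mean(axis=1)

# The statistics scatter around the parameter, and (CLT) their distribution
# is approximately normal with sd sigma/sqrt(n) = 1/sqrt(50).
print(sample_means.mean())            # close to 1.0
print(sample_means.std())             # close to 1/sqrt(50) ~ 0.141
```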
Generating random numbers manually
If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a geometric distribution we can count how many coin tosses are needed before we obtain heads. To simulate a binomial distribution, we can toss our coin $n$ times (or simply toss $n$ coins) and count the heads. The "quincunx" or "bean machine" or "Galton box" is a more kinetic alternative — why not set one into action and see for yourself? It seems there is no such thing as a "weighted coin" but if we wish to vary the probability parameter of our Bernoulli or binomial variable to values other than $p = 0.5$, the needle of Georges-Louis Leclerc, Comte de Buffon will allow us to do so. To simulate the discrete uniform distribution on $\{1, 2, 3, 4, 5, 6\}$ we roll a six-sided die. Fans of role-playing games will have encountered more exotic dice, for example tetrahedral dice to sample uniformly from $\{1,2,3,4\}$, while with a spinner or roulette wheel one can go further still. (Image credit) Would we have to be mad to generate random numbers in this manner today, when it is just one command away on a computer console — or, if we have a suitable table of random numbers available, one foray to the dustier corners of the bookshelf? Well perhaps, though there is something pleasingly tactile about a physical experiment. But for people working before the Computer Age, indeed before widely available large-scale random number tables (of which more later), simulating random variables manually had more practical importance. When Buffon investigated the St. Petersburg paradox — the famous coin-tossing game where the amount the player wins doubles every time a heads is tossed, the player loses upon the first tails, and whose expected pay-off is counter-intuitively infinite — he needed to simulate the geometric distribution with $p=0.5$. 
To do so, it seems he hired a child to toss a coin to simulate 2048 plays of the St. Petersburg game, recording how many tosses before the game ended. This simulated geometric distribution is reproduced in Stigler (1991):

Tosses   Frequency
  1        1061
  2         494
  3         232
  4         137
  5          56
  6          29
  7          25
  8           8
  9           6

In the same essay where he published this empirical investigation into the St. Petersburg paradox, Buffon also introduced the famous "Buffon's needle". If a plane is divided into strips by parallel lines a distance $d$ apart, and a needle of length $l \leq d$ is dropped onto it, the probability the needle crosses one of the lines is $\frac{2l}{\pi d}$. Buffon's needle can, therefore, be used to simulate a random variable $X \sim \text{Bernoulli}(\frac{2l}{\pi d})$ or $X \sim \text{Binomial}(n,\frac{2l}{\pi d})$, and we can adjust the probability of success by altering the lengths of our needles or (perhaps more conveniently) the distance at which we rule the lines.

An alternative use of Buffon's needles is as a terrifically inefficient way to find a probabilistic approximation for $\pi$. Imagine 17 matchsticks are thrown, of which 11 cross a line. When the distance between the ruled lines is set equal to the length of the matchstick, the expected proportion of crossing matchsticks is $\frac{2}{\pi}$ and hence we can estimate $\hat \pi$ as twice the reciprocal of the observed fraction: here we obtain $\hat \pi = 2 \cdot \frac{17}{11} \approx 3.1$.

In 1901 Mario Lazzarini claimed to have performed the experiment using 2.5 cm needles with lines 3 cm apart, and after 3408 tosses obtained $\hat \pi = \frac{355}{113}$. This is a well-known rational approximation to $\pi$, accurate to six decimal places. Badger (1994) provides convincing evidence that this was fraudulent, not least that to be 95% confident of six decimal places of accuracy using Lazzarini's apparatus, a patience-sapping 134 trillion needles must be thrown!
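The needle-throwing experiment is easy to simulate. The sketch below (my own, not from the answer) drops needles of length $l$ on lines spaced $d$ apart and inverts the crossing fraction to estimate $\pi$. Note one honest caveat: the simulated angle uses $\pi$ itself, so as a computational route to $\pi$ this is circular — only a physical throw avoids that.

```python
import math
import random

random.seed(42)

def buffon_pi(n_throws, l=1.0, d=1.0):
    """Estimate pi by throwing n_throws needles of length l on lines spaced d
    apart (l <= d). A needle crosses a line when the distance from its centre
    to the nearest line is at most (l/2)*sin(theta), theta its angle to the lines."""
    crossings = 0
    for _ in range(n_throws):
        y = random.uniform(0, d / 2)            # centre-to-nearest-line distance
        theta = random.uniform(0, math.pi / 2)  # needle angle
        if y <= (l / 2) * math.sin(theta):
            crossings += 1
    # P(cross) = 2l / (pi d), so pi is estimated by 2 l n / (d * crossings)
    return 2 * l * n_throws / (d * crossings)

print(buffon_pi(100_000))  # a slowly converging estimate near 3.14
```

The convergence is painfully slow — the standard error of the estimate shrinks only like $1/\sqrt{n}$ — which is exactly why Lazzarini's six-decimal claim is so implausible.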
Certainly Buffon's needle is more useful as a random number generator than it is as a method for estimating $\pi$. Our generators so far have been disappointingly discrete. What if we want to simulate a normal distribution? One option is to obtain random digits and use them to form good discrete approximations to a uniform distribution on $[0,1]$, then perform some calculations to transform these into random normal deviates. A spinner or roulette wheel could give decimal digits from zero to nine; a tossed coin can generate binary digits; if our arithmetic skills can cope with a funkier base, even a standard set of dice would do — or we could use a die to generate binary digits via odd/even scores. Other answers have covered this kind of transformation-based approach in more detail; I defer any further discussion of it until the end.

By the late nineteenth century the utility of the normal distribution was well-known, and so there were statisticians keen to simulate random normal deviates. Needless to say, lengthy hand calculations would not have been suitable except to set up the simulating process in the first place. Once that was established, the generation of the random numbers had to be relatively quick and easy. Stigler (1991) lists the methods employed by three statisticians of this era. All were researching smoothing techniques: random normal deviates were of obvious interest, e.g. to simulate measurement error that needed to be smoothed over.

The remarkable American statistician Erastus Lyman De Forest was interested in smoothing life tables, and encountered a problem that required the simulation of the absolute values of normal deviates. In what will prove a running theme, De Forest was really sampling from a half-normal distribution. Moreover, rather than using a standard deviation of one (the $Z \sim N(0, 1^2)$ we are used to calling "standard"), De Forest wanted a "probable error" (median deviation) of one.
This was the form given in the table of "Probability of Errors" in the appendices of "A Manual Of Spherical And Practical Astronomy, Volume II" by William Chauvenet. From this table, De Forest interpolated the quantiles of a half-normal distribution, from $p=0.005$ to $p=0.995$, which he deemed to be "errors of equal frequency". Should you wish to simulate the normal distribution, following De Forest, you can print this table out and cut it up. De Forest (1876) wrote that the errors "have been inscribed upon 100 bits of card-board of equal size, which were shaken up in a box and all drawn out one by one".

The astronomer and meteorologist Sir George Howard Darwin (son of the naturalist Charles) put a different spin on things, by developing what he called a "roulette" for generating random normal deviates. Darwin (1877) describes how:

A circular piece of card was graduated radially, so that a graduation marked $x$ was $\frac{720}{\sqrt \pi} \int_0^x e^{-x^2} dx$ degrees distant from a fixed radius. The card was made to spin round its centre close to a fixed index. It was then spun a number of times, and on stopping it the number opposite the index was read off. [Darwin adds in a footnote: It is better to stop the disk when it is spinning so fast that the graduations are invisible, rather than to let it run its course.] From the nature of the graduation the numbers thus obtained will occur in exactly the same way as errors of observation occur in practice; but they have no signs of addition or subtraction prefixed. Then by tossing up a coin over and over again and calling heads $+$ and tails $-$, the signs $+$ or $-$ are assigned by chance to this series of errors.

"Index" should be read here as "pointer" or "indicator" (c.f. "index finger"). Stigler points out that Darwin, like De Forest, was using a half-normal cumulative distribution around the disk. Subsequently using a coin to attach a sign at random renders this a full normal distribution.
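Darwin's disk-plus-coin scheme (and, in effect, De Forest's shuffled cards) can be sketched in a few lines. The code below is my own illustration: the spin supplies a uniform deviate, the half-normal quantile function converts it to a magnitude with probable error one, and a coin toss supplies the sign.

```python
import random
from statistics import NormalDist

random.seed(7)

# "Probable error" of one means the median absolute deviation is 1,
# so the standard deviation is 1 / Phi^{-1}(0.75), roughly 1.4826.
Z = NormalDist(0, 1)
sigma = 1 / Z.inv_cdf(0.75)

def darwin_deviate():
    """One spin of Darwin's disk plus a coin toss for the sign.
    The spin gives u ~ Uniform(0,1); the half-normal quantile is
    sigma * Phi^{-1}((1 + u) / 2); the coin attaches + or -."""
    u = random.random()
    magnitude = sigma * Z.inv_cdf((1 + u) / 2)
    sign = 1 if random.random() < 0.5 else -1
    return sign * magnitude

sample = [darwin_deviate() for _ in range(10_000)]
# By construction, about half the draws lie within one probable error of zero.
inside = sum(abs(x) <= 1 for x in sample) / len(sample)
print(round(inside, 3))
```

Of course Darwin had no quantile function to call; his disk's radial graduations encoded it physically, just as De Forest's 100 cards encoded 100 interpolated quantiles.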
Stigler notes that it is unclear how finely the scale was graduated, but presumes the instruction to manually arrest the disk mid-spin was "to diminish potential bias toward one section of the disk and to speed up the procedure".

Sir Francis Galton, incidentally a half-cousin to Charles Darwin, has already been mentioned in connection with his quincunx. While this mechanically simulates a binomial distribution that, by the De Moivre–Laplace theorem bears a striking resemblance to the normal distribution (and is occasionally used as a teaching aid for that topic), Galton actually produced a far more elaborate scheme when he desired to sample from a normal distribution. Even more extraordinary than the unconventional examples at the top of this answer, Galton developed normally distributed dice — or more accurately, a set of dice that produce an excellent discrete approximation to a normal distribution with median deviation one. These dice, dating from 1890, are preserved in the Galton Collection at University College London.

In an 1890 article in Nature Galton wrote that:

As an instrument for selecting at random, I have found nothing superior to dice. It is most tedious to shuffle cards thoroughly between each successive draw, and the method of mixing and stirring up marked balls in a bag is more tedious still. A teetotum or some form of roulette is preferable to these, but dice are better than all. When they are shaken and tossed in a basket, they hurtle so variously against one another and against the ribs of the basket-work that they tumble wildly about, and their positions at the outset afford no perceptible clue to what they will be after even a single good shake and toss. The chances afforded by a die are more various than are commonly supposed; there are 24 equal possibilities, and not only 6, because each face has four edges that may be utilized, as I shall show.

It was important for Galton to be able to rapidly generate a sequence of normal deviates.
After each roll Galton would line the dice up by touch alone, then record the scores along their front edges. He would initially roll several dice of type I, on whose edges were half-normal deviates, much like De Forest's cards but using 24 not 100 quantiles. For the largest deviates (actually marked as blanks on the type I dice) he would roll as many of the more sensitive type II dice (which showed large deviates only, at a finer graduation) as he needed to fill in the spaces in his sequence. To convert from half-normal to normal deviates, he would roll die III, which would allocate $+$ or $-$ signs to his sequence in blocks of three or four deviates at a time. The dice themselves were mahogany, of side $1 \frac 1 4$ inches, and pasted with thin white paper for the marking to be written on. Galton recommended to prepare three dice of type I, two of II and one of III.

Raazesh Sainudiin's Laboratory for Mathematical Statistical Experiments includes a student project from the University of Canterbury, NZ, reproducing Galton's dice. The project includes empirical investigation from rolling the dice many times (including an empirical CDF that looks reassuringly "normal") and an adaptation of the dice scores so they follow the standard normal distribution. Using Galton's original scores, there is also a graph of the discretized normal distribution that the dice scores actually follow.

On a grand scale, if you are prepared to stretch the "mechanical" to the electrical, note that RAND's epic A Million Random Digits with 100,000 Normal Deviates was based on a kind of electronic simulation of a roulette wheel. From the technical report (by George W. Brown, originally June 1949) we find:

Thus motivated, the RAND people, with the assistance of Douglas Aircraft Company engineering personnel, designed an electro roulette wheel based on a variation of a proposal made by Cecil Hastings. For purposes of this talk a brief description will suffice.
A random frequency pulse source was gated by a constant frequency pulse, about once a second, providing on the average about 100,000 pulses in one second. Pulse standardization circuits passed the pulses to a five place binary counter, so that in principle the machine is like a roulette wheel with 32 positions, making on the average about 3000 revolutions on each turn. A binary to decimal conversion was used, throwing away 12 of the 32 positions, and the resulting random digit was fed into an I.B.M. punch, yielding punched card tables of random digits. A detailed analysis of the randomness to be expected from such a machine was made by the designers and indicated that the machine should yield very high quality output.

However, before you too are tempted to assemble an electro roulette wheel, it would be a good idea to read the rest of the report! It transpired that the scheme "leaned heavily on the assumption of ideal pulse standardization to overcome natural preferences among the counter positions; later experience showed that this assumption was the weak point, and much of the later fussing with the machine was concerned with troubles originating at this point".

Detailed statistical analysis revealed some problems with the output: for instance $\chi^2$ tests of the frequencies of odd and even digits revealed that some batches had a slight imbalance. This was worse in some batches than others, suggesting that "the machine had been running down in the month since its tune up ... The indications are that this machine required excessive maintenance to keep it in tip-top shape".

However, a statistical way of resolving these issues was found:

At this point we had our original million digits, 20,000 I.B.M. cards with 50 digits to a card, with the small but perceptible odd-even bias disclosed by the statistical analysis. It was now decided to rerandomize the table, or at least alter it, by a little roulette playing with it, to remove the odd-even bias.
We added (mod 10) the digits in each card, digit by digit, to the corresponding digits of the previous card. The derived table of one million digits was then subjected to the various standard tests, frequency tests, serial tests, poker tests, etc. These million digits have a clean bill of health and have been adopted as RAND's modern table of random digits.

There was, of course, good reason to believe that the addition process would do some good. In a general way, the underlying mechanism is the limiting approach of sums of random variables modulo the unit interval in the rectangular distribution, in the same way that unrestricted sums of random variables approach normality. This method has been used by Horton and Smith, of the Interstate Commerce Commission, to obtain some good batches of apparently random numbers from larger batches of badly non-random numbers.

Of course, this concerns generation of random decimal digits, but it is easy to use these to produce random deviates sampled uniformly from $[0,1]$, rounded to however many decimal places you saw fit to take digits. There are various lovely methods to generate deviates of other distributions from your uniform deviates, perhaps the most aesthetically pleasing of which is the ziggurat algorithm for probability distributions which are either monotone decreasing or unimodal symmetric, but conceptually the simplest and most widely applicable is the inverse CDF transform: given a deviate $u$ from the uniform distribution on $[0,1]$, and if your desired distribution has CDF $F$, then $F^{-1}(u)$ will be a random deviate from your distribution.

If you are interested specifically in random normal deviates then computationally, the Box-Muller transform is more efficient than inverse transform sampling, the Marsaglia polar method is more efficient again, and the ziggurat even better.
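The inverse CDF transform is worth seeing concretely. The sketch below (my own; the helper names are illustrative) builds a uniform deviate from random decimal digits, as one would read them off a table like RAND's, then pushes it through two inverse CDFs: the exponential, where $F^{-1}(u) = -\ln(1-u)/\lambda$ has a closed form, and the standard normal, via the quantile function in Python's standard library.

```python
import math
import random
from statistics import NormalDist

random.seed(2024)

def uniform_from_digits(n_digits=6):
    """Build a Uniform[0,1) deviate from n_digits random decimal digits,
    as one would by reading digits off a random number table."""
    digits = [random.randrange(10) for _ in range(n_digits)]
    return sum(d * 10 ** -(i + 1) for i, d in enumerate(digits))

def inverse_cdf_exponential(u, rate=1.0):
    """Exponential: F(x) = 1 - exp(-rate*x), so F^{-1}(u) = -ln(1-u)/rate."""
    return -math.log(1 - u) / rate

u = uniform_from_digits()
x_exp = inverse_cdf_exponential(u)      # an Exponential(1) deviate
x_norm = NormalDist(0, 1).inv_cdf(u)    # a standard normal deviate
print(u, x_exp, x_norm)
```

The same recipe works for any distribution whose quantile function you can evaluate or tabulate — which is precisely what De Forest's cards, Darwin's disk, and Galton's dice did by mechanical means.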
Some practical issues are discussed on this StackOverflow thread if you intend to implement one or more of these methods in code.

References

Badger, L. (1994). "Lazzarini's Lucky Approximation of π". Mathematics Magazine. Mathematical Association of America. 67(2): 83–91.
Brown, G.W. "History of RAND's random digits—Summary". In A.S. Householder, G.E. Forsythe, and H.H. Germond, eds., "Monte Carlo Method", National Bureau of Standards Applied Mathematics Series, 12 (Washington, D.C.: U.S. Government Printing Office, 1951): 31-32 $(*)$
Darwin, G. H. (1877). "On fallible measures of variable quantities, and on the treatment of meteorological observations." Philosophical Magazine, 4(22), 1–14
De Forest, E. L. (1876). Interpolation and adjustment of series. Tuttle, Morehouse and Taylor, New Haven, Conn.
Galton, F. (1890). "Dice for statistical experiments". Nature, 42, 13-14
Stigler, S. M. (1991). "Stochastic simulation in the nineteenth century". Statistical Science, 6(1), 89-97.

$(*)$ In the very same journal is von Neumann's highly-cited paper Various Techniques Used in Connection with Random Digits in which he considers the difficulties of generating random numbers for use in a computer. He rejects the idea of a physical device attached to a computer that generates random input on the fly, and considers whether some physical mechanism might be employed to generate random numbers which are then recorded for future use — essentially what RAND had done with their Million Digits. It also includes his famous quote about what we would describe as the difference between random and pseudo-random number generation: "Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number — there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method."
Generating random numbers manually
If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a
Generating random numbers manually If "manually" includes "mechanical" then you have many options available to you. To simulate a Bernoulli variable with probability half, we can toss a coin: $0$ for tails, $1$ for heads. To simulate a geometric distribution we can count how many coin tosses are needed before we obtain heads. To simulate a binomial distribution, we can toss our coin $n$ times (or simply toss $n$ coins) and count the heads. The "quincunx" or "bean machine" or "Galton box" is a more kinetic alternative — why not set one into action and see for yourself? It seems there is no such thing as a "weighted coin" but if we wish to vary the probability parameter of our Bernoulli or binomial variable to values other than $p = 0.5$, the needle of Georges-Louis Leclerc, Comte de Buffon will allow us to do so. To simulate the discrete uniform distribution on $\{1, 2, 3, 4, 5, 6\}$ we roll a six-sided die. Fans of role-playing games will have encountered more exotic dice, for example tetrahedral dice to sample uniformly from $\{1,2,3,4\}$, while with a spinner or roulette wheel one can go further still. (Image credit) Would we have to be mad to generate random numbers in this manner today, when it is just one command away on a computer console — or, if we have a suitable table of random numbers available, one foray to the dustier corners of the bookshelf? Well perhaps, though there is something pleasingly tactile about a physical experiment. But for people working before the Computer Age, indeed before widely available large-scale random number tables (of which more later), simulating random variables manually had more practical importance. When Buffon investigated the St. Petersburg paradox — the famous coin-tossing game where the amount the player wins doubles every time a heads is tossed, the player loses upon the first tails, and whose expected pay-off is counter-intuitively infinite — he needed to simulate the geometric distribution with $p=0.5$. 
To do so, it seems he hired a child to toss a coin to simulate 2048 plays of the St. Petersburg game, recording how many tosses before the game ended. This simulated geometric distribution is reproduced in Stigler (1991): Tosses Frequency 1 1061 2 494 3 232 4 137 5 56 6 29 7 25 8 8 9 6 In the same essay where he published this empirical investigation into the St. Petersburg paradox, Buffon also introduced the famous "Buffon's needle". If a plane is divided into strips by parallel lines a distance $d$ apart, and a needle of length $l \leq d$ is dropped onto it, the probability the needle crosses one of the lines is $\frac{2l}{\pi d}$. Buffon's needle can, therefore, be used to simulate a random variable $X \sim \text{Bernoulli}(\frac{2l}{\pi d})$ or $X \sim \text{Binomial}(n,\frac{2l}{\pi d})$, and we can adjust the probability of success by altering the lengths of our needles or (perhaps more conveniently) the distance at which we rule the lines. An alternative use of Buffon's needles is as a terrifically inefficient way to find a probabilistic approximation for $\pi$. The image (credit) shows 17 matchsticks, of which 11 cross a line. When the distance between the ruled lines is set equal to the length of the matchstick, as here, the expected proportion of crossing matchsticks is $\frac{2}{\pi}$ and hence we can estimate $\hat \pi$ as twice the reciprocal of the observed fraction: here we obtain $\hat \pi = 2 \cdot \frac{17}{11} \approx 3.1$. In 1901 Mario Lazzarini claimed to have performed the experiment using 2.5 cm needles with lines 3 cm apart, and after 3408 tosses obtained $\hat \pi = \frac{355}{113}$. This is a well-known rational to $\pi$, accurate to six decimal places. Badger (1994) provides convincing evidence that this was fraudulent, not least that to be 95% confident of six decimal places of accuracy using Lazzarini's apparatus, a patience-sapping 134 trillion needles must be thrown! 
Certainly Buffon's needle is more useful as a random number generator than it is as a method for estimating $\pi$. Our generators so far have been disappointingly discrete. What if we want to simulate a normal distribution? One option is to obtain random digits and use them to form good discrete approximations to a uniform distribution on $[0,1]$, then perform some calculations to transform these into random normal deviates. A spinner or roulette wheel could give decimal digits from zero to nine; a tossed coin can generate binary digits; if our arithmetic skills can cope with a funkier base, even a standard set of dice would do — or we could use a die to generate binary digits via odd/even scores. Other answers have covered this kind of transformation-based approach in more detail; I defer any further discussion of it until the end. By the late nineteenth century the utility of the normal distribution was well-known, and so there were statisticians keen to simulate random normal deviates. Needless to say, lengthy hand calculations would not have been suitable except to set up the simulating process in the first place. Once that was established, the generation of the random numbers had to be relatively quick and easy. Stigler (1991) lists the methods employed by three statisticians of this era. All were researching smoothing techniques: random normal deviates were of obvious interest, e.g. to simulate measurement error that needed to be smoothed over. The remarkable American statistician Erastus Lyman De Forest was interested in smoothing life tables, and encountered a problem that required the simulation of the absolute values of normal deviates. In what will prove a running theme, De Forest was really sampling from a half-normal distribution. Moreover, rather than using a standard deviation of one (the $Z \sim N(0, 1^2)$ we are used to calling "standard"), De Forest wanted a "probable error" (median deviation) of one. 
This was the form given in the table of "Probability of Errors" in the appendices of "A Manual Of Spherical And Practical Astronomy, Volume II" by William Chauvenet. From this table, De Forest interpolated the quantiles of a half-normal distribution, from $p=0.005$ to $p=0.995$, which he deemed to be "errors of equal frequency". Should you wish to simulate the normal distribution, following De Forest, you can print this table out and cut it up. De Forest (1876) wrote that the errors "have been inscribed upon 100 bits of card-board of equal size, which were shaken up in a box and all drawn out one by one". The astronomer and meteorologist Sir George Howard Darwin (son of the naturalist Charles) put a different spin on things, by developing what he called a "roulette" for generating random normal deviates. Darwin (1877) describes how: A circular piece of card was graduated radially, so that a graduation marked $x$ was $\frac{720}{\sqrt \pi} \int_0^x e^{-x^2} dx$ degrees distant from a fixed radius. The card was made to spin round its centre close to a fixed index. It was then spun a number of times, and on stopping it the number opposite the index was read off. [Darwin adds in a footnote: It is better to stop the disk when it is spinning so fast that the graduations are invisible, rather than to let it run its course.] From the nature of the graduation the numbers thus obtained will occur in exactly the same way as errors of observation occur in practice; but they have no signs of addition or subtraction prefixed. Then by tossing up a coin over and over again and calling heads $+$ and tails $-$, the signs $+$ or $-$ are assigned by chance to this series of errors. "Index" should be read here as "pointer" or "indicator" (c.f. "index finger"). Stigler points out that Darwin, like De Forest, was using a half-normal cumulative distribution around the disk. Subsequently using a coin to attach a sign at random renders this a full normal distribution. 
Stigler notes that it is unclear how finely the scale was graduated, but presumes the instruction to manually arrest the disk mid-spin was "to diminish potential bias toward one section of the disk and to speed up the procedure". Sir Francis Galton, incidentally a half-cousin to Charles Darwin, has already been mentioned in connection with his quincunx. While this mechanically simulates a binomial distribution that, by the De Moivre–Laplace theorem bears a striking resemblance to the normal distribution (and is occasionally used as a teaching aid for that topic), Galton actually produced a far more elaborate scheme when he desired to sample from a normal distribution. Even more extraordinary than the unconventional examples at the top of this answer, Galton developed normally distributed dice — or more accurately, a set of dice that produce an excellent discrete approximation to a normal distribution with median deviation one. These dice, dating from 1890, are preserved in the Galton Collection at University College London. In an 1890 article in Nature Galton wrote that: As an instrument for selecting at random, I have found nothing superior to dice. It is most tedious to shuffle cards thoroughly between each successive draw, and the method of mixing and stirring up marked balls in a bag is more tedious still. A teetotum or some form of roulette is preferable to these, but dice are better than all. When they are shaken and tossed in a basket, they hurtle so variously against one another and against the ribs of the basket-work that they tumble wildly about, and their positions at the outset afford no perceptible clue to what they will be after even a single good shake and toss. The chances afforded by a die are more various than are commonly supposed; there are 24 equal possibilities, and not only 6, because each face has four edges that may be utilized, as I shall show. It was important for Galton to be able to rapidly generate a sequence of normal deviates. 
After each roll Galton would line the dice up by touch alone, then record the scores along their front edges. He would initially roll several dice of type I, on whose edges were half-normal deviates, much like De Forest's cards but using 24 not 100 quantiles. For the largest deviates (actually marked as blanks on the type I dice) he would roll as many of the more sensitive type II dice (which showed large deviates only, at a finer graduation) as he needed to fill in the spaces in his sequence. To convert from half-normal to normal deviates, he would roll die III, which would allocate $+$ or $-$ signs to his sequence in blocks of three or four deviates at a time. The dice themselves were mahogany, of side $1 \frac 1 4$ inches, and pasted with thin white paper for the marking to be written on. Galton recommended to prepare three dice of type I, two of II and one of III. Raazesh Sainudiin's Laboratory for Mathematical Statistical Experiments includes a student project from the University of Canterbury, NZ, reproducing Galton's dice. The project includes empirical investigation from rolling the dice many times (including an empirical CDF that looks reassuringly "normal") and an adaptation of the dice scores so they follow the standard normal distribution. Using Galton's original scores, there is also a graph of the discretized normal distribution that the dice scores actually follow. On a grand scale, if you are prepared to stretch the "mechanical" to the electrical, note that RAND's epic A Million Random Digits with 100,000 Normal Deviates was based on a kind of electronic simulation of a roulette wheel. From the technical report (by George W. Brown, originally June 1949) we find: Thus motivated, the RAND people, with the assistance of Douglas Aircraft Company engineering personnel, designed an electro roulette wheel based on a variation of a proposal made by Cecil Hastings. For purposes of this talk a brief description will suffice. 
A random frequency pulse source was gated by a constant frequency pulse, about once a second, providing on the average about 100,000 pulses in one second. Pulse standardization circuits passed the pulses to a five place binary counter, so that in principle the machine is like a roulette wheel with 32 positions, making on the average about 3000 revolutions on each turn. A binary to decimal conversion was used, throwing away 12 of the 32 positions, and the resulting random digit was fed into an I.B.M. punch, yielding punched card tables of random digits. A detailed analysis of the randomness to be expected from such a machine was made by the designers and indicated that the machine should yield very high quality output. However, before you too are tempted to assemble an electro roulette wheel, it would be a good idea to read the rest of the report! It transpired that the scheme "leaned heavily on the assumption of ideal pulse standardization to overcome natural preferences among the counter positions; later experience showed that this assumption was the weak point, and much of the later fussing with the machine was concerned with troubles originating at this point". Detailed statistical analysis revealed some problems with the output: for instance $\chi^2$ tests of the frequencies of odd and even digits revealed that some batches had a slight imbalance. This was worse in some batches than others, suggesting that "the machine had been running down in the month since its tune up ... The indications are at this machine required excessive maintenance to keep it in tip-top shape". However, a statistical way of resolving these issues was found: At this point we had our original million digits, 20,000 I.B.M. cards with 50 digits to a card, with the small but perceptible odd-even bias disclosed by the statistical analysis. It was now decided to rerandomize the table, or at least alter it, by a little roulette playing with it, to remove the odd-even bias. 
We added (mod 10) the digits in each card, digit by digit, to the corresponding digits of the previous card. The derived table of one million digits was then subjected to the various standard tests, frequency tests, serial tests, poker tests, etc. These million digits have a clean bill of health and have been adopted as RAND's modern table of random digits. There was, of course, good reason to believe that the addition process would do some good. In a general way, the underlying mechanism is the limiting approach of sums of random variables modulo the unit interval in the rectangular distribution, in the same way that unrestricted sums of random variables approach normality. This method has been used by Horton and Smith, of the Interstate Commerce Commission, to obtain some good batches of apparently random numbers from larger batches of badly non-random numbers. Of course, this concerns generation of random decimal digits, but it is easy to use these to produce random deviates sampled uniformly from $[0,1]$, rounded to however many decimal places you see fit to take digits. There are various lovely methods to generate deviates of other distributions from your uniform deviates, perhaps the most aesthetically pleasing of which is the ziggurat algorithm for probability distributions which are either monotone decreasing or unimodal symmetric, but conceptually the simplest and most widely applicable is the inverse CDF transform: given a deviate $u$ from the uniform distribution on $[0,1]$, and if your desired distribution has CDF $F$, then $F^{-1}(u)$ will be a random deviate from your distribution. If you are interested specifically in random normal deviates then, computationally, the Box-Muller transform is more efficient than inverse transform sampling, the Marsaglia polar method is more efficient again, and the ziggurat even better. 
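To make the inverse-CDF step concrete, here is a small R sketch (the digit values are made up for illustration) that assembles a handful of digits from a random-digit table into a uniform deviate and then pushes it through qnorm, R's inverse normal CDF:

```r
# Five digits read off a random-digit table (illustrative values)
digits <- c(0, 9, 7, 2, 8)
# Assemble them into a uniform deviate on (0,1): here 0.09728
u <- sum(digits / 10^(1:5))
# Inverse-CDF transform: qnorm is F^{-1} for the standard normal,
# so x is a standard normal deviate
x <- qnorm(u)
```

The same two lines work for any target distribution whose quantile function you can compute (qexp, qgamma, ...): only the final call changes.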
Some practical issues are discussed on this StackOverflow thread if you intend to implement one or more of these methods in code. References Badger, L. (1994). "Lazzarini's Lucky Approximation of π". Mathematics Magazine. Mathematical Association of America. 67(2): 83–91. Brown, G.W. "History of RAND's random digits—Summary". in A.S. Householder, G.E. Forsythe, and H.H. Germond, eds., "Monte Carlo Method", National Bureau of Standards Applied Mathematics Series, 12 (Washington, D.C.: U.S. Government Printing Office, 1951): 31-32 $(*)$ Darwin, G. H. (1877). "On fallible measures of variable quantities, and on the treatment of meteorological observations." Philosophical Magazine, 4(22), 1–14 De Forest, E. L. (1876). Interpolation and adjustment of series. Tuttle, Morehouse and Taylor, New Haven, Conn. Galton, F. (1890). "Dice for statistical experiments". Nature, 42, 13-14 Stigler, S. M. (1991). "Stochastic simulation in the nineteenth century". Statistical Science, 6(1), 89-97. $(*)$ In the very same journal is von Neumann's highly-cited paper Various Techniques Used in Connection with Random Digits in which he considers the difficulties of generating random numbers for use in a computer. He rejects the idea of a physical device attached to a computer that generates random input on the fly, and considers whether some physical mechanism might be employed to generate random numbers which are then recorded for future use — essentially what RAND had done with their Million Digits. It also includes his famous quote about what we would describe as the difference between random and pseudo-random number generation: "Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin. For, as has been pointed out several times, there is no such thing as a random number — there are only methods to produce random numbers, and a strict arithmetic procedure of course is not such a method."
8,350
Generating random numbers manually
If you can get access to a very precise clock, you can extract the decimal part of the current time and turn it into a uniform, from which you can derive a normal simulation by the Box-Müller transform:$$X=\sqrt{-2\log U_1}\,\cos(2\pi U_2)$$(and even two since $Y=\sqrt{-2\log U_1}\,\sin(2\pi U_2)$ is another normal variate independent from $X$). For instance, on my Linux OS, I can check $ date +%s.%N 1479733744.077762986 $ date +%s.%N 1479733980.615056616 hence set$$U_1=.077762986,\ U_2=.615056616$$and $X$ as > sqrt(-2*log(.077762986))*cos(2*pi*.615056616) [1] -1.694815 Addendum: since computing logarithms and cosines may be deemed not manual enough, there exists a variant to Box-Müller that avoids using those transcendental functions (see Exercise 2.9 in our book Monte Carlo Statistical Methods): Now, one can argue against this version because of the Exponential variates. But there also exists a very clever way of simulating those variates without a call to transcendental functions, due to von Neumann, as summarised in this algorithm reproduced from Luc Devroye's Non-Uniform Random Variate Generation: Admittedly, it requires the computation of 1/e, but only once. If you do not have access to this clock, you can replace this uniform generator by a mechanistic uniform generator, like throwing a dart on a surface with a large number of unit squares $(0,1)^2$ or rolling a ball on a unit interval $(0,1)$ with enough bounces [as in Thomas Bayes' conceptual billiard experiment] or yet throwing matches on a wooden floor with unit width planks and counting the distance to the nearest leftmost separation [as in Buffon's experiment] or yet further to start a roulette wheel with number 1 the lowest and turn the resulting angle of 1 with its starting orientation into a uniform $(0,2\pi)$ draw. 
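For readers without access to the figure, here is a sketch in R of von Neumann's comparison-only exponential generator mentioned above (my own transcription of the standard algorithm, not Devroye's exact pseudo-code): accept the first uniform of a strictly decreasing run when the run length is odd, and add one unit to the integer part at each rejection.

```r
# von Neumann's Exp(1) generator: only uniform draws and comparisons, no logs
vn_exp <- function() {
  n <- 0                       # integer part, incremented at each rejection
  repeat {
    u1 <- runif(1)             # candidate fractional part
    t <- u1; k <- 1            # k = length of the decreasing run so far
    repeat {
      v <- runif(1)
      if (v < t) { t <- v; k <- k + 1 } else break
    }
    if (k %% 2 == 1) return(n + u1)  # odd run length: accept
    n <- n + 1                       # even run length: reject, carry one unit
  }
}
set.seed(2)
x <- replicate(10000, vn_exp())  # sample should behave like Exp(1): mean near 1
```

The correctness hinges on the classical identity $\mathbb P(K \text{ odd} \mid U_1 = u) = e^{-u}$ for the length $K$ of the nonincreasing run started by $U_1$, so accepted values have density $e^{-u}$ on $[0,1]$ and the geometric number of rejections supplies the integer part.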
Using the CLT to approximate normality is certainly not a method I would ever advise as (1) you still need other variates to feed the average, so may as well use uniforms in the Box-Müller algorithm, and (2) the accuracy grows quite slowly with the number of simulations. Especially if using a discrete random variable like the result of a die, even with more than six faces. To quote from Thomas et al. (2007), a survey on the pros and cons of Gaussian random generators: The central limit theorem of course is an example of an “approximate” method—even if perfect arithmetic is used, for finite K the output will not be Gaussian. Here is a quick experiment to illustrate the problem: I generated 100 times the average of 30 die outcomes: dies=apply(matrix(sample(1:6,30*100,rep=TRUE),ncol=30),1,mean) then normalised those averages into mean zero - variance one variates stdies=(dies-3.5)/sqrt(35/12/30) and looked at the normal fit [or lack thereof] of this sample: First, the fit is not great, especially in the tails, and second, rather obviously, the picture confirms that the number of values taken by the sample is embarrassingly finite. (In this particular experiment, there were only 34 different values taken by dies, between 76/30 and 122/30.) 
By comparison, if I exploit the very same 3000 die outcomes $D_i$ to create enough digits of a pseudo-uniform as$$U=\sum_{i=1}^k \dfrac{D_i-1}{6^i}$$with $k=15$ (note that $6^{15}>10^{11}$, hence I generate more than 11 truly random digits), and then apply the above Box-Müller transform to turn pairs of uniforms into pairs of N(0,1) variates, dies=matrix(apply(matrix(sample(0:5,15*200,rep=TRUE),nrow=15)/6^(1:15),2,sum),ncol=2) norma=sqrt(-2*log(dies[,1]))*c(cos(2*pi*dies[,2]),sin(2*pi*dies[,2])) the fit is as good as can be expected for a Normal sample of size 200 (just plot another one for a true normal sample, norma=rnorm(200)): as further shown by a Kolmogorov-Smirnov test: > ks.test(norma,pnorm) One-sample Kolmogorov-Smirnov test data: norma D = 0.06439, p-value = 0.3783 alternative hypothesis: two-sided
8,351
Generating random numbers manually
This is not exactly random, but it should be close enough, as you seem to want a rough experiment. Use your phone to set up a chronometer. After a good 10 seconds, stop it (the more you wait, the more you approach a truly "random" result, but 10 seconds are fine). Take the last digits (for instance, 10.67 sec will give you 67). Apply the percentile table for the normal distribution. In this example, you just have to search for 0.67 and you will find the number. In this case, your value is about 0.44. This is not perfectly precise, but it will give you a solid estimation. If you get below 50, just do 100-[Your Result] and use the table. Your result will be the same, with a minus sign, due to the symmetry of N(0,1).
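The table lookup can be mimicked in R, with qnorm standing in for the printed percentile table (the stopwatch digits here are just the example above):

```r
hundredths <- 67                 # last two digits of the stopped chronometer
z <- qnorm(hundredths / 100)     # qnorm plays the percentile table: about 0.44
# For digits below 50, the trick (100 - result, then a minus sign) is just
# the symmetry qnorm(p) = -qnorm(1 - p):
z_low <- qnorm(0.33)
```

So a stopwatch showing .33 would map to roughly -0.44, the mirror image of the .67 case.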
8,352
Generating random numbers manually
Let us flip an unbiased coin $n$ times. Starting at zero, we count $+1$ if heads, $-1$ if tails. After $n$ coin flips, we divide the counter by $\sqrt n$. Using the central limit theorem, if $n$ is sufficiently large, then we should have an "approximate realization" of the normalized Gaussian $N (0,1)$. Why? Let $$X_k := \begin{cases} +1 & \text{ if } k \text{-th coin flip is heads}\\ -1 & \text{ if } k \text{-th coin flip is tails}\end{cases}$$ be i.i.d. Bernoulli random variables with $\mathbb P (X_k = \pm 1) = \frac 12$. Hence, $$\mathbb E (X_k) = 0 \qquad\qquad \mbox{Var} (X_k) = 1$$ Let $Y := X_1 + X_2 + \cdots + X_n$. Hence, $$\mathbb E (Y) = 0 \qquad\qquad \mbox{Var} (Y) = n$$ Normalizing, $$Z := \frac{Y}{\sqrt n}$$ we obtain a random variable with unit variance $$\mathbb E (Z) = 0 \qquad\qquad \mbox{Var} (Z) = 1$$
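As a sanity check, the scheme is easy to simulate in R, with sample() standing in for real coin flips:

```r
set.seed(10)
n <- 400                                          # coin flips per deviate
z <- replicate(2000, {
  flips <- sample(c(-1, 1), n, replace = TRUE)    # +1 for heads, -1 for tails
  sum(flips) / sqrt(n)                            # normalized counter Z = Y/sqrt(n)
})
# By the CLT, mean(z) should be near 0 and var(z) near 1
```

As with the die-average construction criticized in an earlier answer, each $Z$ only takes finitely many values ($n+1$ of them), so the tails are truncated at $\pm\sqrt n$; the approximation is rough, not exact.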
8,353
Generating random numbers manually
It's worth noting that once you can generate a uniform(0,1), you can generate any random variable for which the inverse cdf is calculable by simply plugging the uniform random variable into the inverse CDF. So how might one calculate a uniform(0,1) manually? Well, as mentioned by @Silverfish, there are a variety of dice used by traditional RPG players. One of which is a ten sided die. Assuming this is a fair die, we can now generate a discrete uniform(0, 9). We can also use this uniform(0,9) to represent a single digit of a random variable. So if we use two dice, we get a uniform random variable that can take on values $0.01, 0.02, ..., 0.99, 1.00$. With three dice, we can get a uniform distribution on $0.001, 0.002, ..., 0.999, 1.000$. So we can get very close to a continuous uniform(0,1) by approximating it with a finely gridded discrete uniform distribution with a few 10 sided dice. This can then be plugged into an inverse CDF to produce the random variable of interest.
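A sketch of the procedure in R, using sample() in place of physical dice; the half-grid offset is my own convention to keep the discretized uniform strictly inside (0,1) so that qnorm stays finite:

```r
set.seed(3)
rolls <- sample(0:9, 3, replace = TRUE)  # three d10 rolls, digits 0-9
u <- sum(rolls / 10^(1:3)) + 5e-4        # grid uniform, shifted to cell midpoints
x <- qnorm(u)                            # inverse CDF of the target distribution
```

Swapping qnorm for qexp, qgamma, etc. yields deviates from those distributions instead, all from the same three dice rolls.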
8,354
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
More consistency in parameter names. For instance: matrix() has a parameter dimnames. write.table() has parameters row.names and col.names (with dots, and no dimnames parameter). There are functions rownames() and colnames(), without dots. Yes, this is a tiny detail. But I have been using R on a daily basis for almost 20 years now, and I still have to look at ?matrix each and every time, because I try to set row.names and am surprised when it doesn't work.
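The mismatch in one snippet (tempfile() is used here just to have somewhere to write):

```r
# matrix() wants 'dimnames'
m <- matrix(1:4, nrow = 2,
            dimnames = list(c("r1", "r2"), c("c1", "c2")))
rownames(m)                                               # accessor: no dot
f <- tempfile()
write.table(m, f, row.names = TRUE, col.names = TRUE)     # dots this time
```

Three spellings of essentially the same concept, within base R alone.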
8,355
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Useful error messages Compared to other languages (e.g. Python) it is very difficult to track down bugs based on error messages. Error messages are often not even informative about what part of the code causes the bug. Optional static typing Easy way to make sure that i is a number (as it is supposed to be) and not a data frame. Some (maybe optional) way to get rid of bugs caused by scoping issues For example I want to be able to tell a function that it should work just with its arguments and under no circumstances try to find variables in other environments (I'm looking at you global environment). Native support of C++ extensions Rcpp is a wonderful way to extend R to get performance gains but suffers from the problem that natively R supports only C (not C++). This severely limits what you can do with Rcpp and makes extending R through new packages more difficult than it has to be. Of course, addressing any of these concerns would require a complete re-design of the language so R wouldn't really be R any longer.
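A minimal example of the scoping issue described above: the function silently falls back on the global environment instead of failing.

```r
b <- 10                   # lurking global variable
f <- function(a) a + b    # 'b' was meant to be a second argument
f(1)                      # returns 11 instead of raising an error
```

Lexical scoping resolves b in the environment where f was defined, so the forgotten argument goes unnoticed until the results look wrong.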
8,356
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Standalone executable To execute the code you need to have R installed. This is similar to Python, which does however have some programs that can turn Python code into executables. This makes it more difficult to share programs with users that do not have R installed.
8,357
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Built-in reproducible environments If R were designed from scratch, it would be great to have a built-in way to reproducibly use packages and have multiple versions of the same package installed, and bundle information about which packages the code was run with in a single file that could be used to rerun this code with identical packages. Ideally without requiring you to install the same package multiple times. There are plenty of packages out there to create reproducible R environments, which causes fragmentation, and users do have to use one for their code to be properly reproducible.
8,358
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Preserving/translating existing R packages Probably the greatest present advantage of R over other statistical computing programs is that it has a huge repository of well-developed packages that perform a broader class of statistical tasks than is available in other programs. In the event that there were any attempt to reprogram a new version from scratch, it would be important to preserve as much of this as possible as valid code that would be compatible with a new program. Consequently, in the event that there is any change in the base program that would render later code obsolete, it would be useful to have a parallel method of "translation" of code into the new program.
8,359
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Bring data.table like syntax to data.frame data.table's syntax (DT[i, j, by]) is so useful and such a faithful extension of data.frame that it should just be built in at this point. (If we are willing to entertain breaking changes).
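For readers unfamiliar with it, a minimal illustration of the idiom (this requires the data.table package to be installed; the toy column names are mine):

```r
library(data.table)
DT <- data.table(g = c("a", "a", "b", "b"), x = 1:4)
# i = row subset, j = expression to compute, by = grouping, in one call:
res <- DT[x > 1, .(mean_x = mean(x)), by = g]
res
```

The equivalent base-R version needs subset() plus aggregate() (or a split/lapply dance), which is part of why the single-bracket form is so popular.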
8,360
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Object oriented programming OOP tools were not initially included in the language. Currently there are S3 and S4 objects, which leads to a lack of consistency across different code bases (a problem that is more general than just OOP).
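The inconsistency in miniature: the same idea expressed in S3 and in S4 looks completely different (toy class and function names):

```r
library(methods)                          # S4 machinery
# S3: dispatch by naming convention, class is just an attribute
area <- function(shape) UseMethod("area")
area.circle <- function(shape) pi * shape$r^2
c3 <- structure(list(r = 2), class = "circle")
area(c3)
# S4: formal class, generic and method definitions
setClass("Circle", slots = c(r = "numeric"))
setGeneric("area4", function(shape) standardGeneric("area4"))
setMethod("area4", "Circle", function(shape) pi * shape@r^2)
area4(new("Circle", r = 2))
```

Both compute the same thing, but the field access ($ versus @), class creation, and method registration all differ, so mixed code bases end up using two dialects at once.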
8,361
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Standard object classes/structures for common statistical outputs

There are some special object types that have been developed in R to represent particular kinds of statistical outputs. For example, there are objects of class htest that are used to represent the outputs of a hypothesis test, and objects of class lm, glm, etc., used for the outputs of statistical models. However, there are a number of common statistical outputs that do not have special classes/structures developed. As a result, they tend to be represented in an ad hoc manner. It would be useful for common statistical outputs to have a defined class and structure in the base program, with consistent elements and printing methods. Here are some examples of particular outputs that would benefit from having a developed class/structure, with associated custom print methods, etc.:

- Sets could be represented by appropriate objects such as presently exist in the sets package. Having sets as objects in the program would be useful for a number of statistical outputs.
- Confidence intervals/sets could be represented as an object of class ci that includes a set object giving the confidence interval/set, the confidence level, the name/description of the parameter or quantity for the interval, and any other required information.
- Highest density regions (HDRs) could be represented as an object of class hdr that includes the set object giving the HDR, the coverage probability, and any other required information.

Giving these and other important statistical outputs a standard class/structure would allow users to develop, compute and print these outputs in a way that includes all required information and gives user-friendly print output.
8,362
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Multithreading by default

R was built as a single-threaded application, but we can do better these days. Sadly, Microsoft R is pretty much discontinued now... it had many benefits over the original. https://mran.microsoft.com/documents/rro/multithread
8,363
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Less reliance on C/C++/Fortran, aka solve the Two-Language Problem

One of the major drawbacks of R is that the actual performant code is mostly written in other languages (C/C++ and even Fortran). This makes development and tinkering much harder, since new users now need to learn at least two languages, not one. Julia, for example, is Julia all the way down to the LLVM layer. This makes a novice Julia user proficient in both the high- and low-level functionality necessary to actually develop a package or simply help improve other packages (not to say that the low-level complexity is easy, but you at least already know the language, to the point that it's not uncommon for newbies to contribute features to the core language). So, if pure R could be made performant enough, the Two-Language Problem would be overcome. How to do that? This is easier said than done. Julia took a stance regarding type inference and JIT (just-in-time, aka at runtime) compilation, so R might need to give away some of its features to achieve that. Luckily, R (and some other languages) has followed in the footsteps of JIT-compiled languages, and part of this is already featured.
8,364
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Build wrangling functions and labelled data into the base program

As a general rule, it would be nice to move some of the important functionality in key packages into the base program (as was done for the stats package at one stage). In particular, the base objects in the program should be programmed to use some of the useful wrangling functions for "tidy data" (e.g., per Wickham 2017), should allow easy descriptive labels for all variables, and should handle time as a special variable in a way that is useful for tidy analysis of time-series data.

- Some of the wrangling functions for tidy data analysis, similar to what exists in the tidyverse, should be built into the base program. There are a number of functions in that field that assist in wrangling data frames (but their names can be quite odd, owing to the fact they are not in base). All base objects and functions should be programmed with the principles of tidy data in mind, and with core functions for important wrangling steps. I concede that there is a trade-off here --- you don't want to add too many functions and increase complexity, but you want to add enough functions to do key wrangling steps.
- Objects such as vectors, matrices and data frames should allow descriptive labels for their variables, similar to what exists in other languages such as Stata. The labels should be in addition to variable names, to allow variable descriptions or labels for printing.
- The base program should handle time variables in a way that allows simple ordering of time-series, and the standard operations you want to do with time variables. Presently most of this is in packages such as lubridate and zoo.
8,365
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Add more protected names

pi <- 3 should probably not be allowed.
8,366
If R were reprogrammed from scratch today, what changes would be most useful to the statistics community? [closed]
Replace packages by standardized functions

There are so many packages, and the definitions of functions differ between packages. For the same problem there are different functions from different packages, with similar names but different details. As a result, you do not know what happens when you apply a function, and you lose control over your code. If you want to know what a function does, the help is often very scarce, or only a paper is given as a reference. A function without documentation is a risk for any user. It would be better to choose the most useful functions in a selection process, standardize and modify them, and put them all in a default system with standardized help. This would also reduce redundant functions and increase order. Currently R looks like a multiworld construction kit that needs refurbishing.
8,367
Command-line tool to calculate basic statistics for stream of values [closed]
You can do this with R, which may be a bit of overkill...

EDIT 2: [OOPS, looks like someone else hit with Rscript while I was retyping this.] I found an easier way. Installed with R should be Rscript, which is meant to do what you're trying to do. For example, if I have a file bar which has a list of numbers, one per line:

Rscript -e 'summary (as.numeric (readLines ("stdin")))' < bar

will send the numbers in the file into R and run R's summary command on the lines, returning something like:

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   1.00    2.25    3.50    3.50    4.75    6.00

You could also do something like:

Rscript -e 'quantile (as.numeric (readLines ("stdin")), probs=c(0.025, 0.5, 0.975))'

to get quantiles. And you could obviously chop off the first line of output (which contains labels) with something like:

Rscript -e 'summary (as.numeric (readLines ("stdin")))' < bar | tail -n +2

I'd highly recommend doing what you want in interactive R first, to make sure you have the command correct. In trying this, I left out the closing parenthesis and Rscript returned nothing -- no error message, no result, just nothing.

(For the record, file bar contains:
1
2
3
4
5
6
)
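If R isn't available at all, the median can be computed with nothing but POSIX sort and awk. This is a sketch assuming one number per line on stdin (the seq input here is just for illustration):

```shell
# Sort numerically, then take the middle value
# (or the average of the two middle values when the count is even).
seq 1 6 | sort -n | awk '
    { v[NR] = $1 }
    END {
        if (NR % 2) print v[(NR + 1) / 2]
        else        print (v[NR / 2] + v[NR / 2 + 1]) / 2
    }'
# -> 3.5
```

Because the whole input has to be held in memory and sorted, this doesn't stream the way a running mean does, but it works on any box with the base utilities.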
8,368
Command-line tool to calculate basic statistics for stream of values [closed]
Try "st":

$ seq 1 10 | st
N    min   max   sum   mean   stddev
10   1     10    55    5.5    3.02765

$ seq 1 10 | st --transpose
N       10
min     1
max     10
sum     55
mean    5.5
stddev  3.02765

You can also see the five number summary:

$ seq 1 10 | st --summary
min   q1    median   q3    max
1     3.5   5.5      7.5   10

You can download it here: https://github.com/nferraz/st

(DISCLAIMER: I wrote this tool :))
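For comparison, st's default output can be approximated with a single POSIX awk program, so nothing needs to be installed. A sketch, assuming one number per line and at least two values (so the sample standard deviation is defined):

```shell
seq 1 10 | awk '
    NR == 1 { min = max = $1 }
    {
        sum += $1; sumsq += $1 * $1
        if ($1 < min) min = $1
        if ($1 > max) max = $1
    }
    END {
        mean = sum / NR
        # Sample (N-1) standard deviation, matching st above.
        sd = sqrt((sumsq - sum * sum / NR) / (NR - 1))
        printf "N=%d min=%g max=%g sum=%g mean=%g stddev=%g\n",
               NR, min, max, sum, mean, sd
    }'
# -> N=10 min=1 max=10 sum=55 mean=5.5 stddev=3.02765
```

Note this is a single streaming pass: only running sums are kept, so it handles input of any size in constant memory.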
8,369
Command-line tool to calculate basic statistics for stream of values [closed]
R provides a command called Rscript. If you have only a few numbers that you can paste on the command line, use this one-liner:

Rscript -e 'summary(as.numeric(commandArgs(TRUE)))' 3 4 5 9 7

which results in

   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    3.0     4.0     5.0     5.6     7.0     9.0

If you want to read from the standard input use this:

echo 3 4 5 9 7 | Rscript -e 'summary(as.numeric(read.table(file("stdin"))))'

If numbers on the standard input are separated by carriage returns (i.e. one number per line), use

Rscript -e 'summary(as.numeric(read.table(file("stdin"))[,1]))'

One can create aliases for these commands:

alias summary='Rscript -e "summary(as.numeric(read.table(file(\"stdin\"))[,1]))"'
du -s /usr/bin/* | cut -f1 | summary
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    0.0     8.0    20.0    93.6    44.0  6528.0
8,370
Command-line tool to calculate basic statistics for stream of values [closed]
datamash is another great option. It's from the GNU Project. If you have homebrew / linuxbrew you can do: brew install datamash
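A quick sketch of what typical usage looks like (operation names per the GNU datamash manual; fields are selected by column number):

```shell
# Skip gracefully when GNU datamash is not installed.
command -v datamash >/dev/null 2>&1 || exit 0

# One number per line on stdin; compute several statistics on column 1.
seq 1 10 | datamash count 1 min 1 max 1 sum 1 mean 1 sstdev 1

# Grouped statistics: mean of column 2 for each value of column 1
# (--sort orders by the group key first; -W/--whitespace splits fields
# on runs of whitespace instead of the default tab).
printf 'a 1\na 3\nb 5\n' | datamash --sort --whitespace groupby 1 mean 2
```

The groupby mode is what sets datamash apart from most of the one-shot tools in this thread: it behaves like a command-line GROUP BY.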
8,371
Command-line tool to calculate basic statistics for stream of values [closed]
Yet another tool which can be used for calculating statistics and viewing the distribution in ASCII mode is ministat. It's a tool from FreeBSD, but it is also packaged for popular Linux distributions like Debian/Ubuntu. Usage example:

$ cat test.log
Handled 1000000 packets.Time elapsed: 7.575278
Handled 1000000 packets.Time elapsed: 7.569267
Handled 1000000 packets.Time elapsed: 7.540344
Handled 1000000 packets.Time elapsed: 7.547680
Handled 1000000 packets.Time elapsed: 7.692373
Handled 1000000 packets.Time elapsed: 7.390200
Handled 1000000 packets.Time elapsed: 7.391308
Handled 1000000 packets.Time elapsed: 7.388075

$ cat test.log | awk '{print $5}' | ministat -w 74
x <stdin>
+--------------------------------------------------------------------------+
|                                                                         x|
|xx                          xx x                                         x|
|   |__________________________A_______M_________________|                 |
+--------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x   8      7.388075      7.692373       7.54768     7.5118156    0.11126122
8,372
Command-line tool to calculate basic statistics for stream of values [closed]
There is also simple-r, which can do almost everything that R can, but with fewer keystrokes: https://code.google.com/p/simple-r/

To calculate basic descriptive statistics, one would have to type one of:

r summary file.txt
r summary - < file.txt
cat file.txt | r summary -

Doesn't get any simple-R!
8,373
Command-line tool to calculate basic statistics for stream of values [closed]
There is sta, which is a C++ variant of st, also referenced in these comments. Being written in C++, it's fast and can handle large datasets. It's simple to use, includes the choice of unbiased or biased estimators, and can output more detailed information such as standard error. You can download sta at github. Disclaimer: I'm the author of sta.
8,374
Command-line tool to calculate basic statistics for stream of values [closed]
Just in case, there's datastat (https://sourceforge.net/p/datastat/code/), a simple program for Linux computing simple statistics from the command-line. For example,

cat file.dat | datastat

will output the average value across all rows for each column of file.dat. If you need to know the standard deviation, min, max, you can add the --dev, --min and --max options, respectively. datastat can aggregate rows based on the value of one or more "key" columns. It's written in C++, runs fast and with a small memory footprint, and can be piped nicely with other tools such as cut, grep, sed, sort, awk, etc.
8,375
Command-line tool to calculate basic statistics for stream of values [closed]
You might also consider using clistats. It is a highly configurable command line interface tool to compute statistics for a stream of delimited input numbers.

I/O options:
- Input data can be from a file, standard input, or a pipe
- Output can be written to a file, standard output, or a pipe
- Output uses headers that start with "#" to enable piping to gnuplot

Parsing options:
- Signal, end-of-file, or blank line based detection to stop processing
- Comment and delimiter character can be set
- Columns can be filtered out from processing
- Rows can be filtered out from processing based on numeric constraint
- Rows can be filtered out from processing based on string constraint
- Initial header rows can be skipped
- Fixed number of rows can be processed
- Duplicate delimiters can be ignored
- Rows can be reshaped into columns
- Strictly enforce that only rows of the same size are processed
- A row containing column titles can be used to title output statistics

Statistics options:
- Summary statistics (Count, Minimum, Mean, Maximum, Standard deviation)
- Covariance
- Correlation
- Least squares offset
- Least squares slope
- Histogram
- Raw data after filtering

NOTE: I'm the author.
Command-line tool to calculate basic statistics for stream of values [closed]
Stumbled across this old thread looking for something else. Wanted the same thing, couldn't find anything simple, so did it in perl; fairly trivial, but I use it multiple times a day: http://moo.nac.uci.edu/~hjm/stats

Example:

$ ls -l | scut -f=4 | stats
Sum       9702066453
Number    501
Mean      19365402.1017964
Median    4451
Mode      4096
NModes    15
Min       0
Max       2019645440
Range     2019645440
Variance  1.96315423371944e+16
Std_Dev   140112605.91822
SEM       6259769.58393047
Skew      10.2405932543676
Std_Skew  93.5768354979843
Kurtosis  117.69005473429

(scut is a slower, but arguably easier-to-use version of cut): http://moo.nac.uci.edu/~hjm/scut
described: http://moo.nac.uci.edu/~hjm/scut_cols_HOWTO.html
Command-line tool to calculate basic statistics for stream of values [closed]
Another tool: tsv-summarize from eBay's TSV Utilities. Supports many of the basic summary statistics, like min, max, mean, median, quantiles, standard deviation, MAD, and a few more. It is intended for large datasets and supports multiple fields and grouping by key. Output is tab separated. An example for the sequence of numbers 1 to 1000, one per line:

$ seq 1000 | tsv-summarize --min 1 --max 1 --median 1 --sum 1
1       1000    500.5   500500

Headers are normally generated from a header line in the input. If the input has no header one can be added using the -w switch:

$ seq 1000 | tsv-summarize -w --min 1 --max 1 --median 1 --sum 1
field1_min  field1_max  field1_median  field1_sum
1           1000        500.5          500500

Disclaimer: I'm the author.
Command-line tool to calculate basic statistics for stream of values [closed]
Too much memory and processor power, folks. Using R for something like this is roughly like getting a sledgehammer to kill a mosquito. Use your favorite language and implement a provisional means algorithm. For the mean:
$$\bar{x}_n = \frac{(n-1)\,\bar{x}_{n-1} + x_n}{n};$$
and for the variance:
$$s^2_n = \frac{S_n}{n-1}, \qquad S_n = S_{n-1} + (x_n-\bar{x}_{n-1})(x_n-\bar{x}_n).$$
Take $\bar{x}_0 = S_0 = 0$ as starting values. Modifications are available for weighted analyses. You can do the computations with two double precision reals and a counter.
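The update rules above (Welford's provisional-means algorithm) take only a few lines in any language; here is a sketch in Python:

```python
def stream_stats(values):
    """One-pass mean and sample variance with O(1) memory
    (provisional-means / Welford algorithm)."""
    n = 0
    mean = 0.0   # x̄_0 = 0
    s = 0.0      # S_0 = 0
    for x in values:
        n += 1
        old_mean = mean
        mean = old_mean + (x - old_mean) / n   # x̄_n = ((n-1)x̄_{n-1} + x_n)/n
        s += (x - old_mean) * (x - mean)       # S_n = S_{n-1} + (x_n - x̄_{n-1})(x_n - x̄_n)
    variance = s / (n - 1) if n > 1 else float("nan")
    return n, mean, variance

n, mean, var = stream_stats([1.0, 2.0, 3.0, 4.0, 5.0])
# mean = 3.0, sample variance = 2.5
```

As the answer says: two double-precision reals and a counter, so it works on arbitrarily long streams.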
Choosing the best model from among different "best" models
A parsimonious model is a model that accomplishes a desired level of explanation or prediction with as few predictor variables as possible.

For model evaluation there are different methods depending on what you want to know. There are generally two ways of evaluating a model: based on predictions and based on goodness of fit on the current data. In the first case you want to know if your model adequately predicts new data, in the second you want to know whether your model adequately describes the relations in your current data. Those are two different things.

Evaluating based on predictions

The best way to evaluate models used for prediction is cross-validation. Very briefly, you cut your dataset in e.g. 10 different pieces, use 9 of them to build the model and predict the outcomes for the tenth dataset. A simple mean squared difference between the observed and predicted values gives you a measure for the prediction accuracy. As you repeat this ten times, you calculate the mean squared difference over all ten iterations to come to a general value with a standard deviation. This allows you again to compare two models on their prediction accuracy using standard statistical techniques (t-test or ANOVA).

A variant on the theme is the PRESS criterion (Prediction Sum of Squares), defined as
$$\sum^{n}_{i=1} \left(Y_i - \hat{Y}_{i(-i)}\right)^2$$
where $\hat{Y}_{i(-i)}$ is the predicted value for the $i$th observation using a model based on all observations minus the $i$th value. This criterion is especially useful if you don't have much data. In that case, splitting your data as in the cross-validation approach might result in subsets of data that are too small for a stable fitting.

Evaluating based on goodness of fit

Let me first state that this really differs depending on the model framework you use. For example, a likelihood-ratio test can work for Generalized Additive Mixed Models when using the classic gaussian for the errors, but is meaningless in the case of the binomial variant.

First you have the more intuitive methods of comparing models. You can use the Akaike Information Criterion (AIC) or the Bayesian Information Criterion (BIC) to compare the goodness of fit for two models. But nothing tells you that both models really differ.

Another one is Mallows' Cp criterion. This essentially checks for possible bias in your model, by comparing the model with all possible submodels (or a careful selection of them). See also http://www.public.iastate.edu/~mervyn/stat401/Other/mallows.pdf

If the models you want to compare are nested models (i.e. all predictors and interactions of the more parsimonious model occur also in the more complete model), you can use a formal comparison in the form of a likelihood ratio test (or a Chi-squared or an F test in the appropriate cases, e.g. when comparing simple linear models fitted using least squares). This test essentially checks whether the extra predictors or interactions really improve the model. This criterion is often used in forward or backward stepwise methods.

About automatic model selection

You have advocates and you have enemies of this method. I personally am not in favor of automatic model selection, especially not when it's about describing models, and this for a number of reasons:

- In every model you should have checked that you deal adequately with confounding. In fact, many datasets have variables that should never be put in a model at the same time. Often people forget to control for that.
- Automatic model selection is a method to create hypotheses, not to test them. All inference based on models originating from automatic model selection is invalid. No way to change that.
- I've seen many cases where, starting at a different starting point, a stepwise selection returned a completely different model. These methods are far from stable.
- It's also difficult to incorporate a decent rule, as the statistical tests to compare two models require the models to be nested. If you use e.g. AIC, BIC or PRESS, the cutoff for when a difference is really important is arbitrarily chosen.

So basically, I see more in comparing a select set of models chosen beforehand. If you don't care about statistical evaluation of the model and hypothesis testing, you can use cross-validation to compare the predictive accuracy of your models.

But if you're really after variable selection for predictive purposes, you might want to take a look at other methods for variable selection, like Support Vector Machines, Neural Networks, Random Forests and the like. These are far more often used in e.g. medicine to find out which of the thousand measured proteins can adequately predict whether you have cancer or not. Just to give a (famous) example:

https://www.nature.com/articles/nm0601_673
https://doi.org/10.1023/A:1012487302797

All these methods have regression variants for continuous data as well.
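For ordinary least squares, the PRESS criterion mentioned above does not require literally refitting the model n times: the leave-one-out residual equals the ordinary residual divided by (1 - h_ii), where h_ii is the leverage from the hat matrix. A minimal numpy sketch (illustrative function name, not from any library):

```python
import numpy as np

def press(X, y):
    """PRESS for an OLS fit: sum of squared leave-one-out residuals.
    Uses the identity e_{i(-i)} = e_i / (1 - h_ii), so no refitting is needed."""
    X = np.column_stack([np.ones(len(y)), X])   # add an intercept column
    H = X @ np.linalg.inv(X.T @ X) @ X.T        # hat matrix
    e = y - H @ y                               # ordinary residuals
    h = np.diag(H)                              # leverages
    return np.sum((e / (1 - h)) ** 2)
```

To compare candidate models, compute PRESS for each on the same response; lower is better.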
Choosing the best model from among different "best" models
Parsimony is your enemy. Nature does not act parsimoniously, and datasets do not have enough information to allow one to choose the "right" variables. It doesn't matter very much which method you use or which index you use as a stopping rule. Variable selection without shrinkage is almost doomed. However, limited backwards stepdown (with $\alpha=0.50$) can sometimes be helpful. It works simply because it will not delete many variables.
Choosing the best model from among different "best" models
Using backwards or forwards selection is a common strategy, but not one I can recommend. The results from such model building are all wrong. The p-values are too low, the coefficients are biased away from 0, and there are other related problems. If you must do automatic variable selection, I would recommend using a more modern method, such as LASSO or LAR. I wrote a SAS presentation on this, entitled "Stopping Stepwise: Why Stepwise and Similar Methods are Bad and what you should Use" But, if possible, I'd avoid these automated methods altogether, and rely on subject matter expertise. One idea is to generate 10 or so reasonable models, and compare them based on an information criterion. @Nick Sabbe listed several of these in his response.
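As a small illustration of the LASSO alternative mentioned above, here is a hypothetical toy setup using scikit-learn, where only a few of many predictors carry signal; the L1 penalty shrinks the irrelevant coefficients exactly to zero instead of "testing them out" stepwise:

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Toy data: only the first 3 of 20 predictors actually matter.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = 3 * X[:, 0] + 2 * X[:, 1] - X[:, 2] + rng.normal(size=200)

# LASSO with the penalty strength chosen by 5-fold cross-validation.
model = LassoCV(cv=5).fit(X, y)
selected = np.flatnonzero(model.coef_)   # indices of retained predictors
```

Unlike stepwise selection, the shrinkage makes the retained coefficients and out-of-sample predictions far better behaved, though p-values after selection remain a subtle issue here too.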
Choosing the best model from among different "best" models
The answer to this will greatly depend upon your goal. You may be looking for statistically significant coefficients, or you may be out to avoid as many misclassifications as possible when predicting the outcome for new observations, or you may simply be interested in the model with the fewest false positives; perhaps you simply want the curve that is 'closest' to the data. In any of the cases above, you need some sort of measure for what you are looking for. Some popular measures with different applications are AUC, BIC, AIC, residual error,... You calculate the measure that best matches your goal for each model, and then compare the 'scores' for each model. This leads to the best model for your goal. Some of these measures (e.g. AIC) place an extra stress on the number of nonzero coefficients in the model, because using too many could be simply overfitting the data (so that the model is useless if you use it for new data, let alone for the population). There may be other reasons for requiring a model to hold as few variables as possible, e.g. if it is simply costly to measure all of them for prediction. The 'simplicity' or 'small number of variables' of a model is typically referred to as its parsimony. So in short, a parsimonious model is a 'simple' model, not holding too many variables. As often with these types of questions, I will refer you to the excellent book Elements of Statistical Learning for deeper information on the subject and related issues.
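As a concrete sketch of scoring models with one such measure: for a Gaussian linear model, AIC (up to an additive constant) is n·log(RSS/n) + 2k, so two candidate fits can be compared with a few lines of numpy (illustrative function name; the data are simulated):

```python
import numpy as np

def aic_ols(X, y):
    """AIC, up to an additive constant, for a Gaussian OLS fit:
    n * log(RSS / n) + 2k, where k is the number of estimated coefficients."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 2.0 + 3.0 * x + rng.normal(size=100)      # truly linear relationship
X1 = np.column_stack([np.ones(100), x])       # intercept + slope
X2 = np.column_stack([X1, x**2, x**3])        # adds superfluous terms
# Compare aic_ols(X1, y) with aic_ols(X2, y): lower AIC is the better score.
```

The extra 2 per coefficient is the "stress on the number of nonzero coefficients" mentioned above: a bigger model must reduce the residual sum of squares enough to pay for its added parameters.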
The "Amazing Hidden Power" of Random Search?
One limitation of random search is that searching over a large space is extremely challenging; even a small difference can spoil the result. Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" stated that if a million monkeys spent ten hours a day at a typewriter, it's extremely unlikely that the quality of their writing would equal a library's contents. And of course we understand the intuition: language is highly structured (not random), so randomly pressing keys is not going to yield a coherent text. Even a text that is extremely similar to language can be rendered incoherent by a minor error.

In terms of estimating a model, you need to estimate everything correctly simultaneously. Getting the correct slope $\hat{\beta}_1$ in the model $y = \beta_0 + \beta_1 x +\epsilon$ is not very meaningful if $\hat{\beta}_0$ is very, very far from the truth. In a larger number of dimensions, such as in a neural network, a good solution will need to be found in thousands or millions of parameters simultaneously. This is unlikely to happen at random!

This is directly related to the curse of dimensionality. Suppose your goal is to find a solution with a distance less than 0.05 from the true solution, which is at the middle of the unit interval. Using random sampling, the probability of this is 0.1. But as we increase the dimension of the search space to a unit square, a unit cube, and higher dimensions, the volume occupied by our "good solution" (a solution with distance from the optimal solution less than 0.05) shrinks, and the probability of finding that solution using random sampling likewise shrinks. (And naturally, increasing the size of the search space but keeping the dimensionality constant also rapidly diminishes the probability.)
The "trick" to random search is that it purports to defeat this process by keeping the probability constant while the dimension grows; this must imply that the volume assigned to the "good solution" increases correspondingly, to keep the probability assigned to the event constant. This is hardly perfect, because the quality of the solutions within our radius is worse (because these solutions have a larger average distance from the true value).

You have no way to know if your search space contains a good result. The core assumption of random search is that your search space contains a configuration that is "good enough" to solve your problem. If a "good enough" solution isn't in your search space at all (perhaps because you chose too small a region), then the probability of finding that good solution is 0. Random search can only find the top 5% of solutions with positive probability from among the solutions in the search space. You might think that enlarging the search space is a good way to increase your odds. While it might make the search region contain an extremely high-quality region, the probability of selecting something in that region shrinks rapidly with increasing size of the search space.

High-quality model parameters often reside in narrow valleys. When considering hyperparameters, it's often true that the hyperparameter response surface changes only gradually; there are large regions of the space where lots of hyperparameter values are basically the same in terms of quality. Moreover, a small number of hyperparameters make large contributions to improving the model; see Examples of the performance of a machine learning model to be more sensitive to a small subset of hyperparameters than others? But in terms of estimating the model parameters, we see the opposite phenomenon.
For instance, regression problems have likelihoods that are prominently peaked around their optimal values (provided you have more observations than features); moreover, these peaks become more and more "pointy" as the number of observations increases. Peaked optimal values are bad news for random search, because it means that the "optimal region" is actually quite small, and all of the "near miss" values are actually much poorer in comparison to the optimal value. To make a fair comparison between random search and gradient descent, set a budget of iterations (e.g. the $n=60$ value derived from random search). Then compare model quality of a neural network fit with $n$ iterations of ordinary gradient descent & backprop to a model that uses $n$ iterations of random search. As long as gradient descent doesn't diverge, I'm confident that it will beat random search with high probability. Obtaining even stronger guarantees rapidly becomes expensive. You can of course adjust $p$ or $q$ to increase the assurances that you'll find a very high quality solution, but if you work out the arithmetic, you'll find that $n$ rapidly becomes very large (that is, random search becomes expensive quickly). Moreover, in a fair comparison, gradient descent will likewise take $n$ optimization steps, and tend to find even better solutions than random search. Some more discussion, with an intuitive illustration: Does random search depend on the number of dimensions searched?
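The unit-interval example above generalizes directly: the "good region" is a ball of radius 0.05 around the optimum, and its volume (hence the hit probability under uniform random search over the unit hypercube) collapses as the dimension grows. A small Python check, computing the volume of a d-ball:

```python
import math

def hit_probability(d, radius=0.05):
    """Probability that a uniform draw from the unit hypercube lands within
    Euclidean distance `radius` of the centre: the volume of a d-ball,
    pi^(d/2) / Gamma(d/2 + 1) * radius^d (the ball lies inside the cube)."""
    return (math.pi ** (d / 2) / math.gamma(d / 2 + 1)) * radius ** d

for d in (1, 2, 3, 10):
    print(d, hit_probability(d))
# d=1: 0.1; d=2: ~7.9e-3; d=3: ~5.2e-4; d=10: ~2.5e-13
```

So the per-draw odds fall by orders of magnitude with each added dimension, which is exactly why "near misses" dominate what random search finds in high dimensions.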
The "Amazing Hidden Power" of Random Search?
One limitation of random search is that searching over a large space is extremely challenging; even a small difference can spoil the result. Émile Borel's 1913 article "Mécanique Statistique et Irréve
The "Amazing Hidden Power" of Random Search? One limitation of random search is that searching over a large space is extremely challenging; even a small difference can spoil the result. Émile Borel's 1913 article "Mécanique Statistique et Irréversibilité" stated if a million monkeys spent ten hours a day at a typewriter, it's extremely unlikely that the quality of their writing would equal a library's contents. And of course we understand the intuition: language is highly structured (not random), so randomly pressing keys is not going to yield a coherent text. Even a text that is extremely similar to language can be rendered incoherent by a minor error. In terms of estimating a model, you need to estimate everything correctly simultaneously. Getting the correct slope $\hat{\beta}_1$ in the model $y = \beta_0 + \beta_1 x +\epsilon$ is not very meaningful if $\hat{\beta}_0$ is very very far from the truth. In a larger number of dimensions, such as in a neural network, a good solution will need to be found in thousands or millions of parameters simultaneously. This is unlikely to happen at random! This is directly related to the curse of dimensionality. Suppose your goal is to find a solution with a distance less than 0.05 from the true solution, which is at the middle of the unit interval. Using random sampling, the probability of this is 0.1. But as we increase the dimension of the search space to a unit square, a unit cube, and higher dimensions, the volume occupied by our "good solution" (a solution with distance from the optimal solution less than 0.05) shrinks, and the probability of finding that solution using random sampling likewise shrinks. (And naturally, increasing the size of the search space but keeping the dimensionality constant also rapidly diminishes the probability.) 
The "trick" to random search is that it purports to defeat this process by keeping the probability constant while the dimension grows; this must imply that the volume assigned to the "good solution" increases correspondingly, to keep the probability assigned to the event constant. This is hardly perfect, because the quality of the solutions within our radius gets worse (these solutions have a larger average distance from the true value). You have no way to know whether your search space contains a good result. The core assumption of random search is that your search space contains a configuration that is "good enough" to solve your problem. If a "good enough" solution isn't in your search space at all (perhaps because you chose too small a region), then the probability of finding that good solution is 0. Random search can only find the top 5% of solutions, with positive probability, from among the solutions in the search space. You might think that enlarging the search space is a good way to increase your odds. While it might make the search region contain an extremely high-quality region, the probability of selecting something in that region shrinks rapidly with the increasing size of the search space. High-quality model parameters often reside in narrow valleys. When considering hyperparameters, it's often true that the hyperparameter response surface changes only gradually; there are large regions of the space where many hyperparameter values are basically the same in terms of quality. Moreover, a small number of hyperparameters make large contributions to improving the model; see Examples of the performance of a machine learning model to be more sensitive to a small subset of hyperparameters than others? But in terms of estimating the model parameters, we see the opposite phenomenon.
For instance, regression problems have likelihoods that are prominently peaked around their optimal values (provided you have more observations than features); moreover, these peaks become more and more "pointy" as the number of observations increases. Peaked optima are bad news for random search, because they mean that the "optimal region" is actually quite small, and all of the "near miss" values are much poorer in comparison to the optimal value. To make a fair comparison between random search and gradient descent, set a budget of iterations (e.g. the $n=60$ value derived for random search). Then compare the quality of a neural network fit with $n$ iterations of ordinary gradient descent & backprop to a model that uses $n$ iterations of random search. As long as gradient descent doesn't diverge, I'm confident that it will beat random search with high probability. Obtaining even stronger guarantees rapidly becomes expensive. You can of course adjust $p$ or $q$ to increase the assurance that you'll find a very high-quality solution, but if you work out the arithmetic, you'll find that $n$ rapidly becomes very large (that is, random search becomes expensive quickly). Moreover, in a fair comparison, gradient descent will likewise take $n$ optimization steps, and it tends to find even better solutions than random search. Some more discussion, with an intuitive illustration: Does random search depend on the number of dimensions searched?
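That fixed-budget comparison is easy to run on a toy surface (a quadratic bowl in 50 dimensions standing in for a well-behaved loss; the 60-iteration budget and the step size are my own arbitrary choices):

```python
import random

def loss(w, target):
    return sum((wi - ti) ** 2 for wi, ti in zip(w, target))

def grad(w, target):
    return [2 * (wi - ti) for wi, ti in zip(w, target)]

dim, budget, lr = 50, 60, 0.1
rng = random.Random(1)
target = [rng.uniform(-1, 1) for _ in range(dim)]

# Random search: best of 60 uniform draws from the search box.
best_rs = min(
    loss([rng.uniform(-1, 1) for _ in range(dim)], target)
    for _ in range(budget)
)

# Gradient descent: 60 steps starting from the origin.
w = [0.0] * dim
for _ in range(budget):
    w = [wi - lr * gi for wi, gi in zip(w, grad(w, target))]
best_gd = loss(w, target)

print(f"random search best loss: {best_rs:.4f}")
print(f"gradient descent loss:   {best_gd:.2e}")
```

With the same 60 evaluations, gradient descent drives the loss to essentially zero, while the best random draw is still far from the optimum.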
8,384
The "Amazing Hidden Power" of Random Search?
Consider a neural network model with 100 weights, and think only about getting the signs of the weights right, without worrying for the moment about their magnitudes. There are 2^100 combinations of the signs of these weights, which is a very large number. If we sample 60 random weight vectors, we will have seen only a minuscule proportion of that space: not even enough to be confident that we have at least one sample for which a given seven weights have the right sign. So even for a small neural network, random sampling has a vanishingly small chance of getting all of the signs of the weights right. Now of course, the structure of the neural net means that there are symmetries producing multiple equivalent solutions (e.g. flipping the signs of all of the input weights of a neuron and its output weights), but this doesn't cut down the number of equivalent combinations of signs very much. I suspect part of the problem is indeed that the distribution of performance is very sharply peaked around the best solutions. So even if 60 samples gets you into the top 5% of solutions, if the search space is very large and the optimum of the cost function is very localised, then a top-5% random solution may still be nonsense, and you need, perhaps, a top-0.0005% solution or better to have acceptable performance. If random search were an effective way of training neural networks, then I would expect someone to have found that out during the ~50 years of gradient descent. Random search is useful for hyper-parameter search, though, but mostly because the dimension is lower, and the models are trained on the data using gradient descent, so you are choosing from a set of plausibly good solutions rather than random ones. In that case, most of the search space has goodish solutions, and the optimum of the cost function is not highly localised (at least not for kernel learning methods).
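The "seven weights" claim is a two-line calculation (assuming each sign is right independently with probability 1/2):

```python
# Probability that at least one of 60 random weight vectors gets the
# signs of a given 7 weights right, each sign right with prob. 1/2:
p_one = 0.5 ** 7                  # 1/128 per sampled vector
p_any = 1 - (1 - p_one) ** 60
print(f"{p_any:.3f}")             # roughly 0.375 -- far from "confident"
```

So even matching just seven of the hundred signs happens in only about 37% of such 60-sample searches.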
8,385
The "Amazing Hidden Power" of Random Search?
Suppose we want to answer your question with a 1000-character answer. One approach would be to sample 60 random 1000-tuples of characters, punctuation marks, and whitespace. With 95% probability, one of these will be within the most useful 5% of all possible Stack Exchange answers within this character limit. Basically the problem, as you point out, is that being within some ranked quantile of all possible solutions is not usually very interesting. Generally you have some evaluation function, and what you are really interested in is the difference between either the best possible model (or some current model) and the model defined by your new set of parameters. Random search is useful when you are optimizing hyperparameters because (really "if": it's not always useful) the non-random optimization of the parameters following hyperparameter selection already restricts you to a class of generally useful models.
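The 95%/60-samples figure itself is pure counting, independent of what the solutions look like:

```python
# Each uniform draw lands in the top 5% with probability 0.05,
# so with 60 independent draws the chance of at least one hit is:
p_hit = 1 - (1 - 0.05) ** 60
print(round(p_hit, 3))  # 0.954
```

Nothing in the calculation cares whether a "top 5% answer" is readable, which is exactly the problem.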
8,386
The "Amazing Hidden Power" of Random Search?
There's a mathematical result in optimisation, less interesting than it first sounds, called the "No Free Lunch Theorem". It says that for a discrete problem (like @JonnyLomond's answer), no algorithm can beat random search when its performance is averaged over all possible functions to be optimised. That is, you have a function $f:\Omega\to L$ where $\Omega$ is a finite discrete space (like the space of 1000-character strings) and $L$ is a discrete space of numerical values (like 1:10 or 1:1000000000). There are only finitely many such $f$. You can define any algorithm that evaluates $f(\omega_1)$, $f(\omega_2)$, and so on for $n$ attempts, choosing $\omega_{i+1}$ in terms of earlier results, and then take $\max_i f(\omega_i)$ as your best result. No algorithm will outperform random search averaged over all $f$. One proof idea is to consider $f$ as randomly chosen from the possible functions with equal probability. Because $f$ could be anything, with equal probability, evaluations at $\omega_1,\ldots,\omega_i$ are independent of $f(\omega_{i+1})$; you can't learn anything. This result isn't that interesting because we usually aren't interested in the average performance over all possible objective functions. But it does imply that the reason random search isn't actually a good competitor is because the objective functions we care about have structure. Some have smooth (or smooth-ish) structure -- parameter values near the optimum give better outputs than those far from the optimum. Sometimes the structure is more complicated. But there is (typically) structure. [The no-free-lunch theorem is also perhaps less interesting than it seems because it doesn't seem to have an analogue for continuous parameter spaces]
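A toy sketch of the theorem's content (my own illustration: both strategies below are non-adaptive for simplicity, but the NFL result covers adaptive strategies too — on a uniformly random objective, any non-repeating sampling order has the same distribution of best-found value):

```python
import random

random.seed(0)

def best_of(strategy, f, n):
    """Max of f over the n distinct points the strategy visits."""
    return max(f[x] for x in strategy(len(f), n))

def random_strategy(size, n):
    return random.sample(range(size), n)

def scan_strategy(size, n):
    # any fixed, non-repeating visiting order -- deterministic "cleverness"
    return list(range(n))

size, n, reps = 30, 5, 20_000
rng_avg = scan_avg = 0.0
for _ in range(reps):
    # objective drawn uniformly at random: every value assignment equally likely
    f = [random.randrange(100) for _ in range(size)]
    rng_avg += best_of(random_strategy, f, n) / reps
    scan_avg += best_of(scan_strategy, f, n) / reps
print(f"random search: {rng_avg:.2f}   fixed scan: {scan_avg:.2f}")
```

Averaged over all (uniformly random) objectives, the two averages agree up to simulation noise; only structure in $f$ can separate the strategies.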
8,387
The "Amazing Hidden Power" of Random Search?
As soon as one moves from discrete to continuous search spaces, it becomes necessary to specify a distribution on the parameter space in order to perform the random search. Then it is evident that the performance of the random search will depend very strongly on the features of this distribution. In fact, one of the key developments in the history of training neural networks was the development of various random weight-initialization procedures (Glorot, He, etc.). So in a sense random search is already being used as a (very important) first step for training the networks. In fact, there is recent work showing that pure random-search-like approaches can be used to train neural networks to high accuracy. This is related to the Lottery Ticket hypothesis, which has already been mentioned by msuzen. But what is even more dramatic is that it turns out that large randomly-initialized neural networks contain subnetworks that can nearly match the performance of the trained model with no training (Zhou et al., Ramanujan et al.). You may note, though, that I have done a bit of a sleight of hand. In the linked papers, they look for the subnetworks by essentially searching over the space of all subnetworks of the original network. It is not as if they are only sampling 60 subnetworks at a time. But this underscores a crucial observation which makes the random-search approach somewhat feasible for neural networks: sampling a single, large network is equivalent to sampling a massive number of small networks. This is because a large network has a very large number of subnetworks. The catch is that the search space is much larger than 60 candidates: in fact, the combinatorics make an exhaustive enumeration out of the question. So in the linked papers, they have to use specialized search algorithms to identify the best (or near-best) subnetwork. I am not claiming that this is the best way to train a neural network, but in principle random search is a feasible training procedure.
You ask "Why do we use gradient descent instead of random search?". So really this is not just a question about random search, but also about gradient descent. It has been hypothesized that the stochastic gradient descent algorithm itself is actually key to the remarkable generalization abilities of neural networks. (Here are a few examples of papers that take this approach.) This is sometimes called "algorithmic regularization" or "implicit regularization". A simple example: suppose you fit an underdetermined linear regression using gradient descent. There are multiple global minima, but it turns out that GD (initialized at zero) will always converge to the minimum that has the smallest norm. So the point is that gradient descent has a bias towards certain kinds of solutions, which can be important when the models are overparametrized, with many global minima. You can easily find a ton of literature on this by googling these keywords. But here is the upshot: suppose that we could actually train neural networks using 60 iterations of random search. Then stochastic gradient descent would still probably be the preferred way to train them, because the solutions found by random search have no useful regularization.
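A minimal numpy sketch of that minimum-norm claim (my own toy example, not from the cited papers; note it relies on initialising at zero, which keeps the iterates in the row space of $X$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 20))        # 5 observations, 20 features:
y = rng.normal(size=5)              # underdetermined, many exact fits

# Plain gradient descent on squared error, initialised at zero.
w = np.zeros(20)
for _ in range(50_000):
    w -= 0.01 * X.T @ (X @ w - y)

# Among all interpolating solutions, the pseudoinverse gives the
# minimum-norm one -- and that is where GD from zero lands.
w_min_norm = np.linalg.pinv(X) @ y
print(np.allclose(w, w_min_norm, atol=1e-6))
```

A random search over interpolating solutions would give you *some* global minimum, but not this implicitly regularized one.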
8,388
The "Amazing Hidden Power" of Random Search?
Why do we use Gradient Descent instead of Random Search for optimizing the loss functions in Neural Networks? We already use both at the same time. Meaning that there is already a degree of random search even when we use stochastic gradient descent in training neural networks, i.e., random initialisation, and in reinforcement learning via random search in game trees. For supervised deep learning, this is prominently studied by The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks (arXiv). A random initialisation acts as a random search. In reinforcement learning, deep Q-learning is actually achieved with the synergy of Monte Carlo Tree Search. In short, random search algorithms are already a practical part of training deep learning models. For completely gradient-free optimisation for deep learning, there was a different discussion here.
8,389
The "Amazing Hidden Power" of Random Search?
regardless of how many dimensions your function has, there is a 95% probability that only 60 iterations are needed to obtain an answer in the top 5% of all possible solutions! Finding a 95th-percentile solution is no guarantee of finding a good solution. The nature of the curse of dimensionality is that your "optimization distribution" becomes very skewed. When you have a lot of dimensions, 99.99999999% of your parameter space is going to be far from optimum. If you ask random search to find you a 99.99999999th percentile result, it will take billions of times as many trials as finding a 95th-percentile result (and honestly, I probably haven't added nearly enough nines for most real-world scenarios). And from an information-theoretic perspective, random search is purely "stupid" — it doesn't use any information from the objective function to inform its next guess, and the millionth guess is no more likely to be near the optimum than the first guess is. In many cases, a gradient is no more expensive per-evaluation than the objective function itself, and its value is obvious: it's a signpost that says "go this way to get a local improvement". In other cases, no gradient is available (unless we want to estimate it by doing a bunch of function evaluations, which is of course expensive), but then, the answers to the 2016 question you linked cover other techniques we can use to incorporate information about "where we've been and what we found there", which will hopefully enable our later guesses to be more productive than our earlier guesses. Every optimization technique (whether gradient descent, the simplex method, Bayesian optimization, or whatever else) encodes some sort of assumption about the structure of the objective function. They perform well (and justify their overheads) when the objective function agrees well with that structure, and poorly when it doesn't. 
Random search incorporates no implicit structure, which means that it's the optimal optimizer when the objective function itself is unstructured and completely random (you're looking for a needle in a haystack, and finding a slightly more silvery stalk of hay doesn't indicate the presence of a needle nearby). Otherwise, you can probably win by doing something else.
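The "not enough nines" point can be made exact with the same formula that yields the famous 60 iterations (a small helper of my own):

```python
import math

def draws_needed(top_fraction, conf=0.95):
    """Uniform random draws needed so that, with probability `conf`,
    at least one lands in the best `top_fraction` of solutions."""
    return math.ceil(math.log(1 - conf) / math.log(1 - top_fraction))

print(draws_needed(0.05))    # 59 -- the usual "60 iterations" claim
print(draws_needed(1e-10))   # roughly 30 billion draws
```

The budget scales like $1/\text{top\_fraction}$, so each extra "nine" of solution quality multiplies the cost by ten.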
8,390
The "Amazing Hidden Power" of Random Search?
The only reason that I can think of, is that if the ranked distribution of the optimization values are "heavily negative skewed"
Sort of. There is a compounding effect that occurs when you add dimensions, similar to what you get when you add more randomly sampled models, except that it works against you rather than for you. As you add more dimensions, the models become more and more likely to be "average", and the probability of them being exceptionally good decreases. It's not so much that they're skewed towards "bad" as that they're skewed towards "average", and the "average" model is really bad (remember, if you look at the space of all models, not just the ones created through a rational generation process, most of them are actually worse than just "use $\bar y$ as your estimator regardless of the $x$ values"). There are many ways of thinking of this:
According to the CLT, adding more features decreases the spread of the distribution of models in terms of loss function per feature. So the number of standard deviations needed to reach a given model quality increases.
If you look at how much increasing your percentile increases your standard deviations, this is the reciprocal of the probability density. As your percentile increases, the impact of increasing it further increases: increasing your percentile from $0.999$ to $0.9999$ increases your z-score much, much more than increasing your percentile from $0.99$ to $0.991$.
The length of a vector increases as you add dimensions, if you keep the individual component lengths fixed. For instance, if you have an $n$-dimensional vector with components equal to $0.7$, the length is $0.7\sqrt n$. So if you want vectors that are within some fixed distance of a "good" solution, the percentage of candidate solutions within that distance decreases as you add dimensions.
Suppose we give each feature a percentile rank (i.e. "This model is in the 70th percentile as far as how well it incorporates this feature", however that is defined).
If there are $k$ features, and a random model clears the per-feature bar independently with probability $p$ on each, then the probability that it clears all of them is $p^k$. Even if $p$ is a relatively high number like $0.7$, this probability quickly becomes tiny; with ten features, it's $0.7^{10} \approx 3\%$.
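In code (toy numbers, per-feature probability fixed at 0.7):

```python
# Probability that a random model clears a per-feature quality bar on
# all k features at once, if each is cleared independently with prob. p:
p = 0.7
for k in (1, 5, 10, 20):
    print(f"k={k:2d}  p^k={p**k:.4f}")
```

Ten features already push the probability below 3%, and it keeps shrinking geometrically with $k$.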
The "Amazing Hidden Power" of Random Search?
The only reason that I can think of, is that if the ranked distribution of the optimization values are "heavily negative skewed" Sort of. There is a compounding that occurs when you add dimensions th
The "Amazing Hidden Power" of Random Search?
"The only reason that I can think of, is that if the ranked distribution of the optimization values are 'heavily negative skewed'" Sort of. There is a compounding that occurs when you add dimensions that is similar to what you get when you add more randomly sampled models, except that it works against you rather than for you. As you add more dimensions, the models become more and more likely to be "average", and the probability of them being exceptionally good decreases. It's not so much that they're skewed towards "bad" as that they're skewed towards "average", and the "average" model is really bad (remember, if you look at the space of all models, not just the ones created through a rational generation process, most of them are actually worse than just "use $\bar y$ as your estimator regardless of the $x$ values"). There are many ways of thinking about this: According to the CLT, adding more features decreases the spread of the distribution of models in terms of loss function per feature, so the number of standard deviations needed to reach a given model quality increases. If you look at how much increasing your percentile increases your number of standard deviations, this is the reciprocal of the probability density; as your percentile increases, the impact of increasing it further grows. Increasing your percentile from $0.999$ to $0.9999$ increases your z-score much, much more than increasing your percentile from $0.99$ to $0.991$. The length of a vector increases as you add dimensions, if you keep the individual component lengths fixed. For instance, if you have an $n$-dimensional vector with components equal to $0.7$, the length is $0.7\sqrt n$. So if you want vectors that are within some fixed distance of a "good" solution, the fraction of candidate solutions that are within that distance decreases as you add dimensions. Suppose we give each feature a percentile rank (i.e. "this model is in the 70th percentile as far as how well it incorporates this feature", however that is defined). If there are $k$ features, the probability that all of them will be in the top $p$ fraction is $p^k$. Even with a bar as low as $p = 0.7$, this probability quickly becomes tiny; with ten features, it's $0.7^{10} \approx 2.8\%$.
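The compounding-percentiles argument at the end can be checked by simulation. A minimal sketch, assuming "top $p$" means the top fraction $p$ of models on each feature (so the joint probability is $p^k$, which matches the quoted ~3% figure for $p=0.7$, $k=10$); the function name is illustrative:

```python
import random

# Give each of k "features" an independent uniform percentile rank and ask
# how often a random model lands in the top-p fraction on every feature
# simultaneously. Theory says the answer is p**k.
def frac_all_in_top(k, p, trials=200_000, seed=0):
    rng = random.Random(seed)
    hits = sum(
        all(rng.random() < p for _ in range(k))
        for _ in range(trials)
    )
    return hits / trials

print(frac_all_in_top(k=1, p=0.7))   # near 0.7
print(frac_all_in_top(k=10, p=0.7))  # roughly 0.7**10, about 0.028
```

With one feature the estimate sits near $0.7$; with ten it collapses to roughly $0.028$, mirroring $p^k$.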
8,391
The "Amazing Hidden Power" of Random Search?
This thought has appeared in some of the answers, but I would like to add that being in the top 5% of solutions may still produce a solution of very poor quality. Consider a classification problem on ImageNet and some large network with millions of parameters. Doing a random search in the space of parameters, you can find a network that is in the top 5% of all possible weight configurations. However, being in the top 5% may only guarantee an accuracy of, say, 0.11% - only slightly better than a random-guess classifier (since there are 1000 classes).
8,392
The "Amazing Hidden Power" of Random Search?
The key to the answer to OP's question is in the ... loss function. Here's why. OP's question contains a clue to its answer: yes, by random search you can get into the top $\alpha$ quantile of solutions with very few attempts. Why then isn't this good enough, if you believe everyone who answered the question before me? Several answers refer to "skewness", as does OP in the question. In fact, the skewness is related to the true answer but doesn't address it directly. The skewness itself is the result of two factors: high dimensionality (and its curse) and the choice of the loss function. It is well known that some distance measures, such as Euclidean distance (quadratic loss being a close cousin), suffer from the curse of [high] dimensionality: as the dimension grows, any two randomly chosen points end up enormously far apart unless they're almost on top of each other. This is what makes those top-5% solutions look like garbage to us. So, we insist that we want to do much, and I mean $\large MUCH$, better. This is the real reason why random search doesn't work with high-dimensional problems. Note, you can come up with a loss function with which random search could work. Such a loss function must not suffer from the dimensionality curse; it should re-define what counts as a "good enough" solution. Finally, I haven't addressed a key point yet: why gradient search has a better chance of navigating the dimensionality curse than random search. I'll answer this part later.
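The distance behaviour invoked here can be sketched numerically: for points drawn uniformly in the unit cube, the mean pairwise Euclidean distance grows with dimension while the relative spread shrinks, so every pair of random points ends up roughly equally far apart.

```python
import numpy as np

# Sketch of distance growth/concentration for uniform points in [0, 1]^d.
rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    x = rng.random((500, d))
    y = rng.random((500, d))
    dist = np.linalg.norm(x - y, axis=1)  # 500 random pairwise distances
    print(f"d={d:>4}  mean={dist.mean():6.2f}  "
          f"rel. spread={dist.std() / dist.mean():.3f}")
```

The mean distance scales like $\sqrt{d/6}$ while the relative spread (std/mean) collapses, which is the "everything is far from everything" effect the answer appeals to.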
8,393
The "Amazing Hidden Power" of Random Search?
A different perspective. The chemistry that led to the first life forms, and from there to life forms with a simple nervous system and onward to organisms with a brain, involved only processes analogous to random search. Any more sophisticated algorithms will have had to evolve from random search. This means that it should be possible to use only random search in machine learning and get to excellent results. We couldn't exist if this were not true. The question is then how to use random search. Biology would strongly suggest that we should use a genetic algorithm. For example, with your 60-trials-to-reach-the-top-5% method, you can iterate: take the best few, create 60 mutants of these best few, and search for the best of these. The problem is then that you can get stuck in a local maximum of the fitness landscape. There are many different solutions to this problem. One can use the analogue of sexual reproduction in biology, by mixing the weights from different networks. It is also a good idea to include a few results that are not in the top, as mutations of these may yield good results. And instead of starting with a very complex loss function, one can start with a simpler one. In many visual tasks such as handwriting recognition, coarse graining can work well. One then trains the neural networks using blurred images where instead of 26 letters only a few can be distinguished, then reduces the amount of blurring so that more letters can be distinguished, and continues with the results of the previous learning session.
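The "keep the best few, make 60 mutants" loop can be sketched as a toy genetic algorithm. This is a minimal illustration on a OneMax-style fitness (count of 1-bits), not a recipe from the answer; all parameter values are made up:

```python
import random

# Toy genetic algorithm: keep the `keep` fittest individuals, refill the
# population with mutants of them, repeat. Fitness is the number of 1-bits.
def evolve(n_bits=50, pop=60, keep=5, gens=100, seed=0):
    rng = random.Random(seed)
    fitness = lambda g: sum(g)
    def mutate(g):
        # flip each bit independently with probability 1/n_bits
        return [b ^ (rng.random() < 1 / n_bits) for b in g]
    population = [[rng.randint(0, 1) for _ in range(n_bits)]
                  for _ in range(pop)]
    for _ in range(gens):
        parents = sorted(population, key=fitness, reverse=True)[:keep]
        population = parents + [
            mutate(rng.choice(parents)) for _ in range(pop - keep)
        ]
    return max(map(fitness, population))

print(evolve())  # close to the optimum of 50
```

Crossover ("sexual reproduction") and retaining a few non-elite individuals, as suggested above, are straightforward extensions of the same loop.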
8,394
Expectation of 500 coin flips after 500 realizations
If you "know" that the coin is fair then we still expect the long-run proportion of heads to tend to $0.5$. This is not to say that we should expect more (than 50%) of the next flips to be tails, but rather that the initial $500$ flips become irrelevant as $n\rightarrow\infty$. A streak of $500$ heads may seem like a lot (and practically speaking it is), but if $250$ of the next $500$ flips are heads then the sample proportion becomes $$\hat p = \frac{500 + 250}{1000} = 0.75.$$ If $250$ of the $500$ flips after that are also heads, then $$\hat p = \frac{500+250+250}{1500} \approx 0.67,$$ and if $100000$ of the next $200000$ flips are heads, then $$\hat p = \cdots \approx 0.501.$$ This is the Law of Large Numbers. On the other hand... if I were to flip a coin in real life and see $500$ heads in a row, I would start to seriously doubt that the coin is actually fair. (Interesting side note: it is hard (impossible?) to actually bias a coin in real life. The only realistic values of $p$ are $0$, $0.5$ and $1$, but we will ignore this for the sake of an answer.) To account for this possibility, we could use a Bayesian procedure from the outset. Rather than assume $p=1/2$, suppose we specify the prior distribution $$p \sim \text{Beta}(\alpha, \alpha).$$ This is a symmetric distribution, which encodes my a priori belief that the coin is fair, i.e. $E(p) = \frac{1}{2}$. How strongly I believe in this notion is specified through the choice of $\alpha$, since $Var(p) = \frac{1}{8(\alpha+0.5)}$. $\alpha = 1$ corresponds to a uniform prior over $(0,1)$. $\alpha = 0.5$ is the Jeffreys prior - another popular non-informative choice. Choosing a large value of $\alpha$ gives more credence to the belief that $p=1/2$. In fact, setting $\alpha = \infty$ implies that $Pr(p=1/2) = 1$. Applying Bayes' rule directly, the posterior distribution for $p$ is $$p|y \sim \text{Beta}(\alpha+y, \alpha+n-y)$$ where $y = \text{number of heads}$ and $n = \text{number of flips}$.
For instance, if you choose $\alpha = 1$ and observe $n=y=500$, the posterior distribution becomes $\text{Beta}(501, 1)$ and $$E(p|y) = \frac{\alpha + y}{2\alpha + n} = \frac{501}{502} \approx 0.998,$$ indicating that I should bet on heads for the next flip (since it is highly improbable that the coin is fair). This updating process can be applied after each flip, using the posterior distribution after $n$ flips as the prior for flip $n+1$. If it turns out that the $500$ heads was just an (astronomically) improbable event and the coin really is fair, the posterior distribution will eventually capture this (by a similar argument to the previous section). Intuition for choosing $\alpha$: to help understand the role of $\alpha$ in the Bayesian procedure, we can use the following argument. The mean of the posterior distribution is equivalent to the maximum likelihood estimate of $p$ if we were to augment the data with a series of $2\alpha$ "hypothetical flips", where $\alpha$ of these flips are heads and $\alpha$ are tails. Choosing $\alpha=1$ (as we did above) means the augmented data set is $501$ heads and $1$ tail. Choosing a larger value of $\alpha$ means that more evidence is required to change our beliefs. Still, for any finite choice of $\alpha$, these "hypothetical flips" eventually become irrelevant as $n\rightarrow\infty$.
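The conjugate update and posterior mean above are simple enough to verify directly (the helper name is mine, not from the answer; the mean of $\text{Beta}(a,b)$ is $a/(a+b)$):

```python
# Conjugate Beta-Binomial updating: prior Beta(alpha, alpha), data
# `heads` out of `flips`. Returns posterior parameters and mean E(p | y).
def posterior(alpha, heads, flips):
    a = alpha + heads            # posterior "heads" pseudo-count
    b = alpha + flips - heads    # posterior "tails" pseudo-count
    return a, b, a / (a + b)

# 500 heads in 500 flips under a uniform prior (alpha = 1):
print(posterior(1, 500, 500))     # (501, 1, 0.998...)

# A sceptical prior (alpha = 1000) pulls the mean back toward 1/2:
print(posterior(1000, 500, 500))  # mean = 1500/2500 = 0.6
```

Increasing `alpha` plays exactly the "hypothetical flips" role described above: the same 500 observed heads move the posterior mean far less.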
8,395
Expectation of 500 coin flips after 500 realizations
The law of large numbers doesn't state that some force will bring the results back to the mean. It states that as the number of trials increases, the fluctuations become less and less significant. For example, if I toss the coin 10 times and get 7 heads, those two extra heads seem pretty significant. If I toss the coin 1,000,000 times and get 500,002 heads, those two extra heads are almost completely insignificant. In your example, those 500 extra heads are going to be HUGELY significant in a trial of 1,000 tosses. However, if you continue those trials out to 10,000 tosses, those 500 heads only amount to a 5% difference. After 1,000,000 trials of 50/50 flips, those 500 extra heads account for only a 0.05% difference. Going all the way to 1,000,000,000 trials, that initial run of crazy luck amounts to only a 0.00005% difference. You can see that as the number of trials increases, the results get closer to the expected value.
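The dilution arithmetic is easy to tabulate (the function name is mine): the same 500-head surplus, expressed as a share of ever more total flips.

```python
# A fixed surplus of 500 heads as a fraction of the total number of flips.
def surplus_share(extra_heads, total_flips):
    return extra_heads / total_flips

for total in (1_000, 10_000, 1_000_000, 1_000_000_000):
    print(f"{total:>13,} flips: {surplus_share(500, total):.7%}")
```

This reproduces the 5%, 0.05% and 0.00005% figures quoted above: the surplus never shrinks, it just stops mattering.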
8,396
Expectation of 500 coin flips after 500 realizations
The notion of the one side being "due" is the "gambler's fallacy" in a nutshell. Boiled down, the gambler's fallacy is the false belief that the short run must mirror the long run. The coin does not know or care that you plan to stop flipping. For the coin, an infinity of flips remain, and against that infinity, a mere 500 is nothing at all. Keep in mind that, once an outcome has been observed, that outcome is no longer random. The model p(heads) = 0.5 does not govern the past observed values. Each of those values is "heads" with probability 1. As you state the problem, you persevere with the model p(heads) = 0.5. This model says that history is irrelevant. One might, at some point, consider an alternate model.
8,397
Expectation of 500 coin flips after 500 realizations
The straight answer, I suppose, is that you don't. The chance that a fair coin will get $500$ heads on $500$ flips is $1$ in $2^{500}\approx3\times10^{150}$. For reference, this is one in ten billion asaṃkhyeyas, a value used in Buddhist and Hindu theology to denote a number so large as to be incalculable; it is about the number of Planck volumes in a cubic parsec. I tried to come up with a snappy "marbles in the observable universe" comparison, but I can't. Nothing is small enough and the universe isn't big enough. In terms of probability, you are about $10^{82}$ times more likely to shuffle a deck of cards into perfect increasing order, aces low, clubs-diamonds-hearts-spades ($1$ in ${52!}$). At this point, you should be assuming you have been flipping a two-headed coin by mistake. Two-headed coins are not especially rare; they're a mildly popular novelty item. Estimates say some tens of thousands (let's assume twenty thousand) filter into circulation: an easy mistake with a well-made trick coin (and perfectly legal: trick coins are made by machining down two coins and sticking them together, but I wouldn't try arguing they're worth double). If there are 20,000 double-headers circulating amongst the roughly 3.82 billion US coins in circulation right now, the odds that you've picked one up by mistake are 1 in 191,000. If there's a 99% chance you'd notice the coin didn't have a reverse side, that's still a thousand asaṃkhyeya times more likely than this outcome. With one two-header amidst the $793,464,097,826$ coins produced by the US mint since 1890, and a one-in-a-trillion chance you'd let it slip by, that's still a vacuum catastrophe more likely than the alternative. I think that's what's messing you up: this scenario is so phenomenally improbable that you just can't accept it as conforming to normal probability. Of course, if you've magically verified that the coin really truly is fair, then the odds remain as unchanged as ever: 50/50.
I'm just inclined to suspect it isn't.
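Python's arbitrary-precision integers make the headline odds easy to check (a quick sketch):

```python
import math

# Odds against 500 heads in a row vs. a perfectly sorted 52-card shuffle.
streak_odds = 2 ** 500             # about 3.3e150
shuffle_odds = math.factorial(52)  # about 8.1e67

print(f"2^500 ~ {streak_odds:.3e}")
print(f"52!   ~ {shuffle_odds:.3e}")
print(f"the sorted shuffle is ~{streak_odds // shuffle_odds:.3e}x more likely")
```

The ratio comes out near $4\times10^{82}$, which is where the "$10^{82}$ times more likely" comparison comes from.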
8,398
Expectation of 500 coin flips after 500 realizations
There are some great answers here already, but I wanted to add another way of thinking about the problem that may be more intuitive than reviewing the math (to address the feelings described in the question). This reasoning holds for any particular arbitrary number of trials, but does not address the situation of arbitrarily more trials towards infinity. Those are handled elegantly and well in the already-posted, math-based answers. Each flip is completely independent, and so preceding flips don't have any influence on subsequent flips. But you aren't describing individual flips, because you are imposing information about previous trials. In this scenario, you are using 500 previous trials to inform your thinking about the result of the next flip. This doesn't work, as each flip is independent from all others. If you are imposing information about 500 previous flips on the problem, then you are interpreting the process as a collection of flips. In that case it may be more intuitive to consider trials not as individual flips but as sets of flips. As a simpler example, if we're flipping the coin three times we have eight possible outcomes: HHH HHT HTH THH HTT THT TTH TTT Summarized, those results are: Three Heads: 1 combination Two Heads, One Tails: 3 combinations One Heads, Two Tails: 3 combinations Three Tails: 1 combination So from the summary descriptions (where flip ordering doesn't matter) it is more likely that we'll see a 2:1 outcome, simply because there are six individual combinations that produce that result compared with the 3:0 possibilities, of which there are only two possible combinations. But each specific combination of three flips appears in the list once, and is just as likely as the others. The same logic holds for more trials, though the combinations become tedious to list. 
Luckily for us, asserting a string of 500 heads takes most of the combinations out of the picture: we require the first 500 of the 501 flips to show heads. From that starting point we now look at how many outcomes are possible for the remaining flip, and for that we have the base probability of a single coin flip offering two outcomes: 500 heads flips and then another heads flip, or 500 heads flips and then a tails flip. Every possible combination of flips in a set with a given number of individual trials is equally likely, but the summary of each set produces lots of overlapping results (there are a lot of combinations that produce 250 heads and 250 tails, since the order doesn't matter for the summary, but exactly one combination which would produce all heads across all individual trials). There are only two combinations which can describe the situation in the question: every single one of the first 500 flips must show heads (assumed in the problem, so the probability of that outcome is not important), and then after those initial 500 flips, you can get your 1st tails result or your 501st heads result. So that's my suggestion to help internalize the intuition behind this scenario: Each individual flip of a fair coin is memoryless and totally independent, so each result is equally likely on any particular flip. The number of possible combinations of flip results across 500 trials is large, but each specific combination appears on that list only once, and each possible combination of 500 flips is exactly as likely as any other. There are only two possible combinations of 501 flips which begin with 500 heads: one in which another heads result occurs, and one in which a tails result occurs. Each of those results is equally likely (being decided by the 501st flip alone).
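The three-flip enumeration can be reproduced mechanically with the standard library, confirming the 1-3-3-1 summary counts while showing that each specific sequence appears exactly once:

```python
from collections import Counter
from itertools import product

# All 2^3 equally likely sequences of three flips, summarized by heads count.
seqs = ["".join(s) for s in product("HT", repeat=3)]
counts = Counter(seq.count("H") for seq in seqs)
print(seqs)                    # each specific sequence appears exactly once
print(sorted(counts.items()))  # [(0, 1), (1, 3), (2, 3), (3, 1)]
```

Conditioning on "the first 500 flips were heads" is the same move as conditioning here on "the first two flips were heads": only HHH and HHT remain, and they are equally likely.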
Expectation of 500 coin flips after 500 realizations
There are some great answers here already, but I wanted to add another way of thinking about the problem that may be more intuitive than reviewing the math (to address the feelings described in the qu
Expectation of 500 coin flips after 500 realizations

There are some great answers here already, but I wanted to add another way of thinking about the problem that may be more intuitive than reviewing the math (to address the feelings described in the question). This reasoning holds for any particular arbitrary number of trials, but does not address the situation of arbitrarily more trials towards infinity. Those cases are handled elegantly and well in the already-posted, math-based answers.

Each flip is completely independent, so preceding flips have no influence on subsequent flips. But you aren't describing individual flips, because you are imposing information about previous trials: you are using 500 previous trials to inform your thinking about the result of the next flip. This doesn't work, as each flip is independent of all others. If you are imposing information about 500 previous flips on the problem, then you are interpreting the process as a collection of flips. In that case it may be more intuitive to consider trials not as individual flips but as sets of flips.

As a simpler example, if we're flipping the coin three times we have eight possible outcomes:

HHH HHT HTH THH HTT THT TTH TTT

Summarized, those results are:

- Three heads: 1 combination
- Two heads, one tails: 3 combinations
- One heads, two tails: 3 combinations
- Three tails: 1 combination

So from the summary descriptions (where flip ordering doesn't matter) it is more likely that we'll see a 2:1 outcome, simply because there are six individual combinations that produce that result compared with the 3:0 possibilities, of which there are only two possible combinations. But each specific combination of three flips appears in the list exactly once, and is just as likely as the others. The same logic holds for more trials, though the combinations become tedious to list.

Luckily for us, asserting a string of 500 heads results takes most of the combinations out of the picture: 500 of the 501 flips must show heads, starting with the first 500. From that starting point we now look at how many outcomes are possible for the remaining flip, and for that we have the base probability of a single coin flip offering two outcomes:

- 500 heads flips, and then another heads flip
- 500 heads flips, and then a tails flip

Every possible combination of flips in a set with a given number of individual trials is equally likely, but the summary of each set produces lots of overlapping results (there are a lot of combinations that produce 250 heads and 250 tails, since the order doesn't matter for the summary, but exactly one combination which would produce all heads across all individual trials). There are only two combinations which can describe the situation in the question: every single one of the first 500 flips must show heads (assumed in the problem, so the probability of that outcome is not important), and then after those initial 500 flips you can get your 1st tails result or your 501st heads result.

So that's my suggestion to help internalize the intuition behind this scenario:

- Each individual flip of a fair coin is memoryless and totally independent, so each result is equally likely on any particular flip.
- The number of possible combinations of flip results across 500 trials is large, but each specific combination appears on that list exactly once.
- Each possible combination of 500 flips is exactly as likely as any other (each has a single entry in the possible-outcome list).
- There are only two possible combinations of 501 flips which begin with 500 flips showing heads: one in which another heads result occurs, and one in which a tails result occurs. Each of those results is equally likely (being decided by the 501st flip alone).
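The counting argument above can be checked by brute force. A minimal sketch in Python (using three flips so the full outcome list fits on screen, with two heads playing the role of the 500):

```python
# Enumerate all 2**3 equally likely sequences of three fair-coin flips.
# Each specific sequence has probability 1/8, but the summary counts
# (number of heads, ignoring order) overlap heavily.
from itertools import product
from collections import Counter

sequences = list(product("HT", repeat=3))
print(len(sequences))  # 8 equally likely sequences

summary = Counter(seq.count("H") for seq in sequences)
for heads, ways in sorted(summary.items(), reverse=True):
    print(f"{heads} heads: {ways} combination(s), probability {ways}/8")

# The conditional step: among sequences whose first two flips are heads,
# exactly two remain -- HHH and HHT -- so the final flip is still 50/50.
given_hh = [seq for seq in sequences if seq[:2] == ("H", "H")]
print(given_hh)  # [('H', 'H', 'H'), ('H', 'H', 'T')]
```

The same enumeration works for 501 flips in principle; conditioning on the first 500 being heads always leaves exactly two equally likely sequences.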
Expectation of 500 coin flips after 500 realizations
The key thing to remember is that the throws are IID. A realization can only matter when that dependence is included in the design of your model. One example is a Markov model; in fact, many models that use a Bayesian framework use each realization to update the probability. This question is a great example of what I mentioned earlier: the reason realizations do not apply in your case is that no dependence on them is included in the design of your model.
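The distinction can be made concrete with a small sketch (the transition probability below is made up purely for illustration): for an IID coin, the conditional probability of heads given the history never changes, while for a two-state Markov chain it depends on the previous outcome.

```python
# Contrast an IID fair coin with a "sticky" two-state Markov chain.
# For the IID coin, P(heads | history) is constant; for the Markov
# chain, the realization enters the model by design.

def p_heads_iid(history):
    """Fair coin: the history is irrelevant."""
    return 0.5

def p_heads_markov(history, p_stay=0.9):
    """Sticky chain: repeat the previous outcome with probability p_stay."""
    if not history:
        return 0.5  # no information yet
    return p_stay if history[-1] == "H" else 1 - p_stay

history = ["H"] * 500           # 500 observed heads in a row
print(p_heads_iid(history))     # 0.5 -- realizations don't enter the model
print(p_heads_markov(history))  # 0.9 -- realizations do enter the model
```

In the question's setup the coin is the IID case, so the 500 observed heads leave the next-flip probability at 0.5.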
Expectation of 500 coin flips after 500 realizations
Intuition can often lead us astray in the realm of infinity, because infinity is not experienced in the real world. A good rule of thumb to help you think about it is that every finite number looks like zero to infinity. A million heads in a row still looks like zero to infinity. If you were to "flip the coin an infinite number of times"--which you can't do, but what we really mean is "keep flipping"--then a run of a million heads eventually becomes a near certainty. But infinity is a tricky concept, and we have to work hard to make sure we know what we're saying. For example, what we mean by "it becomes a near certainty" is: if you give me a percentage, say 99.99, I will then calculate how many coin flips X you must do to have a 99.99% probability of seeing a run of a million heads in there. You define what "near" means--if you want it to be 99.999999%, fine, I'll just recalculate and give you a bigger number Y of coin flips to do. But even the Y flips won't guarantee the million-head run. All I am guaranteeing is that if you do a bunch of runs of Y flips, then you can expect 99.999999% of them to have a million-head run (and the more you do, the closer to 99.999999% we can expect the outcome to be). In the universe of possibilities, starting a run with any number of heads is a possibility. What the law of large numbers is saying is that if you go long enough, that particular run is more and more immaterial, because there are so many other experiments being done. Yes, you might get a billion heads in a row. But if you give me a percentage and a target--say, "I want to be 99.8 percent sure that my head-tail ratio is between .499 and .501, and I know I start out with a billion heads"--I can tell you a number Z of flips that will give you a 99.8% chance of achieving that. Infinity is not a number. It's a concept beyond number, and when we talk about it, we have to be really careful that we know what we really mean, or we will end up confusing ourselves.

The law of large numbers talks about what happens when N "goes to infinity" (actually toward infinity, you don't "get there"), and so it's not surprising that reasoning about what it is really telling you can lead to some pitfalls. Everything we experience is finite, and, in the real world, if an accountant were looking over your shoulder you would be getting more and more nervous about how many tails you're going to need to "balance this run out". Infinity has the time for that, even if the full span of the existence of humanity might not.
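The point that an initial run is diluted rather than "balanced out" can be made concrete with a quick expected-value computation (a sketch; the numbers are only illustrative). If you start with k heads and then make n more fair flips, the expected number of additional heads is n/2, so the expected heads fraction is (k + n/2)/(k + n), which drifts back toward 1/2 no matter how large k is:

```python
# Start with k heads "for free", then flip a fair coin n more times.
# Expected heads after the n further flips: k + n/2, so the expected
# heads fraction is (k + n/2) / (k + n). The initial run is never
# "balanced out" by extra tails -- it is simply swamped by new flips.

def expected_heads_fraction(k, n):
    """Expected fraction of heads after k initial heads plus n fair flips."""
    return (k + n / 2) / (k + n)

k = 10**9  # a billion heads in a row to start
for n in (10**9, 10**12, 10**15):
    # fraction drifts back toward 0.5 as n grows, for any fixed k
    print(f"n = {n:.0e}: expected fraction = {expected_heads_fraction(k, n):.6f}")
```

With n equal to k the expected fraction is still 0.75; a thousand times more flips brings it to roughly 0.5005, and a million times more to about 0.5000005.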