Calculating I-squared (in meta-analysis, given tau-squared)
In the random-effects model the method-of-moments (DerSimonian–Laird) estimator of the between-study variance for $n$ studies is given by $$ \hat{\tau}^2=\frac{Q-(n-1)}{\sum{w_i}-\frac{\sum{w_i^2}}{\sum{w_i}}}$$ where Cochran's $Q=\sum{w_i}\left(y_i-\frac{\sum{w_i y_i}}{\sum{w_i}}\right)^2$ (with $y_i$ & $w_i$ as the effect size & the reciprocal of the within-study variance respectively, estimated from the $i$th study). The total proportion of variance owing to heterogeneity is $$I^2=\frac{Q-(n-1)}{Q}$$ So from $\hat{\tau}^2$ alone you can't calculate $I^2$; you also need the number of studies, the sum of the weights, & the sum of the squared weights.
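As a concrete check of the algebra, here is a small Python sketch (the study data are hypothetical toy numbers; the answer itself names no data) that computes Cochran's $Q$, $\hat{\tau}^2$, and $I^2$ from effect sizes and within-study variances:

```python
import numpy as np

def dersimonian_laird(y, v):
    """Method-of-moments estimates from effect sizes y and within-study variances v."""
    y = np.asarray(y, dtype=float)
    w = 1.0 / np.asarray(v, dtype=float)        # fixed-effect weights
    n = len(y)
    mu = np.sum(w * y) / np.sum(w)              # inverse-variance pooled mean
    Q = np.sum(w * (y - mu) ** 2)               # Cochran's Q
    denom = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (n - 1)) / denom)      # truncated at zero, as usual
    I2 = max(0.0, (Q - (n - 1)) / Q)
    return Q, tau2, I2

# three hypothetical studies: effect sizes and within-study variances
Q, tau2, I2 = dersimonian_laird([0.3, 0.5, 0.9], [0.04, 0.05, 0.06])
```

Note that $I^2$ is computed from $Q$ and $n$, not from $\hat{\tau}^2$ alone, which is exactly the point of the answer.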
Complete sufficient statistic
(1) Show that for a sample of size $n$, $T=\left(X_{(1)}, X_{(n)}\right)$, where $X_{(1)}$ is the sample minimum & $X_{(n)}$ the sample maximum, is minimal sufficient. (2) Find the sampling distribution of the range $R=X_{(n)}-X_{(1)}$ & hence its expectation $\newcommand{\E}{\operatorname{E}}\E R$. It will be a function of $n$ only, not of $\theta$ (which is the important thing, & which you can perhaps show without specifying it exactly). (3) Then simply let $g(T)=R-\E R$. It's not a function of $\theta$, & its expectation is zero; yet it's not certainly equal to zero: therefore $T$ is not complete. As $T$ is minimal sufficient, it follows from Bahadur's theorem that no sufficient statistic is complete.
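The answer deliberately leaves the family unspecified; assuming the classic $\mathrm{Uniform}(\theta, \theta+1)$ case purely for illustration, a quick Monte Carlo sketch in Python confirms that $\operatorname{E}R$ does not depend on $\theta$ (for that family, $\operatorname{E}R=(n-1)/(n+1)$):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_range(theta, n, reps=200_000):
    """Monte Carlo estimate of E[R] for n draws from Uniform(theta, theta+1)."""
    x = rng.uniform(theta, theta + 1.0, size=(reps, n))
    return (x.max(axis=1) - x.min(axis=1)).mean()

# for this family E[R] = (n-1)/(n+1), whatever the value of theta
n = 5
estimates = [mean_range(theta, n) for theta in (-3.0, 0.0, 10.0)]
```

All three estimates agree to Monte Carlo error, which is the "function of $n$ only" property step (2) relies on.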
Help me fit this non-linear multiple regression that has defied all previous efforts
Very nice work. I think this situation is a candidate for the proportional odds semiparametric ordinal logistic model. The lrm function in the R rms package will fit the model. For now you may want to round $Y$ to have only 100-200 levels. Soon a new version of rms will be released with a new function orm that efficiently allows for thousands of intercepts in the model, i.e., allows $Y$ to be fully continuous [update: this appeared in 2014]. The proportional odds model $\beta$s are invariant to how $Y$ is transformed. That means quantiles are invariant also. When you want a predicted mean, $Y$ is assumed to be on the proper interval scale.
Help me fit this non-linear multiple regression that has defied all previous efforts
I think re-working the dependent variable and model can be fruitful here. Looking at your residuals from the lm(), it seems that the major issue is with the players with a high career WAR (which you defined as the sum of all WAR). Notice that your highest predicted (scaled) WAR is 0.15 out of a maximum of 1! I think there are two things about this dependent variable that are exacerbating the issue: (1) players that simply play for longer have more time to collect WAR, and (2) good players will tend to be kept around longer, and will thus have the opportunity to have that longer time to collect WAR. However, in the context of prediction, including time played explicitly as a control (in any manner, whether as a weight or as the denominator in calculating average career WAR) is counterproductive (I also suspect its effect would be non-linear). Thus I suggest modeling time somewhat less explicitly in a mixed model using lme4 or nlme. Your dependent variable would be seasonal WAR, and you would have a different number $j=m_i$ of seasons per player $i$. The model would have player as a random effect (a per-player random intercept $u_i$), and would be along the lines of: $sWAR_{ij} = \alpha + u_i + \text{<other stuff>} + \varepsilon_{ij}$. With lme4, this would look something like lmer(sWAR ~ <other stuff> + (1|Player), data=mydata). You still may need to transform $sWAR$, but I think this will help with that feedback loop.
Implications and interpretation of conditional independence assumption
Regarding the first part of your question: I believe it is more correct to say that $D$ is an indicator for assignment of treatment. Thus the assumption is that the choice of whether an individual gets treated is not correlated with the potential outcomes. The problem is possible selection into treatment. It may be that treatment is assigned to (or there is self-selection by) those who are going to benefit most from it. For instance, suppose there is some training that improves academic achievement and you want to measure its impact. However, students are not assigned randomly to this course; it is chosen mostly by those who have excellent computer literacy. In this case, if you estimate the treatment effect of this particular training, you compare the outcomes of participants (mostly computer literate) to non-participants (mostly not computer literate). Thus the result would be biased, as the outcome partly depends on the selection of individuals into treatment. Selection bias should be evident if assignment to treatment correlates with the outcome. If there is such non-random assignment to treatment and you know that the assignment depends only on the characteristic $X$ (in this case computer literacy), you assume that after controlling for $X$ the treated and non-treated groups are equivalent in their remaining characteristics, except that some of them got treated and others did not. So the difference between the outcomes of the treated and non-treated can be attributed only to the fact of being treated, not to the individuals in the groups having been different from the beginning. Conditional on $X$ you assume that assignment to treatment is random, so it cannot correlate with the potential outcomes.
Implications and interpretation of conditional independence assumption
First question: "Can someone explain this better? So give a real interpretation of this and not just the mathematical formulation verbally?" It means that if we control for $X$, the treatment assignment is independent of the potential outcomes. For example, in the study of class size on learning outcomes, the STAR program randomly assigned students in particular schools to small or middle-sized classrooms. In this case, if we control for $X$, which is belonging to the schools and grades of the experiment, then the assignment of whether someone is in a small or middle-sized classroom has nothing to do with the future outcome of each class. The future outcome may be confusing, so suppose assignment were not independent: then what can happen is that parents who care about their children may push them to study harder and also pull strings to get them into smaller classes, and the reverse for kids with less attentive parents. So $Y(1)-Y(0)$ is not just the difference due to class assignment, but also the effect of attentive parents. When the CIA is satisfied and you use regression, the coefficient of $W$ is the causal effect of interest. (I guess I also answered question II, oops.)
Borrowing strength
I am not certain this is the formal definition, nor the unique one, but the term was coined by John W. Tukey and is often used in the context of empirical Bayes and, indeed, hierarchical models. It refers to the idea that by assuming a distribution over your parameters of interest, information on one parameter gives you information on the others. Thus each estimate "borrows strength" from the others, via their assumed distribution.
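As a minimal sketch of the idea in Python (assuming, for illustration only, known between-group variance $\tau^2$ and within-group variance $\sigma^2$, which in practice would be estimated from the data):

```python
import numpy as np

def shrink_means(groups, tau2, sigma2):
    """Shrink each group mean toward the grand mean; larger groups are
    shrunk less. tau2 is the between-group variance, sigma2 the
    within-group variance (both taken as known for this sketch)."""
    grand = np.mean(np.concatenate([np.asarray(g, float) for g in groups]))
    shrunk = []
    for g in groups:
        n = len(g)
        b = (sigma2 / n) / (sigma2 / n + tau2)   # shrinkage weight in [0, 1]
        shrunk.append(b * grand + (1 - b) * np.mean(g))
    return shrunk
```

Each group's estimate moves from its own mean toward the grand mean, i.e. it borrows strength from the other groups through the shared distribution.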
Borrowing strength
Here is an example. In the UK we are constantly told that violent crime rates are falling. We also hear reports of police adjusting data to make that case. The public are justifiably sceptical. However, data from hospital A&Es (Emergency Rooms) also shows a fall in hospital attendance because of violent crime. The crime data has borrowed strength from the hospital data to make it more credible. This is particularly compelling as the medical staff have no interest in the figures going one way or another. This is a very important topic that gets too little attention. It has for me been a primarily judgmental thing (like exchangeability) rather than something that can usefully be modelled analytically.
Under which assumptions does the ordinary least squares method give efficient and unbiased estimators?
The Gauss-Markov theorem tells us that in a regression model where (1) the expected value of the error terms is zero, $E(\epsilon_{i}) = 0$, (2) the variance of the error terms is constant and finite, $\sigma^{2}(\epsilon_{i}) = \sigma^{2} < \infty$, and (3) $\epsilon_{i}$ and $\epsilon_{j}$ are uncorrelated for all $i \neq j$, the least squares estimators $b_{0}$ and $b_{1}$ are unbiased and have minimum variance among all unbiased linear estimators. Note that there may be biased estimators with even lower variance. A proof that the least squares estimator is BLUE under the assumptions of the Gauss-Markov theorem can be found at http://economictheoryblog.com/2015/02/26/markov_theorem/
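A small simulation can illustrate the unbiasedness claim: under i.i.d. mean-zero errors, the OLS slope averaged over many replicated error draws recovers the true coefficient. This Python sketch uses made-up parameter values:

```python
import numpy as np

rng = np.random.default_rng(1)
n, beta0, beta1 = 50, 2.0, 3.0
x = rng.uniform(0.0, 10.0, n)                    # fixed design across replications

slopes = []
for _ in range(5000):
    y = beta0 + beta1 * x + rng.normal(0.0, 1.0, n)  # i.i.d. errors, mean zero
    b1 = np.cov(x, y, bias=True)[0, 1] / np.var(x)   # OLS slope estimate
    slopes.append(b1)

mean_b1 = np.mean(slopes)   # averages out close to the true slope beta1 = 3
```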
Equivalence of Mann-Whitney U-test and t-test on ranks
"Does that mean they just test the same hypothesis/are useful in the same situations, or are they supposed to give the exact same p-values?" It means: (i) the test statistics will be monotonic transformations of each other; (ii) they give the same p-values, if you work out the p-values correctly. Your problem is that the t-test on ranks doesn't have the distribution you use when you use t-tables (though in large samples they'll be close). You need to calculate its true distribution to correctly calculate the p-value. This matters most for small samples ... but those are also the ones where you can calculate the actual distribution most easily.
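The monotonic relation in (i) can be seen directly: the Mann-Whitney $U$ is a fixed shift of the rank sum on which the rank t-test is based. A small numpy sketch (toy simulated samples, continuous so there are no ties):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, 8)
y = rng.normal(0.5, 1.0, 10)

# ranks of the pooled sample (1..N, no ties since data are continuous)
pooled = np.concatenate([x, y])
ranks = pooled.argsort().argsort() + 1.0
r_x = ranks[:len(x)]                         # ranks belonging to sample x

# Mann-Whitney U from the rank sum of x: U = R1 - n1(n1+1)/2
U = r_x.sum() - len(x) * (len(x) + 1) / 2

# U also counts the pairs (x_i, y_j) with x_i > y_j -- check directly
U_direct = sum(xi > yj for xi in x for yj in y)
```

Since $U$ is an affine function of the rank sum, any test statistic built on the ranks orders samples identically; only the reference distribution used for the p-value differs.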
Equivalence of Mann-Whitney U-test and t-test on ranks
N.B. Not really an answer per se... Jonas Kristoffer Lindeløv wrote a recent blog post that addresses relationships between linear models and group tests (such as the t-test, Mann-Whitney, Wilcoxon, Kruskal-Wallis, ANOVA, etc.). He also created a somewhat limited simulation to assess the differences between the t-test and MW directly. It would be nice to have either 1) an analytic calculation or 2) a strong set of simulations to establish some good rules of thumb for when we can use a linear model on ranks as opposed to a nonparametric test, as the former is sometimes more convenient.
Detecting clusters in a binary sequence
I would avoid calling them "clusters". With this terminology you end up getting distracted by multidimensional techniques from data mining all the time. Your problem is a much simpler, one-dimensional setting. And simpler still: you don't even have coordinates, but an array of zeros and ones. There will never be a one-size-fits-all solution for your problem, because one user might want to read very high-resolution "barcodes" while another has a lot of noise. So in the end you will need to have one parameter. You have a number of choices: absolute gap size, relative gap size, kernel bandwidth, etc. A very simple "kernel-based" approach would be to map each pixel to the number of pixels set within −10...+10 of it. That window covers 21 cells, so the value will be between 0 and 21. Now look for a local minimum. Increase the window size if it starts splitting runs that you did not yet want to split.
Detecting clusters in a binary sequence
Reference 1, on pages 49-55, has a nice section on kernel-based methods that might be useful here. If I were doing it, I would look at some weighted sum of the actual values and their first derivative, because that might be a better indicator of "information". Reference: Christopher M. Bishop, "Neural Networks for Pattern Recognition" (1995), http://amzn.com/0198538642
Detecting clusters in a binary sequence
The problem has some similarity with image processing. You have a binary image with a height of one pixel and want to achieve some sort of segmentation. The nature of the input image suggests a morphological filter to smooth the regions, e.g. closing. You'd need to choose the structuring element, which thereby determines the "linkage" of the clusters. In the end this is pretty similar to your approach. You could also smooth the image using convolution filters, e.g. a box blur or a Gaussian kernel, and apply a chosen threshold to re-binarize it. If you can treat every 1 as a point, with its position in the sequence as a coordinate, and can make up some distance metric, you could use pretty much every standard clustering algorithm there is. For example, you could use hierarchical clustering (choose a linkage criterion and a threshold), or k-means, or EM with a Gaussian mixture model (choose the number of clusters you are looking for). But I don't think you will get away without predefining the sensitivity of the algorithm in one way or another.
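In the one-dimensional case, the hierarchical single-linkage variant with a distance threshold reduces to splitting wherever the gap between consecutive ones exceeds that threshold, which can be sketched without any clustering library (the `max_gap` parameter is the sensitivity that must still be predefined):

```python
import numpy as np

def cluster_ones(bits, max_gap=3):
    """Group the positions of ones into clusters: a new cluster starts
    whenever the gap to the previous one exceeds max_gap. This is
    single linkage with a distance threshold, trivial in 1-D."""
    pos = np.flatnonzero(bits).tolist()
    if not pos:
        return []
    clusters, current = [], [pos[0]]
    for p in pos[1:]:
        if p - current[-1] <= max_gap:
            current.append(p)
        else:
            clusters.append(current)
            current = [p]
    clusters.append(current)
    return clusters

bits = [1, 1, 0, 1, 0, 0, 0, 0, 1, 1, 1]
# gaps of up to 3 are bridged; the 4-zero gap splits the sequence in two
```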
R as an alternative to SAS for large data
I have done work on very large data sets in R and have not had problems. There are several approaches that work, but my basic paradigm is that I find ways to process the data "sequentially". Obviously SAS has the same fundamental memory constraints if you're using it on the same machine; using R is just a little more DIY. In every case that I have ever encountered, I'm either doing analysis on some kind of summary of the data, or I'm doing analysis on chunks of the data and then summarizing the results. Either way, that's easy to accomplish in R. It's pretty easy to create summaries if you have your data structured in some way (really in any way). Hadoop is a leading tool for creating summaries, but it's easy to do batch processing on R data files, and if your data will fit on your local storage device, it's also faster to batch process it that way (in terms of both processing time and development time). It's also pretty easy to batch your analysis by chunk using the same thought process. If you're really dying to do a linear model directly on a gigantic data set, then I think bigmemory is your answer, as suggested by Stéphane Laurent. I don't really think there is one "answer" to "how do you deal with memory constraints" or "move to a new platform", but this is my long-winded two cents.
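The "summarize sequentially" idea is the same in any language; here is a hypothetical Python sketch using Welford's one-pass algorithm, so each chunk is seen once and discarded rather than held in memory:

```python
def streaming_stats(chunks):
    # Welford's online algorithm: one pass, constant memory,
    # so the full data set never needs to fit in RAM
    n, mean, m2 = 0, 0.0, 0.0
    for chunk in chunks:          # each chunk could come from a file, a DB cursor, ...
        for x in chunk:
            n += 1
            delta = x - mean
            mean += delta / n
            m2 += delta * (x - mean)
    return n, mean, m2 / (n - 1)  # count, mean, sample variance

# three chunks standing in for pieces of a much larger data set
chunks = [[1.0, 2.0, 3.0], [4.0, 5.0], [6.0]]
n, mean, var = streaming_stats(chunks)
print(n, mean, var)  # 6 3.5 3.5
```

The same pattern covers the "analyze chunks, then summarize the results" case: run the model per chunk, keep only the per-chunk estimates, and combine those at the end.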
32,316
R as an alternative to SAS for large data
I do not have hands-on experience with the Revolution Analytics part, but there is a blog post on this: http://www.r-bloggers.com/allstate-compares-sas-hadoop-and-r-for-big-data-insurance-models/ It uses Hadoop (distributed computing) to solve this memory problem.
32,317
Understanding the split plot
Split plots are often used out of necessity, but there can be statistical advantages in terms of the precision of your contrasts (or also disadvantages). Here is my rudimentary understanding of the intuition for using a split plot. First, let me establish that two common terms in split plot design are the "whole plot factor" and the "sub-plot factor." In an agricultural study the whole plot factors are applied at a larger spatial scale, say entire fields, which represent different levels of some treatment such as drainage efficiency. The sub-plot factors are spatially nested within the whole plot factor. Subplot factors are often something that can be applied at a smaller spatial scale, such as crop type. Aside from reasons of practicality (which may be the case in the example I wrote above), a split plot may be efficient (or inefficient!). Federer and King 2007 suggest that one reason to use a split plot is that in comparison to a 2-way ANOVA you generally have increased precision to detect contrasts between the sub-plot factors. Also, interaction effects may be easier to detect. In contrast, precision to detect contrasts between the whole plot factor generally decreases. These differences are explained by the fact that two separate residual error terms are used for hypothesis testing. The whole plot error term is calculated by first averaging the subplots within each whole plot. Split plot is also sometimes used as a split plot in time, which as I understand it is similar to a repeated measures design, often used on subjects. I'm not sure what the advantage one way or the other is here. The terminology maps as follows: split-plot design = repeated-measures design; whole plot = subject; whole plot factor = between-subject factor; split-plot factor = within-subject factor = repeated-measures factor. A very comprehensive reference on split plot theory and implementation is: Federer WT & King F (2007) Variations on split plot and split block experiment designs (John Wiley & Sons).
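The asymmetry in precision can be seen in a small simulation. This Python sketch (variance components chosen arbitrarily for illustration) generates split-plot data with a whole-plot random effect; that effect cancels out of sub-plot contrasts, which are taken within whole plots, but inflates the variance of whole-plot contrasts:

```python
import random

random.seed(1)

def one_experiment(sigma_wp=1.0, sigma_e=0.3, n_wholeplots=4):
    # 2-level whole-plot factor A, 2-level sub-plot factor B,
    # n_wholeplots whole plots per A level, both B levels inside each whole plot
    wp_means = {0: [], 1: []}
    within_diffs = []
    for a in (0, 1):
        for _ in range(n_wholeplots):
            u = random.gauss(0, sigma_wp)  # whole-plot error, shared by its subplots
            y = {b: u + random.gauss(0, sigma_e) for b in (0, 1)}
            wp_means[a].append((y[0] + y[1]) / 2)  # whole-plot mean
            within_diffs.append(y[1] - y[0])       # within-plot B contrast
    a_hat = sum(wp_means[1]) / n_wholeplots - sum(wp_means[0]) / n_wholeplots
    b_hat = sum(within_diffs) / len(within_diffs)
    return a_hat, b_hat

# Monte-Carlo variances of the two estimated contrasts (true effects are 0)
reps = [one_experiment() for _ in range(2000)]
var_a = sum(a * a for a, _ in reps) / len(reps)
var_b = sum(b * b for _, b in reps) / len(reps)
print(var_a > var_b)  # the whole-plot contrast is much noisier
```

With these settings the whole-plot contrast variance is roughly twenty times that of the sub-plot contrast, which is the "two error terms" point in numerical form.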
32,318
Understanding the split plot
A good resource would be Mead's "The design of experiments" (1988), chapter 14. I think there is a new version here. But you don't really need the new version to understand split-plot, and I am assuming you have access to these books at your local library. I can give you my 2-cent-worth. In the ideal world, if you have 2 treatments, you would want to do a factorial design. It is probably the most efficient design you can use. However, there are often practical limitations. Perhaps the 2 treatments have to be applied to different levels of the unit (1 larger, 1 smaller); then you will have to contend with a split-plot. So my view of split-plot is that it arises out of practical limitation. Linking to the notion of restricted randomization: yes, split plot is a type of restricted randomization. The treatment that is applied to the main unit (the 'larger' plot) is randomized in a restricted sense. But the restriction is posed by practical limitation rather than statistical ideal.
32,319
What was Student's (Gosset's) contribution in formulating the t-test?
E. L. Lehmann addressed this question in an introduction to a reprint of Gosset's 1908 article in Breakthroughs in Statistics, Volume II--Methodology and Distribution (Samuel Kotz & Norman L. Johnson, eds., 1992). Lehmann first describes the state of the art in Gosset's time: it amounted to a "z test" where the estimated standard deviation was treated as if it were a constant. Then he discusses Gosset's contribution: However, if the sample size $n$ is small, $S^2$ will be subject to considerable variation. It was the effect of this variation that concerned Student, the pseudonym of W. S. Gosset... . He pointed out that if the form of the distribution of the $X$'s is known, this variation can be taken into account, since for any given $n$ the distribution of $t$ is then determined exactly. He proposed to work out this distribution for the case in which the $X$'s are normal. This in fact is what Gosset did, albeit without mathematical rigor: he derived some properties of the distribution of $t$ for the normal case, matched them to properties of known distributions, and correctly guessed its distribution--acknowledging that this was less than rigorous. To support his guess, he conducted a Monte-Carlo simulation using samples of four from a dataset. Gosset wrote pseudonymously because his employer (the Guinness brewery) apparently felt that this improved understanding of small-sample variation was a bit of an advantage in the business: it would have led to improved quality control procedures.
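Gosset's experiment is trivial to reproduce today (a Python sketch; the sample size of four matches his simulation, while the normal draws and counts are purely illustrative). It shows why treating $S$ as a constant fails for small $n$: the $t$ statistic has much heavier tails than a normal.

```python
import math, random

random.seed(0)

def t_stat(sample):
    # one-sample t statistic for a true mean of 0
    n = len(sample)
    mean = sum(sample) / n
    s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)
    return mean / math.sqrt(s2 / n)

# many samples of size 4 from a standard normal, as in Student (1908)
ts = [t_stat([random.gauss(0, 1) for _ in range(4)]) for _ in range(20000)]

# With 3 degrees of freedom, |t| exceeds 3.182 about 5% of the time;
# the pre-Student normal approximation would use 1.96 and reject far too often.
frac = sum(abs(t) > 3.182 for t in ts) / len(ts)
print(round(frac, 3))
```

The simulated exceedance fraction lands near 0.05, matching the $t_3$ critical value rather than the normal one, which is exactly the small-sample correction Gosset supplied.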
32,320
When do you stratify an analysis versus including an interaction term?
The stratified approach does not provide a test of the statistical significance of the difference between the stratified parameter estimates. A more serious statistical shortcoming arises when a model has numerous covariates in addition to the modifier. Stratification unnecessarily attenuates multicollinearity among the covariates because it allows for no statistical interrelationships between data items segregated into the stratified models. The stratified models would provide slightly different and less satisfactory results than a model that includes all your data and tests for modification using an interaction term. Thus, interpretations of measures of association for stratified models are also subtly different: statistical inferences can be generalized only to the population from which the sample stratum was drawn and not to the entire original sample. Finally, the most serious problem is that if the sampling design did not account for that same stratification, you might end up with a very different associated risk distribution in each stratum, invalidating the comparison of estimates across strata. Use interactions to test for modification, even if they seem difficult to understand; any other analysis is INCORRECT.
32,321
When do you stratify an analysis versus including an interaction term?
One practical difference is that stratified analysis is usually easier for non-statisticians to understand, but analysis with interactions allows more comparisons to be done - in particular, it gives a parameter estimate, p value and confidence interval for the difference.
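The two analyses are tightly linked: in a fully interacted linear model, the interaction coefficient is exactly the difference between the two stratum-specific slopes, which is the parameter the stratified analysis never tests. A numpy sketch with made-up data:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
g = np.repeat([0, 1], n)                   # stratum indicator
x = rng.normal(size=2 * n)
y = 1.0 + 0.5 * x + g * (0.3 + 0.7 * x) + rng.normal(scale=0.2, size=2 * n)

def ols_slope(x, y):
    # slope from a simple intercept + slope least-squares fit
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

# stratified analysis: one slope per stratum
slope0 = ols_slope(x[g == 0], y[g == 0])
slope1 = ols_slope(x[g == 1], y[g == 1])

# single model with interaction: y ~ 1 + x + g + x:g
X = np.column_stack([np.ones_like(x), x, g, x * g])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

print(np.isclose(beta[3], slope1 - slope0))  # True
```

The pooled model recovers the same slopes, but additionally yields a standard error, p value and confidence interval for beta[3], the slope difference itself.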
32,322
Comparison of entropy and distribution of bytes in compressed/encrypted data
This question still lacks essential information, but I think I can make some intelligent guesses: The entropy of a discrete distribution $\mathbb{p} = (p_0, p_1, \ldots, p_{255})$ is defined as $$H(\mathbb{p}) = -\sum_{i=0}^{255} p_i \log_2{p_i}.$$ Because $-\log$ is a concave function, the entropy is maximized when all $p_i$ are equal. Since they determine a probability distribution (they sum to unity), this occurs when $p_i = 2^{-8}$ for each $i$, whence the maximum entropy is $$H_0 = -\sum_{i=0}^{255} 2^{-8} \log_2{(2^{-8})} = \sum_{i=0}^{255} 2^{-8}\times 8 = 8.$$ The entropies of $7.9961532$ bits/byte (i.e., using binary logarithms) and $7.9998857$ are extremely close both to each other and to the theoretical limit of $H_0 = 8$. How close? Expanding $H(\mathbb{p})$ in a Taylor series around the maximum shows that the deviation between $H_0$ and any entropy $H(\mathbb{p})$ equals $$H_0 - H(\mathbb{p}) = \sum_i \frac{(p_i - 2^{-8})^2}{2 \cdot 2^{-8} \log(2)} + O(p_i - 2^{-8})^3.$$ Using this formula we can deduce that an entropy of $7.9961532$, which is a discrepancy of $0.0038468$, is produced by a root-mean-square deviation of just $0.00002099$ between the $p_i$ and the perfectly uniform distribution of $2^{-8}$. This represents an average relative deviation of only $0.5$%. A similar calculation for an entropy of $7.9998857$ corresponds to an RMS deviation in $p_i$ of just 0.09%. (In a figure like the bottom one in the question, whose height spans about $1000$ pixels, if we assume the heights of the bars represent the $p_i$, then a $0.09$% RMS variation corresponds to changes of just one pixel above or below the mean height, and almost always less than three pixels. That's just what it looks like. A $0.5$% RMS, on the other hand, would be associated with variations of about $6$ pixels on average, but rarely exceeding $15$ pixels or so. That is not what the upper figure looks like, with its obvious variations of $100$ or more pixels. 
I am therefore guessing that these figures are not directly comparable to each other.) In both cases these are small deviations, but one is more than five times smaller than the other. Now we have to make some guesses, because the question does not tell us how the entropies were used to determine uniformity, nor does it tell us how much data there are. If a true "entropy test" has been applied, then like any other statistical test it needs to account for chance variation. In this case, the observed frequencies (from which the entropies have been calculated) will tend to vary from the true underlying frequencies due to chance. These variations translate, via the formulas given above, into variations of the observed entropy from the true underlying entropy. Given sufficient data, we can detect whether the true entropy differs from the value of $8$ associated with a uniform distribution. All other things being equal, the amount of data needed to detect a mean discrepancy of just $0.09$% compared to a mean discrepancy of $0.5$% will be approximately $(0.5/0.09)^2$ times as much: in this case, that works out to be more than $33$ times as much. Consequently, it's quite possible for there to be enough data to determine that an observed entropy of $7.996\ldots$ differs significantly from $8$ while an equivalent amount of data would be unable to distinguish $7.99988\ldots$ from $8$. (This situation, by the way, is called a false negative, not a "false positive," because it has failed to identify a lack of uniformity (which is considered a "negative" result).) Accordingly, I propose that (a) the entropies have indeed been computed correctly and (b) the amount of data adequately explains what has happened. Incidentally, the figures seem to be either useless or misleading, because they lack appropriate labels. 
Although the bottom one appears to depict a near-uniform distribution (assuming the x-axis is discrete and corresponds to the $256$ possible byte values and the y-axis is proportional to observed frequency), the top one cannot possibly correspond to an entropy anywhere near $8$. I suspect the zero of the y-axis in the top figure has not been shown, so that discrepancies among the frequencies are exaggerated. (Tufte would say this figure has a large Lie Factor.)
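The Taylor relation above is easy to verify numerically. In this Python sketch a small random perturbation of the uniform byte distribution stands in for observed frequencies; the exact entropy deficit and the quadratic approximation agree closely:

```python
import math, random

random.seed(7)

# perturb the uniform distribution over 256 byte values, then renormalize
p = [2 ** -8 * (1 + random.uniform(-0.01, 0.01)) for _ in range(256)]
s = sum(p)
p = [pi / s for pi in p]

# exact entropy in bits
H = -sum(pi * math.log2(pi) for pi in p)

# quadratic term of the expansion around the uniform distribution
approx_deficit = sum((pi - 2 ** -8) ** 2 / (2 * 2 ** -8 * math.log(2)) for pi in p)

print(8 - H, approx_deficit)  # the two agree to several decimal places
```

The leftover discrepancy is the cubic remainder term, which for perturbations this small is negligible relative to the quadratic deficit.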
32,323
Which R package to use to conduct a latent class growth analysis (LCGA) / growth mixture model (GMM)?
The OpenMx project can estimate growth mixture models, though you have to install the package from their website since it isn't on CRAN. They have examples in the user documentation (section 2.8) for how to set this up as well.
32,324
Which R package to use to conduct a latent class growth analysis (LCGA) / growth mixture model (GMM)?
You also have the packages kml and kml3d (for joint trajectories), which estimate the non-parametric equivalent of a GMM. You don't get any parameters as a result of these analyses, only the classification of each observation into the classes. However, in most applications people don't use the parameters of LCGA and GMM anyway, and it is also much more robust than those methods, in particular GMM. There are two or three publications on the packages, and full R documentation.
32,325
How to elegantly determine the area of a hysteresis loop (inside/outside problem)?
A completely different way would be to directly calculate the area of your polygon:

    library(geometry)
    polyarea(x=Data$Q, y=Data$DOC)

This yields 0.606.
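polyarea is just the shoelace formula applied to the vertices in path order; a self-contained Python equivalent (the coordinates below are a unit square and a triangle for checking, not the question's data):

```python
def polyarea(xs, ys):
    # shoelace formula: half the absolute value of the signed area
    # of the polygon traced by the vertices, taken in path order
    n = len(xs)
    s = sum(xs[i] * ys[(i + 1) % n] - xs[(i + 1) % n] * ys[i] for i in range(n))
    return abs(s) / 2

print(polyarea([0, 1, 1, 0], [0, 0, 1, 1]))  # unit square -> 1.0
```

For a hysteresis loop, the measurements taken in temporal order already trace the loop, so feeding them in as-is gives the enclosed area directly (provided the loop does not self-intersect).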
How to elegantly determine the area of a hysteresis loop (inside/outside problem)?
One possibility would be this: it looks to me like the hysteresis loop should be convex, right? So one could generate points and test for each point whether it is part of the convex hull of the union of your dataset and the random point - if yes, it lies outside the original dataset. To speed things up, we can work with a subset of the original dataset, namely the points comprising its convex hull itself.

Data.subset <- Data[chull(Data$Q, Data$DOC), c("Q","DOC")]

x.min <- min(Data.subset$Q)
x.max <- max(Data.subset$Q)
y.min <- min(Data.subset$DOC)
y.max <- max(Data.subset$DOC)

n.sims <- 1000
random.points <- data.frame(Q   = runif(n=n.sims, x.min, x.max),
                            DOC = runif(n=n.sims, y.min, y.max))

hit <- rep(NA, n.sims)
for ( ii in 1:n.sims ) {
  hit[ii] <- !((nrow(Data.subset)+1) %in%
               chull(c(Data.subset$Q,   random.points$Q[ii]),
                     c(Data.subset$DOC, random.points$DOC[ii])))
}

points(random.points$Q[hit],  random.points$DOC[hit],  pch=21, bg="black", cex=0.6)
points(random.points$Q[!hit], random.points$DOC[!hit], pch=21, bg="red", col="red", cex=0.6)

estimated.area <- (y.max-y.min)*(x.max-x.min)*sum(hit)/n.sims

Of course, the R way of doing things would not use my for loop, but this is easy to understand and not too slow. I get an estimated area of 0.703.

EDIT: of course, this relies on the presumed convexity of the relationship. For instance, there appears to be a nonconvex part in the lower right. In principle, we could Monte-Carlo-estimate the area of such a region in the same way and subtract it from the original area estimate.
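The same hit-or-miss idea can be sketched without chull by testing random points directly against the edges of a convex polygon. An illustrative Python version, estimating the area of a right triangle whose true area is 0.5 (the polygon and sample size are just for the demonstration):

```python
import random

def inside_convex(poly, p):
    """Point-in-convex-polygon test: p is inside iff it lies on the same
    (left) side of every counter-clockwise ordered edge."""
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) < 0:
            return False
    return True

def mc_area(poly, n_sims=100_000, seed=42):
    """Hit-or-miss Monte Carlo: area = bounding-box area * fraction of hits."""
    random.seed(seed)
    xs = [p[0] for p in poly]
    ys = [p[1] for p in poly]
    x0, x1, y0, y1 = min(xs), max(xs), min(ys), max(ys)
    hits = sum(inside_convex(poly, (random.uniform(x0, x1), random.uniform(y0, y1)))
               for _ in range(n_sims))
    return (x1 - x0) * (y1 - y0) * hits / n_sims

# right triangle with exact area 0.5
est = mc_area([(0, 0), (1, 0), (0, 1)])
```

With 100,000 points the standard error of the estimate is about 0.0016, so the result lands very close to 0.5.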
Reporting coefficient of determination using Spearman's rho
Pearson's r and Spearman's rho are both already effect size measures. Spearman's rho, for example, represents the degree of correlation after the data have been converted to ranks. Thus, it already captures the strength of the relationship. People often square a correlation coefficient because it has a nice verbal interpretation as the proportion of shared variance. That said, there's nothing stopping you from interpreting the size of the relationship in the metric of a straight correlation.

It does not seem to be customary to square Spearman's rho. That said, you could square it if you wanted to. It would then represent the proportion of shared variance in the two ranked variables.

I wouldn't worry so much about normality and absolute precision on p-values. Think about whether Pearson or Spearman better captures the association of interest. As you already mentioned, see the discussion here on the implication of non-normality for the choice between Pearson's r and Spearman's rho.
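Both points — that rho is simply Pearson's r computed on ranks, and that squaring it gives the proportion of shared variance in the ranks — can be checked directly. A self-contained Python sketch (Python rather than R, purely for illustration):

```python
import math

def ranks(v):
    """Average ranks (tied values share the mean of their positions)."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0.0] * len(v)
    i = 0
    while i < len(v):
        j = i
        while j + 1 < len(v) and v[order[j + 1]] == v[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

def spearman(x, y):
    # Spearman's rho is just Pearson's r computed on the ranks
    return pearson(ranks(x), ranks(y))

x = [1, 2, 3, 4, 5]
y = [2, 1, 4, 3, 5]
rho = spearman(x, y)   # 0.8 for this example
shared = rho ** 2      # 0.64: proportion of shared variance in the ranks
```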
Reporting coefficient of determination using Spearman's rho
@Jeromy Anglim About squaring Spearman's rho and interpreting it as the coefficient of determination: if you are using partial Spearman's rho, squaring the partial coefficients and adding them can give you a total above one, which loses the meaning of a partial determination coefficient, i.e., the % of variance of the dependent variable's rank explained by the independent variable's rank. However, if you do the same procedure with Pearson's partial correlation coefficient, the total will always be bounded between [0,1]. For instance, try in R:

y.data <- data.frame(
  hl   = c(7,15,19,15,21,22,57,15,20,18),
  disp = c(0.000,0.964,0.000,0.000,0.921,0.000,0.000,1.006,0.000,1.011),
  deg  = c(9,2,3,4,1,3,1,3,6,1),
  BC   = c(1.78e-02,1.05e-06,1.37e-05,7.18e-03,0.00e+00,0.00e+00,0.00e+00,
           4.48e-03,2.10e-06,0.00e+00))
head(y.data)

# y.data[,c("deg","BC")] indicates which other variables are being controlled for
p1 = pcor.test(y.data$hl, y.data$disp, y.data[,c("deg","BC")],  method = c("pearson"))$estimate^2
p2 = pcor.test(y.data$hl, y.data$deg,  y.data[,c("disp","BC")], method = c("pearson"))$estimate^2
p3 = pcor.test(y.data$hl, y.data$BC,   y.data[,c("disp","deg")], method = c("pearson"))$estimate^2
p1 + p2 + p3

OUTPUT = 0.8444889

and

s1 = pcor.test(y.data$hl, y.data$disp, y.data[,c("deg","BC")],  method = c("spearman"))$estimate^2
s2 = pcor.test(y.data$hl, y.data$deg,  y.data[,c("disp","BC")], method = c("spearman"))$estimate^2
s3 = pcor.test(y.data$hl, y.data$BC,   y.data[,c("disp","deg")], method = c("spearman"))$estimate^2
s1 + s2 + s3

OUTPUT = 1.22142

I am not sure what the detailed math is that explains why the sum of squared partial Spearman's rhos can be above 1.

QUESTION: Could we interpret the squared partial Kendall correlation coefficient as a coefficient of determination? Using Kendall for the example above:

k1 = pcor.test(y.data$hl, y.data$disp, y.data[,c("deg","BC")],  method = c("kendall"))$estimate^2
k2 = pcor.test(y.data$hl, y.data$deg,  y.data[,c("disp","BC")], method = c("kendall"))$estimate^2
k3 = pcor.test(y.data$hl, y.data$BC,   y.data[,c("disp","deg")], method = c("kendall"))$estimate^2
k1 + k2 + k3

OUTCOME = 0.6010744

But I am not sure how to interpret squared Kendall, or whether it is acceptable to square it. About the pcor.test() function in R: Pearson and Spearman partial correlation coefficient
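For readers without the pcor.test() function referenced above: a partial correlation is just the plain correlation of the residuals after regressing each of the two variables on the controls. An illustrative Python/NumPy sketch of that residual-based definition (the simulated data are made up; controlling for z here is legitimate since z causes y):

```python
import numpy as np

def partial_corr(x, y, controls):
    """Partial correlation of x and y given the columns of `controls`:
    the ordinary correlation of the two OLS residual vectors."""
    Z = np.column_stack([np.ones(len(x)), controls])
    rx = x - Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    return rx @ ry / np.sqrt((rx @ rx) * (ry @ ry))

rng = np.random.default_rng(0)
x = rng.normal(size=200)
z = rng.normal(size=(200, 2))                     # two control variables
y = x + z @ np.array([1.0, -1.0]) + rng.normal(size=200)

# after partialling out z, what is left of y is x + noise,
# so the partial correlation is around 1/sqrt(2) ~= 0.71
r = partial_corr(x, y, z)
```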
Reporting coefficient of determination using Spearman's rho
I don't have enough reputation to comment on the answer of Lucas, so I will write this as an answer. You can also get a sum of partial R-squareds above 1 with the Pearson correlation. This does not happen because of the Spearman or Pearson correlation; actually, I'd expect it to happen regardless of the dependence measure. It happens because of spurious correlation added by adjusting for colliders. Maybe there are other situations, but this is one of the ways. Check the R code below for an example.

set.seed(2021)
N <- 1000
X <- rnorm(N)
Y <- rnorm(N)
Z1 <- X - Y + rnorm(N)
Z2 <- X - Y + rnorm(N)
Z3 <- X - Y + rnorm(N)
Z4 <- X - Y + rnorm(N)
Z5 <- X - Y + rnorm(N)

Z1_r2 <- (pcor.test(X,Y,Z1)$estimate)**2
Z2_r2 <- (pcor.test(X,Y,Z2)$estimate)**2
Z3_r2 <- (pcor.test(X,Y,Z3)$estimate)**2
Z4_r2 <- (pcor.test(X,Y,Z4)$estimate)**2
Z5_r2 <- (pcor.test(X,Y,Z5)$estimate)**2

Z1_r2 + Z2_r2 + Z3_r2 + Z4_r2 + Z5_r2

The result is larger than one: 1.296859. For understanding why, you can check this answer.
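The collider effect in this simulation can also be seen analytically: the first-order partial correlation is $(r_{xy} - r_{xz}r_{yz})/\sqrt{(1-r_{xz}^2)(1-r_{yz}^2)}$, and for $z = x - y + \varepsilon$ with independent standard normal $x$, $y$, $\varepsilon$, the population value is $0.5$ even though $x$ and $y$ are independent. An illustrative Python check:

```python
import numpy as np

def pcorr_from_corrs(rxy, rxz, ryz):
    """First-order partial correlation from the three pairwise correlations."""
    return (rxy - rxz * ryz) / np.sqrt((1 - rxz**2) * (1 - ryz**2))

rng = np.random.default_rng(2021)
n = 100_000
x = rng.normal(size=n)
y = rng.normal(size=n)
z = x - y + rng.normal(size=n)   # z is a collider: caused by both x and y

rxy = np.corrcoef(x, y)[0, 1]          # ~0: x and y really are independent
rxz = np.corrcoef(x, z)[0, 1]          # ~ 1/sqrt(3)
ryz = np.corrcoef(y, z)[0, 1]          # ~ -1/sqrt(3)
p = pcorr_from_corrs(rxy, rxz, ryz)    # ~0.5: spurious association given z
```

So each of the five Z's above contributes roughly $0.5^2 = 0.25$ of entirely spurious "explained variance", and five of them sum past 1.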
Is Xorshift RNG good enough for Monte Carlo approaches? If not what alternatives are there?
At http://prng.di.unimi.it/ you can find a shootout of several random number generators tested using TestU01, the modern test suite for pseudorandom number generators that replaced diehard and dieharder.

The Java LCG generator is unusable. You should avoid it like the plague. A xorshift generator is better, but still displays several statistical artifacts. Marsaglia suggested in his original paper to multiply the result of a xorshift generator by a constant to improve the statistical quality, and the result is a xorshift* generator. These are some of the fastest top-quality generators available, in particular if the period is reasonably large.

Using SecureRandom for generating a large number of random bits is extremely dangerous. After a few hundred thousand calls (or fewer) your application will stop (on Linux, at least) waiting for /dev/random to provide more truly random bits. It is a nightmare to debug if you don't know--the application will just hang and use 0% CPU for apparently no reason.
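For concreteness, one step of a xorshift64* generator — a plain xorshift pass followed by multiplication by a constant to scramble the weaker low bits — looks like this. Illustrative Python; the shift triple 12/25/27 and the multiplier 0x2545F4914F6CDD1D are one commonly published parameter choice, and the explicit 64-bit mask stands in for C's natural overflow:

```python
MASK64 = (1 << 64) - 1

def xorshift64star(state):
    """One step of xorshift64*: returns (new_state, output).
    The internal state evolves by pure xorshift; only the output is multiplied."""
    x = state
    x ^= x >> 12
    x ^= (x << 25) & MASK64   # emulate 64-bit wraparound
    x ^= x >> 27
    return x, (x * 0x2545F4914F6CDD1D) & MASK64

state = 1  # any nonzero seed
outputs = []
for _ in range(3):
    state, out = xorshift64star(state)
    outputs.append(out)
```

The multiplication is what separates xorshift* from plain xorshift: it costs one instruction and removes the linear artifacts that TestU01 detects in the unmultiplied sequence.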
Distributions on subsets of $\{1, 2, ..., J\}$?
You might favor location families based on Hamming distance, due to their richness, flexibility, and computational tractability.

Notation and definitions

Recall that in a free finite-dimensional module $V$ with basis $\left(\mathbf{e}_1, \mathbf{e}_2, \ldots, \mathbf{e}_J\right)$, the Hamming distance $\delta_H$ between two vectors $\mathbf{v}=v_1 \mathbf{e}_1 + \cdots + v_J\mathbf{e}_J$ and $\mathbf{w}=w_1 \mathbf{e}_1 + \cdots + w_J\mathbf{e}_J$ is the number of places $i$ where $v_i \ne w_i$. Given any origin $\mathbf{v}_0\in V$, the Hamming distance partitions $V$ into spheres $S_i(\mathbf{v}_0)$, $i=0, 1, \ldots, J$, where $S_i(\mathbf{v}_0) = \{\mathbf{w}\in V\ |\ \delta_H(\mathbf{w}, \mathbf{v}_0) = i\}$. When the ground ring has $n$ elements, $V$ has $n^J$ elements and $S_i(\mathbf{v})$ has $\binom{J}{i}\left(n-1\right)^i$ elements. (This follows immediately from observing that elements of $S_i(\mathbf{v})$ differ from $\mathbf{v}$ in exactly $i$ places--of which there are $\binom{J}{i}$ possibilities--and that there are, independently, $n-1$ choices of values for each place.)

Affine translation in $V$ acts naturally on its distributions to give location families. Specifically, when $f$ is any distribution on $V$ (which means little more than $f:V\to [0,1]$, $f(\mathbf{v})\ge 0$ for all $\mathbf{v} \in V$, and $\sum_{\mathbf{v}\in V}f(\mathbf{v})=1$) and $\mathbf{w}$ is any element of $V$, then $f^{(\mathbf{w})}$ is also a distribution where $$f^{(\mathbf{w})}(\mathbf{v}) = f(\mathbf{v}-\mathbf{w})$$ for all $\mathbf{v}\in V$. A location family $\Omega$ of distributions is invariant under this action: $f\in \Omega$ implies $f^{(\mathbf{v})}\in \Omega$ for all $\mathbf{v}\in V$.
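The sphere-size formula $|S_i(\mathbf{v})| = \binom{J}{i}(n-1)^i$ is easy to check by brute-force enumeration on a small case. An illustrative Python sketch (Python used here purely for the check; the answer's own code is in R):

```python
from itertools import product
from math import comb

def sphere_sizes(J, n):
    """Count vectors at each Hamming distance from the origin in (Z_n)^J
    by enumerating all n^J vectors."""
    counts = [0] * (J + 1)
    for v in product(range(n), repeat=J):
        counts[sum(c != 0 for c in v)] += 1
    return counts

J, n = 4, 3
empirical = sphere_sizes(J, n)
formula = [comb(J, i) * (n - 1) ** i for i in range(J + 1)]
# both equal [1, 8, 24, 32, 16], and they sum to n**J = 81
```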
Construction

This enables us to define potentially interesting and useful families of distributions by specifying their shapes at one fixed vector $\mathbf{v}$, which for convenience I will take to be $\mathbf{0} = (0,0,\ldots,0)$, and translating these "generating distributions" under the action of $V$ to obtain the full family $\Omega$. To achieve the desired property that $f$ should have comparable values at nearby points, simply require that property of all generating distributions.

To see how this works, let's construct the location family of all distributions that decrease with increasing distance. Because only $J+1$ Hamming distances are possible, consider any decreasing sequence of non-negative real numbers $\mathbf{a}$ = $0 \ne a_0 \ge a_1 \ge \cdots \ge a_J \ge 0$. Set $$A = \sum_{i=0}^J (n-1)^i\binom{J}{i} a_i$$ and define the function $f_\mathbf{a}:V\to [0,1]$ by $$f_\mathbf{a}(\mathbf{v}) = \frac{a_{\delta_H(\mathbf{0},\mathbf{v})}}{A}.$$ Then, as is straightforward to check, $f_\mathbf{a}$ is a distribution on $V$. Furthermore, $f_\mathbf{a} = f_{\mathbf{a}'}$ if and only if $\mathbf{a}'$ is a positive multiple of $\mathbf{a}$ (as vectors in $\mathbb{R}^{J+1}$). Thus, if we like, we may standardize $\mathbf{a}$ to $a_0=1$.

Accordingly, this construction gives an explicit parameterization of all such location-invariant distributions that are decreasing with Hamming distance: any such distribution is of the form $f_\mathbf{a}^{(\mathbf{v})}$ for some sequence $\mathbf{a} = 1 \ge a_1 \ge a_2 \ge \cdots \ge a_J \ge 0$ and some vector $\mathbf{v}\in V$. This parameterization may allow for convenient specification of priors: factor them into a prior on the location $\mathbf{v}$ and a prior on the shape $\mathbf{a}$. (Of course one could consider a larger set of priors where location and shape are not independent, but this would be a more complicated undertaking.)
Generating random values

One way to sample from $f_\mathbf{a}^{(\mathbf{v})}$ is in stages, by factoring it into a distribution over the spherical radii and another distribution conditional on each sphere:

1. Draw an index $i$ from the discrete distribution on $\{0,1,\ldots,J\}$ given by the probabilities $\binom{J}{i}(n-1)^i a_i / A$, where $A$ is defined as before.

2. The index $i$ corresponds to the set of vectors differing from $\mathbf{v}$ in exactly $i$ places. Therefore, select those $i$ places out of the $\binom{J}{i}$ possible subsets, giving each equal probability. (This is just a sample of $i$ subscripts out of $J$ without replacement.) Let this subset of $i$ places be written $I$.

3. Draw an element $\mathbf{w}$ by independently selecting a value $w_j$ uniformly from the set of scalars not equal to $v_j$ for all $j\in I$ and otherwise setting $w_j=v_j$. Equivalently, create a vector $\mathbf{u}$ by selecting $u_j$ uniformly at random from the nonzero scalars when $j\in I$ and otherwise setting $u_j=0$. Set $\mathbf{w} = \mathbf{v} + \mathbf{u}$.

Step 3 is unnecessary in the binary case.

Example

Here is an R implementation to illustrate.

rHamming <- function(N=1, a=c(1,1,1), n=2, origin) {
  # Draw N random values from the distribution f_a^v where the ground ring
  # is {0,1,...,n-1} mod n and the vector space has dimension j = length(a)-1.
  j <- length(a) - 1
  if (missing(origin)) origin <- rep(0, j)

  # Draw radii `i` from the marginal distribution of the spherical radii.
  f <- sapply(0:j, function(i) (n-1)^i * choose(j,i) * a[i+1])
  i <- sample(0:j, N, replace=TRUE, prob=f)

  # Helper function: select nonzero elements of 1:(n-1) in exactly i places.
  h <- function(i) {
    x <- c(sample(1:(n-1), i, replace=TRUE), rep(0, j-i))
    sample(x, j, replace=FALSE)
  }

  # Draw elements from the conditional distribution over the spheres
  # and translate them by the origin.
  (sapply(i, h) + origin) %% n
}

As an example of its use:

test <- rHamming(10^4, 2^(11:1), origin=rep(1,10))
hist(apply(test, 2, function(x) sum(x != 0)))

This took $0.2$ seconds to draw $10^4$ iid elements from the distribution $f_{\mathbf{a}}^{(\mathbf{v})}$ where $J=10$, $n=2$ (the binary case), $\mathbf{v}=(1,1,\ldots,1)$, and $\mathbf{a}=(2^{11},2^{10},\ldots,2^1)$ is exponentially decreasing. (This algorithm does not require that $\mathbf{a}$ be decreasing; thus, it will generate random variates from any location family, not just the unimodal ones.)
Distributions on subsets of $\{1, 2, ..., J\}$?
A sample from a k-determinantal point process models a distribution over subsets that encourages diversity, such that similar items are less likely to occur together in the sample. Refer to K-determinantal point process sampling by Alex Kulesza, Ben Taskar.
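The diversity property is easy to see on a toy example: in an L-ensemble, $P(S) \propto \det(L_S)$, and a near-duplicate pair of items makes the corresponding principal minor nearly singular, so similar pairs get little mass. An illustrative Python sketch using exact enumeration over a three-item ground set (Kulesza & Taskar give efficient sampling algorithms for realistic sizes; the kernel values below are made up):

```python
import numpy as np
from itertools import combinations

def kdpp_probs(L, k):
    """Exact k-DPP over a tiny ground set: P(S) proportional to det(L_S)
    over all subsets S of size k."""
    n = L.shape[0]
    subsets = list(combinations(range(n), k))
    weights = np.array([np.linalg.det(L[np.ix_(s, s)]) for s in subsets])
    return subsets, weights / weights.sum()

# items 0 and 1 are nearly identical; item 2 is dissimilar to both
L = np.array([[1.0, 0.9, 0.1],
              [0.9, 1.0, 0.1],
              [0.1, 0.1, 1.0]])
subsets, probs = kdpp_probs(L, 2)
# the diverse pairs {0,2} and {1,2} each get far more mass
# (det = 0.99) than the near-duplicate pair {0,1} (det = 0.19)
```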
How to compare two time series?
To compare two time series, simply estimate the COMMON appropriate ARIMA model for each time series separately AND then estimate it globally (putting the second series behind the first). Make sure that your software recognizes the beginning of the second series and doesn't forecast it from the latter values of the first series. Perform an F test à la G. Chow to test the hypothesis of a common set of parameters. AUTOBOX, a program that I am involved with, allows this test to be performed. SPSS may not.
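The Chow-style F test amounts to comparing the residual sum of squares of a pooled fit against separate fits: $F = \frac{(RSS_{pooled} - RSS_1 - RSS_2)/k}{(RSS_1 + RSS_2)/(n_1+n_2-2k)}$ for $k$ parameters per series. A minimal Python/NumPy sketch using a simple linear trend model in place of a full ARIMA model (the simulated series are made up for illustration):

```python
import numpy as np

def ols_rss(X, y):
    """Residual sum of squares from an OLS fit (design X includes the intercept)."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

def chow_F(x1, y1, x2, y2):
    """Chow F statistic for a common (intercept, slope) across two series."""
    X1 = np.column_stack([np.ones_like(x1), x1])
    X2 = np.column_stack([np.ones_like(x2), x2])
    k = X1.shape[1]
    rss_pooled = ols_rss(np.vstack([X1, X2]), np.concatenate([y1, y2]))
    rss_sep = ols_rss(X1, y1) + ols_rss(X2, y2)
    df = len(y1) + len(y2) - 2 * k
    return ((rss_pooled - rss_sep) / k) / (rss_sep / df)

rng = np.random.default_rng(0)
x = np.arange(30.0)
y1 = 1 + 0.5 * x + rng.normal(size=30)   # slope 0.5
y2 = 1 + 2.0 * x + rng.normal(size=30)   # slope 2.0: clearly different process
y3 = 1 + 0.5 * x + rng.normal(size=30)   # same process as y1

F_diff = chow_F(x, y1, x, y2)   # very large: reject a common parameter set
F_same = chow_F(x, y1, x, y3)   # small: no evidence against common parameters
```

The F statistic is then referred to an $F(k,\, n_1+n_2-2k)$ distribution.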
How to compare two time series?
I don’t know whether you have used the “Time Series Modeler” in SPSS Forecasting. But if you did, you can obtain different statistics, including goodness-of-fit measures: stationary R-square, R-square (R2), root mean square error (RMSE), mean absolute error (MAE), mean absolute percentage error (MAPE), maximum absolute error (MaxAE), maximum absolute percentage error (MaxAPE), and normalized Bayesian information criterion (BIC), as referenced here, page 4. You can use these statistics to compare your models. To do so, from the menus choose -> Analyze -> Forecasting -> Create Models... (after selecting variables and methods) -> “Statistics” tab.
PRESS statistic for ridge regression
Yes, I use this method a lot for kernel ridge regression, and it is a good way of selecting the ridge parameter (see e.g. this paper [doi, preprint]). A search for the optimal ridge parameter can be made very efficient if the computations are performed in canonical form (see e.g. this paper), where the model is re-parameterised so that only the inverse of a diagonal matrix is required.
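The efficiency point can be illustrated with a small numpy sketch (a toy linear setup with assumed data, not the kernel-ridge canonical form of the cited papers): after a single SVD of the design matrix, the PRESS/leave-one-out residuals for any ridge parameter follow from diagonal shrinkage factors, so scanning a grid of parameters costs almost nothing beyond the initial decomposition.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 5
X = rng.normal(size=(n, p))           # no intercept/centering assumed here
y = X @ rng.normal(size=p) + 0.5 * rng.normal(size=n)

# One SVD up front ("canonical form"); every lambda afterwards is cheap.
U, d, Vt = np.linalg.svd(X, full_matrices=False)
Uy = U.T @ y

def press(lam):
    s = d**2 / (d**2 + lam)               # shrinkage factor per component
    yhat = U @ (s * Uy)                   # ridge fitted values
    h = np.einsum("ij,j,ij->i", U, s, U)  # diagonal of the hat matrix
    return float(np.sum(((y - yhat) / (1 - h)) ** 2))

lams = 10.0 ** np.arange(-3, 3)
best_lam = min(lams, key=press)
```

As a sanity check, for very large lambda the fit shrinks to zero and PRESS approaches the raw sum of squares of y.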
PRESS statistic for ridge regression
The following approach can be taken to apply L2 regularisation and get the PRESS statistic. The method uses a data augmentation approach. Assume you have N samples of Y and K explanatory variables X1, X2, ..., Xk, ..., XK. Add an additional variable X0 that is 1 over the N samples. Augment with K additional samples where: the Y value is 0 for each of the K samples; the X0 value is 0 for each of the K samples; the Xk value is SQRT(Lambda * N) * [STDEV(Xk) over the N samples] if on the diagonal, and 0 otherwise. There are now N+K samples and K+1 variables. A normal linear regression can be solved with these inputs. As this is a regression done in one step, the PRESS statistic can be calculated as normal. The Lambda regularisation input has to be decided; reviewing the PRESS statistic for different values of Lambda can help determine a suitable one.
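As a sketch of the recipe above (in numpy rather than a spreadsheet; the variable names and data are illustrative), augment the design with K pseudo-rows, fit ordinary least squares once, and read off PRESS from the augmented hat matrix, summing over the N real rows only. The construction can be verified by checking that the augmented OLS coefficients coincide with a directly penalised solution in which the intercept is left unpenalised.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, lam = 50, 3, 0.1
X = rng.normal(size=(N, K))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.3, size=N)

# X0: an intercept column of ones over the N real samples
X1 = np.column_stack([np.ones(N), X])

# K pseudo-rows: Y = 0, X0 = 0, and sqrt(Lambda*N)*std(Xk) on the diagonal
aug = np.zeros((K, K + 1))
for k in range(K):
    aug[k, k + 1] = np.sqrt(lam * N) * X[:, k].std()
Xa = np.vstack([X1, aug])
ya = np.concatenate([y, np.zeros(K)])

# One ordinary least-squares fit on the N+K augmented samples
beta, *_ = np.linalg.lstsq(Xa, ya, rcond=None)

# PRESS as usual, but summed over the N real rows only
H = Xa @ np.linalg.solve(Xa.T @ Xa, Xa.T)
resid = ya - Xa @ beta
press = float(np.sum((resid[:N] / (1 - np.diag(H)[:N])) ** 2))
```

Scanning `lam` over a grid and comparing the resulting PRESS values gives the parameter selection described above.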
Cox proportional hazard model and interpretation of coefficients when higher case interaction is involved
A couple of suggestions, not directly related to Cox PH but to interactions and collinearity: 1) When you are getting "crazy" values like these, one possibility is collinearity. This is often a problem when you have interactions. Have you centered all your variables (by subtracting the mean from each)? 2) You can't interpret one interaction among many quite so easily. LT, food and temp2 are all involved in many interactions. So, look at predicted values from different combinations. 3) Check the units of the different variables. When you get crazy parameters, sometimes it's a problem of units (e.g. measuring human height in millimeters or kilometers). 4) Once you've got that stuff straightened out, I find the easiest way to think of the effects of different interactions (esp. higher-level ones) is to graph the predicted values for different combinations of the independent variables.
How to test/prove data is zero inflated?
This seems like a relatively straightforward (nonlinear) mixed model to me. You have seed pods nested into clusters nested into plants, and you can fit a binomial model with random effects at each stage (in current lme4, generalized models are fitted with glmer): library(lme4) binre <- glmer( pollinated ~ 1 + (1|plant) + (1|cluster), data = my.data, family = binomial) or with covariates if you have them. If the flowers self-pollinate, then you might see some mild effects due to natural variability in how viable the plants are by themselves. However, if most of the variability in the response is driven by, say, cluster variability, you would have stronger evidence of pollination by insects that might visit only selected clusters on a plant. Ideally, you would want a non-parametric distribution of the random effects rather than Gaussian: a point mass at zero, for no insect visits, and a point mass at a positive value -- this is essentially the mixture model Michael Chernick thought about. You can fit this with the GLLAMM Stata package; I'd be surprised if this were not possible in R. Probably for a clean experiment, you would want to have the plants inside, or at least in a location with no insect access, and see how many seeds would be pollinated. That would probably answer all your questions in a more methodologically rigorous way.
How to test/prove data is zero inflated?
Seems to me that this is a mixture distribution for each individual insect. With probability $p$ the insect does not land; with probability $1-p$ it lands and distributes 0 to 4 seeds. But if you have no information on whether or not the insect lands on the plant, you can't distinguish the two ways to get 0. So you could let $p$ be the probability of 0, and then you have the multinomial distribution $(p_1, p_2, p_3, p_4)$, where $p_i$ is the probability of $i$ seeds given the insect pollinates, subject to the constraint $p_1+p_2+p_3+p_4=1$. The model has five unknowns $p, p_1, p_2, p_3, p_4$ with the constraints $p_i \ge 0$ for each $i$. With enough data you should be able to estimate these parameters, perhaps using a restricted maximum likelihood approach.
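Under the simplification where both routes to zero are collapsed into a single $p$ (as suggested above), the constrained maximum likelihood estimates have a closed form: $\hat p$ is the observed fraction of zeros and $\hat p_i$ the observed fractions among the non-zero counts. A hedged numpy sketch with simulated data (the parameter values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Simulate: with prob 0.6 the pod gets 0 seeds; otherwise 1-4 seeds
true_p = 0.6
true_q = np.array([0.1, 0.2, 0.3, 0.4])
n = 10_000
pollinated = rng.random(n) > true_p
seeds = np.where(pollinated,
                 rng.choice([1, 2, 3, 4], size=n, p=true_q),
                 0)

# Maximum likelihood under sum(q) = 1, q_i >= 0: simple sample fractions
p_hat = np.mean(seeds == 0)
counts = np.array([(seeds == i).sum() for i in range(1, 5)])
q_hat = counts / counts.sum()
```

With enough data the estimates recover the generating parameters closely, which is the estimation step the answer refers to.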
How to test/prove data is zero inflated?
This is an answer to the last part of your question, how to quickly generate the data you want for the pollinator hypothesis: n = 16 max = 4 p1 = 0.1 p2 = 0.9 Y1 = rbinom(10000*n, 1, p1) Y2 = matrix(Y1 * rbinom(10000*n, max, p2), ncol = n) You can also use rzibinom() in package VGAM, although I'm not sure what you want to do with it. You have 2 free parameters, p1 and p2, that need to be estimated. Why not use a zero-inflated binomial model to estimate them from the data? You should look at package VGAM, which fits ZIB models among others. In fact, you can get the expected distribution for a ZIB from the VGAM function dzibinom(), which you could use to compare your observed distribution with if you know the parameters of visitation and pollination. Again, you really should fit the ZIB model. If your partial-selfing hypothesis is exclusive of insect pollination, then the expected distribution is simply binomial, and you could estimate the parameters with a binomial-family glm or perhaps a glmm with plant id as a random effect. However, if they can partially self AND receive insect pollination, then you're back to needing a mixture of two binomial distributions. In that case I would investigate using OpenBUGS or JAGS to fit the model using MCMC. Once you have the two models fitted to your data, you then compare the models to see which one fits better, using AIC or BIC or some other metric of your choice.
Estimated distribution of eigenvalues for i.i.d. (uniform or normal) data
There is a large literature on the distribution of eigenvalues for random matrices (you can try googling random matrix theory). In particular, the Marčenko–Pastur distribution predicts the distribution of eigenvalues for the covariance matrix of $i.i.d.$ data with mean zero and equal variance as the number of variables and observations both go to infinity at a fixed ratio. Closely related is Wigner's semicircle distribution.
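A quick numerical illustration of the Marčenko–Pastur prediction (a sketch with assumed dimensions): for $i.i.d.$ standard normal data with aspect ratio $\gamma = p/n$, the eigenvalues of the sample covariance matrix fall essentially inside $[(1-\sqrt\gamma)^2, (1+\sqrt\gamma)^2]$, even though the true covariance is the identity.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 4000, 1000                 # gamma = p/n = 0.25
X = rng.normal(size=(n, p))       # i.i.d. mean-zero, unit-variance data
S = X.T @ X / n                   # sample covariance (population covariance is I)
ev = np.linalg.eigvalsh(S)

gamma = p / n
lo = (1 - np.sqrt(gamma)) ** 2    # 0.25: lower edge of the MP support
hi = (1 + np.sqrt(gamma)) ** 2    # 2.25: upper edge of the MP support
```

Note how the eigenvalues spread far from the true value of 1 purely through sampling noise; this is exactly the effect the question asks about.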
Is it possible to estimate the odds of winning a multi-entry contest, when I don't know the breakdown of entries?
The possible chances lie between 17.7% and 18.7%. The worst case occurs when everybody but you has exactly one entry in the lottery: this is a configuration consistent with the data (although unlikely!). Let's count the number of possibilities in which you do not win. This is the number of ways of drawing $25$ tickets out of the $784-6$ remaining tickets, given by the binomial coefficient $\binom{784-6}{25}$. (It's a huge number). The total number of possibilities--all of them equally likely in a fair drawing--is $\binom{784}{25}$. The ratio simplifies to $(784-25)\cdots(784-30) / [(784)\cdots(784-5)]$, which is about 82.2772%: your chances of not winning. Your chances of winning in this situation therefore equal 1 - 82.2772% = 17.7228%. The best case occurs when there are as few individuals involved in the lottery as possible and as many as possible have $6$, and then $5$, etc, tickets. Given that the "gem" counts are $(42, 72, 119, 156, 178, 217)$ (in ascending order), this implies At most $42 = a_6$ people can have $6$ entries each. At most $72-42=30 = a_5$ people can have $5$ entries each. ... At most $178-156=22 = a_2$ people can have $2$ entries each. $217-178=39 = a_1$ people have $1$ entry each. Let $p(\mathbf{a}, l, j)$ designate the chance of winning when you hold $j$ (between $1$ and $6$) tickets in a lottery with data $\mathbf{a}=(a_1,a_2,\ldots,a_6)$ and $l=25$ draws. The total number of tickets therefore equals $1 a_1 + 2 a_2 + \cdots + 6 a_6 = n$. Consider the next draw. There are seven possibilities: One of your tickets is drawn; you win. The chance of this equals $j/n$. Somebody else's tickets are drawn. The chance of this equals $(n-j)/n$. If they hold $i$ of them, then all $i$ tickets are removed from the lottery. If $l \ge 1$, drawing continues with the new data: $l$ has been decreased by $1$ and $a_i$ has been decreased by $1$ as well. 
The chance that some person with $i$ tickets in the lottery is chosen, given that yours are not, equals $ia_i/(n-j)$. This gives six disjoint possibilities for $i=1,2,\ldots,6$. We add these chances because they partition all outcomes with no overlap. The calculation continues recursively down this probability tree until all the leaves at $l=0$ are reached. It's a lot of computation (about $25^6$ = 244 million calculations), but it only takes a few minutes (or less, depending on the platform). I obtain 18.6475% chances of winning in this case. Here's the Mathematica code I used. (It is written to parallel the preceding analysis; it could be made a little more efficient through some algebraic reductions and tests for when $a_i$ is reduced to $0$.) Here, the argument a does not count the $j$ tickets you hold: it gives the distribution of counts of tickets everyone else holds. p[a_, l_Integer, j_Integer] /; l >= 1 := p[a, l, j] = Module[{k = Length[a], n}, n = Range[k] . a + j; j/n + (n - j)/n ParallelSum[ i a[[i]] / (n - j) p[a - UnitVector[k, i], l - 1, j], {i, 1, k}] ]; p[a_, 0, j_Integer] := 0; (* The data *) a = Reverse[Differences[Prepend[Sort[{42, 72, 119, 217, 156, 178}], 0]]]; j = 6; l = 25; (* The solution *) p[a - UnitVector[Length[a],j], l, j] // N As a reality check, let us compare these answers to two naive approximations (neither of which is quite correct): 25 draws with 6 tickets in play should give you around 6*25 out of 784 chances of winning. This is 19.1%. Each time your chance of not winning is about (784-6)/784. Raise this to the 25th power to find your chance of not winning in the lottery. Subtracting it from 1 gives 17.5%. It looks like we're in the right ballpark.
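Both endpoints can be reproduced without Mathematica. The sketch below (a Python port of the recursion, memoised on the remaining ticket-count vector) computes the worst-case probability from the telescoped binomial ratio, and the best-case probability from the probability-tree recursion, using the same $\mathbf a$ derived above with $a_6$ reduced by one because your own six tickets are tracked separately as $j$.

```python
from functools import lru_cache

# Worst case: C(778,25)/C(784,25) telescopes to a six-term product
q = 1.0
for i in range(6):
    q *= (759 - i) / (784 - i)
worst = 1 - q                      # about 0.177228

# Best case: (a1,...,a6) = (39,22,37,47,30,42), with a6 -> 41
# because your own 6 tickets are held out as j.
a0 = (39, 22, 37, 47, 30, 41)
j = 6

@lru_cache(maxsize=None)
def p(a, l):
    """Chance of winning within l more draws, given others' counts a."""
    if l == 0:
        return 0.0
    n = sum((i + 1) * a[i] for i in range(6)) + j
    win = j / n                               # one of your tickets is drawn
    for i in range(6):                        # else: a holder of i+1 tickets
        if a[i]:
            b = list(a)
            b[i] -= 1                         # all their tickets are removed
            win += (i + 1) * a[i] / n * p(tuple(b), l - 1)
    return win

best = p(a0, 25)                   # about 0.186475
```

The memoisation keeps the probability tree tractable: distinct states are indexed by how many holders of each size remain, not by the full draw sequence.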
Is it possible to estimate the odds of winning a multi-entry contest, when I don't know the breakdown of entries?
If I did the math right, you have between a 19.43% and 21.15% chance of winning a prize. The 19.43% is the best-case scenario, where every entrant has 6 tickets. The 21.15% is the worst-case scenario, where every entrant has 1 ticket except you. Both scenarios are extremely unlikely, so your actual odds of winning probably fall somewhere in between; however, a roughly 1/5 chance at winning seems like a fairly solid number to go by. The details on how those numbers were obtained can be found in this Google spreadsheet, however to summarize how they were obtained: Start with Total # of Entries (784) and Your Entries (6). Get chance at winning (6 / 784 = 0.77%). Subtract 6 for best-case, or 1 for worst-case, from TotalEntries. Get chance of winning (6/778 for best case, 6/783 for worst case). Repeat steps 3-4 until you have 25 percentages. Add the 25 percentages together to find out your overall chance at winning something. Here's an alternative way to get the approximate percentage that is simpler, but is not as accurate since you are not removing duplicate entries every time you draw a winner: 6 (your tickets) / 784 total tickets = 0.00765; 0.00765 chance to win * 25 prizes = 19.14% chance to win. EDIT: I'm fairly sure I'm missing something in my math and that you cannot simply add percentages like this (or multiply percent chance to win by # of prizes), although I think I'm close. Whobar's comment gives a 17.4% chance of winning, although I still need to figure out the formula he gave and make sure it's accurate for the contest. Perhaps a weekend project :)
DNA use in court cases
Given your assumption of 1 in 160 million being P(matching DNA evidence | random person), the number 16/19 is roughly the chance that none of the other 30 million males in the UK would also match the DNA evidence: the binomial chance of 0 hits, given 30 million trials with p = 1/160 million. I get about 0.83 for this probability, and 16/19 is roughly 0.84. Since 19/23 is a better approximation of the probability I calculated, I am not certain if this is how they got it. Assumptions of whom? The counsel? If I am right, he makes the incorrect assumption that the existence of another man with matching DNA would mean that his client is innocent. But of the 30 million men, many would have alibis and/or live far away from the crime scene, which gives them a relatively minuscule prior probability of being the killer. Statistically it makes sense to assume he's guilty. If we had a measure of how often the killer lives nearby, and therefore how likely it is that he is part of the 2000 people tested, we could calculate the probability. Let's say it's relatively low, say 5%. Let G be the event that the guilty one is part of the 2000 and let E be the event that at least one of the 2000 tests positive. Then $$ P(G|E)=\frac{P(E|G)P(G)}{P(E)}. $$ P(G) is assumed to be 0.05 and $P(E|G)$ should be about 1 if the lab does its work properly. In practice it's probably slightly lower, so let's assume it's just 0.9. On the other hand, $$ P(E) = P(E|G)P(G)+P(E|\lnot G)P(\lnot G)=0.05 \cdot 0.9+ p \cdot 0.95 $$ with p being the binomial chance of at least 1 positive result out of 2000 with a hit chance of 1/160 million. It turns out that this is small, with p being about $0.000012$. This means we get $P(E) \approx 0.045$ and $$ P(G|E) \approx 0.99974. $$
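The numbers above can be checked directly. A short sketch (the 5% prior and 0.9 detection probability are the assumptions made in the answer, not established facts):

```python
# Prior that the guilty man is among the 2000 tested; assumed 5% as above
P_G = 0.05
P_E_given_G = 0.9          # lab detects the guilty man's DNA if he is tested

# Chance a panel of 2000 innocents yields at least one match at 1 in 160 million
p_false = 1 - (1 - 1 / 160e6) ** 2000      # about 1.25e-5

P_E = P_E_given_G * P_G + p_false * (1 - P_G)
posterior = P_E_given_G * P_G / P_E        # about 0.99974

# Chance that none of 30 million other males matches the profile
no_other_match = (1 - 1 / 160e6) ** 30e6   # about 0.829, near 19/23
```

This reproduces both the 0.99974 posterior and the roughly 0.83 no-other-match probability quoted above.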
32,345
Bootstrap variance of squared sample mean
A little late, but anyways... First, to simplify later calculations, rewrite the bootstrap sample mean in terms of an expression containing the central moments under the empirical measure. Let $X_1^*, \dots, X_n^*$ be drawn iid from the empirical distribution and let $S_n = \frac{1}{n}\sum(X_i^* - \bar{X}_n)$, so that $E(S_n) = 0$. Then $$ \bar{X}_n^* = S_n +\bar{X}_n = \frac{1}{n}\sum (X_i^* - \bar{X}_n) + \bar{X}_n $$ Now, Var$(\bar{X}_n^{*2}) = E(\bar{X}_n^{*4}) - (E\bar{X}_n^{*2})^2$. We'll tackle the first term. Note that $\bar{X}_n$ is the mean under the empirical measure, so we treat it as a constant when taking expectations. $$ \begin{align} E(\bar{X}_n^{*4}) &= E(S_n + \bar{X}_n)^4 \\ &= E(S_n^4 + 4\bar{X}_nS_n^3 + 6\bar{X}_n^2S_n^2 + 4\bar{X}_n^3S_n + \bar{X}_n^4) \\ &= E(S_n^4) + 4\bar{X}_nE(S_n^3) + 6\bar{X}_n^2E(S_n^2) + \bar{X}_n^4 \end{align} $$ where we used that $E(S_n) = 0$ to drop the second-to-last term. In the following expansions, cross terms containing a lone factor $(X_i^* - \bar{X}_n)$ have zero expectation (by independence of the draws) and are not written. $$ \begin{align} E(S_n^4) &= E\left(\frac{1}{n^4}\left[\sum(X_i^* - \bar{X}_n)^4 + 3\sum_{i \neq j}(X_i^* - \bar{X}_n)^2(X_j^* - \bar{X}_n)^2\right]\right) \\ &= \frac{\hat{a}_4}{n^3} + \frac{3(n-1)\hat{a}_2^2}{n^3}\\ E(S_n^3) &= E\left(\frac{1}{n^3}\sum(X_i^* - \bar{X}_n)^3\right) = \frac{\hat{a}_3}{n^2}\\ E(S_n^2) &= E\left(\frac{1}{n^2}\sum(X_i^* - \bar{X}_n)^2\right) = \frac{\hat{a}_2}{n} \end{align} $$ These are straightforward sums of products, with some combinatorics to count the number of terms. Doing similar calculations for the second term of the variance and putting it all together: $$ Var(\bar{X}_n^{*2}) = \frac{4\bar{X}_n^2\hat{a}_2}{n} + \frac{4\bar{X}_n\hat{a}_3}{n^2} + \frac{\hat{a}_4 + (2n - 3)\hat{a}_2^2}{n^3} $$
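For a small sample, the bootstrap distribution can be enumerated exhaustively (all $n^n$ equally likely resamples), which gives an exact check of the final formula. A sketch in Python, with a made-up three-point sample:

```python
import itertools

data = [1, 2, 4]                   # made-up sample
n = len(data)
xbar = sum(data) / n
# empirical central moments a_hat_k
a = lambda k: sum((x - xbar) ** k for x in data) / n

# closed-form bootstrap variance of the squared mean, as derived above
v_formula = (4 * xbar**2 * a(2) / n
             + 4 * xbar * a(3) / n**2
             + (a(4) + (2 * n - 3) * a(2) ** 2) / n**3)

# exact bootstrap variance: enumerate all n^n equally likely resamples
sq_means = [(sum(s) / n) ** 2 for s in itertools.product(data, repeat=n)]
mean_sq = sum(sq_means) / len(sq_means)
v_exact = sum((m - mean_sq) ** 2 for m in sq_means) / len(sq_means)
# v_formula and v_exact agree
```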
32,346
Bootstrap variance of squared sample mean
I actually think that the book may in fact have the right answer. My suggested change to the process: $E(S_n^4) = E\left(\frac{1}{n^4}\left[\Sigma(X_i-\bar{X}_n)^4+\frac{1}{n^2}\frac{1}{n^4}\Sigma\Sigma(X_i-\bar{X}_n)^2(X_j-\bar{X}_n)^2\right]\right)$ actually simplifies to $\frac{\hat{\alpha}_4}{n^3} + \frac{\hat{\alpha}_2^2}{n^3}$, which subsequently gets canceled when you find that $E(\bar{X}_n^2)^2 = E((\bar{X}_n+S_n)^2)^2 = \bar{X}_n^4 + 2\bar{X}_n^2\frac{\hat{\alpha}_2}{n} + \frac{\hat{\alpha}_2^2}{n^2}$. Thus the final result is as the book gives: $$v_{boot} = E(\bar{X}_n^4) - E(\bar{X}_n^2)^2 = \frac{4\bar{X}_n^2\hat{\alpha}_2}{n} + \frac{4\bar{X}_n\hat{\alpha}_3}{n^2} + \frac{\hat{\alpha}_4}{n^3}$$ Hope this clarifies the process. Please submit corrections or suggestions if somehow my reasoning was flawed.
32,347
Bootstrap variance of squared sample mean
I think AlexK is correct. The term $\displaystyle \sum_{i\neq j}(X_i^*-\bar X_n)^2(X_j^*-\bar X_n)^2$ has $3n(n-1)$ parts in total (counting duplicates): from 4 factors you can choose 2 in $4\choose2$ ways and the remaining 2 in $2\choose 2$ ways, so the coefficient of each distinct term is 6. From $n$ indices you can choose 2 indices in $n\choose 2$ ways. Hence there are $6 \cdot {n\choose 2} = 3n(n-1)$ terms of this form.
32,348
Bootstrap variance of squared sample mean
Although the idea behind the procedure proposed by AlexK and others is in the right direction, there are some key points that should be considered if you want to be strictly correct. You cannot arbitrarily decide which statistics can or can't be "treated as a constant when taking expectations" (as happens with $\overline{X_n}$ with respect to $S_n$). In the process you are also taking an expectation of a statistic that you know is defined as zero and still expect to get something from it (in this case $S_n$, $S_n^2$, ...). In general, the process of getting an estimator consists of first getting an expression for the statistic, as for example $\mathbb{V}(\overline{X_n})=\sigma^2/n$, and then transforming it into an estimator, $\rightarrow \mathbb{\hat{V}}(\overline{X_n})=\hat{\sigma}^2/n$. You can obtain an expression of this kind (with the desired form requested by the exercise) using $$\mathbb{V}(\overline{X_n}^2)=\mathbb{E}\left(\overline{X_n} - \mu + \mu\right)^4-\mathbb{E}^2\left(\overline{X_n} - \mu + \mu\right)^2 $$ From this point, the procedure is quite similar to the one proposed by AlexK, taking $\mu$ instead of $\overline{X_n}$ and $\overline{X_n} - \mu$ instead of $S_n$. In this case $\mu$ is allowed to be taken out of the expectation.
Then \begin{align} \mathbb{E}(\overline{X_n}^4) =& \mathbb{E}\left(\overline{X_n} - \mu + \mu\right)^4\\ =& \mathbb{E}\left(\overline{X_n} - \mu\right)^4 + 4 \mu \mathbb{E}\left(\overline{X_n} - \mu\right)^3 + 6\mu^2 \mathbb{E}\left(\overline{X_n} - \mu\right)^2 + 4\mu^3 \underbrace{\mathbb{E}\left(\overline{X_n} - \mu\right)}_{=0} + \mu^4 \end{align} and \begin{align} \mathbb{E}^2(\overline{X_n}^2) =& \mathbb{E}^2\left(\overline{X_n} - \mu + \mu\right)^2\\ =& \mathbb{E}^2\left(\overline{X_n} - \mu\right)^2+ 2\mu^2 \mathbb{E}\left(\overline{X_n} - \mu\right)^2 + \mu^4 \end{align} where $$ \mathbb{E}\left(\overline{X_n} - \mu\right)^2 = \frac{1}{n} \alpha_2 $$ $$ \mathbb{E}\left(\overline{X_n} - \mu\right)^3 = \frac{1}{n^2} \alpha_3 $$ $$ \mathbb{E}\left(\overline{X_n} - \mu\right)^4 = \frac{1}{n^3} \left[\alpha_4 + 3 (n-1) \alpha_2^2\right] $$ All this gives as result $$ \mathbb{V}(\overline{X_n}^2)= \frac{1}{n^3} \left[\alpha_4 + \left(2n-3\right) \alpha_2^2\right] + 4 \mu \frac{1}{n^2} \alpha_3+ 4\mu^2 \frac{1}{n} \alpha_2 $$ which then can be used as ansatz for the estimator $$ \mathbb{\hat{V}}(\overline{X_n}^2)= \frac{1}{n^3} \left[\hat{\alpha}_4 + \left(2n-3\right) \hat{\alpha_2}^2\right] + 4 \overline{X_n} \frac{1}{n^2} \hat{\alpha_3}+ 4\overline{X_n}^2 \frac{1}{n} \hat{\alpha_2} $$ Notice that in this solution I skipped the summation and combinatorial calculations, as it has been already discussed here. Also notice that I used the definition $\mathbb{E}\left(X_i - \mu\right)^k = \alpha_k$.
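The three central-moment identities for $\mathbb{E}(\overline{X_n}-\mu)^k$ used above are standard results for the mean of $n$ iid draws; they can be verified exactly on a small discrete distribution by enumerating every outcome. A sketch (the two-point distribution and the choice $n=3$ are arbitrary):

```python
import itertools

vals, probs = [0.0, 1.0], [0.7, 0.3]   # arbitrary two-point distribution
n = 3
mu = sum(v * p for v, p in zip(vals, probs))
# population central moments alpha_k
a = lambda k: sum(p * (v - mu) ** k for v, p in zip(vals, probs))

def exact_moment(k):
    """E[(Xbar_n - mu)^k], computed exactly by enumerating all outcomes."""
    total = 0.0
    for combo in itertools.product(range(len(vals)), repeat=n):
        prob = 1.0
        for i in combo:
            prob *= probs[i]
        xbar = sum(vals[i] for i in combo) / n
        total += prob * (xbar - mu) ** k
    return total

# exact_moment(2), (3), (4) match
# a(2)/n, a(3)/n**2, and (a(4) + 3*(n-1)*a(2)**2)/n**3 respectively
```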
32,349
Create a phrase net with R
I hope this makes sense. I kind of threw it together, but it seems to be what you want to do. I grabbed some text from the above bounty hyperlink. It will show the words that come after a certain word, as well as the ratio of times each of these outcomes happened. This will do nothing for the visualization, although I'm sure it would not be impossible to create. It should do most of the background math.

library(tau)

#this will load the string
x <- tokenize("Questions must be at least 2 days old to be eligible for a bounty. There can only be 1 active bounty per question at any given time. Users must have at least 75 reputation to offer a bounty, and may only have a maximum of 3 active bounties at any given time. The bounty period lasts 7 days. Bounties must have a minimum duration of at least 1 day. After the bounty ends, there is a grace period of 24 hours to manually award the bounty. If you do not award your bounty within 7 days (plus the grace period), the highest voted answer created after the bounty started with at least 2 upvotes will be awarded half the bounty amount. If there's no answer meeting that criteria, the bounty is not awarded to anyone. If the bounty was started by the question owner, and the question owner accepts an answer during the bounty period, and the bounty expires without an explicit award – we assume the bounty owner liked the answer they accepted and award it the full bounty amount at the time of bounty expiration. In any case, you will always give up the amount of reputation specified in the bounty, so if you start a bounty, be sure to follow up and award your bounty to the best answer! As an additional bonus, bounty awards are immune to the daily reputation cap and community wiki mode.")

#the number of tokens in the string
n <- length(x)
list <- NULL
count <- 1
#this will remove spaces; list is the new string with no spaces
for (i in 1:n) {
  if (x[i] != " ") {
    list[count] <- x[i]
    count <- count + 1
  }
}

#the unique words in the string
y <- unique(list)
#number of tokens in the string
n <- length(list)
#number of distinct tokens
m <- length(y)

#assign tokens to values
ind <- NULL
val <- NULL
#make a vector of numbers in place of tokens
for (i in 1:m) {
  ind[i] <- i
  for (j in 1:n) {
    if (y[i] == list[j]) {
      val[j] <- i
    }
  }
}

d <- array(0, c(m, m))
#this counts how often each word follows the current word
for (i in 1:(n-1)) {
  d[val[i], val[i+1]] <- d[val[i], val[i+1]] + 1
}

#pick a word
word <- 4
#show the word
y[word]
#[1] "at"
#the words that follow
y[which(d[word, ] > 0)]
#[1] "least" "any"   "the"
#the prob of words that follow
d[word, which(d[word, ] > 0)] / sum(d[word, ])
#[1] 0.5714286 0.2857143 0.1428571
32,350
Create a phrase net with R
You can create phrase nets with Many Eyes, which is kind of the "official" home of this visualization technique. There you can upload your data (which is probably some body of text), choose "Phrase Net" as the visualization technique, and get what you're looking for. In fact, your illustration comes from the Phrase Net page on Many Eyes.
32,351
Create a phrase net with R
You can use package igraph to make and plot a graph, with control over all of the aspects. The graph and Rgraphviz packages work together to define and plot graphs. Both options provide a lot of control. (graphviz is also a standalone package, where you could use all kinds of software to generate the graph and have graphviz display it.) Of course, you need to process your data into a graph, doing something like @darrelkj suggests.
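The preprocessing step — turning running text into a word-to-next-word graph with transition probabilities — is language-agnostic. Here is a minimal sketch in Python (the sample sentence is made up); the resulting edge list could then be handed to igraph or graphviz for drawing:

```python
from collections import Counter

text = "the quick fox and the lazy dog and the quick dog"  # made-up sample
words = text.split()

# edges of the phrase net: (word, following word) -> count
edges = Counter(zip(words, words[1:]))

# transition probabilities out of a given word
def followers(w):
    total = sum(c for (a, _), c in edges.items() if a == w)
    return {b: c / total for (a, b), c in edges.items() if a == w}

# followers("the") gives 'quick' with probability 2/3 and 'lazy' with 1/3
```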
32,352
Highlighting significant results from non-parametric multiple comparisons on boxplots
The simplest code that comes to my mind is shown below. I'm pretty certain there are already existing function(s) to do that on CRAN, but I'm too lazy to search for them, even on Rseek.

dd <- data.frame(y=as.vector(unlist(junk)),
                 g=rep(paste("g", 1:4, sep=""), unlist(lapply(junk, length))))
aov.res <- kruskal.test(y ~ g, data=dd)
alpha.level <- .05/nlevels(dd$g)  # Bonferroni correction, but use
                                  # whatever you want using p.adjust()
# generate all pairwise comparisons
idx <- combn(nlevels(dd$g), 2)
# compute p-values from Wilcoxon test for all comparisons
pval.res <- numeric(ncol(idx))
for (i in 1:ncol(idx))  # test all groups, pairwise
  pval.res[i] <- with(dd, wilcox.test(y[as.numeric(g)==idx[1,i]],
                                      y[as.numeric(g)==idx[2,i]]))$p.value
# which groups are significantly different (arranged by column)
signif.pairs <- idx[, which(pval.res < alpha.level)]
boxplot(y ~ g, data=dd, ylim=c(min(dd$y)-1, max(dd$y)+1))
# use offset= to increment space between labels, thanks to vectorization
for (i in 1:ncol(signif.pairs))
  text(signif.pairs[,i], max(dd$y)+1, letters[i], pos=4, offset=i*.8-1)

Here is an example of what the above code would produce (with significant differences between the four groups). Instead of the Wilcoxon test, one could rely on the procedure implemented in the kruskalmc() function from the pgirmess package (see a description of the procedure used here). Also, be sure to check Rudolf Cardinal's R tips about R: basic graphs 2 (see in particular, Another bar graph, with annotations).
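The same workflow — pairwise comparisons with a Bonferroni-adjusted threshold, then annotating the significant pairs — is easy to port. Here is a hedged Python sketch that substitutes a simple permutation test for wilcox.test; the data are made up, and (unlike the R snippet above, which divides by the number of groups) the adjusted alpha here divides by the number of comparisons:

```python
import itertools
import random

random.seed(0)
# made-up data for three groups; g3 is clearly shifted
groups = {
    "g1": [5.1, 4.9, 5.3, 5.0, 5.2],
    "g2": [5.0, 5.2, 4.8, 5.1, 4.9],
    "g3": [7.9, 8.1, 8.0, 8.2, 7.8],
}
pairs = list(itertools.combinations(groups, 2))
alpha = 0.05 / len(pairs)            # Bonferroni over all pairwise tests

def perm_pvalue(a, b, n_perm=2000):
    """Two-sided permutation test on the difference of means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

signif_pairs = [(a, b) for a, b in pairs
                if perm_pvalue(groups[a], groups[b]) < alpha]
# each pair in signif_pairs could then be annotated above its boxes
```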
32,353
Naive-Bayes classifier for unequal groups
Assigning all patterns to the negative class certainly is not a "weird result". It could be that the Bayes optimal classifier always classifies all patterns as belonging to the majority class, in which case your classifier is doing exactly what it should do. If the density of patterns belonging to the positive class never exceeds the density of the patterns belonging to the negative class, then the negative class is more likely regardless of the attribute values. The thing to do in such circumstances is to consider the relative importance of false-positive and false-negative errors; it is rare in practice that the costs of the two different types of error are the same. So determine the loss for false-positive and false-negative errors and take these into account in setting the threshold probability (differing misclassification costs are equivalent to changing the prior probabilities, so this is easy to implement for naive Bayes). I would recommend tuning the priors to minimise the cross-validation estimate of the loss (incorporating your unequal misclassification costs). If your misclassification costs are equal, and your training-set priors are representative of operational conditions, then assuming that your implementation is correct, it is possible that you already have the best NB classifier.
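To illustrate the cost/prior adjustment described above, here is a small hedged sketch with a one-dimensional Gaussian class-conditional score (all numbers are invented): scaling each class score by its prior times the cost of misclassifying that class can flip the decision for a point the plain imbalanced classifier would always call negative.

```python
import math

def gauss(x, mu, sd):
    # Gaussian density, standing in for the per-class likelihood p(x | class)
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

PRIOR_POS, PRIOR_NEG = 0.05, 0.95      # imbalanced classes, invented numbers

def classify(x, cost_fn=1.0, cost_fp=1.0):
    # weight each class score by prior * cost of misclassifying that class
    score_pos = gauss(x, mu=2.0, sd=1.0) * PRIOR_POS * cost_fn
    score_neg = gauss(x, mu=0.0, sd=1.0) * PRIOR_NEG * cost_fp
    return "+" if score_pos > score_neg else "-"

# With equal costs the rare class loses even at x = 2.0, its own class mean;
# making a false negative 10x as costly flips the decision there.
```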
32,354
Naive-Bayes classifier for unequal groups
Enlarge the smaller data group to fit the big group by calculation (i.e. oversample it). It will stretch the smaller group's data, but it will allow a more equal calculation. If you still get weird results like you currently do, check your whole implementation from the start to hunt down a (probably simple) error.
32,355
Building background for machine learning for CS student
Have you seen the Stanford online class on machine learning? It might be a great way to learn machine learning in general. References on text mining in particular are a different question; I don't have any particular suggestions on that.
32,356
Building background for machine learning for CS student
For a nice intro into stats, check out O'Reilly Think Stats by Allen B. Downey. It's a freely-available ebook from the author.
32,357
Building background for machine learning for CS student
Mathematics for Machine Learning is one of my favorites. The first part gives mathematical foundations such as linear algebra, analytic geometry, matrix decomposition, vector calculus, probability and statistics, and continuous optimization, while the second part covers central machine learning problems. Also, the PDF of the book is freely available.
32,358
Which type of regression to use, considering one variable with upper bound?
I want to second @King's points. It is very intuitive to suspect that regressing $y$ onto $x$ ('direct regression') and regressing $x$ onto $y$ ('reverse regression') ought to be the same. However, this is neither true mathematically nor with respect to how the regression is related to the situation you're analyzing. If you plot $y$ on the vertical axis of a graph and $x$ on the horizontal axis, you can see what's happening. Direct regression finds the line that minimizes the vertical distances between the data points and the line, whereas reverse regression minimizes the horizontal distances. The line that minimizes the one will only minimize the other if $r_{xy}=1.0$. You need to decide what you want to explain, and what you want to use to explain it. The answer to that question tells you which variable is $y$ and which is $x$, and thereby specifies your model. Also (again following @King), I disagree with trying to say $x_{max}=f^{-1}(y_{max})$, for the same reasons. Regarding the issue of a bounded variable: typically it is conceivable that the 'real' amount could go higher, but that you just can't measure it. For example, the outside thermometer at my window only goes up to 120, but it could be 140 outside in some places, and you would still only record 120 as your measurement. Thus, the variable would have an upper bound, but the thing you really wanted to think about doesn't. If this is the case, tobit models exist for just such situations. Another approach would be to use something more robust like loess, which may be perfectly adequate for your needs.
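The role of $r_{xy}$ can be checked numerically; a small sketch with plain numpy on simulated data:

```python
# Numeric illustration: regressing y on x and x on y give different
# lines whenever r < 1, and they differ by exactly a factor of r**2.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(0.0, 1.0, 200)   # the noise makes r < 1

r = np.corrcoef(x, y)[0, 1]
slope_yx = np.polyfit(x, y, 1)[0]   # direct: minimizes vertical distances
slope_xy = np.polyfit(y, x, 1)[0]   # reverse: minimizes horizontal distances

# On the same (x, y) axes the reverse line has slope 1/slope_xy, and
# the two agree only when r**2 = 1, since slope_yx = r**2 / slope_xy
print(slope_yx, 1.0 / slope_xy, r)
```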
32,359
Which type of regression to use, considering one variable with upper bound?
Firstly, I don't think it makes sense to say $x_{max}=f^{-1}(y_{max})$ here; that would imply a one-to-one function, although $x_{max}$ is also explained by other unobserved variables. Secondly, which variable to treat as independent and which as dependent really depends on the context. In my experience, unless theory strongly suggests one way, either choice is OK. From your comments on Oct 7, it seems that $x$ is the dependent variable while $y$ is the independent one. If possible, look at the residuals and see if you can squeeze anything out of them. There could be another variable that you forgot, or it may help to transform your variables.
32,360
Dirichlet distribution plot in R
First, you need to put the data into a sensible form for ggplot2. Here x is assumed to be a 15 x 10 matrix of Dirichlet draws, e.g. x <- gtools::rdirichlet(15, rep(1, 10)):

dat <- data.frame(item=factor(rep(1:10, 15)),
                  draw=factor(rep(1:15, each=10)),
                  value=as.vector(t(x)))

Then you can plot it by building up the components you can see in the plot (points and lineranges; faceting, axis control and facet borders):

library(ggplot2)
ggplot(dat, aes(x=item, y=value, ymin=0, ymax=value)) +
  geom_point(colour=I("blue")) +
  geom_linerange(colour=I("blue")) +
  facet_wrap(~draw, ncol=5) +
  scale_y_continuous(lim=c(0, 1)) +
  theme(panel.border = element_rect(fill=0, colour="black"))

Output: a 5-by-3 grid of panels, one per draw, each showing the ten component values as vertical "lollipop" segments.
32,361
Any suggestion for clustering method for unknown number of clusters and non-Euclidean distance?
I think that using a MAP/Bayesian criterion in combination with a mixture of Gaussians is a sensible choice. You will of course object that MOGs require Euclidean input data. The answer is to find a set of points that give rise to the distance matrix you are given. An example technique for this is multidimensional scaling: $\text{argmin}_{\lbrace x_i \rbrace} \sum_{i, j}(||x_i - x_j||_2 - D_{ij})^2$ where $D_{ij}$ is the distance of point $i$ to point $j$.
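A sketch of this recipe, assuming scikit-learn: multidimensional scaling for the embedding, then BIC as the Bayesian-style criterion for choosing the number of mixture components. All sizes here are illustrative toy choices:

```python
# Sketch: embed a distance matrix with MDS, then fit mixtures of
# Gaussians and pick the number of components by BIC.
import numpy as np
from sklearn.manifold import MDS
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data: two well-separated groups, observed only via distances
pts = np.vstack([rng.normal(0.0, 0.3, (30, 2)),
                 rng.normal(3.0, 0.3, (30, 2))])
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Recover a Euclidean point configuration consistent with D
emb = MDS(n_components=2, dissimilarity="precomputed",
          random_state=0).fit_transform(D)

# BIC penalizes extra components, approximating a MAP-style choice
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(emb).bic(emb)
       for k in range(1, 5)}
best_k = min(bic, key=bic.get)
print(best_k)
```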
32,362
Any suggestion for clustering method for unknown number of clusters and non-Euclidean distance?
I dealt with a problem for my thesis where I had to do clustering on a data set for which I only had a similarity (= inverse distance) matrix. Although I 100% agree that a Bayesian technique would be best, what I went with was a discriminative model called Symmetric Convex Coding (link). I remember it working quite nicely. On the Bayesian front, perhaps you could consider something that is similar to clustering but not quite clustering? I'm thinking along the lines of Latent Dirichlet Allocation -- a really marvelous algorithm. It is fully generative, and was developed in the context of modeling topic content in text document corpora, but it finds plenty of applications in other types of unsupervised machine learning problems. Of course, the distance function isn't even relevant there...
32,363
Any suggestion for clustering method for unknown number of clusters and non-Euclidean distance?
DBSCAN works without knowing the number of clusters ahead of time, and it can work with a wide range of distance metrics.
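A minimal sketch, assuming scikit-learn; eps and min_samples are illustrative and would need tuning on real data:

```python
# Sketch: DBSCAN with a precomputed, non-Euclidean (here Manhattan)
# distance matrix; no number of clusters is supplied.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(3)
pts = np.vstack([rng.normal(0.0, 0.2, (40, 2)),
                 rng.normal(5.0, 0.2, (40, 2))])
D = np.abs(pts[:, None, :] - pts[None, :, :]).sum(axis=-1)  # L1 distances

labels = DBSCAN(eps=0.5, min_samples=5, metric="precomputed").fit_predict(D)
n_clusters = len(set(labels) - {-1})   # label -1 marks noise points
print(n_clusters)
```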
32,364
Any suggestion for clustering method for unknown number of clusters and non-Euclidean distance?
You could use affinity propagation, or better, adaptive affinity propagation. Here is the Wikipedia link. There are two main advantages for your case, plus a third one that I think is an advantage but may not matter to you.
1. You do not supply the number of clusters. The final number of clusters depends on the preference value and the similarity-matrix values. The easiest way to work with the preference value is either to use the minimum value of the similarity matrix (that isn't zero) to get the smallest number of clusters, then try e.g. the maximum for the most clusters possible, and continue with the median value and so on -- OR use the adaptive affinity propagation algorithm and have the preference determined by the algorithm.
2. You can supply any similarity measure you can come up with, or take the inverse of a distance measure (maybe guard against dividing by zero when you do that).
3. (Extra point) The algorithm chooses an exemplar representing each cluster and decides which examples belong to it. This means the algorithm doesn't give you an arbitrary average but an actual data point. You can of course still calculate averages later. AND it also means the algorithm doesn't use intermediate averages!
Software: there are several packages listed for Java, Python and R on the Wikipedia page. If you love MATLAB, like I do, then here is an implementation.
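A sketch of the preference/similarity mechanics above, using scikit-learn's implementation rather than the MATLAB one; the similarity (negative squared Euclidean distance) and the median preference are just the usual illustrative starting points:

```python
# Sketch: affinity propagation on a precomputed similarity matrix.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(7)
pts = np.vstack([rng.normal(0.0, 0.3, (25, 2)),
                 rng.normal(4.0, 0.3, (25, 2))])
S = -((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)

# Lower preference values give fewer clusters; min(S) and median(S)
# are the usual first values to explore
ap = AffinityPropagation(affinity="precomputed",
                         preference=np.median(S),
                         random_state=0).fit(S)

# Exemplars are actual data points, not synthetic averages
print(ap.cluster_centers_indices_, ap.labels_)
```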
32,365
Bootstrapping with a small number of observations
There is not a straightforward answer to this, as it will always depend on both the true distribution of your data (imagine the degenerate case where the only value allowed is 1: then a bootstrap from a sample of size 1 will be as good as anything!) and the statistic you are going to calculate: some statistics will have more trouble recovering from a small sample size than others (imagine a resampling of an extreme outlier). So: you're going to have to be more specific than what you've given us thus far.
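To make the sample-size dependence concrete, here is a toy bootstrap of one particular statistic (the mean of a small normal sample), with plain numpy; n = 8 and the 5000 replicates are illustrative:

```python
# Sketch: nonparametric bootstrap standard error of the mean at n = 8,
# compared with the textbook formula s / sqrt(n).
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=8)   # a small sample

boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(5000)
])
se_boot = boot_means.std(ddof=1)
se_formula = sample.std(ddof=1) / np.sqrt(sample.size)
print(se_boot, se_formula)   # broadly similar, but both noisy at n = 8
```

For less well-behaved statistics (a maximum, a heavy-tailed mean), the small-n bootstrap degrades much faster than this.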
32,366
What does this blur around the line mean in this graph?
I suspect it means very little in your actual figure; you have drawn a form of strip plot. But as we don't have the data or a reproducible example, I will just describe what these lines/regions show in general. In general, the line is the fitted linear model describing the relationship $$\widehat{\mathrm{val}} = \beta_0 + \beta_1 \mathrm{Num}$$ The shaded band is a pointwise 95% confidence interval on the fitted values (the line). In other words, at each value of the predictor there is 95% confidence that the true, population regression line lies within the shaded region. It shows us the uncertainty inherent in our estimate of the true relationship between your response and the predictor variable.
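A sketch of how such a pointwise band is computed for simple linear regression (numpy/scipy, simulated data):

```python
# Sketch: pointwise 95% confidence band for the fitted line of a
# simple linear regression -- the shaded region in such plots.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 30)
y = 1.0 + 0.5 * x + rng.normal(0.0, 1.0, 30)

n = x.size
b1, b0 = np.polyfit(x, y, 1)
yhat = b0 + b1 * x
s = np.sqrt(((y - yhat) ** 2).sum() / (n - 2))   # residual std. error

# Standard error of the fitted mean at each x, then the t-based band
sxx = ((x - x.mean()) ** 2).sum()
se_fit = s * np.sqrt(1.0 / n + (x - x.mean()) ** 2 / sxx)
t = stats.t.ppf(0.975, df=n - 2)
lower, upper = yhat - t * se_fit, yhat + t * se_fit
# The band is narrowest near x = mean(x) and widens toward the ends,
# giving the characteristic bow-tie shape of the shaded region.
```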
32,367
Guidelines for discovering new knowledge in data
There's a whole field of exploratory data analysis (EDA), and an excellent book on this subject called Exploratory Data Analysis, by John W. Tukey. I like that you are using graphs -- there are many other graphs that can be useful, depending on your data. How many variables do you have? What is their nature (categorical? numeric? continuous? counts? ordinal?) One graph that is often useful for data with multiple variables is a scatterplot matrix. You can also look for various types of outliers, which are often interesting points. But I don't think this whole process can be made really methodical and scientific -- exploration is what comes BEFORE the methodical and scientific approaches can be brought in. Here, I think the key aspect is playfulness.
32,368
Guidelines for discovering new knowledge in data
If you have chronological data, i.e. time-series data, then there are "knowns", and waiting to be discovered are the "unknowns". For example, if you have a sequence of data points for 10 periods such as 1,9,1,9,1,5,1,9,1,9, then based upon this sample one can reasonably expect 1,9,1,9,... to arise in the future. What data analysis reveals is that there is an "unusual" reading at period 6, even though it is well within the +-3 sigma limits, suggesting that the DGF (data-generating function) did not hold there. Unmasking the inlier/outlier allows us to reveal things about the data. We also note that the mean value is not the expected value. This idea easily extends to detecting mean shifts and/or local time trends that may have been unknown before the data was analyzed (hypothesis generation). Now it is quite possible that the next 10 readings are also 1,9,1,9,1,5,1,9,1,9, suggesting that the "5" is not necessarily untoward. If we observe an error process from a suitable model that exhibits provably non-constant variance, we might be revealing one of the following states of nature: 1) the parameters might have changed at a particular point in time; 2) there may be a need for weighted analysis (GLS); 3) there may be a need to transform the data via a power transform; 4) there may be a need to actually model the variance of the errors. If you have daily data, good analysis might reveal that there is a window of response (lead, contemporaneous and lag structure) around each holiday, reflecting consistent/predictable behavior. You might also be able to reveal that certain days of the month have a significant effect, or that Fridays before a Monday holiday have exceptional activity.
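The 1,9,1,9,... example can be worked numerically; a plain-numpy sketch:

```python
# The "5" in 1,9,1,9,1,5,1,9,1,9 sits well inside the +/- 3 sigma
# limits of the raw series, yet is an obvious anomaly against the
# alternating pattern once the pattern itself is modelled.
import numpy as np

y = np.array([1, 9, 1, 9, 1, 5, 1, 9, 1, 9], dtype=float)

# Naive +/- 3 sigma check on the raw values flags nothing
z = np.abs(y - y.mean()) / y.std(ddof=1)
print(z.max())   # about 1.11, far below 3

# A pattern-aware check (compare with the value two periods back)
# isolates the anomaly at period 6 immediately
resid = y[2:] - y[:-2]
print(resid)     # nonzero only where the "5" disturbs the pattern
```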
32,369
Guidelines for discovering new knowledge in data
Data mining could be broken down into two categories. If you are interested in measuring the effect of a set of variables on a specific variable, this would be considered supervised learning. For deep, exploratory learning with no specific target, you are doing unsupervised learning. Graphing and statistical analysis of the data (understanding distributions and gaining intuition) are the first steps.
32,370
How can I remove the z-order bias of a coloured scatter plot?
First off, I agree. I suspect that you can create a different sort of graph; you're not using a lot of the two-dimensionality of the current display because everything is clustered about the x=y line. Try plotting the pressure along the x axis and the ratios along the y axis. If this is too messy, try taking the difference in pressure. You could also use some measure of effect size, like Cohen's d, but then viewers would have to know what that is. You can probably come up with something better than what I suggested, but my suggestion might help you think of other approaches. As you'll read below, my approach might mislead viewers because it would make pressure look like an independent variable. It would help to know what sort of story you're telling from this graph. My interpretation is that the ratios are independent variables and the pressure is a dependent variable. The change that I suggested above makes it look like the pressure is independent and the ratios are dependent. (That might not be a problem.) But here are a few ideas that use your current graph. Randomize the order in which the points are drawn (e.g. by sorting the list randomly in Python) so that no colour is systematically plotted on top of the others. It looks like the pressures might be clustered a bit. I'm not sure whether this is what you were saying was bell-shaped. But if they are clustered, you could try assigning different dot types to each of a small number of clusters. For each of the axes, plot a histogram of that variable with the pressure colors stacked on top of each other. Even if you don't change the main three-variable plot, these two-variable modified histograms would help point out the bias in the display.
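A sketch of the draw-order shuffling idea, assuming matplotlib; the data and column names are illustrative:

```python
# Sketch: remove z-order bias by shuffling the draw order of the
# points, so no pressure level is systematically painted on top.
import numpy as np
import matplotlib
matplotlib.use("Agg")              # headless backend for the sketch
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
n = 300
x = rng.uniform(0.0, 1.0, n)
y = x + rng.normal(0.0, 0.05, n)   # points hug the x = y line
pressure = rng.normal(50.0, 10.0, n)

order = rng.permutation(n)         # random z-order instead of file order
plt.scatter(x[order], y[order], c=pressure[order], s=12, alpha=0.7)
plt.colorbar(label="pressure")     # alpha < 1 also reduces occlusion
plt.savefig("scatter.png")
```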
32,371
How can I remove the z-order bias of a coloured scatter plot?
Unfortunately there isn't really a perfect solution to this, mainly because z-order cannot be decoupled from draw order (see this question). As a work-around you can have a wrapper class which allows you to make lazy draw calls and internally randomizes the draw call order. For example:

    import numpy as np

    class ZBiasFreePlotter(object):
        def __init__(self):
            self.plot_calls = []

        def add_plot(self, f, xs, ys, *args, **kwargs):
            self.plot_calls.append((f, xs, ys, args, kwargs))

        def draw_plots(self, chunk_size=512):
            scheduled_calls = []
            for f, xs, ys, args, kwargs in self.plot_calls:
                assert len(xs) == len(ys)
                index = np.arange(len(xs))
                np.random.shuffle(index)
                index_blocks = [index[i:i + chunk_size]
                                for i in np.arange(len(index))[::chunk_size]]
                for i, index_block in enumerate(index_blocks):
                    # Only attach a label for one of the chunks
                    if i != 0 and kwargs.get("label") is not None:
                        kwargs = kwargs.copy()
                        kwargs["label"] = None
                    scheduled_calls.append((f, xs[index_block], ys[index_block], args, kwargs))
            np.random.shuffle(scheduled_calls)
            for f, xs, ys, args, kwargs in scheduled_calls:
                f(xs, ys, *args, **kwargs)

The usage would be:

    fig, ax = plt.subplots(1, 1)
    bias_free_plotter = ZBiasFreePlotter()
    for xs, ys, color in things_to_plot:
        bias_free_plotter.add_plot(ax.plot, xs, ys, c=color, ...)
    bias_free_plotter.draw_plots()

For example, this is my plot with standard z-order bias, and this is what I get using lazily cached plot calls and applying some alpha overdraw.

A few caveats:

- This only works for marker plots; line plots will get messed up by the chunking.
- Every plot advances the color cycle, i.e., with this approach it is necessary to control the color cycle manually, which can be done by something like this.
32,372
How can I remove the z-order bias of a coloured scatter plot?
Did you plot each group of data using a separate call to scatter? If you plot all your data in one pass, I think the overlaying of dots on other dots would only come from the order in which they were plotted (not 100% sure though, can't test at the moment). So, one way to get different layouts would be to randomly permute your data before plotting it, or even sort it in a non-random way, to get the desired effect. Could you paste your source code?
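A minimal sketch of the permutation idea with made-up data; the actual plot call is left commented out so the snippet stays self-contained:

```python
import numpy as np

rng = np.random.default_rng(42)
# hypothetical data: three groups that, plotted one after another,
# would each be drawn entirely on top of the previous one
xs = np.concatenate([rng.normal(g, 1.0, 300) for g in range(3)])
ys = xs + rng.normal(0.0, 0.5, xs.size)
colors = np.repeat(["red", "green", "blue"], 300)

# one shared random permutation applied to all arrays before a single call
order = rng.permutation(xs.size)
xs, ys, colors = xs[order], ys[order], colors[order]
# plt.scatter(xs, ys, c=colors) would now draw the groups interleaved
```

Because all three arrays are reindexed by the same permutation, each point keeps its own color, but no group systematically overdraws another.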
32,373
Do zero counts need to be adjusted for a likelihood ratio test of poisson/loglinear models?
One of the powers of regression modeling generally is that you can smooth over areas with no data - though as you have noticed, there are occasionally problems in estimating parameters. I would suggest that if you're getting things like infinite standard errors, it's time to reconsider your modeling approach a bit. One particular note of caution: there is a difference between having no counts in a particular stratum and it being impossible for there to be counts in that stratum. For example, imagine you're working on a study of psychological disorders for the U.S. Navy between say 2000 and 2009, and have binary regression terms for both "Is a Woman" and "Serves on a Submarine". A regression model may be able to estimate effects where both variables = 1 despite having a zero count where both = 1. However, that inference wouldn't be valid - such a circumstance is impossible. This problem is called "non-positivity" and is occasionally a problem in highly stratified models.
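A minimal sketch of flagging empty strata before fitting; counts are hypothetical, and deciding whether a zero cell is structural (like the submarine example) or just a sampling zero still needs subject-matter knowledge:

```python
from collections import Counter
from itertools import product

# hypothetical records: (is_woman, serves_on_submarine) indicator pairs
records = [(0, 0)] * 50 + [(0, 1)] * 30 + [(1, 0)] * 20   # no (1, 1) cell
counts = Counter(records)

# list every stratum with a zero count so it can be flagged for review
empty_strata = [cell for cell in product([0, 1], repeat=2) if counts[cell] == 0]
```

Any stratum in `empty_strata` is one where the model would be extrapolating, not estimating.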
32,374
How to perform logistic regression with lasso using GLMSELECT?
Code the outcome as -1 and 1, run GLMSELECT, and apply a cutoff of zero to the prediction. For a reference to this trick see Hastie, Tibshirani & Friedman, Elements of Statistical Learning, 2nd ed. (2009), page 661: "Lasso regression can be applied to a two-class classification problem by coding the outcome ±1, and applying a cutoff (usually 0) to the predictions." It's a quick and dirty trick. The lasso penalty can be applied to logistic regression proper, but it's not implemented in SAS. In that case you have to try the R packages.
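A rough sketch of the same ±1 trick outside SAS, using scikit-learn's Lasso on simulated data (all names and numbers here are illustrative, not from the question):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
# binary outcome driven mostly by the first two columns
y01 = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0.0, 0.5, 200) > 0).astype(int)

y_pm1 = 2 * y01 - 1                         # recode 0/1 as -1/+1
model = Lasso(alpha=0.05).fit(X, y_pm1)
pred = (model.predict(X) > 0).astype(int)   # cutoff at zero
accuracy = float((pred == y01).mean())
```

The squared-error lasso is a crude stand-in for penalized logistic regression, but it often classifies reasonably when the classes are roughly balanced.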
32,375
Regularized fit from summarized data: choosing the parameter
My answer will be based on a nice review of the problem by Anders Björkström, "Ridge regression and inverse problems" (I would recommend reading the whole article). Part 4 of this review is dedicated to the selection of the parameter $\lambda$ in ridge regression, introducing several key approaches:

1. The ridge trace corresponds to graphical analysis of $\hat{\beta}_{i,\lambda}$ against $\lambda$. A typical plot will depict unstable behavior of the different $\hat{\beta}_{i,\lambda}$ estimates for $\lambda$ close to zero (for a truly ill-posed problem - you have to be sure you need this regularization in any case), and almost constant behavior from some point on (roughly, we have to detect the region where all of the parameters settle into nearly constant behavior). However, the decision regarding where this almost-constant behavior starts is somewhat subjective. Good news for this approach: it does not require observing $X$ and $y$.

2. The $L$-curve plots the Euclidean norm of the vector of estimated parameters, $|\hat{\beta}_\lambda|$, against the residual norm $|y - X\hat{\beta}_\lambda|$. The shape is typically close to the letter $L$, so there exists a corner that determines where the optimal parameter lies (one may choose the point on the $L$-curve where it reaches maximum curvature, but it is better to look up Hansen's article for more details).

3. For cross-validation, a simple "leave-one-out" approach is often chosen, seeking the $\lambda$ that maximizes (or minimizes) some forecasting accuracy criterion (you have a wide range of them; RMSE and MAPE are two to begin with).

The difficulty with 2. and 3. is that you have to observe $X$ and $y$ to implement them in practice.
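A small numpy sketch of computing $L$-curve points for ridge on simulated data: as $\lambda$ grows, $|\hat{\beta}_\lambda|$ shrinks while the residual norm grows, and the corner of the resulting curve suggests a compromise (data and $\lambda$ grid are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 5
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = X @ beta_true + rng.normal(0.0, 0.3, n)

def ridge(X, y, lam):
    # closed-form ridge estimate: (X'X + lam I)^{-1} X'y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

lams = [0.01, 0.1, 1.0, 10.0, 100.0]
# one L-curve point per lambda: (||beta_lambda||, ||y - X beta_lambda||)
curve = [(float(np.linalg.norm(b)), float(np.linalg.norm(y - X @ b)))
         for b in (ridge(X, y, lam) for lam in lams)]
```

The two coordinates move in opposite directions along the $\lambda$ grid, which is exactly the trade-off the corner of the $L$-curve balances.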
32,376
How to do a 'beer and diapers' correlation analysis
In addition to the links that were given in comments, here are some further pointers: Association rules and frequent itemsets; Survey on Frequent Pattern Mining -- look around Table 1, p. 4. About Python, I guess now you have an idea of what you should be looking for, but the Orange data mining package features a package on Association rules and Itemsets (although for the latter I cannot find any reference on the website). Edit: I recently came across pysuggest, which is a Top-N recommendation engine that implements a variety of recommendation algorithms. Top-N recommender systems, a personalized information filtering technology, are used to identify a set of N items that will be of interest to a certain user. In recent years, top-N recommender systems have been used in a number of different applications, such as to recommend products a customer will most likely buy; recommend movies, TV programs, or music a user will find enjoyable; identify web-pages that will be of interest; or even suggest alternate ways of searching for information.
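The core computation behind association rules is simple enough to sketch directly; here are support and confidence for a hypothetical {diapers} -> {beer} rule over made-up transactions:

```python
from collections import Counter
from itertools import combinations

# hypothetical transactions
baskets = [
    {"beer", "diapers", "chips"},
    {"beer", "diapers"},
    {"diapers", "wipes"},
    {"beer", "chips"},
    {"beer", "diapers", "wipes"},
]

item_counts, pair_counts = Counter(), Counter()
for b in baskets:
    item_counts.update(b)
    pair_counts.update(combinations(sorted(b), 2))   # canonical pair order

# rule {diapers} -> {beer}: support of the pair, confidence given diapers
support = pair_counts[("beer", "diapers")] / len(baskets)
confidence = pair_counts[("beer", "diapers")] / item_counts["diapers"]
```

Libraries like Orange add the efficient search over all itemsets (Apriori and friends), but the reported measures are exactly these ratios.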
32,377
Singular information matrix error in lrm.fit in R
Creating dummy variables should not be necessary. You should just use factors when modeling in R.

    fsd$admityear <- factor(fsd$admityear)
    m4 <- lrm(Outcome ~ relGPA + mcAvgGPA + Interview_Z + WorkHistory_years + GMAT + UGI_Gourman + admityear, data=fsd)

If the singular condition still persists, then you have multicollinearity and need to try dropping other variables. (I would be suspicious of WorkHistory_years.) I also don't see anything ordinal about that model. Ordinal (proportional-odds) logistic regression is handled by lrm() itself in the rms package (or the no longer actively supported Design package); polr() in MASS is another option. And it would be really helpful to see the results from str(fsd).
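For comparison, what R's factor() triggers behind a formula interface is a dummy expansion; the pandas equivalent can be sketched like this (the column name is from the question, the values are made up):

```python
import pandas as pd

# hypothetical admission years stored as plain numbers
fsd = pd.DataFrame({"admityear": [2005, 2006, 2007, 2006, 2005]})

# declare the column categorical and expand it into dummy columns,
# which is what a formula interface does automatically for a factor
dummies = pd.get_dummies(fsd["admityear"].astype("category"), prefix="admityear")
```

The point of the answer stands in Python too: let the modeling layer do this expansion from a declared categorical, rather than maintaining dummy columns by hand.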
32,378
At least, how many times an experiment should be replicated?
There is no such thing as a minimum (or maximum) sample size rule. It depends on the size of the effect you are trying to measure. Your description of the experiment is slightly unclear, but consider this example: if you measured blood pressure in three different people, what could you conclude about blood pressure in the population? Likewise, if you are conducting a clinical trial and it's clear (using statistical arguments) that one of the treatments is harmful, should you continue? Another comment: in experiments concerning animals/people I would consider it unethical to conduct an experiment that has no chance of success due to low sample sizes. If in doubt, find a local friendly statistician. Most institutions have them somewhere.
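To make "it depends on the size of the effect" concrete, here is the standard normal-approximation sample-size formula for comparing two means, a rule of thumb rather than a substitute for a proper power analysis:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.8):
    # per-group n to detect a standardized mean difference d
    # with a two-sided test at level alpha and the given power
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_a + z_b) / d) ** 2)

n_medium = n_per_group(0.5)   # a "medium" effect
n_small = n_per_group(0.2)    # a "small" effect needs far more replication
```

Halving the effect size roughly quadruples the required replication, which is why no fixed minimum number of replicates can exist.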
32,379
Can I use permutation tests for to avoid the multiple comparison problem in the context of proportions?
The proposed procedure does not answer your question. It only estimates the frequency, under the null hypothesis, with which your observed order would occur. But under that null, to a good approximation, all orders are equally likely, whence your calculation will produce a value close to 1/5! = about 0.83%. That tells us nothing. One more obvious observation: the order, based on your data, is 4 > 5 > 3 > 2 > 1. Your estimates of their relative superiorities are 0.61 - 0.40 = 21%, 0.40 - 0.21 = 11%, etc. Now, suppose your question concerns the extent to which any of the ${5 \choose 2} = 10$ differences in proportions could be due to chance under the null hypothesis of no difference. You can indeed evaluate these ten questions with a permutation test. However, in each iteration you need to track ten indicators of relative difference in proportion, not one global indicator of the total order. For your data, a simulation with 100,000 iterations gives the results \begin{array}{ccccc} & 5 & 4 & 3 & 2 \cr 1 & 0.02439 & 0.0003 & 0.13233 & 0.29961 \cr 2 & 0.09763 & 0.00374 & 0.29222 & \cr 3 & 0.20253 & 0.00884 & & \cr 4 & 0.08702 & & & \end{array} The differences in proportions between method 4 and methods 1, 2, and 3 are unlikely to be due to chance (with estimated probabilities 0.03%, 0.37%, 0.88%, respectively) but the other differences might be. There is some evidence (p = 2.44%) of a difference between methods 1 and 5. Thus it appears you can have confidence that the differences in proportions involved in the relationships 4 > 3, 4 > 2, and 4 > 1 are all positive, and most likely so is the difference in 5 > 1.
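A bare-bones sketch of one such pairwise permutation comparison, with made-up sample sizes of 100 per method (the full procedure would track all ten pairwise differences in each iteration):

```python
import random

random.seed(0)
# hypothetical outcomes: method A with 61/100 successes, method B with 40/100
a = [1] * 61 + [0] * 39
b = [1] * 40 + [0] * 60
observed = sum(a) / len(a) - sum(b) / len(b)

pooled = a + b
n_iter = 10000
extreme = 0
for _ in range(n_iter):
    random.shuffle(pooled)                 # relabel under the null
    diff = sum(pooled[:100]) / 100 - sum(pooled[100:]) / 100
    if abs(diff) >= abs(observed):
        extreme += 1
p_value = extreme / n_iter
```

Each of the ten pairs gets its own indicator like `extreme`, updated from the same shuffled pool, so one simulation run yields the whole table of p-values.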
32,380
Can I use permutation tests for to avoid the multiple comparison problem in the context of proportions?
Your suggested Monte-Carlo permutation test procedure will produce a p-value for a test of the null hypothesis that the probability of success is the same for all methods. But there's little reason for doing a Monte Carlo permutation test here when the corresponding exact permutation test is perfectly feasible. That's Fisher's exact test (well, some people reserve that name for 2x2 tables, in which case it's a conditional exact test). I've just typed your data into Stata and -tabi ..., exact- gave p=.0067 (for comparison, Pearson's chi-squared test gives p=.0059). I'm sure there's an equivalent function in R which the R gurus will soon add. If you really want to look at ranking, you may do best with a Bayesian approach, as it can give a simple interpretation as the probability that each method is truly the best, second best, third best, ... . That comes at the price of requiring you to put priors on your probabilities, of course. The maximum likelihood estimate of the ranks is simply the observed ordering, but it's difficult to quantify the uncertainty in the ranking in a frequentist framework in a way that can be easily interpreted, as far as I'm aware. I realise I haven't mentioned multiple comparisons, but I just don't see how that comes into this.
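A quick Python sketch with made-up counts for five methods (scipy's fisher_exact handles only 2x2 tables, so the chi-squared test is shown for the 2x5 table; the exact r x c test needs dedicated routines):

```python
from scipy.stats import chi2_contingency

# hypothetical 2 x 5 table: successes and failures for five methods
successes = [21, 30, 35, 61, 40]
failures = [100 - s for s in successes]
chi2, p, dof, expected = chi2_contingency([successes, failures])
```

With counts this unbalanced the global test rejects easily, matching the spirit of the Stata output quoted above, though the numbers here are not the question's data.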
32,381
How can I use optimal scaling to scale an ordinal categorical variable?
The general idea is that you should scale the categorical variable in such a way that the resulting continuous variable is as useful as possible. So it is always coupled with some regression or learning procedure, and the fitting of the model is accompanied by optimization (or trying various possibilities) of the ordinal variable's scaling. For some more practical issues, consult the docs of the R aspect and homals packages.
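A toy sketch of the idea: try candidate monotone scorings of an ordinal predictor and keep the one that makes a simple least-squares fit best. Everything here is simulated; real optimal-scaling routines search the scorings continuously rather than over a short list:

```python
import random

random.seed(1)
# hypothetical ordinal predictor with 3 levels and a numeric response
levels = [0, 1, 2]
x = [random.choice(levels) for _ in range(200)]
true_scores = {0: 0.0, 1: 2.5, 2: 3.0}          # non-equidistant true spacing
y = [true_scores[v] + random.gauss(0, 0.3) for v in x]

def sse(scores):
    # residual sum of squares of a simple regression of y on the rescaled x
    xs = [scores[v] for v in x]
    mx, my = sum(xs) / len(xs), sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, y))
    sxx = sum((a - mx) ** 2 for a in xs)
    b1 = sxy / sxx
    return sum((b - my - b1 * (a - mx)) ** 2 for a, b in zip(xs, y))

# candidate monotone scalings for the middle level; keep the best-fitting one
candidates = [{0: 0.0, 1: m, 2: 3.0} for m in (0.5, 1.5, 2.5)]
best = min(candidates, key=sse)
```

The selected scaling recovers the true non-equidistant spacing, which equally spaced integer codes would have missed.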
32,382
Clustering of large, heavy-tailed dataset
You could opt for a supervised self-organizing map (e.g. with the kohonen package for R), and use the login frequency as the dependent variable. That way, the clustering will focus on separating the frequent visitors from the rare visitors. By plotting the number of users on each map unit, you may get an idea of the clusters present in your data. Because SOMs are non-linear mapping methods, this approach is particularly interesting for heavy-tailed data.
32,383
Clustering of large, heavy-tailed dataset
This is the problem with clustering: you can't tell what is considered a cluster. I would work out the reason behind clustering the users, specify my threshold value, and use hierarchical clustering. In my experience, one has to set either the number of clusters needed, or the threshold value (the distance value that binds two data points together).
32,384
Clustering of large, heavy-tailed dataset
K-Means clustering should work well for this type of problem. However, it does require that you specify the number of clusters in advance. Given the nature of this data, though, you may be able to work with a hierarchical clustering algorithm instead. Since all 4 variables are most likely fairly highly correlated, you can break out clusters and stop when you reach a small enough distance between clusters. This may be a much simpler approach in this specific case, and it allows you to determine "how many clusters" by just stopping as soon as you've broken your set into fine enough clusters.
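The hierarchical alternative can be sketched with SciPy (simulated data and a hypothetical distance threshold, purely for illustration):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
# Simulated stand-in for 4 correlated, positively skewed usage variables.
base = rng.lognormal(mean=0.0, sigma=1.0, size=(200, 1))
X = base * rng.uniform(0.5, 1.5, size=(200, 4))

# Ward linkage on log-scaled data; cut the dendrogram at a distance
# threshold instead of fixing the number of clusters in advance.
Z = linkage(np.log1p(X), method="ward")
labels = fcluster(Z, t=5.0, criterion="distance")
n_clusters = len(np.unique(labels))  # implied by the threshold, not chosen
```

The threshold `t=5.0` is arbitrary here; in practice you would tune it by inspecting the dendrogram.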
32,385
Clustering of large, heavy-tailed dataset
You might consider transforming (perhaps a log) the positively skewed variables. If after exploring various clustering algorithms you find that the four variables simply reflect varying intensity levels of usage, you might think about a theoretically based classification. Presumably this classification is going to be used for a purpose and that purpose could drive meaningful cut points on one or more of the variables.
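A minimal sketch of the transform-and-cut idea (the cut points are hypothetical; in practice they should come from the purpose of the classification):

```python
import numpy as np

rng = np.random.default_rng(2)
logins = rng.lognormal(mean=1.0, sigma=1.2, size=1000)  # positively skewed

# A log transform pulls in the long right tail before any clustering.
log_logins = np.log1p(logins)

# Theoretically based classification: purpose-driven cut points
# (hypothetical thresholds for rare / regular / heavy users).
classes = np.digitize(logins, bins=[1.0, 10.0])  # yields class 0, 1, or 2
```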
32,386
Bias Variance tradeoff in neural networks
The existence of a bias-variance tradeoff has been assumed as inevitable (i.e., an axiom) in any model using data, including neural networks. However, it has been observed since about 2018 that surprisingly, some cases of very large deep neural networks, trained with a correspondingly sufficiently large dataset, do not exhibit the classical bias-variance tradeoff. This means that these networks also generalize better. This phenomenon, termed "double descent", has been duplicated by other researchers. See for example: "Reconciling modern machine-learning practice and the classical bias–variance trade-off", 2019, by Belkin, Hsu, Ma, Mandal, https://www.pnas.org/doi/10.1073/pnas.1903070116. As of 2022, conclusively explaining this phenomenon is still an open research question, but there have recently been interesting inroads to answering it. For example, the following paper is a theoretical explanation to justify this mysterious phenomenon: "A Universal Law of Robustness via Isoperimetry", 2021, by Bubeck and Sellke, https://arxiv.org/abs/2105.12806 (Outstanding Paper award at NeurIPS 2021) The analysis/explanation is based on a network having a small Lipschitz constant (maximum value of the gradients), meaning the function represented by the network is smooth. The paper also claims that in addition to good generalization, such a phenomenon also implies better robustness to adversarial attacks. The analysis is not limited to neural networks, but is general enough for many other function approximations (including Reproducing Kernel Hilbert Space). The paper gives specific guidance on the number of parameters vs. the amount of data for this phenomenon to occur. Emphasis: This "double descent" phenomenon does not occur in all deep neural networks trained with a correspondingly sufficiently large dataset. 
Rather, according to Bubeck and Sellke, it depends on the number of input data points, the effective dimension of the classification, the depth of the network (number of layers), and the overall number of parameters in the network. Therefore, in other cases, neural networks, even deep ones, will still exhibit the bias-variance tradeoff. In a sense, this guidance on parameter values in the Bubeck and Sellke paper can be regarded as a falsifiable prediction as to whether their analysis/explanation is (in)correct.
32,387
Bias Variance tradeoff in neural networks
Bias-variance trade-off is an old-fashioned concept from classical statistics which fails to be useful in the high-dimensional setting. Here's an example of a famous statistician being surprised that overfitting is reduced by increasing the number of parameters in linear regression. A better way to explain the good performance of neural networks is through the lens of statistical learning theory. One direction of work shows that if A) your learner is not very sensitive to small changes in your training set, and B) fits the training data, then it will also fit test data. See, for instance, this paper by Bousquet. Hastie's paper shows that adding parameters restricts the final solution to a smaller L2-norm ball, hence improving stability A). At the same time, adding parameters can improve the training fit, hence improving B). B) is actually the harder part; much of modern progress in NNs has been achieved by coming up with clever ways of fitting the training data, ignoring the generalization aspect.
32,388
Bias Variance tradeoff in neural networks
There are other theorems in mathematical analysis about converging to decent functions (the Stone–Weierstrass theorem about polynomials and Carleson's theorem about Fourier series come to mind). However, neural networks decrease the bias more than other regression models by taking to the extreme the idea of nonlinearity and interaction. A neural network with millions of parameters is routine. A generalized linear model with millions of parameters, due to nonlinear basis functions (e.g., polynomials or splines) and their interactions, is not as common. If you put in all of those nonlinear features and their interactions in a generalized linear model, taking it to a similar extreme as a neural network, I would expect similar issues of high variance and low bias. In fact, there is a sense in which a neural network (at least some of them) involves a layer (or multiple layers) of feature extraction and then a generalized linear model on the extracted features. After all, if you draw out the usual "web" of a neural network and cover up everything before the final hidden layer, you wind up with something that looks like a generalized linear model.
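That last observation can be made concrete in a few lines of NumPy (the weights are random here, only to show the structure; a trained network would have learned them):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))            # 5 observations, 3 raw inputs

# Hidden layer: nonlinear feature extraction.
W1, b1 = rng.normal(size=(3, 8)), np.zeros(8)
H = np.tanh(X @ W1 + b1)               # extracted features

# Cover up everything before the final layer and what remains is a GLM:
# a logistic regression on the extracted features H.
w2, b2 = rng.normal(size=8), 0.0
p = 1.0 / (1.0 + np.exp(-(H @ w2 + b2)))  # logistic link on a linear predictor
```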
32,389
Bias Variance tradeoff in neural networks
Sidenote: It depends on the situation The question is a bit of a loaded question. It presupposes that neural networks are better. But whether neural networks are better depends on the situation. If parametric models are applicable, then often these will work better. If the data generation doesn't follow a complex pattern (so there is nothing specific for a deep neural network to learn), then often the other, shallower machine learning methods work better. Still, the question is a fair one. The observation that over-parameterized models perform surprisingly well in particular settings, without overfitting (and whether this happens only with neural networks is actually not relevant), is real. Flat vs Deep The relationship between deep neural networks and more shallow machine learning methods (shallow could mean kernel methods or decision trees, though these can also be very complicated) is like the relationship between Copernicus' model of the solar system and Ptolemy's model. In some way, the kernel methods and decision trees are like glorified methods for smoothing or averaging data. There is no strong connection with any simple underlying process or patterns that may create the observations. The methods only superficially learn how to describe the observations in a way that can be interpolated or extrapolated with sufficient accuracy. If you add more data, the methods gain some precision in the area where the data has been added, but their learning capacity stagnates, because the models never gain a 'higher level of understanding' of the patterns, or a breakdown of the complexity of the observations into simple building blocks. 
On the other hand, deep neural networks are sort of like applying Occam's razor: they bring the complex gamut of observations down to a simplified underlying principle, which is captured by the organization of the network (and with every extra network layer the possibilities grow multiplicatively, increasing the potential complexity and simplifying power). The neural network tries to learn the observations by learning a pattern behind them. Double descent phenomenon Over-parameterized networks are susceptible to fitting noise, but the simplest patterns are easier to learn and will become dominant. This is either due to some explicit regularization (obvious in ridge regression or the Lasso) or due to some implicit regularization, like limits on learning rates and stochastic gradient descent. In this respect, an example of more parameters working better is seen in this question: Is ridge regression useless in high dimensions ($n \ll p$)? How can OLS fail to overfit?. Another effect could be that the multiple parameters work a bit like gradient boosting and are blended together. When we speak about gradient boosting, a lot of parameters are fitted, more than the number of data points, but the method uses some averaging and we do not really consider it as increasing the flexibility. This can happen in a similar way in deep neural networks: the first layers create several parallel pattern-recognition models that are blended together in later layers.
32,390
How to estimate discrete probability distribution from a dataset of pairwise frequencies?
Since you have a good idea, let's formalize it. Begin with notation. We need to suppose you know all the possible types of vehicle. Let's index them with whole numbers $1,2,\ldots,k.$ You assume a constant chance $p_i$ that vehicle $i$ will appear at any time, so (obviously) these chances are non-negative and sum to unity. When on a particular day $t$ you count only vehicles of types $i$ and $j\ne i,$ let $x_{ti}$ and $x_{tj}$ be those counts. I will assume you decide, independently of any of the preceding data or other related information, which vehicles to count. (Thus, for instance, you don't just count the first two types of vehicles you encounter: that would provide more information.) If the appearances of the vehicles are independent, then the chance of observing $x_{ti}$ and $x_{tj}$ must be proportional to the powers of the probabilities, $$\Pr(x_{ti}, x_{tj}) \,\propto\, (p_i+p_j)^{-(x_{ti}+x_{tj})}\,p_i^{x_{ti}}\,p_j^{x_{tj}}.$$ Proceeding from day to day, again assuming independence, these probabilities continue to multiply. Thus, for each pair $\{i,j\},$ we may once and for all collect our counts $x_{ti}$ and $x_{tj}$ over all the days $t$ when we observed both these vehicle types. Let $X_{ij}$ be the sum of all such $x_{tj}$ (and, therefore, $X_{ji}$ is the sum of all such $x_{ti}$). Set $X_{ii}=0$ for all $i.$ For example, indexing Truck, Car, Motorcycle, and Van with the numbers $1$ through $4,$ respectively, the data in the question yield the matrix $$X = \begin{array}{l|rrrr} & \text{Truck} & \text{Car} & \text{Cycle} & \text{Van} \\ \hline\text{Truck} & 0 & 65 & 12 & 12 \\ \text{Car} & 30 & 0 & 19 & 0 \\ \text{Cycle} & 25 & 72 & 0 & 11 \\ \text{Van} & 44 & 0 & 14 & 0 \end{array}$$ For instance, on days when you were observing trucks ($1$) and cars ($2$), you saw a total of $X_{12}=65$ cars and $X_{21} = 30$ trucks. 
Let $\mathbf{p} = (p_i)$ be the vector of probabilities and let $\mathbf{x} = (x_j) = \left(\sum_{i=1}^k X_{ij}\right)$ represent the total counts of each type of vehicle (the column sums of $X$). Upon taking logarithms of the total probability, we obtain a log likelihood $$\Lambda = C + 2 \mathbf{x}\cdot \log \mathbf{p} - \sum_{i\ne j} (X_{ij}+X_{ji})\log(p_i+p_j)$$ where $C$ is what became of all the implicit constants of proportionality. Maximize this to obtain the maximum likelihood estimate of $\mathbf p.$ I obtain the value $\hat{\mathbf p} = (0.26, 0.53, 0.13, 0.081).$ Generally, with more than two types of vehicles, the equations for the critical points of the likelihood are polynomials of degree two or higher: you won't obtain closed formulas. This requires numerical optimization.
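A sketch of that numerical optimization in Python, using the matrix $X$ and the log likelihood above (the softmax re-parameterization onto the simplex is one convenient choice, not part of the answer):

```python
import numpy as np
from scipy.optimize import minimize

# X[i, j] = total count of vehicle type j on days when types i and j
# were both observed (the matrix from the text).
X = np.array([[ 0, 65, 12, 12],
              [30,  0, 19,  0],
              [25, 72,  0, 11],
              [44,  0, 14,  0]])
x = X.sum(axis=0)  # total counts of each type (column sums)

def neg_log_lik(q):
    p = np.exp(q - q.max()); p /= p.sum()  # softmax onto the simplex
    ll = 2 * x @ np.log(p)
    for i in range(4):
        for j in range(4):
            if i != j and X[i, j] + X[j, i] > 0:
                ll -= (X[i, j] + X[j, i]) * np.log(p[i] + p[j])
    return -ll

res = minimize(neg_log_lik, np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
p_hat = np.exp(res.x - res.x.max()); p_hat /= p_hat.sum()
# Close to (0.26, 0.53, 0.13, 0.081) reported in the text.
```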
32,391
How to estimate discrete probability distribution from a dataset of pairwise frequencies?
I have a proposition that is near to what you thought, but with an additional step of working with an exact marginal distribution. This will be a long answer. Skip the mathematics if you are not inclined. The conclusions about the computations is explained after they are made. The model Let $(N_t, X_{ti}, X_{tj})$ be the total number of vehicles, the number of vehicles of type $i$ and the number of vehicles of type $j$, $i\neq j$ , on the $t$-th day. The random variable $N_t$ is latent, meaning it is not observed directly. The idea is to propose a reasonable joint distribution for this triplet, and then compute the marginal of distribution of $(X_{ti}, X_{tj})$, which will then be used to compute the proportions. For this idea to work, we have to propose first a distribution for $N_t$. Since we are working with a countable variable, I will assume that $N_t \sim \mbox{Poisson}(\lambda)$. For the conditional distribution of $(X_{ti}, X_{tj})$, we propose $$\mathbb{P}(X_{ti} = x_i, X_{tj} = x_j|N_t = n) = \frac{n!}{x_i!x_j!(n-x_i-x_j)!}p_i^{x_i}p_j^{x_j}(1 - p_i - p_j)^{n-x_i-x_j} \quad.$$ That is, they are a trinomial distribution with $N_t = n$ observations, where the $(1 - p_i - p_j)$ term correspond to the probability of observing vehicles of type different than $i$ and $j$. The joint distribution can be written as $$\mathbb{P}(N_t = n, X_{ti} = x_i, X_{tj} = x_j) = \mathbb{P}(N_t = n) \mathbb{P}(X_{ti} = x_i, X_{tj} = x_j|N_t = n) \quad.$$ We have analytical an expression for the RHS. However, we are interest in the marginal $$\mathbb{P}(X_{ti} = x_i, X_{tj} = x_j) = \sum_{n=0}^\infty \mathbb{P}(N_t = n) \mathbb{P}(X_{ti} = x_i, X_{tj} = x_j|N_t = n) \quad.$$ Before evaluating the summation, notice that $\mathbb{P}(X_{ti} = x_i, X_{tj} = x_j|N_t = n) = 0$ when $n < x_i + x_j$. 
Now we evaluate the summation \begin{align} \mathbb{P}(X_{ti} = x_i, X_{tj} = x_j) &= \sum_{n=0}^\infty \frac{e^{-\lambda}\lambda^{n}}{n!} \frac{n!}{x_i!x_j!(n-x_i-x_j)!}p_i^{x_i}p_j^{x_j}(1 - p_i - p_j)^{n-x_i-x_j}\\ &= \frac{e^{-\lambda}p_i^{x_i}p_j^{x_j}}{x_i!x_j!}\sum_{n=x_i+x_j}^\infty \frac{\lambda^{n}}{(n-x_i-x_j)!}(1 - p_i - p_j)^{n-x_i-x_j}\\ &= \frac{e^{-\lambda}p_i^{x_i}p_j^{x_j}}{x_i!x_j!}\sum_{k = 0}^\infty \frac{\lambda^{k+x_i+x_j}}{k!}(1 - p_i - p_j)^{k}\\ &= \frac{e^{-\lambda}\lambda^{x_i+x_j}p_i^{x_i}p_j^{x_j}}{x_i!x_j!}\sum_{k = 0}^\infty \frac{(\lambda(1 - p_i - p_j))^{k}}{k!}\\ &= \frac{e^{-\lambda}\lambda^{x_i+x_j}p_i^{x_i}p_j^{x_j}}{x_i!x_j!}e^{\lambda(1-p_i-p_j)}\\ &= \frac{e^{-\lambda p_i}(\lambda p_i)^{x_i}}{x_i!}\frac{e^{-\lambda p_j}(\lambda p_j)^{x_j}}{x_j!}\\ \end{align} After these exausting computations, we have a great result: $X_{ti}$ and $X_{tj}$ are marginally independent. Moreover, their distributions is $X_{ti} \sim \mbox{Poisson}(\lambda p_i)$ for all $i \, \in \, \{1,\ldots, k\}$, where $k$ is the number of types of vehicles. With this, you can write the likelihood as a product of independent Poissons for each type of vehicle, where the number of observations for each will vary on how you choose the vehicle counting. Estimation Write $\lambda_i = \lambda p_i$. Let $n_i$ be total number of days you chose to observe the $i$-th type of vehicle, and let $S_i$ be the sum of the observations of that vehicle type. The maximum likelihood estimator for $\lambda_i$ is $$\hat{\lambda}_i = \frac{1}{n_i}S_i \quad,$$ That is, it is just the average of the observations. But we do not want to estimate $\lambda_i$, we want to estimate $p_i$. 
Well, we know that $$1 = \sum_{i=1}^k p_i \quad.$$ Multiplying both sides by $\lambda$, we have $$\lambda = \sum_{i=1}^k \lambda_i \quad.$$ By the invariance property of the MLE, we have $$\hat{\lambda} = \sum_{i=1}^k \hat{\lambda}_i \quad.$$ But $p_i = \lambda_i/\lambda$, hence $$\hat{p}_i = \frac{\hat{\lambda}_i}{\hat{\lambda}} \quad.$$ Therefore, we can estimate the proportions $p_i$, and also the expected number of vehicles $\lambda$! Your example To show that this approach might work, let's compute each parameter in your data example: \begin{align} &\hat{\lambda}_{truck} = (30+25+44)/3 = 33\\ &\hat{\lambda}_{car} = (65+72)/2 = 68.5\\ &\hat{\lambda}_{cycle} = (12+19+14)/3 = 15\\ &\hat{\lambda}_{van} = (12+11)/2 = 11.5 \end{align} For $\lambda$, we have $$ \hat{\lambda} = \hat{\lambda}_{truck} + \hat{\lambda}_{car} + \hat{\lambda}_{cycle} + \hat{\lambda}_{van} = 128 \quad.$$ For the probabilities \begin{align} &\hat{p}_{truck} = \hat{\lambda}_{truck}/\hat{\lambda} = 0.2578\\ &\hat{p}_{car} = \hat{\lambda}_{car}/\hat{\lambda} = 0.5351\\ &\hat{p}_{cycle} = \hat{\lambda}_{cycle}/\hat{\lambda} = 0.1171\\ &\hat{p}_{van} = \hat{\lambda}_{van}/\hat{\lambda} = 0.0898 \end{align} The probabilities are strikingly similar to those provided by @whuber. The difference is that they are very easy to compute; no optimization is required. Final Analysis Here is a final overall comparison of this approach. Advantages: it is very easy to compute the estimators, since they are analytical; you can probably perform hypothesis testing, if you wish; and you can estimate the total number of vehicles. Disadvantages: we assumed that $N_t$ does not vary with the day, which might be false due to weekly or monthly seasonality; and I do not know how you could check whether the Poisson distribution is adequate for $N_t$ with the data at hand.
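The arithmetic above is a one-liner per type; here is a minimal Python sketch (dictionary names are mine) reproducing the estimates from the example data:

```python
# Daily counts from the example: each type is observed on the days
# when it was one of the two types being tallied.
counts = {
    "truck": [30, 25, 44],
    "car":   [65, 72],
    "cycle": [12, 19, 14],
    "van":   [12, 11],
}

# The MLE of lambda_i is the mean of the observed daily counts for type i.
lam_hat = {k: sum(v) / len(v) for k, v in counts.items()}

# lambda-hat is the sum of the lambda_i-hats; p_i-hat = lambda_i-hat / lambda-hat.
lam_total = sum(lam_hat.values())
p_hat = {k: v / lam_total for k, v in lam_hat.items()}

print(lam_total)        # 128.0
print(p_hat["truck"])   # 0.2578125
```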
How to estimate discrete probability distribution from a dataset of pairwise frequencies?
With some help from @whuber and @Lucas Prates, I've put together this solution (which is pretty close to @whuber's). My general approach is to use maximum likelihood estimation in the following manner: Index the vehicle types in alphabetical order, car = 1, motorcycle = 2, truck = 3, van = 4. Initialize arbitrary probabilities $p_1 = 0.25$, $p_2 = 0.25$, $p_3 = 0.25$, and $p_4 = 0.25$, where $p_i$ gives the true probability of observing vehicle $i$ next. Let $p_{jk}$ be the probability of observing vehicle $j$ next when we restrict ourselves to viewing only vehicles of class $j$ and $k$. Assuming independence, we get $p_{jk} = \frac{p_j}{p_j + p_k}$. Let $x_{jk}$ be the number of $j$ vehicles we observed when we restricted ourselves to viewing only vehicles of class $j$ and $k$; $x_{jk}$ is a binomial random variable. Thus, if we believe in our discrete probability distribution, the probability of observing $x_{jk}$ is $$ \binom {n_{jk}}{x_{jk}}p_{jk}^{x_{jk}}(1-p_{jk})^{n_{jk}-x_{jk}} $$ The probability of all our pair-wise observations will just be the product of their individual probabilities. So, our likelihood function looks like this: $$ L(p,x)=\prod_{jk} \binom {n_{jk}}{x_{jk}}\left( p_{jk}^{x_{jk}}(1-p_{jk})^{n_{jk}-x_{jk}} \right) $$ where $jk$ spans all pair-wise indexes that we observed. However, since our goal is merely to find the $\bf p$ that maximizes the likelihood function, the $n_{jk}$ and $x_{jk}$ terms are constants, so we can drop the $\binom {n_{jk}}{x_{jk}}$ terms. Next, we take the log to get $$ log(L)= \sum_{jk} x_{jk}log(p_{jk})+ (n_{jk}-x_{jk})log(1-p_{jk}) $$ Then we can calculate the partial derivative of $log(L)$ w.r.t. 
$p_i$ as $$ \frac{\partial log(L)}{\partial p_i} = \sum_{jk:\,j=i} \left( \frac{x_{jk}}{p_{jk}} - \frac{n_{jk} - x_{jk}}{1-p_{jk}} \right) \frac{p_k}{(p_j + p_k)^2} + \sum_{jk:\,k=i} \left( \frac{x_{jk}}{p_{jk}} - \frac{n_{jk} - x_{jk}}{1-p_{jk}} \right) \frac{-p_j}{(p_j + p_k)^2} $$ This looks nasty, but we know all these terms, so we can plug and chug. I.e. we can use gradient ascent to find the $\bf p$ that maximizes $log(L)$. I've implemented this in R:

```r
library(data.table)

get_discrete_dist <- function(pairsDT, alpha = 0.0001, iters = 1000){

  # Copy input data.table
  pairsDTCopy <- copy(pairsDT)

  # Insert total obs
  pairsDTCopy[, n_jk := qty1 + qty2]

  # initialize probabilities for every class
  vehicles <- sort(unique(c(pairsDTCopy$vehicle1, pairsDTCopy$vehicle2)))
  vehicles <- data.table(vehicle = vehicles)
  vehicles[, p := 1/.N]

  for(i in seq_len(iters)){

    # Insert probabilities into pairsDTCopy
    pairsDTCopy[vehicles, p1 := i.p, on = c("vehicle1" = "vehicle")]
    pairsDTCopy[vehicles, p2 := i.p, on = c("vehicle2" = "vehicle")]

    # Calculate log likelihood
    pairsDTCopy[, p_jk := p1/(p1 + p2)]
    pairsDTCopy[, logl := qty1*log(p_jk) + (n_jk - qty1)*log(1 - p_jk)]
    logl <- sum(pairsDTCopy$logl)  # -1316.98
    print(paste0("iter: ", i, ", log likelihood: ", logl))

    # Calculate the gradient (partial log likelihood / partial p_i for every class)
    g1 <- pairsDTCopy[, list(grad = sum((qty1/p_jk - (n_jk - qty1)/(1 - p_jk)) * (p2)/(p1 + p2)^2)),
                      keyby = list(vehicle = vehicle1)]
    g2 <- pairsDTCopy[, list(grad = sum((qty1/p_jk - (n_jk - qty1)/(1 - p_jk)) * (-p1)/(p1 + p2)^2)),
                      keyby = list(vehicle = vehicle2)]
    grads <- rbind(g1, g2)[, list(grad = sum(grad)), keyby = vehicle]

    # Update probabilities (gradient step), then normalize
    vehicles[grads, grad := i.grad, on = "vehicle"]
    vehicles[, p := p + alpha * grad]
    vehicles[, p := p/sum(p)]
  }

  # Return
  return(vehicles[])
}

pairsDT <- data.table(
  vehicle1 = c("truck", "truck", "motorcycle", "van", "van"),
  vehicle2 = c("car", "motorcycle", "car", "motorcycle", "truck"),
  qty1 = c(30, 25, 19, 11, 12),
  qty2 = c(65, 12, 72, 14, 44)
)

get_discrete_dist(pairsDT, alpha = 0.0001, iters = 100)
#        vehicle          p       grad
# 1:         car 0.52790741  0.5766159
# 2:  motorcycle 0.12749926 -0.2156412
# 3:       truck 0.26296780 -0.9608602
# 4:         van 0.08162554 -0.2951460
```
Why use the Wald test in logistic regression?
In logistic regression (& other generalized linear models with canonical link functions), the coefficient estimates $\hat\theta$ are arrived at by Fisher Scoring: iterating $$\vec\theta_{k+1} = \vec\theta_k + \mathcal{I}^{-1}(\vec\theta_k)U(\vec\theta_k)$$ where $\mathcal{I}$ is the Fisher information & $U$ the score, until convergence. When you're done, you're left with the covariance matrix $\mathcal{I}^{-1}$ for the coefficient estimates; its diagonal elements are the variances (& their square roots the standard errors) you need for Wald tests of each coefficient. So you get Wald tests for free, almost, just by fitting a model; but likelihood-ratio tests require fitting a new model for each coefficient you want to test—with a large sample size & many predictors they'd take a good while longer to conduct. (This is also true more generally: if you're using observed information (the negative Hessian of the log likelihood) rather than expected information; or even if you're finding maximum likelihood estimates with an algorithm that doesn't involve calculating the Hessian, it's quicker to evaluate the Hessian numerically than to fit lots of models.) If the point of logistic regression were to always test whether each & every coefficient is equal to zero, then there'd be an argument for statistical software's defaulting to the likelihood-ratio test when displaying a summary of the fitted model. But as that isn't always, or even often, the point—& especially as with some models many of the hypotheses tested may well be of no interest at all in general (see What value does one set, in general, for the null hypothesis of β0 in a linear regression model?)— it makes sense to provide the Wald tests & leave the analyst to choose which, if any, further tests to conduct, & what method to use.† (It would also make sense to provide no tests, & force the analyst to think about which, if any, tests to conduct, &c.) 
† I don't know of any R function to conduct LRTs for all coefficients of a model individually—it wouldn't be hard to write one—but both stats:::drop1 & car:::Anova conduct them for a default set of null hypotheses more likely to be of interest. NB invariance to reparametrization means only that the LRT for, say, $H_0: \beta_7 =0$ is the same as the LRT for $H_0: \frac{1}{1+\mathrm{e}^{-\beta_7}}=1$ (which isn't the case for the Wald test). Replacing $\beta_7$ with $\log \beta_7$, on the other hand, would be fitting a substantively different model.
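To make the "Wald tests are almost free" point concrete, here is a from-scratch Python sketch (toy data of my own) that fits a one-predictor logistic regression by Fisher scoring, reads the Wald statistic off the inverse information matrix from the fit, and then refits the null model to get the likelihood-ratio statistic:

```python
import math

# Toy data: the outcome tends to 1 for larger x (no perfect separation)
x = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
y = [0, 0, 0, 0, 1, 0, 1, 1, 1, 1]

def score_info(b0, b1, with_x):
    """Score vector U and expected information I at (b0, b1)."""
    u0 = u1 = i00 = i01 = i11 = 0.0
    for xi, yi in zip(x, y):
        mu = 1.0 / (1.0 + math.exp(-(b0 + (b1 * xi if with_x else 0.0))))
        w = mu * (1.0 - mu)
        u0 += yi - mu
        i00 += w
        if with_x:
            u1 += (yi - mu) * xi
            i01 += w * xi
            i11 += w * xi * xi
    return u0, u1, i00, i01, i11

def log_lik(b0, b1, with_x):
    ll = 0.0
    for xi, yi in zip(x, y):
        mu = 1.0 / (1.0 + math.exp(-(b0 + (b1 * xi if with_x else 0.0))))
        ll += yi * math.log(mu) + (1 - yi) * math.log(1.0 - mu)
    return ll

def fit(with_x):
    """Fisher scoring: theta <- theta + I^{-1} U, iterated to convergence."""
    b0 = b1 = 0.0
    for _ in range(25):
        u0, u1, i00, i01, i11 = score_info(b0, b1, with_x)
        if with_x:
            det = i00 * i11 - i01 * i01
            b0 += (i11 * u0 - i01 * u1) / det
            b1 += (i00 * u1 - i01 * u0) / det
        else:
            b0 += u0 / i00
    return b0, b1

b0, b1 = fit(with_x=True)
_, _, i00, i01, i11 = score_info(b0, b1, True)
var_b1 = i00 / (i00 * i11 - i01 * i01)   # [I^{-1}]_{11}: a by-product of the fit
wald = b1 * b1 / var_b1                  # Wald chi-square for H0: beta1 = 0

b0_null, _ = fit(with_x=False)           # the LRT needs this extra model fit
lrt = 2.0 * (log_lik(b0, b1, True) - log_lik(b0_null, 0.0, False))
```

Both statistics are asymptotically $\chi^2_1$ under the null; the Wald version needed nothing beyond quantities already computed during the fit, while the LRT required refitting a second model.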
Probability distribution associated with nuclear norm?
We can derive the normalizing constant $C(\lambda):=\int e^{-\lambda ||M||_*}dM$ using the smooth coarea formula. Suppose we are given smooth functions $F: \mathbb{R}^n\to\mathbb{R}$ and $f: \mathbb{R}\to\mathbb{R}$. The formula states that $$\int_{\mathbb{R}^n} f(F(x))dx=\int_{t\in \mathbb{R}}\left(\int_{x\in F^{-1}t} f(x) |\nabla F(x)|^{-1}d H_t(x) \right)dt$$ where $dH_t$ denotes the surface measure on the preimage $F^{-1}t$. In our case, we'll have $f(x)=e^{-\lambda x}$ and $F(M)=||M||_*$. Also, denote the dimensions of $M$ by $n,m$. So first of all we'll need to compute $|\nabla F(M)|$. By the results in these notes, we can write down the derivatives of the singular values of a matrix. Suppose we have an SVD decomposition $M=USV^t$, with $U^tU=I_{min(n,m)}=V^tV$ and $S$ the diagonal matrix of singular values (we can assume $M$ is full rank, because the corresponding set of matrices has full measure). Then the differential with respect to the $i$th singular value is given by $dS_i=(U^tdMV)_{ii}$. Summing over these differentials, and recalling that the nuclear norm is the sum of singular values, we get \begin{eqnarray*} d||M||_{*}&=&Tr(U^tdMV)\\ &=&\sum_i \sum_{a,b}U[a,i]V[b,i]dM_{ab}\\ & = & \sum_{a,b} dM_{ab} \sum_i U[a,i]V[b,i]\\ & = & \sum_{ab}dM_{ab} (UV^t)_{ab} \end{eqnarray*} In other words, if we consider the gradient $\nabla ||M||_*$ with respect to the entries of $M$, then the $ab$ coefficient is given by $(UV^t)_{ab}$. Now computing the norm of the gradient is straightforward: \begin{eqnarray*} |\nabla ||M||_*|^2& = & \sum_{ab} (UV^t)_{ab}^2\\ & = & Tr((UV^t)(UV^t)^t)\\ & = & Tr(UV^tVU^t)\\ & = & Tr(U^tUV^tV)\\ & = & Tr(I_{min(n,m)}I_{min(n,m)})\\ & = & min(n,m) \end{eqnarray*} In the second equality we used the fact that $\sum_{ab} X_{ab}^2=Tr(XX^t)$ for any matrix $X$. Now, we can compute $\int e^{-\lambda F(M)}dM$ using the coarea formula (and recall that $F(M):=||M||_*$). 
We get $$\int_0^{\infty} \left(\int_{M\in B^*(t)} |\nabla F(M)|^{-1}e^{-\lambda F(M)} dM_{|B^*(t)}\right)dt$$ where $B^*(t)$ is the set of all matrices of nuclear norm $t$, and $dM_{|B^*(t)}$ denotes the restriction of the Euclidean measure to this set. Conveniently, the integrand is constant over each preimage, so this equals $$min(n,m)^{-1/2}\int_0^{\infty} e^{-\lambda t} \int_{B^*(t)} dM_{|B^*(t)}dt=min(n,m)^{-1/2}\int_0^{\infty} e^{-\lambda t} Vol(B^*(t))\,dt$$ The ball $B^*(t)$ is a dilation of the unit ball $B^*(1)$, so the volume is $Vol(B^*(1)) t^{nm-1}$. Finally, we get $$min(n,m)^{-1/2}Vol(B^*(1))\int_0^{\infty}e^{-\lambda t}t^{nm-1}dt=min(n,m)^{-1/2}Vol(B^*(1))(nm-1)!\lambda^{-nm}$$ for the normalizing constant $C(\lambda)$. Given this, we can also derive the distribution of the random variable $||M||_*$. To start with, we'll take the identity $\int e^{-\lambda ||M||_*}dM=C(\lambda)$ and differentiate under the integral to see: $(-1)^k\int ||M||^k_* e^{-\lambda ||M||_*}dM=d^k_{\lambda}C(\lambda)=(-1)^k{\frac {(nm)^{(k)}}{\lambda^k}}C(\lambda)$ where $(nm)^{(k)}={\frac {(nm+k-1)!}{(nm-1)!}}$ denotes the rising factorial. Thus $$E ||M||^k_*=C(\lambda)^{-1}\int ||M||^k_* e^{-\lambda ||M||_*}dM= \lambda^{-k}{\frac {(nm+k-1)!}{(nm-1)!}}=\lambda^{-k}{\frac {\Gamma(nm+k)}{\Gamma(nm)}}$$ Remarkably, these are the same moments as a $\Gamma$ distribution with rate parameter $\lambda$ and shape parameter $mn$. By uniqueness of moment generating functions, we conclude that $||M||_*\sim \Gamma(mn,\lambda)$. added later You can also use a similar argument to derive the distribution over singular vectors/values. To phrase the question, suppose that I have sampled orthogonal matrices $U,V$ independently and uniformly at random, as well as a positive vector $\sigma$ from some fixed distribution $p(x)$ on $\mathbb{R}_+^n$. 
The question is: What should $p$ be, in order to ensure that the resulting matrix $M:=UDiag(\sigma)V^t$ is distributed according to the original distribution $e^{-\lambda ||M||_*}$? Assume WLOG that $m>n$. By the change of variables formula, the resulting density of $M$ is given by $|{\frac {\partial (U,V,\sigma)}{\partial M}}|p(\sigma)$, where $U,V,\sigma$ are the SVD of $M$. Lemma $$\left|{\frac {\partial (U,V,\sigma)}{\partial M}}\right|=\prod_{i\leq n} \sigma_i^{-(m-n)}\prod_{i<j\leq n}{\frac 1 {|\sigma_i^2-\sigma_j^2|}}$$ Proof. See Aspects of Multivariate Statistical Theory by R.J. Muirhead. I also worked it out in a previous edit to this question, so it is visible in the edit history. It is not particularly difficult, provided that one uses the formulas for the derivative of the SVD in the linked notes. There are two tricks that make it considerably easier: (1) by symmetry it suffices to assume that $U=I$ and $V=\left (\begin{array}{c}I\\ 0\end{array}\right)$, (2) rather than the raw jacobian, it turns out to be easier to work with the Gram matrix $G_{ab}=\sum_{ij} {\frac {\partial x_a}{\partial M_{ij}}}{\frac {\partial x_b}{\partial M_{ij}}}$ where $x_a$ ranges over all independent entries of $U,V,\sigma$. This matrix ends up having a particularly simple structure. So in summary: The original distribution $e^{-\lambda ||M||_*}$ is obtained as the distribution of a matrix of the form $UDiag(\sigma)V^T$, where $U$ and $V$ are sampled uniformly from the space of all orthogonal matrices, and $\sigma$ is sampled from $\mathbb{R}_+^n$ according to the density $\propto \prod_{i\leq n} \sigma_i^{m-n}\prod_{i<j\leq n}|\sigma_i^2-\sigma_j^2| e^{-\lambda \sum_i\sigma_i}$. singular value distribution To explore the singular values, I generated samples from the singular value distribution using MCMC. In this case, $n=m=20$ and $\lambda=1$. Below is the histogram of the resulting sums $\sum_i \sigma_i$. 
These sums appear to follow a $\Gamma(20*20,1)$ distribution (shown in orange), as predicted by the above discussion. A bit more interesting is the distribution over the individual singular values. If $\sigma_k$ denotes the $k$th smallest singular value, then it appears that each $\sigma_k$ is in fact well approximated by a gamma distribution. Below are plotted histograms for a range of $\sigma_k$, with the approximating gamma pdfs overlaid. This is quite interesting, and I am not sure why it would be the case. analysis of largest singular value It turns out that there is a relatively simple analytic expression for the largest singular value, which can be found by applying the techniques from this paper. The key property is that the singular value density looks like $\propto |det \{\phi_j(\sigma_i)\}_{ij}|$ where $\phi_i$ are scalar-valued functions. Due to the Vandermonde determinant formula, it is easy to see that we can take $\phi_j(\sigma)=\sigma^{m-n}e^{-\lambda\sigma}\sigma^{2(j-1)}$. To evaluate the cdf of the largest singular value, we need to evaluate the integral $\int_{0<\sigma_1<\dots<\sigma_n<t}|det \{\phi_j(\sigma_i)\}_{ij}|d\sigma$. In general, it turns out that an integral of this form can be expressed as $\sqrt{|det(A)|}$ where $A_{ij}=\int_{[0,t]^2}sgn(x-y)\phi_i(x)\phi_j(y)dxdy$ (for even $n$; if $n$ is odd the definition of $A$ is slightly more complicated). Even better, it turns out that, for our $\phi$ functions, the entries of $A$ satisfy a recurrence relation that obviates the need to actually evaluate the integral. See the linked paper for more details.
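Two of the intermediate results above are easy to sanity-check numerically. The Python sketch below (all parameter choices are mine) (i) checks $|\nabla ||M||_*| = \sqrt{min(n,m)}$ by finite differences on a full-rank $2\times 2$ matrix, using the closed form $||M||_* = \sqrt{||M||_F^2 + 2|det M|}$ that holds in the $2\times 2$ case since $(\sigma_1+\sigma_2)^2 = \sigma_1^2+\sigma_2^2+2\sigma_1\sigma_2$, and (ii) compares the claimed moments $\lambda^{-k}\Gamma(nm+k)/\Gamma(nm)$ with Monte Carlo moments of a $\Gamma(nm,\lambda)$ sample:

```python
import math
import random

# (i) Gradient-norm check: closed-form nuclear norm of a 2x2 matrix,
# ||M||_* = sqrt(||M||_F^2 + 2 |det M|).
def nuc2(a, b, c, d):
    return math.sqrt(a*a + b*b + c*c + d*d + 2.0 * abs(a*d - b*c))

M = [3.0, 1.0, 1.0, 2.0]   # full rank, distinct singular values
h = 1e-6
grad_sq = 0.0
for idx in range(4):
    # central finite difference in each of the 4 matrix entries
    Mp, Mm = M[:], M[:]
    Mp[idx] += h
    Mm[idx] -= h
    grad_sq += ((nuc2(*Mp) - nuc2(*Mm)) / (2.0 * h)) ** 2
grad_norm = math.sqrt(grad_sq)
print(grad_norm)           # close to sqrt(2) = sqrt(min(n, m))

# (ii) Moment check: ||M||_* should have the moments of Gamma(nm, rate = lam).
random.seed(1)
n, m, lam = 2, 3, 2.0
nm = n * m
draws = [random.gammavariate(nm, 1.0 / lam) for _ in range(200_000)]
mcs = {}
for k in (1, 2):
    formula = lam ** (-k) * math.gamma(nm + k) / math.gamma(nm)
    mcs[k] = sum(d ** k for d in draws) / len(draws)
    print(k, formula, mcs[k])
```

Neither check probes the coarea argument itself, but both confirm the pieces it rests on: the constant gradient norm on the level sets, and the Gamma moment algebra.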
Probability distribution associated with nuclear norm?
We can derive the normalizing constant $C(\lambda):=\int e^{-\lambda ||M||_*}dM$ using the smooth coarea formula. Suppose we are given smooth functions $F: \mathbb{R}^n\to\mathbb{R}$ and $f: \mathbb{R
Probability distribution associated with nuclear norm? We can derive the normalizing constant $C(\lambda):=\int e^{-\lambda ||M||_*}dM$ using the smooth coarea formula. Suppose we are given smooth functions $F: \mathbb{R}^n\to\mathbb{R}$ and $f: \mathbb{R}\to\mathbb{R}$. The formula states that $$\int_{\mathbb{R}^n} f(F(x))dx=\int_{t\in \mathbb{R}}\left(\int_{x\in F^{-1}t} f(x) |\nabla F(x)|^{-1}d H_t(x) \right)dt$$ where $dH_t$ denotes the surface measure on the preimage $F^{-1}t$. In our case, we'll have $f(x)=e^{-\lambda x}$ and $F(M)=||M||_*$. Also, denote the dimensions of $M$ by $n,m$ So first of all we'll need to compute $|\nabla F(M)|$. By the resuls from these notes, we can write down the derivatives of the singular values of a matrix. Suppose we have an SVD decomposition $M=USV^t$, with $U^tU=I_{min(n,m)}=V^tV$ and $S$ the diagonal matrix of singular values (we can assume $M$ is full rank, because the corresponding set of matrices has full measure).Then the differential with respect to the ith singular value is given by $dS_i=(U^tdMV)_{ii}$. Summing over these differentials, and recalling that the nuclear norm is the sum of singular values, we get \begin{eqnarray*} d||M||_{*}&=&Tr(U^tdMV)\\ &=&\sum_i \sum_{a,b}U[a,i]V[b,i]dM_{ab}\\ & = & \sum_{a,b} dM_{ab} \sum_i U[a,i]V[b,i]\\ & = & \sum_{ab}dM_{ab} (UV^t)_{ab} \end{eqnarray*} In other words, if we consider the gradient $\nabla ||M||_*$ with respect to the entries of $M$, then the $ab$ coefficient is given by $(UV^t)_{ab}$. Now computing the norm of the gradient is straight forward: \begin{eqnarray*} |\nabla ||M||_*|^2& = & \sum_{ab} (UV^t)_{ab}^2\\ & = & Tr((UV^t)(UV^t)^t)\\ & = & Tr(UV^tVU^t)\\ & = & Tr(U^tUV^tV)\\ & = & Tr(I_{min(n,m)}I_{min(n,m)})\\ & = & min(n,m) \end{eqnarray*} In the second equality we used the fact that $\sum_{ab} X_{ab}^2=Tr(XX^t)$ for any matrix $X$. Now, we can compute $\int e^{-\lambda F(M)}dM$ using the coarea formula (and recall that $F(M):=||M||_*$). 
We get $$\int_0^{\infty} \left(\int_{M\in B^*(t)} |\nabla F(M)|^{-1}e^{-\lambda F(M)} dM_{|B^*(t)}\right)dt$$ where $B^*(t)$ is the set of all matrices of nuclear norm $t$, and $dM_{|B^*(t)}$ denotes the restriction of the Euclidean measure to this set. Conveniently, the integrand is constant over each preimage, $$min(n,m)^{-1/2}\int_0^{\infty} e^{-\lambda t} \int_{B^*(t)} dM_{|B^*(t)}dt=min(n,m)^{-1/2}\int_0^{\infty} e^{-\lambda t} Vol(B^*(t))$$ The ball $B^*(t)$ is a dilation of the unit ball $B^*(1)$,so the volume is $Vol(B^*(1)) t^{nm-1}$. Finally, we get $$min(n,m)^{-1/2}Vol(B^*(1))\int_0^{\infty}e^{-\lambda t}t^{nm-1}dt=min(n,m)^{-1/2}Vol(B^*(1))(nm-1)!\lambda^{-nm}$$ for the normalizing constant $C(\lambda)$. Given this, we can also derive the distribution of the random variable $||M||_*$. To start with, we'll take the identity $\int e^{-\lambda ||M||_*}=C(\lambda)$ and differentiate under the integral to see: $(-1 )^k\int ||M||^k_* e^{-\lambda ||M||_*}=d^k_{\lambda}C(\lambda)=(-1)^k{\frac {(nm)^{(k)}}{\lambda^k}}C(\lambda)$ where $(nm)^{(k)}={\frac {(nm+k-1)!}{(nm-1)!}}$ denotes the rising factorial. Thus $$E ||M||^k_*=C(\lambda)^{-1}\int ||M||^k_* e^{-\lambda ||M||_*}dM= \lambda^{-k}{\frac {(nm+k-1)!}{(nm-1)!}}=\lambda^{-k}{\frac {\Gamma(nm+k)}{\Gamma(nm)}}$$ Remarkably, these are the same moments as a $\Gamma$ distribution with rate parameter $\lambda$ and shape parameter $mn$. By uniqueness of moment generating functions, we conclude that $||M||_*\sim \Gamma(mn,\lambda)$. added later You can also use a similar argument to derive the distribution over singular vectors/values. To phrase the question, suppose that I have sampled orthogonal matrices $U,V$ independently and uniformly at random, as well as a positive vector $\sigma$ from some fixed distribution $p(x)$ on $\mathbb{R}_+^n$. 
The question is: What should $p$ be, in order to ensure that the resulting matrix $M:=UDiag(\sigma)V^t$ is distributed according to the original distribution $e^{-\lambda ||M||_*}$? Assume WLOG that $m>n$. By the change of variables formula, the resulting density of $M$ is given by $|{\frac {\partial (U,V,\sigma)}{\partial M}}|p(\sigma)$, where $U,V,\sigma$ are the SVD of $M$. Lemma $$\left|{\frac {\partial (U,V,\sigma)}{\partial M}}\right|=\prod_{i\leq n} \sigma_i^{-(m-n)}\prod_{i<j\leq n}{\frac 1 {|\sigma_i^2-\sigma_j^2|}}$$ proof See Aspects of Multivariate Statistical Theory by R.J. Muirhead. I also worked it out in a previous edit to this question, so it is visible in the edit history. It is not particularly difficult, provided that one uses the formulas for the derivative of the SVD in the linked notes. There are two tricks that make it considerably easier: (1) by symmetry it suffices to assume that $U=I$ and $V=\left (\begin{array}{c}I\\ 0\end{array}\right)$, (2) rather than the raw Jacobian, it turns out to be easier to work with the Gram matrix $G_{ab}=\sum_{ij} {\frac {\partial x_a}{\partial M_{ij}}}{\frac {\partial x_b}{\partial M_{ij}}}$ where $x_a$ ranges over all independent entries of $U,V,\sigma$. This matrix ends up having a particularly simple structure. So in summary: The original distribution $e^{-\lambda ||M||_*}$ is obtained as the distribution of a matrix of the form $UDiag(\sigma)V^T$, where $U$ and $V$ are sampled uniformly from the space of all orthogonal matrices, and $\sigma$ is sampled from $\mathbb{R}_+^n$ according to the density $\propto \prod_{i\leq n} \sigma_i^{m-n}\prod_{i<j\leq n}|\sigma_i^2-\sigma_j^2| e^{-\lambda \sum_i\sigma_i}$. singular value distribution To explore the singular values, I generated samples from the singular value distribution using MCMC. In this case, $n=m=20$ and $\lambda=1$. Below is the histogram of the resulting sums $\sum_i \sigma_i$.
These sums appear to follow a $\Gamma(20\times 20,1)$ distribution (shown in orange), as predicted by the above discussion. A bit more interesting is the distribution over the individual singular values. If $\sigma_k$ denotes the $k$th smallest singular value, then it appears that each $\sigma_k$ is in fact well approximated by a gamma distribution. Below are plotted histograms for a range of $\sigma_k$, with the approximating gamma pdfs overlaid. This is quite interesting, and I am not sure why it would be the case. analysis of largest singular value It turns out that there is a relatively simple analytic expression for the largest singular value, which can be found by applying the techniques from this paper. The key property is that the singular value density looks like $\propto |det \{\phi_j(\sigma_i)\}_{ij}|$ where the $\phi_j$ are scalar-valued functions. Due to the Vandermonde determinant formula, it is easy to see that we can take $\phi_j(\sigma)=\sigma^{m-n}e^{-\lambda\sigma}\sigma^{2(j-1)}$. To evaluate the cdf of the largest singular value, we need to evaluate the integral $\int_{0<\sigma_1<\dots<\sigma_n<t}|det \{\phi_j(\sigma_i)\}_{ij}|d\sigma$. In general, it turns out that an integral of this form can be expressed as $\sqrt{|det(A)|}$ where $A_{ij}=\int_{[0,t]^2}sgn(x-y)\phi_i(x)\phi_j(y)dxdy$ (for even $n$; if $n$ is odd the definition of $A$ is slightly more complicated). Even better, it turns out that, for our $\phi$ functions, the entries of $A$ satisfy a recurrence relation that obviates the need to actually evaluate the integral. See the linked paper for more details.
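One claim from the derivation above — that the gradient of the nuclear norm at a full-rank matrix is $UV^t$, with squared norm exactly $min(n,m)$ — is easy to check numerically. Here is a small sketch of mine (not part of the original answer) using the thin SVD:

```python
import numpy as np

rng = np.random.default_rng(0)

# Check |grad ||M||_*|^2 = min(n, m): the gradient of the nuclear norm at a
# (generic, full-rank) M is U V^t from the thin SVD.
results = {}
for n, m in [(3, 5), (4, 4), (6, 2)]:
    M = rng.standard_normal((n, m))
    U, s, Vt = np.linalg.svd(M, full_matrices=False)  # thin SVD: U is n x k, Vt is k x m
    grad = U @ Vt                                     # gradient of the nuclear norm at M
    results[(n, m)] = float(np.sum(grad**2))          # squared Frobenius norm

print(results)  # each value matches min(n, m) up to floating-point error
```

The squared Frobenius norm collapses to $Tr(U^tUV^tV)=Tr(I_{min(n,m)})$, exactly as in the chain of equalities in the answer.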
32,395
Why is the observed Fisher information defined as the Hessian of the log-likelihood?
I find the literature in MLE a bit fuzzy with nomenclature here, so I might have some stuff off, and I will try to stick to the nomenclature you introduced. We have the observed Fisher information: $$\left[\mathcal {J}(\theta)\right]_{ij} = -\left(\frac{\partial^2 \log f}{\partial \theta_i \partial \theta_j}\right)$$ And the empirical Fisher information: $$\left[\mathcal {\tilde J}(\theta)\right]_{ij} = \left(\frac{\partial \log f}{\partial \theta_i}\right)\left(\frac{\partial \log f}{\partial \theta_j}\right)$$ And it can be shown that under regularity (basically differentiability) conditions (see https://stats.stackexchange.com/a/101530/60613): $$\left[\mathcal I(\theta)\right]_{ij} = E\left[\left[\mathcal J(\theta)\right]_{ij}\right] = E\left[\left[\mathcal {\tilde J}(\theta)\right]_{ij}\right]$$ So, why not use $\mathcal {\tilde J}$ instead of $\mathcal J$? Well, we actually use both. The distinction is that using $\mathcal {\tilde J}$ (an estimate of the expected Hessian) for MLE we are doing IWLS (Fisher scoring), while using $\mathcal {J}$ (the observed Hessian) results in Newton-Raphson. $\tilde {\mathcal J}$ is guaranteed positive definite for non-overparametrized log-likelihoods (since you have more data than parameters, the covariance is full rank, see Why is the Fisher Information matrix positive semidefinite?), and the procedure benefits from that. ${\mathcal J}$ does not enjoy such benefits. If we are performing MLE on the canonical parameter of a distribution in the exponential family, then both are actually identical.
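The equality of the two expectations is easy to illustrate numerically. This is a sketch of mine (not part of the answer above) for an exponential model with rate $\theta$, where the observed information per observation is the constant $1/\theta^2$ and the average squared score converges to the same value:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 2.0                                   # true rate parameter
x = rng.exponential(1 / theta, size=200_000)  # i.i.d. sample; log f = log(theta) - theta*x

# Observed information: -d^2/dtheta^2 log f = 1/theta^2 (constant in x for this model)
observed = np.full_like(x, 1 / theta**2)

# Empirical information: squared score, (d/dtheta log f)^2 = (1/theta - x)^2
empirical = (1 / theta - x) ** 2

# Both sample means estimate the same Fisher information I(theta) = 1/theta^2 = 0.25
print(observed.mean(), empirical.mean())
```

Here the observed information happens not to depend on the data at all, while the empirical information does; their averages nevertheless agree, as the displayed identity predicts.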
32,396
Interview question of biased coin
Nothing frustrates me more than when someone tells you to do the "optimal" thing without telling you the criteria over which to optimize. That being said, I'm betting that since it was an interview, they intended for you to determine what you wanted to optimize for. Your approach might not be "optimal" if we wanted to optimize for statistical power. If the difference in bias is small, 50 flips might not be sufficient to detect which coin has the larger bias. I suspect they were hoping you knew about bandit algorithms. Given the constraint on flips and the goal of learning the coin with the largest bias, this sounds like an AB test one might run in industry. One way the algorithm is run is as follows: Start with uniform beta priors on the bias of each coin. Draw from those priors and select the coin whose draw was largest. Flip that coin and update the priors (now posteriors). Repeat. Here is a python implementation of the bandit. The two coins have a bias of 0.4 and 0.6 respectively. The bandit correctly identifies that coin 2 has the larger bias (as evidenced by the posterior concentrating on larger biases).

import numpy as np
from scipy.stats import beta, binom
import matplotlib.pyplot as plt

class Coin():
    def __init__(self):
        self.a = 1
        self.b = 1

    def draw(self):
        return beta(self.a, self.b).rvs(1)

    def update(self, flip):
        if flip > 0:
            self.a += 1
        else:
            self.b += 1

    def __str__(self):
        return f"{self.a}:{self.b}={self.a/(self.a+self.b):.3f}"

# Unknown to us
np.random.seed(19920908)
coin1 = binom(p=0.4, n=1)
coin2 = binom(p=0.6, n=1)

model1 = Coin()
model2 = Coin()

for i in range(100):
    draw1 = model1.draw()
    draw2 = model2.draw()
    if draw1 > draw2:
        flip = coin1.rvs()
        model1.update(flip)
    else:
        flip = coin2.rvs()
        model2.update(flip)

x = np.linspace(0, 1, 101)
plt.plot(x, beta(model1.a, model1.b).pdf(x))
plt.plot(x, beta(model2.a, model2.b).pdf(x))
print(model1, model2)
32,397
Interview question of biased coin
In addition to the prior reply and useful comments, and to answer the actual question: the best approach is leveraging Thompson Sampling; there is an excellent article on sudeepraja's blog. It iteratively samples from the current posterior over each coin's bias and selects the coin whose sampled value is largest.
32,398
Does machine learning on random situations require a cryptographically-secure random number generator?
Edit: My original answer below is mostly informal, but I want to address some of the comments in a more technical and hopefully convincing manner. Please see the technical appendix for these details. Does machine learning on random situations require a cryptographically secure random number generator, or in other words, is it reasonable to fear that your machine learning algorithm will learn how to predict the output of your pseudo-random number generator (PRNG)? Generally, no. Could a machine learning model such as a neural network emulate a PRNG? By this, I mean: could the function $f$ that produces the sequence of pseudo-random numbers be in the class of functions $V$ that the machine learning model is capable of representing? Possibly, depending on the model in question. Could a capable machine learning model accidentally be trained from data generated by the PRNG to predict the output? Almost certainly not, though the probability of this is non-zero. Could we successfully create and train a custom machine learning model with the sole purpose of predicting the output of a PRNG? Also probably not, at least not without a great deal of "cheating." The key point is that even if a machine learning model is capable of representing the PRNG, it has to be capable of finding the right parameters to predict the output of the PRNG. Training a machine learning model to predict the output of a PRNG is an extremely difficult task, bordering on the impossible. To understand why, let's first talk about how a PRNG works. Pseudo-Random Number Generation Most PRNGs use some form of congruential algorithm which involves starting with a positive integer $X_0$ called the seed and then making a recursive sequence according to a rule similar to $$X_{n + 1} = g(X_n) \text{ mod } m$$ for some function $g$ and constant $m \in \mathbb{N}$.
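In code, the whole recurrence is only a few lines. This sketch of mine (with an arbitrary seed of 42) uses the linear congruential constants that Java's java.util.Random is built on, which are discussed below:

```python
# A minimal linear congruential generator, using the constants from
# Java's java.util.Random (a = 25214903917, c = 11, m = 2^48).
A, C, M = 25214903917, 11, 2**48

def lcg(seed, n):
    """Return n iterates of X <- (A*X + C) mod M starting from `seed`."""
    out, x = [], seed
    for _ in range(n):
        x = (A * x + C) % M
        out.append(x)
    return out

# The sequence is fully determined by the seed: same seed, same "random" numbers.
print(lcg(42, 3))
```

Note that the output is entirely deterministic given the seed, which is exactly what makes "predicting" a PRNG a parameter-search problem rather than a statistical one.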
There are some slight variations on the method, and of course some methods which are completely different, such as cellular automata-based methods (like Wolfram's Mathematica uses by default). To answer your question, I'm going to focus on one of the simplest PRNGs: the linear congruential method, which uses the function $g(x) = ax + c$ for some integer constants $a$ and $c$. This method is used by the Java programming language, even though it has relatively poor statistical properties. I'm then going to appeal to intuition to claim that, if we don't have to worry about a machine learning algorithm learning how to predict the output of a very simple PRNG with poor statistical properties, we probably don't have to worry about it learning how to predict the output of a fancy PRNG with better statistical properties. Now, let's consider the actual constants $a$, $c$, and $m$ to use. There are various properties that these need to satisfy to make a good PRNG that I won't discuss (see Donald Knuth's The Art of Computer Programming vol. 2 which is an authoritative treatment of the topic). Let's just consider the constants that Java's PRNG uses as a real-world example. From the source code (on line 173), the values it uses are $a = 25214903917$, $c = 11$, and $m = 2^{48} = 281474976710656$. We also can't forget that in trying to learn the output of the PRNG, the machine learning model will also have to learn the seed $X_0$. Learning the $x$ mod $m$ Function is Hard This is the first difficulty that our machine learning model has to surmount. There is already an excellent discussion of this problem on this stackoverflow post which you should read before continuing this post any further. Hopefully you aren't reading this unless you poked through the linked post.
Note that the best solutions use recurrent neural networks (RNN), with the motivation explained in the accepted answer: Please understand that this solution is a bit of a tongue-in-cheek thing, because it is based on the task-domain knowledge that our target function can be defined by a simple recurring formula on the sequence of input bits. In reality, if we aren't using domain knowledge for this problem (for instance, if you're designing your model to play a dice game) then the model might not be able to learn the $x$ mod $m$ function. You could test this by using your model architecture and applying it to this problem directly to see if you can get good results. Cost Functions and Convexity Okay, so maybe learning $x$ mod $m$ is difficult, but as the stackoverflow answer above demonstrates, it is doable. So what's the next hurdle? Let's talk about the training of a model, i.e. the finding of the parameters that best fit the data. The "magic" of modern machine learning is very much reliant on the fact that convex optimization techniques like gradient descent seem to "just work" even when applied to non-convex optimization problems. They don't work perfectly, and often require a fair amount of tinkering to train properly, but they can still get good results. One of the reasons for this "magic" is that lots of cost functions, while non-convex, aren't that non-convex. For example, your cost function might look something like: This cost function might look bad at first glance, but notice that it has some degree of regularity/smoothness. You can still tell that the underlying function is continuous because "small" movements along the $x$ or $y$-axis result in "small" changes in height. You can also pick out a general basin-shaped structure, and it's believable that a convex optimization algorithm with some random perturbations could eventually find the global minimum. 
Essentially, a cost function with some regularity might not be convex, but can still be "locally convex" in some sense. This means that gradient descent can find a local minimum if the initial point is within a locally convex "basin." In other words, being close to the minimum counts for something, so "partial" correctness can be rewarded. Indeed, this is the idea behind transfer learning. Finding a good minimum for one task that is sufficiently similar to another task can provide the second task with a good initial point and then convex optimization can fine-tune the result to find a nearby minimum for the second task. An Experiment However, the cost function for trying to learn a PRNG has virtually no regularity whatsoever. It shouldn't come as a shock, but the cost function behaves like noise. But don't take my word for it: let's do an experiment to try to predict the output of Java's PRNG. For this experiment, we're going to cheat as much as possible and still lose. To start with, instead of using some kind of neural network or other machine learning model with a large number of parameters, we're going to use the exact functional form that we know Java's PRNG takes: $$X_{n + 1} = (a X_n + c) \text{ mod } m$$ which has parameters $a$, $c$, $m$, and $X_0$. This completely sidesteps the difficulty of learning $x$ mod $m$ discussed above. And our model has only four parameters! Modern machine learning algorithms can have hundreds of millions of parameters that require training, so just four should be a piece of cake, right? Let's make it even easier though: suppose that an oracle (no pun intended) tells us three of four correct parameters for Java's PRNG, and our task is simply to learn the value of the fourth. Learning one parameter can't be that hard, right?
Here's some Julia code to emulate Java's PRNG and to plot an $\ell_2$ cost function over each of the four slices we get from not knowing one of the four parameters:

using LinearAlgebra: norm
using Plots
theme(:dark)

seed = 12150615  # Date the Magna Carta was signed
M = 100          # Length of the generated sequences (value assumed; not shown in the original)

# Constants used by Java's linear congruential PRNG
a = 25214903917
c = 11
m = 2^48

"""Generates the next integer in a linear congruential sequence."""
function next(x, a, c, m)
    return mod(a*x + c, m)
end

"""Generates a sequence of M integers from a linear congruential sequence
with the parameters a, c, m, and seed."""
function random_sequence(a, c, m, seed, M)
    nums = zeros(Int, M)
    nums[1] = seed
    for i = 2:M
        nums[i] = next(nums[i-1], a, c, m)
    end
    return nums
end

# Generate Java's random sequence
y = random_sequence(a, c, m, seed, M)

i_values = -200:200          # Range around the correct parameter to test
n_trials = length(i_values)

# Vary a in a neighborhood of its true value, holding c, m, and the seed fixed
as = [a + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(as[i], c, m, seed, M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false)

# Vary c in a neighborhood of its true value
cs = [c + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(a, cs[i], m, seed, M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false, ylim=(1.145e11, 1.151e11))

# Vary m in a neighborhood of its true value
ms = [m + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(a, c, ms[i], seed, M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false, ylim=(1.145e11, 1.151e11))

# Vary the seed in a neighborhood of its true value
seeds = [seed + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(a, c, m, seeds[i], M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false, ylim=(1.147e11, 1.151e11))

So you can clearly see that even with three of the four parameters and the exact functional form known, the cost function still has the form $c + (\text{noise})$ where $c$ is a constant. In this case, a gradient-descent-type algorithm would compute a gradient of $0 + (\text{noise})$. Then gradient descent is simply performing a random walk. While it is possible that a random walk could converge to the correct parameters, it is extremely unlikely given that the size of the space is $10^{77}$ (see below). Without any regularity, convex optimization tools are no better than a random walk looking for that one "valley" in the middle of each graph where the correct parameter lies. Conclusion It turns out that even with all of this simplification, the last step is still virtually impossible. "Learning" the last parameter boils down to a brute force search over the entire range of possible values for the parameters, because the "magic" of applying convex optimization techniques to train a machine learning model does not help solve a search problem when the cost function does not have any information whatsoever about the direction of even a good local minimum. If you wanted to try every possible 64-bit integer for the four parameters, this would mean searching through $(2^{64})^4 = 2^{256} \approx 10^{77}$ combinations. And this is just for a very simple PRNG.
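The flatness of the seed slice is easy to reproduce independently. Here is a small Python sketch of mine (not the author's code): the cost is exactly zero at the true seed and essentially as large as a random guess at every neighbor, so there is no gradient to follow.

```python
import numpy as np

A, C, M_MOD = 25214903917, 11, 2**48  # Java's LCG constants

def sequence(seed, length):
    """First `length` iterates of X <- (A*X + C) mod M starting from seed."""
    x, out = seed, []
    for _ in range(length):
        x = (A * x + C) % M_MOD
        out.append(x)
    return np.array(out, dtype=float)

true_seed, length = 12150615, 50
target = sequence(true_seed, length)

# Average L2 error for candidate seeds in a small neighborhood of the truth
costs = {s: float(np.linalg.norm(sequence(s, length) - target)) / length
         for s in range(true_seed - 5, true_seed + 6)}

best_wrong = min(c for s, c in costs.items() if s != true_seed)
print(costs[true_seed], best_wrong)  # zero at the truth; enormous everywhere else
```

Even being off by one in the seed produces a first iterate that differs by roughly $a \approx 2.5 \times 10^{10}$, after which the sequences decorrelate completely — exactly the needle-in-a-haystack landscape described above.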
Ultimately, if you really want to alleviate any worries you might have about your particular task, you could always drop the board game aspect and see if your model can learn the output of the pseudo-random dice roll using your programming language's PRNG. Good luck (you're going to need it)! Technical Appendix First, I want to point out that the function $x$ mod $m$ being difficult to approximate is more of an interesting side note, relevant mostly for the concern in the original question that a machine learning algorithm might incidentally discover how to predict the output of the PRNG while being trained for some other purpose. The fact is that it is difficult even when this is one's sole purpose. Difficult, but not unreasonably difficult. You don't need to appeal to the universal approximation theorem to claim this is possible, because in the linked stackoverflow post from above there are several examples of models that successfully approximated $x$ mod $m$ (albeit with the input in binary-vector representation). So not only was it possible to represent the function $x$ mod $m$ by a neural network (which is all the UAT guarantees), they were also able to successfully find weights that worked (which is not guaranteed). Second, what is the technical meaning of the claim that the cost function has the form $$ C(x) = \begin{cases} \text{constant} + \text{noise}, & \text{ if } x \ne x^* \\ 0, & \text{ if } x = x^* \end{cases} $$ where $x$ denotes the parameters $x = (a, c, m, \text{seed})$ and $x^*$ denotes the correct parameters? This can be defined technically by picking a radius $\delta > 0$ and then computing the average value $$ \text{Avg} (\delta, t) = \frac{1}{m(B_\delta (t))}\int_{B_\delta (t)} C(x) dx $$ where $K$ can represent either $\mathbb{Z}^4$ or $\mathbb{R}^4$, $m$ is either the Lebesgue measure or the counting measure respectively, and $B_\delta (t) = \{ x \in K: \| x - t \| < \delta \}$ is the ball of radius $\delta$ centered at $t \in K$.
Now the claim that $C = \text{constant} + \text{noise}$ means that as $\delta$ increases, the local average $\text{Avg} (\delta, t)$ converges quickly to a constant $L$, so long as the true parameters $x^* \notin B_\delta (t)$. Here, I say "quickly" to rule out the fact that eventually this limit would be constant after surpassing the bounds of the domain. This definition makes sense even though the "noise" term is technically deterministic. In other words, local averages of $C$ are globally constant. Local averaging smooths out the noise term, and what remains is a constant. Plotted below is a much larger scale experiment on the interval $[-9 \times 10^{12}, 9 \times 10^{12}]$ that shows essentially the same phenomenon as before. For this experiment, I only tested the case where the seed is unknown, as this experiment took much longer. Each point here is not the cost function, but the local average $\text{Avg} (100, t)$ of the cost function, which smooths out some of the noise: Here I've plotted the constant as well, which turns out to be roughly $$ \text{constant} = 1.150 \times 10^{12} $$ Ultimately, this is a problem for gradient-based optimization methods not because of the noise term per se, but because the cost function is "flat." Of course, when you do add in the noise term, a flat function plus noise makes an extremely large number of local minima that certainly doesn't help the convergence of any gradient-based optimization algorithm. Moreover, I am well aware that this is an empirical claim, and I cannot prove it analytically. I just wanted to demonstrate empirically that the gradient for this function is essentially 0 on average, and contains no information about the direction of $x^*$. In Experiment 1, the neighborhood was purposefully small to demonstrate that even if you started close to $x^*$, there is no visible gradient pointing in that direction.
The four slices of the neighborhood $B_{200} (x^*)$ are small, but still don't show a local "basin" (locally approximately convex region) of the sort that gradient-based optimization is good at minimizing in. Experiment 2 demonstrates this same phenomenon on a much larger scale. The last technical detail I want to touch on is the fact that I am only analyzing the model and the cost function as functions over a subset of the domain $\mathbb{Z}^4$, not over $\mathbb{R}^4$. This means that the gradient/derivative is not defined. So how can I claim something about the convergence or non-convergence of a gradient-based method when gradients aren't defined? Well, we can of course try fitting a differentiable model defined on $\mathbb{R}^4$ to the data, and compute its derivative, but if the data is already "flat" a model that fits it well will be "flat" as well. This is not something that I can prove, but I can prove that it is unprovable by constructing a continuously differentiable ($\mathcal{C}^1$) interpolation function $f : \mathbb{R} \to \mathbb{R}$ to the cost function data $C(x)$ that would cause gradient descent to converge to the true global minimizer $x^*$ in one step with high probability. This is an absurd example, but it demonstrates that trying to prove that gradient-based algorithms couldn't conceivably work here is impossible. To construct the interpolating function, consider two adjacent points $n, n+1 \in \mathbb{Z}$ with cost function values $C(n)$ and $C(n+1)$. Pick a threshold $\epsilon > 0$. Now, on the interval $[n + \epsilon, n + 1 - \epsilon]$, we can construct $f$ so that a regular gradient-descent step will reach $x^*$ in one step, i.e. $x^* = x - f'(x)$. This defines an easy differential equation that we can solve as follows: \begin{align} x^* & = x - f'(x) \\ \int x^* dx & = \int x - f'(x) dx \\ x x^* & = \frac{1}{2} x^2 - f(x) + D\\ f(x) & = \frac{1}{2} x^2 - x x^* + D \end{align} for any constant $D$.
The constant is irrelevant, because regardless of its value, we can still define $f$ in such a way on the intervals $[n, n + \epsilon)$ and $(n+1-\epsilon, n+1]$ to make $f \in \mathcal{C}^1$ and such that $C(n)$ and $C(n+1)$ are the correct values, using splines for instance. This construction can be repeated on all intervals and the results can be stitched together in a $\mathcal{C}^1$ manner (using splines again, as one particular method). The result will be a $\mathcal{C}^1$ function that interpolates the cost function at all $n \in \mathbb{Z}$ (so it fits the data here perfectly well), and one which will converge to $x^*$ in one step of the gradient descent algorithm with probability $1 - 2\epsilon$. Take $\epsilon > 0$ to be as small as desired.
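The one-step property of this interpolant is a quick sanity check: since $f'(x) = x - x^*$, the unit-step gradient update $x - f'(x)$ lands on $x^*$ from any starting point. A tiny sketch of mine (the values here are arbitrary):

```python
x_star = 7.0  # the (arbitrary) true global minimizer
D = 3.0       # the irrelevant integration constant

def f(x):
    # f(x) = x^2/2 - x*x_star + D, the interpolant from the derivation above
    return 0.5 * x**2 - x * x_star + D

def f_prime(x):
    # derivative of f: f'(x) = x - x_star
    return x - x_star

# One unit-step gradient descent update reaches x_star from anywhere
steps = [x0 - f_prime(x0) for x0 in (-100.0, 0.0, 42.5)]
print(steps)  # every entry equals x_star
```

This is of course exactly the absurdity the author intends: a perfectly legitimate $\mathcal{C}^1$ fit to the data can be engineered to make gradient descent succeed instantly, so no impossibility proof is available.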
Does machine learning on random situations require a cryptographically-secure random number generato
Edit: My original answer below is mostly informal, but I want to address some of the comments in a more technical and hopefully convincing manner. Please see the technical appendix for these details.
Does machine learning on random situations require a cryptographically-secure random number generator? Edit: My original answer below is mostly informal, but I want to address some of the comments in a more technical and hopefully convincing manner. Please see the technical appendix for these details. Does machine learning on random situations require a cryptographically secure random number generator, or in other words, is it reasonable to fear that your machine learning algorithm will learn how to predict the output of your pseudo-random number generator (PRNG)? Generally, no. Could a machine learning model such as a neural network emulate a PRNG? By this, I mean: could the function $f$ that produces the sequence of pseudo-random numbers be in the class of functions $V$ that the machine learning model is capable of representing. Possibly, depending on the model in question. Could a capable machine learning model accidentally be trained from data generated by the PRNG to predict the output. Almost certainly not, though the probability of this is non-zero. Could we successfully create and train a custom machine learning model with the sole purpose of predicting the output of a PRNG? Also probably not, at least not without a great deal of "cheating." The key point is that even if a machine learning model is capable of representing the PRNG, it has to be capable of finding the right parameters to predict the output of the PRNG. Training a machine learning model to predict the output of a PRNG is an extremely difficult task, bordering on the impossible. To understand why, let's first talk about how a PRNG works. Pseudo-Random Number Generation Most PRNGs use some form of congruential algorithm which involves starting with a positive integer $X_0$ called the seed and then making a recursive sequence according to a rule similar to $$X_{n + 1} = g(X_n) \text{ mod } m$$ for some function $g$ and constant $m \in \mathbb{N}$. 
There are some slight variations on the method, and of course some methods that are completely different, such as cellular-automaton-based methods (which Wolfram's Mathematica uses by default). To answer your question, I'm going to focus on one of the simplest PRNGs: the linear congruential method, which uses the function $g(x) = ax + c$ for some integer constants $a$ and $c$. This method is used by the Java programming language, even though it has relatively poor statistical properties. I'm then going to appeal to intuition to claim that if we don't have to worry about a machine learning algorithm learning how to predict the output of a very simple PRNG with poor statistical properties, we probably don't have to worry about it learning how to predict the output of a fancier PRNG with better statistical properties.

Now, let's consider the actual constants $a$, $c$, and $m$ to use. There are various properties that these need to satisfy to make a good PRNG that I won't discuss (see Donald Knuth's The Art of Computer Programming, vol. 2, which is an authoritative treatment of the topic). Let's just consider the constants that Java's PRNG uses as a real-world example. From the source code (on line 173), the values it uses are $a = 25214903917$, $c = 11$, and $m = 2^{48} = 281474976710656$. We also can't forget that in trying to learn the output of the PRNG, the machine learning model will also have to learn the seed $X_0$.

Learning the $x$ mod $m$ Function is Hard

This is the first difficulty that our machine learning model has to surmount. There is already an excellent discussion of this problem in this stackoverflow post, which you should read before continuing this post any further. Hopefully you aren't reading this unless you poked through the linked post.
Note that the best solutions use recurrent neural networks (RNNs), with the motivation explained in the accepted answer:

Please understand that this solution is a bit of a tongue-in-cheek thing, because it is based on the task-domain knowledge that our target function can be defined by a simple recurring formula on the sequence of input bits.

In reality, if we aren't using domain knowledge for this problem (for instance, if you're designing your model to play a dice game), then the model might not be able to learn the $x$ mod $m$ function. You could test this by taking your model architecture and applying it to this problem directly to see if you get good results.

Cost Functions and Convexity

Okay, so maybe learning $x$ mod $m$ is difficult, but as the stackoverflow answer above demonstrates, it is doable. So what's the next hurdle? Let's talk about the training of a model, i.e. finding the parameters that best fit the data. The "magic" of modern machine learning is very much reliant on the fact that convex optimization techniques like gradient descent seem to "just work" even when applied to non-convex optimization problems. They don't work perfectly, and often require a fair amount of tinkering to train properly, but they can still get good results. One of the reasons for this "magic" is that lots of cost functions, while non-convex, aren't that non-convex. For example, your cost function might look something like this:

This cost function might look bad at first glance, but notice that it has some degree of regularity/smoothness. You can still tell that the underlying function is continuous because "small" movements along the $x$- or $y$-axis result in "small" changes in height. You can also pick out a general basin-shaped structure, and it's believable that a convex optimization algorithm with some random perturbations could eventually find the global minimum.
Essentially, a cost function with some regularity might not be convex, but can still be "locally convex" in some sense. This means that gradient descent can find a local minimum if the initial point is within a locally convex "basin." In other words, being close to the minimum counts for something, so "partial" correctness can be rewarded. Indeed, this is the idea behind transfer learning: finding a good minimum for one task that is sufficiently similar to another task can provide the second task with a good initial point, and then convex optimization can fine-tune the result to find a nearby minimum for the second task.

An Experiment

However, the cost function for trying to learn a PRNG has virtually no regularity whatsoever. It shouldn't come as a shock, but the cost function behaves like noise. But don't take my word for it: let's do an experiment to try to predict the output of Java's PRNG. For this experiment, we're going to cheat as much as possible and still lose. To start with, instead of using some kind of neural network or other machine learning model with a large number of parameters, we're going to use the exact functional form that we know Java's PRNG takes: $$X_{n + 1} = (a X_n + c) \text{ mod } m$$ which has parameters $a$, $c$, $m$, and $X_0$. This completely sidesteps the difficulty of learning $x$ mod $m$ discussed above. And our model has only four parameters! Modern machine learning algorithms can have hundreds of millions of parameters that require training, so just four should be a piece of cake, right?

Let's make it even easier, though: suppose that an oracle (no pun intended) tells us three of the four correct parameters for Java's PRNG, and our task is simply to learn the value of the fourth. Learning one parameter can't be that hard, right?
Here's some Julia code to emulate Java's PRNG and to plot an $\ell_2$ cost function over each of the four slices we get from not knowing one of the four parameters:

using LinearAlgebra: norm
using Plots
theme(:dark)

seed = 12150615  # Date the Magna Carta was signed

# Constants used by Java's linear congruential PRNG
a = 25214903917
c = 11
m = 2^48

M = 100  # length of each generated sequence (value assumed; not given in the original snippet)

"""Generates the next integer in a linear congruential sequence of pseudo-random numbers."""
function next(x, a, c, m)
    return mod(a*x + c, m)
end

"""Generates a sequence of M pseudo-random integers from a linear
congruential sequence with the parameters a, c, m, and seed."""
function random_sequence(a, c, m, seed, M)
    nums = zeros(Int, M)
    nums[1] = seed
    for i = 2:M
        nums[i] = next(nums[i-1], a, c, m)
    end
    return nums
end

# Generate Java's random sequence
y = random_sequence(a, c, m, seed, M)

i_values = -200:200          # range around each correct parameter to test
n_trials = length(i_values)

# Test a neighborhood of the a-values (c, m, and the seed held fixed)
as = [a + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(as[i], c, m, seed, M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false)

# Test a neighborhood of the c-values (a, m, and the seed held fixed)
cs = [c + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(a, cs[i], m, seed, M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false, ylim=(1.145e11, 1.151e11))

# Test a neighborhood of the m-values (a, c, and the seed held fixed)
ms = [m + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(a, c, ms[i], seed, M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false, ylim=(1.145e11, 1.151e11))

# Test a neighborhood of the seed-values (a, c, and m held fixed)
seeds = [seed + i for i = i_values]
avg_errors = []
for i = 1:n_trials
    y_test = random_sequence(a, c, m, seeds[i], M)
    avg_error = norm(y_test - y) / M
    push!(avg_errors, avg_error)
end
plot(avg_errors, size=(400, 400), legend=false, ylim=(1.147e11, 1.151e11))

So you can clearly see that even with three of the four parameters and the exact functional form known, the cost function still has the form $c + (\text{noise})$ where $c$ is a constant. In this case, a gradient-descent-type algorithm would compute a gradient of $0 + (\text{noise})$; gradient descent would then simply perform a random walk. While it is possible that a random walk could converge to the correct parameters, it is extremely unlikely given that the size of the space is about $10^{77}$ (see below). Without any regularity, convex optimization tools are no better than a random walk at finding that one "valley" in the middle of each graph where the correct parameter lies.

Conclusion

It turns out that even with all of this simplification, the last step is still virtually impossible. "Learning" the last parameter boils down to a brute-force search over the entire range of possible values for the parameter, because the "magic" of applying convex optimization techniques to train a machine learning model does not help solve a search problem when the cost function does not contain any information whatsoever about the direction of even a good local minimum. If you wanted to try every possible 64-bit integer for the four parameters, this would mean searching through $(2^{64})^4 = 2^{256} \approx 10^{77}$ combinations. And this is just for a very simple PRNG.
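The size of that search space is easy to verify; here is a quick Python sanity check on the arithmetic quoted above:

```python
# Brute-force search space for four independent 64-bit parameters
space = (2**64) ** 4
assert space == 2**256
# 2^256 has 78 decimal digits, i.e. it is on the order of 10^77
print(len(str(space)))  # → 78
```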
Ultimately, if you really want to alleviate any worries you might have about your particular task, you could always drop the board game aspect and see if your model can learn the output of a pseudo-random dice roll using your programming language's PRNG. Good luck (you're going to need it)!

Technical Appendix

First, I want to point out that the function $x$ mod $m$ being difficult to approximate is more of an interesting side note, relevant mostly to the concern in the original question that a machine learning algorithm might incidentally discover how to predict the output of the PRNG while being trained for some other purpose. The fact is that this is difficult even when it is one's sole purpose. Difficult, but not unreasonably difficult. You don't need to appeal to the universal approximation theorem to claim this is possible, because in the stackoverflow post linked above there are several examples of models that successfully approximated $x$ mod $m$ (albeit with the input in binary-vector representation). So not only was it possible to represent the function $x$ mod $m$ by a neural network (which is all the UAT guarantees), they were also able to successfully find weights that worked (which is not guaranteed).

Second, what is the technical meaning of the claim that the cost function has the form $$ C(x) = \begin{cases} \text{constant} + \text{noise}, & \text{ if } x \ne x^* \\ 0, & \text{ if } x = x^* \end{cases} $$ where $x$ denotes the parameters $x = (a, c, m, \text{seed})$ and $x^*$ denotes the correct parameters? This can be defined technically by picking a radius $\delta > 0$ and then computing the average value $$ \text{Avg} (\delta, t) = \frac{1}{\mu(B_\delta (t))}\int_{B_\delta (t)} C(x) \, dx $$ where $K$ can represent either $\mathbb{Z}^4$ or $\mathbb{R}^4$, $\mu$ is the counting measure or the Lebesgue measure respectively, and $B_\delta (t) = \{ x \in K: \| x - t \| < \delta \}$ is the ball of radius $\delta$ centered at $t \in K$.
Now the claim that $C = \text{constant} + \text{noise}$ means that as $\delta$ increases, the local average $\text{Avg} (\delta, t)$ converges quickly to a constant $L$, so long as the true parameters satisfy $x^* \notin B_\delta (t)$. Here, I say "quickly" to rule out the trivial fact that the limit would eventually be constant once the ball surpasses the bounds of the domain. This definition makes sense even though the "noise" term is technically deterministic. In other words, local averages of $C$ are globally constant: local averaging smooths out the noise term, and what remains is a constant.

Plotted below is a much larger-scale experiment on the interval $[-9 \times 10^{12}, 9 \times 10^{12}]$ that shows essentially the same phenomenon as before. For this experiment, I only tested the case where the seed is unknown, as this experiment took much longer. Each point here is not the cost function itself, but the local average $\text{Avg} (100, t)$ of the cost function, which smooths out some of the noise:

Here I've plotted the constant as well, which turns out to be roughly $$ \text{constant} = 1.150 \times 10^{12} $$

Ultimately, this is a problem for gradient-based optimization methods not because of the noise term per se, but because the cost function is "flat." Of course, when you do add in the noise term, a flat function plus noise produces an extremely large number of local minima, which certainly doesn't help the convergence of any gradient-based optimization algorithm. Moreover, I am well aware that this is an empirical claim, and I cannot prove it analytically. I just wanted to demonstrate empirically that the gradient of this function is essentially 0 on average and contains no information about the direction of $x^*$. In Experiment 1, the neighborhood was purposefully small to demonstrate that even if you started close to $x^*$, there is no visible gradient pointing in that direction.
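The local-averaging claim can be illustrated with a toy computation. The Python sketch below uses a hypothetical stand-in for the cost function (a flat level plus deterministic per-point "noise", not the Julia experiment above): widening the averaging window drives the local average toward the constant.

```python
import random

C_CONST = 1.150e12  # stand-in for the flat level of the cost function

def cost(t):
    """Hypothetical 'constant + noise' cost: deterministic noise keyed to t."""
    rng = random.Random(t)          # same t always yields the same "noise"
    return C_CONST + rng.uniform(-1e11, 1e11)

def local_avg(t, delta):
    """Average of cost over the integer ball of radius delta centered at t."""
    pts = range(t - delta, t + delta + 1)
    return sum(cost(p) for p in pts) / len(pts)

# The wider the window, the closer the local average sits to the constant
for delta in (10, 1000, 100000):
    print(delta, abs(local_avg(0, delta) - C_CONST))
```

The deviation of the local average from `C_CONST` shrinks roughly like $1/\sqrt{\delta}$, even though each individual `cost` value is deterministic.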
The four slices of the neighborhood $B_{200} (x^*)$ are small, but still don't show a local "basin" (a locally approximately convex region) of the sort that gradient-based optimization is good at minimizing in. Experiment 2 demonstrates this same phenomenon on a much larger scale.

The last technical detail I want to touch on is the fact that I am only analyzing the model and the cost function as functions over a subset of the domain $\mathbb{Z}^4$, not over $\mathbb{R}^4$. This means that the gradient/derivative is not defined. So how can I claim something about the convergence or non-convergence of a gradient-based method when gradients aren't defined? Well, we can of course try fitting a differentiable model defined on $\mathbb{R}^4$ to the data and compute its derivative, but if the data is already "flat," a model that fits it well will be "flat" as well.

This is not something that I can prove, but I can prove that it is unprovable by constructing a continuously differentiable ($\mathcal{C}^1$) interpolation function $f : \mathbb{R} \to \mathbb{R}$ of the cost function data $C(x)$ that would cause gradient descent to converge to the true global minimizer $x^*$ in one step with high probability. This is an absurd example, but it demonstrates that trying to prove that gradient-based algorithms couldn't conceivably work here is impossible. To construct the interpolating function, consider two adjacent points $n, n+1 \in \mathbb{Z}$ with cost function values $C(n)$ and $C(n+1)$, and pick a threshold $\epsilon > 0$. Now, on the interval $[n + \epsilon, n + 1 - \epsilon]$, we can construct $f$ so that a gradient-descent step with unit step size will reach $x^*$ in one step, i.e. $x^* = x - f'(x)$. This defines an easy differential equation that we can solve as follows: \begin{align} x^* & = x - f'(x) \\ \int x^* \, dx & = \int x - f'(x) \, dx \\ x x^* & = \frac{1}{2} x^2 - f(x) + D\\ f(x) & = \frac{1}{2} x^2 - x x^* + D \end{align} for any constant $D$.
The constant is irrelevant, because regardless of its value we can still define $f$ on the intervals $[n, n + \epsilon)$ and $(n+1-\epsilon, n+1]$ in such a way that $f \in \mathcal{C}^1$ and that $C(n)$ and $C(n+1)$ are the correct values, using splines for instance. This construction can be repeated on all intervals, and the results can be stitched together in a $\mathcal{C}^1$ manner (using splines again, as one particular method). The result will be a $\mathcal{C}^1$ function that interpolates the cost function at all $n \in \mathbb{Z}$ (so it fits the data here perfectly well), and one on which gradient descent will converge to $x^*$ in one step with probability $1 - 2\epsilon$. Take $\epsilon > 0$ to be as small as desired.
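The solution of that little differential equation can be spot-checked numerically: for $f(x) = \frac{1}{2}x^2 - x x^* + D$, a unit gradient-descent step lands exactly on $x^*$ from any starting point (the constant $D$ drops out of the derivative). A short Python check:

```python
def f_prime(x, xstar):
    """Derivative of f(x) = x^2/2 - x*xstar + D; the constant D drops out."""
    return x - xstar

xstar = 7.0
for x0 in (-3.0, 0.25, 100.0):
    step = x0 - f_prime(x0, xstar)   # gradient descent with unit step size
    assert step == xstar             # one step reaches xstar from every start
```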
What is meant by correlation between intercept and slope(s)
There is a sense in which it is 'bad' for covariates to be highly correlated in a regression model, namely, that it can lead to multicollinearity. However, I don't think it's very meaningful to describe correlation between the slope and the intercept as collinearity. That said, your question is really about how there can be a correlation between the slope and the intercept, when these are always just $2$ points. This confusion is perfectly sensible. The problem is that the fact has been stated in an imprecise way. (I'm not being critical of whomever wrote that—I speak like that all the time.) A more precise way to state the underlying fact is that the sampling distributions of the slope and intercept are correlated. An easy way to see this is through a simple simulation: generate (pseudo)random samples of $X$ and $Y$ data from a single data generating process, fit a simple regression model in the same way to each sample, and store the estimates. Then you can compute the correlation, or plot the estimates as you like.
set.seed(6781)  # this makes the example exactly reproducible
B = 100         # the number of simulations we'll do
N = 20          # the number of data in each sample
estimates = matrix(NA, nrow=B, ncol=4)  # this will hold the results
colnames(estimates) = c("i0", "s0", "i1", "s1")
for(i in 1:B){
  x0 = rnorm(N, mean=0, sd=1)  # generating X data w/ mean 0
  x1 = rnorm(N, mean=1, sd=1)  # generating X data w/ mean 1
  e  = rnorm(N, mean=0, sd=1)  # error data
  y0 = 5 + 1*x0 + e            # the true data generating process
  y1 = 5 + 1*x1 + e
  m0 = lm(y0~x0)               # fitting the models
  m1 = lm(y1~x1)
  estimates[i,1:2] = coef(m0)  # storing the estimates
  estimates[i,3:4] = coef(m1)
}
cor(estimates[,"i0"], estimates[,"s0"])  # [1] -0.06876971  # uncorrelated
cor(estimates[,"i1"], estimates[,"s1"])  # [1] -0.7426974   # highly correlated

windows(height=4, width=7)
layout(matrix(1:2, nrow=1))
plot(i0~s0, estimates)
abline(h=5, col="gray")  # these are the population parameters
abline(v=1, col="gray")
plot(i1~s1, estimates)
abline(h=5, col="gray")
abline(v=1, col="gray")

For some related information, it may help to read some of my other answers:

How to interpret coefficient standard errors in linear regression?
Are all slope coefficients correlated with the intercept in multiple linear regression?
Why does the standard error of the intercept increase the further $\bar{x}$ is from 0?

Edit: From your comments, I gather your concern is based on the following quote:

in complex models, strong correlations like this can make it difficult to fit the model to the data. So we’ll want to use some golem engineering tricks to avoid it, when possible. The first trick is centering.

From: McElreath, R. (2015). Statistical Rethinking: A Bayesian Course with Examples in R and Stan. Chapman & Hall. (Note that I haven't read the book.)

The author's concern is perfectly reasonable, but it doesn't really have anything to do with the quality of the model or the inferences that it will support.
The issue is with computational problems that could arise in the methods used to estimate the model. Note further that centering does not change anything substantive about the model, and that this is an issue in Bayesian estimation, but won't be a problem for frequentist models (like those above) that are estimated via ordinary least squares. It may help to read: When conducting multiple regression, when should you center your predictor variables & when should you standardize them?
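There is also a closed-form way to see why centering removes this correlation. For simple regression, a standard OLS result gives the sampling covariance of the intercept and slope estimates as

$$\operatorname{Cov}(\hat\beta_0, \hat\beta_1) = -\,\frac{\sigma^2 \, \bar{x}}{\sum_{i}(x_i - \bar{x})^2}$$

so the two estimates are uncorrelated exactly when $\bar{x} = 0$, and negatively correlated when $\bar{x} > 0$. This matches the simulation above: the mean-0 $X$ data produced essentially no correlation, while the mean-1 $X$ data produced a strong negative correlation, and centering $X$ simply sets $\bar{x} = 0$.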
What is meant by correlation between intercept and slope(s)
The R command cov2cor(vcov(fitted_model)) will return the correlation matrix of the regression estimates. The underlying covariance matrix is proportional to $(X'X)^{-1}$, which means that in the extreme case of a perfect correlation between a slope and the intercept, the covariance matrix is rank deficient. Because the inverse of a rank-deficient matrix doesn't exist, the only way to have this situation is if the matrix $X'X$ was rank deficient to start with, which is the definition of perfect multicollinearity (PM). PM can be problematic for inference, but it is often no big deal for forecasting.
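To illustrate the rank-deficiency point, here is a small self-contained Python sketch (not R, and not using the cov2cor machinery): when one design-matrix column is an exact multiple of another, $X'X$ has zero determinant, so $(X'X)^{-1}$, and with it the coefficient covariance matrix, does not exist.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as nested lists."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

x = [1.0, 2.0, 3.0, 4.0]
# Design matrix columns: intercept, x, and a perfectly collinear 2*x
cols = [[1.0] * 4, x, [2 * v for v in x]]
# X'X as a 3x3 Gram matrix of the columns
XtX = [[sum(u * v for u, v in zip(c1, c2)) for c2 in cols] for c1 in cols]
print(det3(XtX))  # → 0.0: X'X is singular, so no inverse and no vcov
```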