19,801
Can chi square be used to compare proportions?
Yes, you can test the null hypothesis "H0: prop(red)=prop(blue)=prop(green)=prop(yellow)=1/4" using a chi-square goodness-of-fit test that compares the proportions observed in the survey (0.273, ...) to the expected proportions (1/4, 1/4, 1/4, 1/4).
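As a sketch of this test in Python: the counts below are hypothetical survey results out of n = 1000 (chosen to match the 0.273 proportion mentioned above), and scipy.stats.chisquare defaults to equal expected counts, which is exactly the H0 of equal proportions.

```python
# Chi-square goodness-of-fit test of observed colour counts against
# equal expected proportions (1/4 each).  The counts are illustrative
# survey results for a sample of n = 1000.
from scipy.stats import chisquare

observed = [273, 236, 182, 309]      # red, blue, green, yellow
stat, p = chisquare(observed)        # expected defaults to equal counts

print(f"chi2 = {stat:.2f}, p = {p:.2g}")
```

With these counts the statistic is 35.32 on 3 degrees of freedom, so H0 of equal proportions would be rejected.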
19,802
Can chi square be used to compare proportions?
The test statistic for Pearson's chi-square test is $$\sum_{i=1}^{k} \frac{(O_i - E_i)^2}{E_i},$$ where $k$ is the number of categories. If you write $o_i = \dfrac{O_i}{n}$ and $e_i = \dfrac{E_i}{n}$ to have proportions, where $n=\sum_{i=1}^{k} O_i$ is the sample size and $\sum_{i=1}^{k} e_i =1$, then the test statistic is equal to $$n \sum_{i=1}^{k} \frac{(o_i - e_i)^2}{e_i},$$ so a test of the significance of the observed proportions depends on the sample size, much as one would expect.
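The identity between the count form and the proportion form of the statistic is easy to verify numerically; the counts below are arbitrary illustrative values.

```python
# Numerical check: the chi-square statistic computed from counts equals
# n * sum((o_i - e_i)**2 / e_i) computed from the corresponding proportions.
import numpy as np

O = np.array([273, 236, 182, 309], dtype=float)  # observed counts
n = O.sum()                                      # sample size
E = np.full(4, n / 4)                            # expected counts under H0
o, e = O / n, E / n                              # observed/expected proportions

count_form = np.sum((O - E) ** 2 / E)
prop_form = n * np.sum((o - e) ** 2 / e)
assert np.isclose(count_form, prop_form)
```

Doubling every count doubles the statistic even though the proportions are unchanged, which is the sample-size dependence described above.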
19,803
Can chi square be used to compare proportions?
The P value will vary with the total sample size even if the proportions remain the same. This can be seen in the following example, which uses the OP's proportions with varying sample sizes:

from statsmodels.stats.proportion import proportions_chisquare

countlist = [273, 236, 182, 309]
nobslist = [1000, 1000, 1000, 1000]
res = proportions_chisquare(countlist, nobslist)
print("P=", res[1])

countlist = [27.3, 23.6, 18.2, 30.9]
nobslist = [100, 100, 100, 100]
res = proportions_chisquare(countlist, nobslist)
print("P=", res[1])

countlist = [2.73, 2.36, 1.82, 3.09]
nobslist = [10, 10, 10, 10]
res = proportions_chisquare(countlist, nobslist)
print("P=", res[1])

countlist = [.273, .236, .182, .309]
nobslist = [1, 1, 1, 1]
res = proportions_chisquare(countlist, nobslist)
print("P=", res[1])

The four P values printed by the code above are all different:

P= 3.3202983952938086e-10
P= 0.19436113917526665
P= 0.9252292201159897
P= 0.9973200264790189
19,804
Resources for learning to use (/create) dynamic (/interactive) statistical visualization
Apart from Protovis (HTML+JS) or Mayavi (Python), I would recommend Processing, an open-source programming language and environment for people who want to create images, animations, and interactions. It was initially developed to serve as a software sketchbook and to teach the fundamentals of computer programming within a visual context. There are a lot of open-source scripts on http://www.openprocessing.org/, and many related books that deal with Processing as well as data visualization. I know there is a project to provide an R interface, rprocessing, but I don't know how far along it is. There's also an interface with clojure/incanter (see e.g. Creating Processing Visualizations with Clojure and Incanter). There are many online resources, among which Stanford class notes, e.g. CS448B, or 7 Classic Foundational Vis Papers You Might not Want to Publicly Confess you Don't Know.
19,805
Resources for learning to use (/create) dynamic (/interactive) statistical visualization
Some more packages to add to Chl's suggestion of Processing for creating interactive visualisations. All of these are JavaScript-based and run in a browser, so they can be used for publishing as well as for your own analysis.

D3.js is the successor to Protovis. It's more powerful in that you have more control over the objects created (they're proper DOM objects, i.e. you have full control over them using JavaScript), but some prefer Protovis for its simplicity. There is a good technical D3-vs-Protovis discussion here.

Raphael.js is a good option for highly customised mass-market web interactivity, since it's both future-proof (no Flash) and works on browsers as old as IE6 (the only thing it doesn't work on, that I know of, is old versions of the Android browser). Like D3, everything is a targetable DOM object, and it has good built-in API controls for animation and interactivity. It offers nothing out of the box that is specific to visualisation: it's a very powerful and flexible blank slate, a great choice for designing custom visualisations but not for your own initial exploratory analysis. Get acquainted with your data first.

gRaphael.js provides standard charts (bar, line, etc.) for Raphael. It's basic but works and can be built upon - it might be a useful ingredient if you are building your own suite.

Regarding your other question about learning: for general principles, Information Dashboard Design deserves a mention if what you want is to make an array of general-purpose interactive standard tools for your data. Interactive visualisations sit on the line between statistics and interaction design, so books on the latter may be of use. I don't have personal experience of any of the many interaction-design textbooks, but I am a big fan of Universal Principles of Design. It might be overkill for your needs, but consider looking down the Usability column in its excellent Categorical Contents page and reading the chapters listed (progressive disclosure, signal to noise, etc.).

Also, for anyone new to programming, Programming Interactivity is a good place to start for beefing up technical skills (it also includes a hefty chapter on Processing). But for knowing what works and what is possible, you can't beat learning by doing, and a good kick-start could be to trial and analyse the big-name, big-price-tag general-purpose interactive visualisation packages like Tableau and JMP, and think about why their features are designed the way they are.
19,806
Resources for learning to use (/create) dynamic (/interactive) statistical visualization
In addition to Processing, check out the Python-based NodeBox (versions 1, 2, and the OpenGL version), which was inspired by Processing: http://nodebox.net http://beta.nodebox.net/ www.cityinabottle.org/nodebox/ NodeBox 1 is Mac-only, whereas NodeBox 2 and the OpenGL version are cross-platform. Python has a ton of data-crunching libraries that can be imported into NodeBox, e.g. SciPy (scipy.org).
19,807
Resources for learning to use (/create) dynamic (/interactive) statistical visualization
As a separate approach to the existing answers: shortly after I posted my first long list, WEAVE emerged, an open-source dedicated data-visualisation suite. Here's a brief write-up on WEAVE on the leading data-vis blog Flowing Data. It's wise to take a different approach to data visualisation depending on where you are in the process. The earlier you are - the more raw and unexplored your data - the more likely you are to benefit from pre-built, flexible, general-purpose suites like WEAVE and its closed-source commercial counterparts like Tableau and JMP - you can try things out quickly and painlessly to get to know the data and to figure out what lines of attack will get the most out of it. As you discover more about the data, your focus is likely to shift towards communication or 'guided exploration' - more customised exploratory data visualisations designed around the caveats, nuances and areas of interest you have now discovered in the data. This is where blank-slate products like the programmatic vector drawing tools listed above come into their own.
19,808
Can CART models be made robust?
No, not in their present forms. The problem is that convex loss functions cannot be made robust to contamination by outliers (this has been well known since the 1970s but keeps being rediscovered periodically; see for instance this paper for one recent such rediscovery): http://www.cs.columbia.edu/~rocco/Public/mlj9.pdf Now, in the case of regression trees, the fact that CART uses marginals (or alternatively univariate projections) can be exploited: one can think of a version of CART where the s.d. criterion is replaced by a more robust counterpart (the MAD or, better yet, the Qn estimator). Edit: I recently came across an older paper implementing the approach suggested above (using a robust M-estimator of scale instead of the MAD). This will impart robustness to "y" outliers to CART/RFs (but not to outliers located in the design space, which will affect the estimates of the model's hyper-parameters). See: Galimberti, G., Pillati, M., & Soffritti, G. (2007). Robust regression trees based on M-estimators. Statistica, LXVII, 173–190.
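A minimal sketch of why swapping the s.d. for the MAD helps, on synthetic data: one gross response outlier inflates the standard deviation (the usual ingredient of CART's node impurity) by an order of magnitude, while the MAD barely moves.

```python
# A single "y" outlier inflates the standard deviation but barely moves
# the MAD, a robust scale estimate.  Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=10.0, scale=1.0, size=100)

def mad(x):
    """Median absolute deviation, scaled for consistency at the Normal."""
    return 1.4826 * np.median(np.abs(x - np.median(x)))

sd_clean, mad_clean = y.std(), mad(y)

y_contaminated = y.copy()
y_contaminated[0] = 1000.0                 # one gross response outlier
sd_dirty, mad_dirty = y_contaminated.std(), mad(y_contaminated)

print(sd_clean, sd_dirty)                  # sd explodes
print(mad_clean, mad_dirty)                # MAD is nearly unchanged
```

A split criterion built on the MAD (or the Qn estimator) would therefore rank candidate splits almost the same with or without the contamination, which is the robustness property suggested above.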
19,809
Can CART models be made robust?
You might consider using Breiman's bagging or random forests. One good reference is Breiman, "Bagging Predictors" (1996). The approach is also summarized in Clifton Sutton's "Classification and Regression Trees, Bagging, and Boosting" in the Handbook of Statistics. You can also see Andy Liaw and Matthew Wiener's R News discussion of the randomForest package.
19,810
Can CART models be made robust?
If you check out the gbm package in R (generalized gradient boosting), the boosting can use loss functions that are not necessarily mean squared error; this shows up in the 'distribution' argument to the function gbm(). Thus the elaboration of the tree via boosting will be resistant to outliers, similar to how M-estimators work. You might start there. Another approach would be to build the tree the usual way (partitions based on SSE), but prune it using cross-validation with a robust measure of fit. I think xpred.rpart will give cross-validated predictions (for a variety of tree complexities), to which you can then apply your own measure of error, such as mean absolute error.
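The intuition behind the robust fit measure can be sketched in a few lines (synthetic data, not the gbm or rpart machinery itself): the mean minimizes squared error and is dragged by one outlier, while the median minimizes absolute error and is not, so an absolute-error criterion judges fits by the bulk of the data rather than by the outlier.

```python
# Why a robust error measure helps: the mean (optimal under squared error)
# is pulled far from the data by one outlier; the median (optimal under
# absolute error) is unaffected.  Synthetic illustration.
import numpy as np

y = np.array([9.5, 10.1, 9.8, 10.4, 9.9, 10.2, 500.0])  # one gross outlier

mean_fit = y.mean()        # minimizes the sum of squared errors
median_fit = np.median(y)  # minimizes the sum of absolute errors

print(mean_fit)            # dragged far above the bulk of the data
print(median_fit)          # stays at 10.1, unaffected by the outlier
```

Pruning against mean absolute error, as suggested above, applies the same logic to the cross-validated predictions of each candidate subtree.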
19,811
DataCamp exercise about distributions
Let's begin by answering the question as it stands. Then we can respond to some of the points raised in comments. The question wants you to make the following assumptions: (1) The future will behave like the past, but "outliers" like Secretariat will not occur. (2) The past results are characterized as a sequence of realizations of random variables that are independent, identically distributed (thus exhibiting no trends over time), and normal. (3) Your estimate of that common normal distribution is perfect -- there is no error in it. With these assumptions, let $\mu$ be the common Normal mean, $\sigma$ the common Normal standard deviation, and $x_{min}$ Secretariat's time (144 seconds). The chance of equaling or beating that time on each future run of the Belmont Stakes is therefore the chance that a standard Normal variate $Z$ will be less than or equal to $(x_{min}- \mu)/\sigma,$ namely $p = \Phi\left((x_{min}-\mu)/\sigma\right),$ where $\Phi$ is the standard Normal distribution function. This describes a sequence of independent, identically distributed Bernoulli$(p)$ variables. The chance that the waiting time $T$ exceeds $N$ runs of the stakes (its survival function) is the chance that the next $N$ values all equal $0.$ That is the product of those chances (by the independence part of assumption $(2)$), given by $$\Pr(T \gt N) = (1-p)^{N}.$$ This is a geometric distribution. However, $p$ is tiny. (It will be somewhere around $10^{-3},$ $10^{-4},$ or even less, depending on how you estimate $\mu$ and $\sigma.$) Thus, to an excellent approximation, $$\Pr(T \gt N) = (1-p)^{N} = ((1-p)^{1/p})^{pN} \approx \exp(-pN).$$ That is manifestly an exponential waiting time, consistent with your expectation that Poisson processes might play a role. The expected waiting time is $$E[T] = \sum_{N=0}^\infty \Pr(T \gt N) = \frac{1}{p} = \frac{1}{\Phi\left((x_{min}-\mu)/\sigma\right)}.$$ That will be hundreds to tens of thousands of runs (and therefore at least that many years).
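The geometric-to-exponential approximation above is easy to confirm numerically; $p = 10^{-4}$ below is an illustrative value in the range mentioned in the text.

```python
# For small p, the geometric survival function (1 - p)**N is essentially
# exp(-p * N), and the mean waiting time is 1/p.
import math

p = 1e-4
for N in (100, 1_000, 10_000):
    exact = (1 - p) ** N          # geometric survival function
    approx = math.exp(-p * N)     # exponential approximation
    assert abs(exact - approx) < 1e-3
    print(N, exact, approx)

print(1 / p)   # expected number of runs before the record falls
```

Even at $N = 10\,000$ (where $pN = 1$) the two survival probabilities agree to about four decimal places, which is why the exponential waiting-time picture is harmless here.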
Bear in mind that assumptions $(1)$ - $(3)$ must continue to hold for at least Justin's lifetime for this to be a useful calculation. Let's do a reality check. The data at https://www.belmontstakes.com/history/past-winners/ give 94 winning times from 1926 through the present. Suppose Justin expects to live another $N$ years. Assume only the first two parts of $(2)$ -- namely, that winning times are independent and identically distributed. In that case, the location of the best time within the entire time series of $94 + N$ results is randomly and uniformly distributed: it could occur at any time. Consequently, Justin would compute a probability of $N/(94 + N)$ of observing the best time during the next $N$ years. With this argument, they would have a greater than 50% chance of observing the best time provided $N$ exceeds $94.$ This is a couple of orders of magnitude shorter than the previous result. Which should we believe? We might appeal to the data. A plot of the winning times against year (not reproduced here) immediately shows that most of the assumptions $(1) - (3)$ are implausible or must be modified. There have been trends, leveling off around 1970 - 1990. Consequently, "$94$" in the preceding calculations ought to be replaced by some value between c. $32$ and $52.$ If Justin is young, they should expect to see a new record time during their lifetime. Relative to these trends, the times display remarkably consistent randomness, as suggested by a time series plot of the residuals (differences between the winning times and their smoothed values). Moreover, there is no significant evidence of lack of independence: the autocorrelations are negligible. This justifies examining the univariate distribution of the residuals. Indeed, it is well described by a Normal distribution, provided we ignore two of the $94$ data points.
These residuals (after removing the two extreme values) have a mean of $-0.015$ and a standard deviation of $1.268.$ The residual for the best time is $-4.732$ (almost five seconds better than expected that year based on the Loess fit). The intended answer to the question, then, is tantamount to a mean waiting time of $$E[T] = \frac{1}{\Phi\left((-4.732-(-0.015))/1.268\right)} \approx 10^4.$$ The Belmont Stakes will never run that many times. The post-Secretariat residuals have a bit more scatter than the older residuals, with a standard deviation of $1.5.$ An upper $95\%$ confidence limit for this estimate is $2.0.$ Using that instead of $1.268$ in the preceding calculation causes the estimated waiting time to drop two orders of magnitude, from $10\,000$ to $100.$ That's still a long time, but it leaves some room for Justin to hope! This shows how sensitive the answer is to what otherwise seems to be a minor technical issue, that of estimating the Normal parameters. That still leaves several thorny issues: How should we treat the fact that the parameters of this distribution can only be estimated and are not very certain? For instance, the solution is exquisitely sensitive to the estimate of $\sigma,$ which could be off by $30\%$ or more. We can address this with the Delta method or bootstrapping; you can read about these elsewhere on CV. What justifies ignoring the two "outliers"? The correct answer is: nothing. "Outlying" conditions will recur in the future -- one can almost guarantee that. Since $2$ outliers have appeared in $94$ runs, we can predict (with 95% confidence) that between $0$ and $8$ will appear in the next $94$ runs, suggesting Justin has a chance of observing several unusual winning times (good or bad). (The value $8$ is a nonparametric $95\%$ upper prediction limit.) Why can we assume that trends -- which clearly occurred in the past -- won't recur in the future?
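The two waiting-time estimates can be reproduced directly from the residual summary given above (mean $-0.015$, sd $1.268$ or its upper confidence limit $2.0$, record residual $-4.732$):

```python
# Expected waiting time 1/p, where p is the per-run chance that a
# Normal(mu, sigma) residual beats the record residual of -4.732.
# Numbers are taken from the residual summary in the text.
from scipy.stats import norm

mu, record = -0.015, -4.732

waits = {}
for sigma in (1.268, 2.0):            # point estimate of sd, and its upper 95% limit
    p = norm.cdf((record - mu) / sigma)   # chance per run of beating the record
    waits[sigma] = 1 / p                  # expected waiting time in runs

print(waits)   # roughly 10^4 runs vs. roughly 10^2 runs
```

The two orders of magnitude between the results come entirely from the choice of $\sigma$, which is the sensitivity the text emphasizes.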
For instance, it's difficult to believe that gradual improvements halted a third of a century ago in a competitive business like horse racing. An alternative is that some countervailing effects may have slowed progress. Maybe gradual warming of the climate adversely affects winning times? Only a tiny, tiny effect is needed. If the warming accelerates, the future curve might trend upwards. That would make it less likely for any horse ever to beat Secretariat's time. The moral of this post is that data analysis is not a matter of hiring a roomful of monkeys (pardon me, DataCamp graduates) to plug numbers into Poisson process calculators that spit out predictions and probabilities. Data analysis, when practiced well, is a principled exploration of how data help us reason from assumptions to tentative conclusions. As this case study shows, even slight differences in the assumptions can lead to profound differences in the conclusions. The value of a data analysis, then, lies not in its predictions but primarily in the care with which it is conducted, how well it exposes and examines the assumptions, and how it reveals their connections to the results.
DataCamp exercise about distributions
Let's begin by answering the question as it stands. Then we can respond to some of the points raised in comments. The question wants you to make the following assumptions: The future will behave lik
DataCamp exercise about distributions Let's begin by answering the question as it stands. Then we can respond to some of the points raised in comments. The question wants you to make the following assumptions: The future will behave like the past, but "outliers" like Secretariat will not occur. The past results are characterized as a sequence of realizations of random variables that are independent identically distributed (thus, exhibit no trends over time) normal Your estimate of that common normal distributions is perfect -- there is no error in it. With these assumptions, let $\mu$ be the common Normal mean, $\sigma$ the common Normal standard deviation, and $x_{min}$ Secretariat's time (144 seconds). The chance of equaling or beating that time on each future run of the Belmont stakes therefore is the chance that a standard Normal variate $Z$ will be less than or equal to $p = (x_{min}- \mu)/\sigma,$ given by the standard Normal distribution function $\Phi.$ This describes a sequence of independent identically distributed Bernoulli$(p)$ variables. The chance that the waiting time $T$ exceeds $N$ runs of the stakes (its survival function) is the chance the next $N$ values equal $0.$ That is the product of those chances (by the independence part of assumption $(2)$), given by $$\Pr(T \gt N) = (1-p)^{N}.$$ This is a geometric distribution. However, $p$ is tiny. (It will be somewhere around $10^{-3},$ $10^{-4},$ or even less, depending on how you estimate $\mu$ and $\sigma.$) Thus, to an excellent approximation, $$\Pr(T \gt N) = (1-p)^{N} = ((1-p)^{1/p})^{pN} \approx \exp(-pN).$$ That is manifestly an exponential waiting time, consistent with your expectation that Poisson processes might play a role. The expected waiting time is $$E[T] = \sum_{N=0}^\infty \Pr(T \ge N) = \frac{1}{p} = \frac{1}{\Phi\left((x_{min}-\mu)/\sigma\right)}.$$ That will be hundreds to tens of thousands of runs (and therefore at least that many years). 
Bear in mind that the assumptions $(1)$ - $(3)$ must continue to hold for at least Justin's lifetime for this to be a useful calculation.

Let's do a reality check. The data at https://www.belmontstakes.com/history/past-winners/ give 94 winning times from 1926 through the present. Suppose Justin expects to live another $N$ years. Assume only the first two parts of $(2)$ -- namely, that winning times are independent and identically distributed. In that case, the location of the best time during the entire time series of $94 + N$ results is randomly and uniformly distributed: it could occur at any time. Consequently, Justin would compute a probability of $N/(94 + N)$ of observing the best time during the next $N$ years. With this argument, they would have a greater than 50% chance of observing the best time provided $N$ exceeds $94.$ This is a couple of orders of magnitude shorter than the previous result. Which should we believe?

We might appeal to the data. A plot of the winning times over the years immediately shows that most of the assumptions $(1) - (3)$ are implausible or must be modified. There have been trends, leveling off around 1970 - 1990. Consequently, "$94$" in the preceding calculations ought to be replaced by some value between c. $32$ and $52.$ If Justin is young, they should expect to see a new record time during their lifetime.

Relative to these trends, the times display remarkably consistent randomness, as a time series plot of the residuals (differences between the winning times and their smoothed values) shows. Moreover, there is no significant evidence of lack of independence: the autocorrelations are negligible. This justifies examining the univariate distribution of the residuals. Indeed, it is well described as a Normal distribution, provided we ignore two of the $94$ data points.
These residuals (after removing the two extreme values) have a mean of $-0.015$ and a standard deviation of $1.268.$ The residual for the best time is $-4.732$ (almost five seconds better than expected that year, based on the Loess fit). The intended answer to the question, then, is tantamount to a mean waiting time of $$E[T] = \frac{1}{\Phi\left((-4.732-(-0.015))/1.268\right)} \approx 10^4.$$ The Belmont Stakes will never run that many times.

The post-Secretariat residuals have a bit more scatter than the older residuals, with a standard deviation of $1.5.$ An upper $95\%$ confidence limit for this estimate is $2.0.$ Using that instead of $1.268$ in the preceding calculation causes the estimated waiting time to drop two orders of magnitude, from $10\,000$ to about $100.$ That's still a long time, but it leaves some room for Justin to hope! This shows how sensitive the answer is to what otherwise seems to be a minor technical issue: the estimation of the Normal parameters.

That still leaves several thorny issues:

- How should we treat the fact that the parameters of this distribution can only be estimated and are not very certain? For instance, the solution is exquisitely sensitive to the estimate of $\sigma,$ and that could be off by $30\%$ or more. We can address this with the Delta method or bootstrapping. You can read about these elsewhere on CV.
- What justifies ignoring the two "outliers"? The correct answer is: nothing. "Outlying" conditions will recur in the future -- one can almost guarantee that. Since $2$ outliers have appeared in $94$ runs, we can predict (with 95% confidence) that between $0$ and $8$ will appear in the next $94$ runs, suggesting Justin has a chance of observing several unusual winning times (good or bad). (The value $8$ is a nonparametric $95\%$ upper prediction limit.)
- Why can we assume that trends -- which clearly occurred in the past -- won't recur in the future?
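The two waiting-time estimates quoted above can be reproduced directly from the residual summary statistics, using the standard library's error function for $\Phi$:

```python
import math

def norm_cdf(z):
    """Standard Normal distribution function Phi."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

best_residual, mean_residual = -4.732, -0.015

# Fitted residual sd of 1.268 gives a waiting time near 10^4 runs:
wait_fitted = 1.0 / norm_cdf((best_residual - mean_residual) / 1.268)

# The upper 95% confidence limit of 2.0 for sigma drops it to about 100:
wait_upper = 1.0 / norm_cdf((best_residual - mean_residual) / 2.0)
```

The two results differ by two orders of magnitude, which is exactly the sensitivity to the estimate of $\sigma$ discussed above.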
For instance, it's difficult to believe that gradual improvements halted a third of a century ago in a competitive business like horse racing. An alternative is that some countervailing effects may have slowed progress. Maybe gradual warming of the climate adversely affects winning times? Only a tiny, tiny effect is needed. If the warming accelerates, the future curve might trend upwards. That would make it less likely for any horse ever to beat Secretariat's time.

The moral of this post is that data analysis is not a matter of hiring a roomful of monkeys (pardon me, DataCamp graduates) to plug numbers into Poisson process calculators that spit out predictions and probabilities. Data analysis, when practiced well, is a principled exploration of how data help us reason from assumptions to tentative conclusions. As this case study shows, even slight differences in the assumptions can lead to profound differences in the conclusions. The value of a data analysis, then, lies not in its predictions but primarily in the care with which it is conducted, how well it exposes and examines the assumptions, and how it reveals their connections to the results.
Why are residual connections needed in transformer architectures?
The reason for having the residual connection in the Transformer is more technical than motivated by architecture design. Residual connections mainly help mitigate the vanishing gradient problem. During back-propagation, the signal gets multiplied by the derivative of the activation function. In the case of ReLU, this means that in approximately half of the cases the gradient is zero. Without the residual connections, a large part of the training signal would get lost during back-propagation. Residual connections reduce this effect because summation is linear with respect to the derivative, so each residual block also receives a signal that is not affected by the vanishing gradient. The summation operations of residual connections form a path in the computation graph where the gradient does not get lost.

Another effect of residual connections is that the information stays local in the Transformer layer stack. The self-attention mechanism allows arbitrary information flow in the network and thus arbitrary permutations of the input tokens. The residual connections, however, always "remind" the representation of what the original state was. To some extent, the residual connections guarantee that contextual representations of the input tokens really represent the tokens.
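A toy scalar example (not actual Transformer code; the stack of one-dimensional ReLU "layers" is purely illustrative) shows how the identity path keeps the gradient alive even when every ReLU is inactive:

```python
# Toy illustration: gradient reaching the input of a stack of scalar
# "layers" y = relu(w*x), with and without a residual connection y = relu(w*x) + x.
def input_gradient(x, weights, residual):
    grad = 1.0
    for w in weights:
        pre = w * x
        d_layer = w if pre > 0 else 0.0      # ReLU derivative kills inactive paths
        # With a residual connection the layer derivative is (d_layer + 1):
        grad *= (d_layer + 1.0) if residual else d_layer
        x = max(pre, 0.0) + (x if residual else 0.0)
    return grad

# Every pre-activation is negative, so all ReLUs are "dead":
print(input_gradient(-0.5, [1.0] * 8, residual=False))  # 0.0 -- signal lost
print(input_gradient(-0.5, [1.0] * 8, residual=True))   # 1.0 -- identity path survives
```

This is the chain rule in miniature: the product of layer derivatives collapses to zero without the `+ 1` identity term, and stays nonzero with it.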
What is the difference between econometrics and statistics?
I think it's helpful to think of econometrics as an application of statistics that is well suited to dealing with the problems economists typically encounter in their research. So the two are certainly very related, but the focus is on the connection between economics and statistics. Another way to think about this is that econometrics combines statistics with assumptions that come from economic theory or reasoning, and econometrics studies to what extent these economic assumptions buy information in a statistical context. Three ways this manifests itself are:

1. statistical models fall out of economic models, rather than starting with a statistical model;
2. the focus is on issues that are particularly salient for economists; and
3. re-contextualizing statistical assumptions and approaches as economic assumptions (and vice-versa).

To expand on these points: the first point emphasizes that the statistical model is typically motivated by an economic model. For example, you may be studying markets, and a classic result from economic theory is market clearing, which states that the supply of a good equals the demand for that good. So when you have data on firms producing goods and consumers purchasing them, you may want to impose this condition in your statistical model. It can be stated as a moment condition, making it a special case of the Generalized Method of Moments (GMM), which was developed in econometrics precisely because so many economic models imply moment conditions that must hold, and we can use that information in our statistical models.

The second point is an obvious one, and you could think of the first point as a case of it, but it really emphasizes that econometrics develops statistical tools in the context of what economists are interested in, and one classic interest is causality rather than correlation. For example, the development of instrumental variable approaches that allow for heterogeneity in potential outcomes was largely driven by econometricians, since it addresses a common problem in that field: economists typically study individuals (or individual firms), and it is very reasonable that each individual has a different treatment effect. Additionally, unlike in some fields, it may be hard to run RCTs in economic contexts, and so classic papers like Imbens and Angrist (1994) analyze what IV methods identify when you have an instrument without full support.

A final point is that econometrics also focuses on relating statistical models back to economics. This is the reverse direction of the first point: given a statistical model, what assumptions would you have to place on individuals so that the model holds, and are these assumptions sensible from an economic perspective? For example, Vytlacil (2002) showed that the classic IV assumptions plus monotonicity are equivalent to a Roy model with an index switching threshold (a variant of a classic economic model), which allows economists to understand statistical assumptions from an economic perspective.
What is the difference between econometrics and statistics?
Econometrics originally came from statistics. In general, statistics is broader than econometrics: while econometrics focuses on statistical inference, statistics also deals with other important fields such as design of experiments and sampling techniques. However, today I may undoubtedly assert that econometrics has largely contributed to statistics as well.

1) The kind of statistical problem in economics

The first time I heard about linear regression was in the physics lab when I was still a student of chemical engineering. I am not sure which specific class I was having, but let us say the class was an experiment to estimate the elasticity coefficient of a spring. Easy! Even if your knowledge of physics is very limited, you can understand this experiment. One end of the spring is attached to the ceiling, and to the free end you attach a mass $m$. The spring then expands and, by Hooke's Law, the equilibrium position of the mass is the one in which the weight equals the force generated by the deformation of the spring. We can write this as $mg = kd$, where $g$ is gravity, $k$ is the spring constant, and $d$ is how much the spring expands when you put the mass on its end. If you attach different masses, you get different deformations. You can then build a data matrix where the dependent variable is $d$ (known exactly) and the independent variable is $mg$ (also known), and estimate the value of $1/k$ from the linear regression $$d = \alpha + \beta mg + u, $$ where $\beta$ is an estimate of $1/k$ and $u$ is a possible error associated with the model. Note that:

Cause: higher weight. Effect: greater spring distension.

This effect is very clear. This situation is very rare in econometrics.
In economics, few people know it, but the intention is to study and understand the choices of governments, families, and companies. When we try to model choice situations, the cause-effect relationship is not as explicit as above. Consider the following socio-economic problem from the field of the economics of crime, where cities would like to know how much they would need to increase the number of policemen to reduce crime. The model of interest could take the following form: $$crimes = \alpha_1 + \beta_1 policemen + ... + u_1 $$ This model suggests that the number of crimes decreases with the number of policemen. Interpretation: if the number of policemen increases, the incentive to commit crimes is reduced.

Question: does this equation answer the question above? Can we write cause = police $\Rightarrow$ effect = crimes? No. Why? Simply because the number of policemen can be associated with the following model: $$policemen = \alpha_2 + \beta_2 crimes + ... + u_2 $$ This model says that mayors respond to the number of crimes by increasing the number of policemen, or that a higher number of policemen is associated with areas of greater crime. Interpretation: if the crime in a given area increases and the mayor wants to get reelected, then she/he wants to solve the problem and increases the number of policemen.

The cause and effect in this situation are not clear. This problem is called endogeneity, and it is the rule in economics. In this case, the error term is not exogenous (it is easy to prove that), and exogeneity is the most important assumption for ensuring that the estimated parameters of our model are not biased. [This happens because the OLS estimator forces the error to be orthogonal to the regressors, and in the case of this regression model, that does not hold.] Disclaimer: this is a classical model (and one that is very easy to explain) in economics.
I am not suggesting one way or the other whether the number of policemen should be increased, given the recent events that took place in the USA. I am just using simple models to illustrate some ideas.

Most events in economics come from equilibrium relations, such as:

A) Equilibrium models of supply and demand: a) the demand decreases with the price of a given product; b) the supply increases with the price of a given product; and in equilibrium, demand = supply. How do we separate these effects in economics?

B) Inflation and the interest rate: a) if the basic interest rate of the economy decreases, economic activity increases and inflation is likely to rise (here, the low interest rate seems to be causing the inflation); b) however, if inflation is higher, central bank decision makers may decide to increase the interest rate to control it (here, the high inflation seems to be causing the high interest rate). In fact, we have another equilibrium relation.

2) The data we have in econometrics

In many fields of statistics, we are able to create experiments to generate the data we need. For instance, if we want to test the effect of a drug, we divide the population in two parts: the first part receives the treatment and the second part does not (it receives the placebo). In many situations in economics, it is not possible to generate the "perfect" data to test a phenomenon. For instance, we may not play with the interest rate to estimate its effect on inflation: if we did, many people might lose their jobs due to a recession, or we might cause hyperinflation or a flight of international capital. Having said that, in many situations in economics we have to live with the data that is out there, which is subject to lots of problems. So the focus of econometrics is to arrive at cause-effect relations, like the one we found in the spring example above, with imperfect data.
3) The role of economic theory

In econometrics the role of theory is very important. Usually economists want to test hypotheses, so the model is built in order to test them. For instance, what is the effect of additional years of study on people's wages? This is the kind of question that arises in the field of labor economics.

4) Models

Models in econometrics focus on establishing the cause-effect relationship in situations like those discussed above. The classical idea for dealing with endogeneity is to find instrumental variables that replace the endogenous variables, so that we recover the exogeneity of the error term. Extensions of this idea are the so-called two-stage least squares and the generalized method of moments.

This is just a general overview of the field. If you really want a general perspective on econometrics, I strongly suggest the book "Mostly Harmless Econometrics" by Joshua D. Angrist and Jörn-Steffen Pischke, or its simplified version "Mastering Metrics: The Path from Cause to Effect" by the same authors. Nowadays the main contributions of the field are related to mixing ideas from econometrics with machine learning.

It is worth mentioning that some ideas in this answer came from previous answers I gave on a Brazilian site: Endogeneity and Econometrics versus Statistics.
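The endogeneity problem and the instrumental-variables fix can be illustrated with a small simulation. The data here are entirely synthetic (this is not the crime model above): $x$ is correlated with the error term $u$, and $z$ is a valid instrument, correlated with $x$ but independent of $u$:

```python
import random

random.seed(7)
n, beta = 20000, -1.0            # "true" effect of x on y

z = [random.gauss(0, 1) for _ in range(n)]                   # instrument
u = [random.gauss(0, 1) for _ in range(n)]                   # structural error
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]   # endogenous regressor
y = [beta * xi + ui for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

ols_slope = cov(x, y) / cov(x, x)   # biased: x is correlated with u
iv_slope = cov(z, y) / cov(z, x)    # consistent: z is uncorrelated with u
```

In this setup OLS converges to about $-2/3$ instead of $-1$, because it forces the fitted error to be orthogonal to $x$, while the IV estimator recovers the true slope.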
What is the difference between econometrics and statistics?
Econometrics is an applied branch of statistics that is primarily related to economics. For example, in econometrics, one of the primary challenges is the non-independence of the error terms, which is typically assumed away in many/most statistical problems. This makes sense for traditional statistics but not so much for economics, where humans are always part of a larger society that is not easily split into double-blind treatment and control groups.
What is the difference between econometrics and statistics?
The main difference is the area of application: econometrics is statistics applied to problems and phenomena from economics. That's it. Naturally, this leads to a different emphasis and focus in the methodology.
What is the difference between econometrics and statistics?
Previous answers already touched upon the difference between statistics and reduced-form econometrics, in that the latter places more emphasis on causal inference based on observational data. This difference is very clear if you compare the techniques for "panel data" used by econometricians with those used for "longitudinal data" by statisticians, despite the data structure being exactly the same. There is an additional layer of difference between statistics and structural econometrics. Econometric models and methods arise from the need to test economic theory. One starts with an economic model, then considers how it can be taken to data, rather than applying statistical models/methods in an ad hoc way. Two standard examples:

1. CAPM and Fama-French-MacBeth. The classical Capital Asset Pricing Model (due to Markowitz and Sharpe) says that, if investors have mean-variance preferences, then asset prices obey the relationship $$ E[R - r] = Cov(R, M) $$ where the RHS is the covariance of the return $R$ with the market $M$, and the LHS is the expected excess return of the asset. Empirically, taking this relationship to data means fitting a linear model---regressing $R-r$ on $M$. Later Fama and French introduced additional covariates (the Fama-French factors) into the CAPM regression. In this particular case, the appropriate econometric model turns out to be the linear model.

2. Generalized Method of Moments. In a more contemporary model of asset prices (by now also basic), one arrives at the equilibrium relationship (called an asset pricing equation in economics) $$ E[u'(c_t) R_t|\mathcal{I}_t] = 0 $$ where $c_t$ is consumption, $R_t$ is asset return, and $u$ is the preference (utility function) of the agent. A natural econometric question is then to estimate the parameters of $u$ from data. This led Hansen to introduce GMM, which turns the above moment condition, and others, into a testable statistical hypothesis. (GMM contains instrumental variables (IV) as a special case.)
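The first example can be illustrated with a short simulation. This is a hypothetical sketch (simulated data and made-up parameter values, in Python rather than an econometrics package): it regresses an asset's excess return on the market's, the empirical counterpart of taking the CAPM relationship to data.

```python
import numpy as np

# Hypothetical illustration: simulate market and asset excess returns,
# then estimate the CAPM-style linear relationship by OLS.
rng = np.random.default_rng(0)
n = 5000
market_excess = rng.normal(0.05, 0.2, n)          # simulated market excess returns
true_beta = 1.3
asset_excess = true_beta * market_excess + rng.normal(0.0, 0.05, n)

# OLS with an intercept: regress R - r on M - r.
X = np.column_stack([np.ones(n), market_excess])
alpha_hat, beta_hat = np.linalg.lstsq(X, asset_excess, rcond=None)[0]
print(alpha_hat, beta_hat)   # beta_hat should be near 1.3, alpha_hat near 0
```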
19,818
In Factor Analysis (or in PCA), what does it mean a factor loading greater than 1?
Who told you that factor loadings can't be greater than 1? It can happen, especially with highly correlated factors. This passage from a report about it by a prominent pioneer of SEM pretty much sums it up: "This misunderstanding probably stems from classical exploratory factor analysis where factor loadings are correlations if a correlation matrix is analyzed and the factors are standardized and uncorrelated (orthogonal). However, if the factors are correlated (oblique), the factor loadings are regression coefficients and not correlations and as such they can be larger than one in magnitude."
19,819
In Factor Analysis (or in PCA), what does it mean a factor loading greater than 1?
Loading in factor analysis or in PCA (see 1, see 2, see 3) is the regression coefficient, the weight in a linear combination predicting variables (items) by standardized (unit-variance) factors/components. Reasons for a loading to exceed $1$:

Reason 1: analyzed covariance matrix. If the analyzed variables were standardized, that is, the analysis was based on the correlation matrix, then after extraction or after orthogonal rotation (such as varimax) - when factors/components remain uncorrelated - loadings are also the correlation coefficients. That is a property of the linear regression equation: with orthogonal standardized predictors, parameters equal Pearson correlations. So, in such a case a loading cannot lie beyond [-1, 1]. But if the analyzed variables were merely centered, that is, the analysis was based on the covariance matrix, then loadings don't have to be confined to [-1, 1], because regression coefficients in such a model need not equal correlation coefficients. They are, actually, covariances. Note that this concerns raw loadings. There exist "rescaled" or "standardized" loadings (described in the links I gave in the 1st paragraph) which are rescaled so as not to leave the [-1, 1] band.

Reason 2: oblique rotation. After an oblique rotation such as promax or oblimin we have two types of loadings: the pattern matrix (regression coefficients, or loadings per se) and the structure matrix (correlation coefficients). They are not equal to each other for the reason given above: correlated predictors' regression coefficients differ from Pearson correlations. Thus, a pattern loading can easily lie beyond [-1, 1]. Note that this is true even when the correlation matrix was the analyzed matrix. So that is how it can happen when factors/components are oblique.

Reason 3 (rare): Heywood case. A Heywood case (pt 6) is a difficulty in factor analysis algorithms when, during iterations, a loading exceeds the theoretically permitted magnitude - it occurs when the communality gets beyond the variance. A Heywood case is a rare situation, encountered on some datasets typically when there are too few variables to support the requested number of factors. Programs report a Heywood case error and either stop or try to resolve it.
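Reason 2 can be demonstrated numerically. The following is an illustrative sketch (invented data, in Python): with two correlated unit-variance "factors", the regression coefficient on one of them can exceed 1 even though its correlation with the variable cannot.

```python
import numpy as np

# Invented example: two correlated standardized "factors" f1, f2 and a
# variable y built from them with a weight greater than 1 on f1.
rng = np.random.default_rng(1)
n = 10000
f1 = rng.normal(size=n)
f2 = 0.9 * f1 + np.sqrt(1 - 0.9**2) * rng.normal(size=n)  # corr(f1, f2) ~ 0.9
y = 1.5 * f1 - 0.8 * f2   # regression weights: 1.5 and -0.8

# Regression coefficients (pattern-loading analogue) vs correlation
# (structure-loading analogue).
F = np.column_stack([f1, f2])
coef = np.linalg.lstsq(F, y, rcond=None)[0]
corr_f1 = np.corrcoef(f1, y)[0, 1]
print(coef)      # recovers [1.5, -0.8]: the first weight exceeds 1
print(corr_f1)   # a correlation, so it stays within [-1, 1]
```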
19,820
Mathematical intuition of Bias-Variance equation
The similarity is more than superficial. The "bias-variance tradeoff" can be interpreted as the Pythagorean Theorem applied to two perpendicular Euclidean vectors: the length of one is the standard deviation and the length of the other is the bias. The length of the hypotenuse is the root mean squared error.

A fundamental relationship

As a point of departure, consider this revealing calculation, valid for any random variable $X$ with a finite second moment and any real number $a$. Since the second moment is finite, $X$ has a finite mean $\mu=\mathbb{E}(X)$ for which $\mathbb{E}(X-\mu)=0$, whence $$\eqalign{ \mathbb{E}((X-a)^2) &= \mathbb{E}((X-\mu\,+\,\mu-a)^2) \\ &= \mathbb{E}((X-\mu)^2) + 2 \mathbb{E}(X-\mu)(\mu-a) + (\mu-a)^2 \\ &= \operatorname{Var}(X) + (\mu-a)^2.\tag{1} }$$ This shows how the mean squared deviation between $X$ and any "baseline" value $a$ varies with $a$: it is a quadratic function of $a$ with a minimum at $\mu$, where the mean squared deviation is the variance of $X$.

The connection with estimators and bias

Any estimator $\hat \theta$ is a random variable because (by definition) it is a (measurable) function of random variables. Letting it play the role of $X$ in the preceding, and letting the estimand (the thing $\hat\theta$ is supposed to estimate) be $\theta$, we have $$\operatorname{MSE}(\hat\theta) = \mathbb{E}((\hat\theta-\theta)^2) = \operatorname{Var}(\hat\theta) + (\mathbb{E}(\hat\theta)-\theta)^2.$$ Let's return to $(1)$ now that we have seen how the statement about bias+variance for an estimator is literally a case of $(1)$. The question seeks "mathematical analogies with mathematical objects." We can do more than that by showing that square-integrable random variables can naturally be made into a Euclidean space.

Mathematical background

In a very general sense, a random variable is a (measurable) real-valued function on a probability space $(\Omega, \mathfrak{S}, \mathbb{P})$. The set of such functions that are square integrable, which is often written $\mathcal{L}^2(\Omega)$ (with the given probability structure understood), almost is a Hilbert space. To make it into one, we have to conflate any two random variables $X$ and $Y$ which don't really differ in terms of integration: that is, we say $X$ and $Y$ are equivalent whenever $$\mathbb{E}(|X-Y|^2) = \int_\Omega |X(\omega)-Y(\omega)|^2 d\mathbb{P}(\omega) = 0.$$ It's straightforward to check that this is a true equivalence relation: most importantly, when $X$ is equivalent to $Y$ and $Y$ is equivalent to $Z$, then necessarily $X$ will be equivalent to $Z$. We may therefore partition all square-integrable random variables into equivalence classes. These classes form the set $L^2(\Omega)$. Moreover, $L^2$ inherits the vector space structure of $\mathcal{L}^2$ defined by pointwise addition of values and pointwise scalar multiplication. On this vector space, the function $$X \to \left(\int_\Omega |X(\omega)|^2 d\mathbb{P}(\omega)\right)^{1/2}=\sqrt{\mathbb{E}(|X|^2)}$$ is a norm, often written $||X||_2$. This norm makes $L^2(\Omega)$ into a Hilbert space. Think of a Hilbert space $\mathcal{H}$ as an "infinite dimensional Euclidean space." Any finite-dimensional subspace $V\subset \mathcal{H}$ inherits the norm from $\mathcal{H}$ and $V$, with this norm, is a Euclidean space: we can do Euclidean geometry in it. Finally, we need one fact that is special to probability spaces (rather than general measure spaces): because $\mathbb{P}$ is a probability, it is bounded (by $1$), whence the constant functions $\omega\to a$ (for any fixed real number $a$) are square integrable random variables with finite norms.

A geometric interpretation

Consider any square-integrable random variable $X$, thought of as a representative of its equivalence class in $L^2(\Omega)$. It has a mean $\mu=\mathbb{E}(X)$ which (as one can check) depends only on the equivalence class of $X$. Let $\mathbf{1}:\omega\to 1$ be the class of the constant random variable. $X$ and $\mathbf{1}$ generate a Euclidean subspace $V\subset L^2(\Omega)$ whose dimension is at most $2$. In this subspace, $||X||_2^2 = \mathbb{E}(X^2)$ is the squared length of $X$ and $||a\,\mathbf{1}||_2^2 = a^2$ is the squared length of the constant random variable $\omega\to a$. It is fundamental that $X-\mu\mathbf{1}$ is perpendicular to $\mathbf{1}$. (One definition of $\mu$ is that it's the unique number for which this is the case.) Relation $(1)$ may be written $$||X - a\mathbf{1}||_2^2 = ||X - \mu\mathbf{1}||_2^2 + ||(a-\mu)\mathbf{1}||_2^2.$$ It indeed is precisely the Pythagorean Theorem, in essentially the same form known 2500 years ago. The object $$X-a\mathbf{1} = (X-\mu\mathbf{1})-(a-\mu)\mathbf{1}$$ is the hypotenuse of a right triangle with legs $X-\mu\mathbf{1}$ and $(a-\mu)\mathbf{1}$. If you would like mathematical analogies, then, you may use anything that can be expressed in terms of the hypotenuse of a right triangle in a Euclidean space. The hypotenuse will represent the "error" and the legs will represent the bias and the deviations from the mean.
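Relation $(1)$ is also easy to verify numerically. A minimal sketch (Python, simulated data) checking that the mean squared deviation from $a$ equals the variance plus the squared bias:

```python
import numpy as np

# Numerical check of relation (1): E[(X - a)^2] = Var(X) + (mu - a)^2,
# using the sample mean and the (population, ddof=0) sample variance.
rng = np.random.default_rng(42)
X = rng.normal(loc=3.0, scale=2.0, size=1_000_000)
a = 5.0

lhs = np.mean((X - a) ** 2)
rhs = np.var(X) + (np.mean(X) - a) ** 2   # np.var uses ddof=0, matching the identity
print(lhs, rhs)   # the two sides agree to floating-point precision
```

With ddof=0 the identity holds exactly for the sample, not just in expectation, so the two sides differ only by rounding error.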
19,821
Mathematical intuition of Bias-Variance equation
This is a way to think visually about accuracy and the bias-variance trade-off. Suppose you are looking at a target and you make many shots that are all scattered close to the center of the target in such a way that there is no bias. Then accuracy is solely determined by variance, and when the variance is small the shooter is accurate. Now let us consider a case where there is great precision but a large bias. In this case, the shots are scattered around a point far from the center. Something is messing up the aim point, but around this new aim point every shot is close. The shooter is precise but very inaccurate because of the bias. There are other situations where the shots are accurate because of small bias and high precision. What we want is no bias and small variance, or at least small bias with small variance. In some statistical problems you can't have both. So MSE becomes the measure of accuracy you want to use: it plays off the bias-variance trade-off, and minimizing MSE should be the goal.
19,822
How to interpret cv.glmnet() plot?
This isn't really about statistics, just reading the documentation. The two different values of $\lambda$ reflect two common choices. $\lambda_{\min}$ is the value which minimizes out-of-sample loss in CV. $\lambda_{1se}$ is the largest $\lambda$ value whose CV error is within 1 standard error of the error at $\lambda_{\min}$. One line of reasoning suggests using $\lambda_{1se}$ because it hedges against overfitting by selecting a larger $\lambda$ value than the min. Which choice is best is context-dependent. The intervals estimate the variance of the loss metric (red dots). They're computed using CV. The vertical lines show the locations of $\lambda_{\min}$ and $\lambda_{1se}$. The numbers across the top are the number of nonzero coefficient estimates.
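The 1-SE rule itself is simple to state in code. Here is a sketch in Python (the CV curve below is invented for illustration; cv.glmnet computes the mean errors and standard errors from the actual folds):

```python
import numpy as np

# The CV curve here is invented for illustration; cv.glmnet derives the
# mean errors (cvm) and their standard errors (cvsd) from the folds.
lambdas = np.logspace(1, -3, 50)              # decreasing, as glmnet orders them
cvm = (np.log10(lambdas) + 1.5) ** 2 + 1.0    # mean CV error at each lambda
cvsd = np.full_like(cvm, 0.3)                 # one standard error at each lambda

i_min = np.argmin(cvm)
lambda_min = lambdas[i_min]

# lambda.1se: the largest lambda whose CV error is within one SE of the minimum.
threshold = cvm[i_min] + cvsd[i_min]
lambda_1se = lambdas[cvm <= threshold].max()
print(lambda_min, lambda_1se)   # lambda_1se >= lambda_min
```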
19,823
Is studentized residuals v/s standardized residuals in lm model
No, studentized residuals and standardized residuals are different (but related) concepts. R in fact does provide built-in functions rstandard() and rstudent() as part of influence.measures. The same built-in package provides many similar functions for leverage, Cook's distance, etc. rstudent() is essentially the same as MASS::studres(), which you can check for yourself like so:

> all.equal(MASS::studres(model), rstudent(model))
[1] TRUE

Standardized residuals are a way of estimating the error for a particular data point which takes into account the leverage/influence of the point. These are sometimes called "internally studentized residuals." $$r_{i}=\frac{e_{i}}{s(e_{i})}=\frac{e_{i}}{\sqrt{MSE(1-h_{ii})}}$$ The motivation behind standardized residuals is that even though our model assumes homoscedasticity, with an i.i.d. error term of fixed variance $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$, the residuals $e_i$ cannot be i.i.d., because the sum of the residuals is always exactly zero. Studentized residuals for any given data point are calculated from a model fit to every other data point except the one in question. These are variously called "externally studentized residuals", "deleted residuals," or "jackknifed residuals". This sounds computationally difficult (it sounds like we'd have to fit one new model for every point) but in fact there's a way to compute it from just the original model without refitting. If the standardized residual is $r_i$, then the studentized residual $t_i$ is: $$t_i=r_i \left( \frac{n-k-2}{n-k-1-r_{i}^{2}}\right) ^{1/2}.$$ The motivation behind studentized residuals comes from their use in outlier testing. If we suspect a point is an outlier, then it was not generated from the assumed model, by definition. Therefore it would be a mistake - a violation of assumptions - to include that outlier in the fitting of the model. Studentized residuals are widely used in practical outlier detection. Studentized residuals also have the desirable property that for each data point, the distribution of the residual will follow a Student's t-distribution, assuming the normality assumptions of the original regression model were met. (Standardized residuals do not have so nice a distribution.) Lastly, to address any concerns that the R library may be following nomenclature different from the above, the R documentation explicitly states that it uses "standardized" and "studentized" in the exact same sense described above: Functions rstandard and rstudent give the standardized and Studentized residuals respectively. (These re-normalize the residuals to have unit variance, using an overall and leave-one-out measure of the error variance respectively.)
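The no-refitting claim can be checked numerically. Below is a sketch (Python, simulated data; the variable names are my own) that computes the leave-one-out error variance in closed form and verifies that it reproduces the quoted formula linking $r_i$ and $t_i$:

```python
import numpy as np

# Simulated OLS fit with an intercept plus k predictors (data invented).
rng = np.random.default_rng(7)
n, k = 40, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

p = k + 1                                       # number of fitted parameters
beta = np.linalg.lstsq(X, y, rcond=None)[0]
e = y - X @ beta                                # raw residuals
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages (hat-matrix diagonal)
mse = e @ e / (n - p)

# Internally standardized residuals.
r = e / np.sqrt(mse * (1 - h))

# Leave-one-out error variance, obtained without refitting any model.
mse_loo = ((n - p) * mse - e**2 / (1 - h)) / (n - p - 1)
t_loo = e / np.sqrt(mse_loo * (1 - h))          # externally studentized residuals

# Closed form quoted above: t_i = r_i * sqrt((n-k-2) / (n-k-1 - r_i^2)).
t_formula = r * np.sqrt((n - k - 2) / (n - k - 1 - r**2))
print(np.max(np.abs(t_loo - t_formula)))        # essentially zero
```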
Is studentized residuals v/s standardized residuals in lm model
No, studentized residuals and standardized residuals are different (but related) concepts. R in fact does provide built-in functions rstandard() and rstudent() as as part of influence.measures. The
Studentized residuals vs. standardized residuals in lm model
No, studentized residuals and standardized residuals are different (but related) concepts. R in fact does provide built-in functions rstandard() and rstudent() as part of influence.measures. The same built-in package provides many similar functions for leverage, Cook's distance, etc. rstudent() is essentially the same as MASS::studres(), which you can check for yourself like so: > all.equal(MASS::studres(model), rstudent(model)) [1] TRUE Standardized residuals are a way of estimating the error for a particular data point which takes into account the leverage/influence of the point. These are sometimes called "internally studentized residuals." $$r_{i}=\frac{e_{i}}{s(e_{i})}=\frac{e_{i}}{\sqrt{MSE(1-h_{ii})}}$$ The motivation behind standardized residuals is that even though our model assumes homoscedastic i.i.d. errors $\epsilon_i \sim \mathcal{N}(0, \sigma^2)$, the residuals $e_i$ cannot be i.i.d., because the sum of the residuals is always exactly zero. Studentized residuals for any given data point are calculated from a model fit to every other data point except the one in question. These are variously called "externally studentized residuals," "deleted residuals," or "jackknifed residuals." This sounds computationally difficult (it sounds like we'd have to fit one new model for every point) but in fact there's a way to compute it from just the original model without refitting. If the standardized residual is $r_i$, then the studentized residual $t_i$ is: $$t_i=r_i \left( \frac{n-k-2}{n-k-1-r_{i}^{2}}\right) ^{1/2}.$$ The motivation behind studentized residuals comes from their use in outlier testing. If we suspect a point is an outlier, then it was not generated from the assumed model, by definition. Therefore it would be a mistake - a violation of assumptions - to include that outlier in the fitting of the model.
Studentized residuals are widely used in practical outlier detection. Studentized residuals also have the desirable property that, for each data point, the distribution of the residual will follow a Student's t-distribution, assuming the normality assumptions of the original regression model were met. (Standardized residuals do not have so nice a distribution.) Lastly, to address any concerns that the R library may be following nomenclature different than above, the R documentation explicitly states that they use "standardized" and "studentized" in the exact same sense described above: Functions rstandard and rstudent give the standardized and Studentized residuals respectively. (These re-normalize the residuals to have unit variance, using an overall and leave-one-out measure of the error variance respectively.)
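As a numerical check of the relationship above, here is a Python sketch (not R code; the data and variable names are my own, simulated for illustration). It computes internally studentized residuals from the hat matrix, applies the closed-form conversion to externally studentized residuals, and verifies it against an explicit leave-one-out refit:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
# Design matrix: intercept plus k = 2 simulated predictors (p = 3 coefficients)
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta = np.linalg.lstsq(X, y, rcond=None)[0]
H = X @ np.linalg.inv(X.T @ X) @ X.T      # hat matrix
e = y - X @ beta                          # raw residuals
h = np.diag(H)                            # leverages h_ii
p = X.shape[1]
mse = e @ e / (n - p)

# Internally studentized ("standardized") residuals: e_i / sqrt(MSE (1 - h_ii))
r = e / np.sqrt(mse * (1 - h))

# Externally studentized residuals via the closed form (with p = k + 1,
# n - p - 1 = n - k - 2 and n - p = n - k - 1, matching the formula above)
t = r * np.sqrt((n - p - 1) / (n - p - r**2))

# Verify by actually refitting without each point
t_loo = np.empty(n)
for i in range(n):
    mask = np.arange(n) != i
    bi = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    ei = y[mask] - X[mask] @ bi
    s2i = ei @ ei / (n - 1 - p)           # leave-one-out error variance
    t_loo[i] = e[i] / np.sqrt(s2i * (1 - h[i]))
```

The two computations agree exactly, which is why rstudent() never needs to refit the model.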
Getting the right starting values for an nls model in R
This is a common problem with nonlinear least squares models; if your start values are very far from the optimum the algorithm may not converge, even though it may be well behaved near the optimum. If you start by taking logs of both sides and fit a linear model, you get estimates of $\log(a)$ and $b$ as the slope and intercept ( 9.947 and -2.011 ) (edit: that's natural log) If you use those to guide the starting values for $a$ and $b$ everything seems to work okay: newMod <- nls(rev ~ a*weeks^b, data=mydf, start = list(a=exp(9.947),b=-2.011)) predict(newMod, newdata = data.frame(weeks=c(1,2,3,4,5,6,7,8,9,10))) [1] 17919.2138 5280.7001 2584.0109 1556.1951 1050.1230 761.4947 580.3091 458.6027 [9] 372.6231 309.4658
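The log-linear fit used to get these starting values can be reproduced outside R; here is a Python sketch using the question's data (reproduced in another answer on this page), recovering the slope and intercept quoted above:

```python
import numpy as np

weeks = np.array([1, 2, 3, 4, 5, 6], dtype=float)
rev = np.array([17906.4, 5303.72, 2700.58, 1696.77, 947.53, 362.03])

# Take logs of both sides: log(rev) = log(a) + b * log(weeks),
# then fit a straight line (degree-1 polynomial)
b, log_a = np.polyfit(np.log(weeks), np.log(rev), 1)
print(round(log_a, 3), round(b, 3))   # roughly 9.947 and -2.011
```

exp(log_a) and b are then sensible start values for the nls call.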
Getting the right starting values for an nls model in R
Try newMod <- nls(rev ~ a*weeks^b, data=mydf, start=list(a=17919.2127344,b=-1.76270557120)) I've been asked to expand this answer a bit. This problem is so simple I'm kind of surprised that nls fails at it. The real problem however is with the entire R approach and philosophy of nonlinear model fitting. In the real world one would scale x to lie between -1 and 1 and y to lie between 0 and 1 (y=ax^b). That would probably be enough to get nls to converge. Of course, as Glen points out, you can fit the corresponding log-linear model. That relies on the fact that there exists a simple transformation which linearizes the model. That is often not the case. The problem with R routines like nls is that they do not offer support for reparameterizing the model. In this case the reparameterization is simple: just rescale/recentre x and y. However, having fit the model, the user will have different parameters a and b from the original ones. While it is simple to calculate the original ones from these, the other difficulty is that it is not so simple in general to get the estimated standard deviations for these parameter estimates. This is done by the delta method, which involves the Hessian of the log-likelihood and some derivatives. Nonlinear parameter estimation software should supply these calculations automatically, so that reparameterization of the model is easily supported. Another thing which software should support is the notion of phases. You can think of first fitting the model with Glen's version as phase 1. The "real" model is fit in phase 2. I fit your model with AD Model Builder, which supports phases in a natural way. In the first phase only a was estimated. This gets your model into the ballpark. In the second phase a and b are estimated to get the solution.
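The delta-method step mentioned here can be illustrated with a tiny Python sketch. The numbers below are made up for illustration: suppose the model was fit with y rescaled by a constant c, so the original-scale parameter is a = c*a', and its variance follows from Var(g(theta)) ≈ grad(g)' Sigma grad(g):

```python
import numpy as np

# Hypothetical fitted parameters (a', b) on the rescaled model and an
# assumed covariance matrix for them (not values from the real fit)
theta_hat = np.array([0.9, -2.0])
Sigma = np.array([[0.01, 0.001],
                  [0.001, 0.04]])
c = 17906.4                       # the y-rescaling constant (max of rev)

# Back-transform to the original scale: g(theta) = c * a'
grad = np.array([c, 0.0])         # gradient of g: dg/da' = c, dg/db = 0
var_a = grad @ Sigma @ grad       # delta-method variance of a
sd_a = np.sqrt(var_a)             # here simply c * sd(a')
```

For this linear back-transform the result is exact; for nonlinear reparameterizations the same formula gives a first-order approximation, which is what AD Model Builder automates.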
Getting the right starting values for an nls model in R
The Levenberg-Marquardt algorithm can help: modeldf <- data.frame(rev=c(17906.4, 5303.72, 2700.58 ,1696.77 ,947.53 ,362.03), weeks=c(1,2,3,4,5,6)) require(minpack.lm) fit <- nlsLM(rev ~ a*weeks^b, data=modeldf, start = list(a=1,b=1)) require(broom) fit_data <- augment(fit) plot(.fitted~rev, data=fit_data)
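For comparison, SciPy's curve_fit uses the same Levenberg-Marquardt algorithm (MINPACK) when method="lm" is chosen, and it should also converge here from the naive start (1, 1); a Python sketch with the same data:

```python
import numpy as np
from scipy.optimize import curve_fit

weeks = np.array([1, 2, 3, 4, 5, 6], dtype=float)
rev = np.array([17906.4, 5303.72, 2700.58, 1696.77, 947.53, 362.03])

# Same power-law model and the same naive starting values as nlsLM above
popt, _ = curve_fit(lambda w, a, b: a * w ** b, weeks, rev,
                    p0=[1.0, 1.0], method="lm")
a_hat, b_hat = popt
```

The damping in Levenberg-Marquardt is what lets it survive start values that make plain Gauss-Newton (as in nls) diverge.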
Getting the right starting values for an nls model in R
In my experience a good way of finding starting values for parameters of NLR models is to use an evolutionary algorithm. From an initial population (100) of random estimates (parents) in a search space, choose the best 20 (offspring) and use these to help define the search in a succeeding population. Repeat until convergence. No need for gradients or Hessians, just SSE evaluations. If you are not too greedy, this very often works. The problem that people often have is that they are using a local search (Newton-Raphson) to perform the work of a global search. As always, it is a matter of using the correct tool for the job at hand. It makes more sense to use an EA global search to find starting values for the Newton local search, and then let this run down to the minimum. But, as with all things, the devil is in the detail.
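A minimal Python version of this scheme on the question's data (the search bounds and mutation scales below are arbitrary choices of mine, not part of the answer): only SSE is evaluated, the best 20 of 100 are kept, and the next generation is resampled around them:

```python
import numpy as np

rng = np.random.default_rng(42)
weeks = np.array([1, 2, 3, 4, 5, 6], dtype=float)
rev = np.array([17906.4, 5303.72, 2700.58, 1696.77, 947.53, 362.03])

def sse(a, b):
    return np.sum((rev - a * weeks ** b) ** 2)

# Initial random population of (a, b) pairs; the bounds are guesses
pop = np.column_stack([rng.uniform(0, 30000, 100),
                       rng.uniform(-5, 5, 100)])

for _ in range(50):
    scores = np.array([sse(a, b) for a, b in pop])
    elite = pop[np.argsort(scores)[:20]]              # keep the best 20
    noise = rng.normal(0.0, [200.0, 0.05], (80, 2))   # mutation scales: guesses
    children = elite[rng.integers(0, 20, 80)] + noise
    pop = np.vstack([elite, children])                # elitism: best never lost

best_a, best_b = min(pop, key=lambda p: sse(*p))
```

The resulting (best_a, best_b) would then be handed to nls (or a Newton-type optimizer) as starting values for the final local fit.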
Curse of dimensionality: kNN classifier
That is precisely the unexpected behavior of distances in high dimensions. For 1 dimension, you have the interval [0, 1]: 10% of the points are in a segment of length 0.1. But what happens as the dimensionality of the feature space increases? That expression is telling you that if you want to capture 10% of the points in 5 dimensions, you need a cube of edge length 0.63; in 10 dimensions, 0.79; and 0.98 in 100 dimensions. As you see, for increasing dimensions you need to look further away to get the same number of points. Even more, it is telling you that most of the points are at the boundary of the cube as the number of dimensions increases, which is unexpected.
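The numbers quoted above follow directly from $e_D(f) = f^{1/D}$; a one-line Python check:

```python
# Edge length needed to capture a fraction f of a unit hypercube in D dims
f = 0.10
edges = {D: f ** (1 / D) for D in (1, 5, 10, 100)}
print({D: round(e, 2) for D, e in edges.items()})
# {1: 0.1, 5: 0.63, 10: 0.79, 100: 0.98}
```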
Curse of dimensionality: kNN classifier
I think the main thing to notice is that the expression $$e_D(f) = f^{\frac{1}{D}}$$ is really steep at the beginning. This means that the size of the edge you will need to encompass a certain fraction of the volume will increase drastically, especially at the beginning; i.e., the edge you need will become ridiculously large as $D$ increases. To make this even clearer, recall the plot that Murphy shows: if you notice, for values of $D > 1$ the slope is really large, and hence the function grows really steeply at the beginning. This can be better appreciated if you take the derivative of $e_D(f)$: $$ e'_D(f) = \frac{1}{D} f^{\frac{1}{D} - 1} = \frac{1}{D} f^{\frac{1 - D}{D}} $$ Since we are only considering increasing dimensions (which are integer values), we only care about integer values of $D > 1$. This means that $1-D < 0$. Consider the expression for the edge as follows: $$ e'_D(f) = \frac{1}{D} (f^{1 - D})^{\frac{1}{D}} $$ Notice that we are raising $f$ to a power less than 0 (i.e. negative). When we raise a number to a negative power we are at some point taking a reciprocal (i.e. $x^{-1} = \frac{1}{x}$). Taking the reciprocal of a number that is already really small (recall $f < 1$, since we are considering a fraction of the volume; we are doing kNN, i.e. $k$ nearest data points out of the total $N$) means that the number "grows a lot". Therefore we get the desired behavior: as $D$ increases the power becomes even more negative, and hence the required edge grows quickly, depending on how much $D$ increases the exponent. (Notice that $f^{1 - D}$ grows exponentially, while the factor $\frac{1}{D}$ quickly becomes insignificant.)
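As a quick sanity check on the derivative above (a Python sketch at one arbitrary point of my choosing), compare the closed form against a central finite difference:

```python
f, D = 0.1, 10

# Closed form: e'_D(f) = (1/D) * f^(1/D - 1)
analytic = (1 / D) * f ** (1 / D - 1)

# Central finite difference of e_D(f) = f^(1/D)
h = 1e-6
numeric = ((f + h) ** (1 / D) - (f - h) ** (1 / D)) / (2 * h)
```

The two agree to high precision, and the value (about 0.8 at f = 0.1, D = 10) confirms how steep the curve already is at small f.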
Curse of dimensionality: kNN classifier
Yeah, so if you have a unit cube, or in your case a unit line, and the data is uniformly distributed, then you have to go a length of 0.1 to capture 10% of the data. Now as you increase the dimensions, D increases, which decreases the power 1/D; and since f is less than 1, f^(1/D) will increase, such that if D goes to infinity you have to capture the whole cube, e = 1.
Curse of dimensionality: kNN classifier
I think for kNN distance plays a bigger role. As Bernhard Schölkopf put it, "a high-dimensional space is a lonely place". What happens to a (hyper)cube is analogous to what happens to the distance between points. As you increase the number of dimensions, the ratio between the closest distance and the average distance grows: this means that the nearest point is almost as far away as the average point, and so it has only slightly more predictive power than the average point. Joel Grus does a good job of describing this issue in Data Science from Scratch. In that book he calculates the average and minimum distances between two points in a D-dimensional space as the number of dimensions increases. He calculated 10,000 distances between points, with the number of dimensions ranging from 0 to 100. He then proceeds to plot the average and minimum distance between two points, as well as the ratio of the closest distance to the average distance (Distance_Closest / Distance_Average). In those plots, Joel showed that the ratio of the closest distance to the average distance increased from 0 at 0 dimensions up to ~0.8 at 100 dimensions. And this shows the fundamental challenge of dimensionality for the k-nearest neighbors algorithm: as the number of dimensions increases and the ratio of closest distance to average distance approaches 1, the predictive power of the algorithm decreases. If the nearest point is almost as far away as the average point, then it has only slightly more predictive power than the average point.
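The experiment described can be re-run in a few lines of Python (a rough sketch of my own, not Grus's exact code): sample uniform points and compare the nearest distance from a reference point to the average distance as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def closest_over_average(dim, n=1000):
    # n uniform points in the unit hypercube; distances to one reference point
    pts = rng.uniform(size=(n, dim))
    d = np.linalg.norm(pts[1:] - pts[0], axis=1)
    return d.min() / d.mean()

for dim in (2, 10, 100):
    print(dim, round(closest_over_average(dim), 2))
```

In low dimensions the ratio is near zero (the nearest neighbor is genuinely close), while at 100 dimensions it approaches 1, matching the ~0.8 figure quoted above.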
Why do mixed effects models resolve dependency?
Including random terms in the model is a way to induce some covariance structure between the grades. The random factor for the school induces a non-zero covariance between different students from the same school, whereas it is $0$ when the schools are different. Let's write your model as $$ Y_{s,i} = \alpha + \text{hours}_{s,i} \beta + \text{school}_s + e_{s, i} $$ where $s$ indexes the school and $i$ indexes the students (in each school). The terms $\text{school}_s$ are independent random variables drawn from a $\mathcal N(0, \tau)$. The $e_{s, i}$ are independent random variables drawn from a $\mathcal N(0, \sigma^2)$. The vector of grades $\left[Y_{s,i}\right]_{s,i}$ has expected value $$\left[ \alpha + \text{hours}_{s,i} \beta \right]_{s,i}$$ which is determined by the number of worked hours. The covariance between $Y_{s,i}$ and $Y_{s',i'}$ is $0$ when $s \ne s'$, which means that the departures of the grades from their expected values are independent when the students are not in the same school. The covariance between $Y_{s,i}$ and $Y_{s,i'}$ is $\tau$ when $i \ne i'$, and the variance of $Y_{s,i}$ is $\tau + \sigma^2$: grades of students from the same school will have correlated departures from their expected values. Example and simulated data Here is a short R simulation for fifty students from five schools (here I take $\sigma^2 = \tau = 1$); the names of the variables are self-documenting: set.seed(1) school <- rep(1:5, each=10) school_effect <- rnorm(5) school_effect_by_ind <- rep(school_effect, each=10) individual_effect <- rnorm(50) We plot the departures from the expected grade for each student, that is the terms $\text{school}_s + e_{s, i}$, together with (dotted lines) the mean departure for each school: plot(individual_effect + school_effect_by_ind, col=school, pch=19, xlab="student", ylab="grades departure from expected value") segments(seq(1,length=5,by=10), school_effect, seq(10,length=5,by=10), col=1:5, lty=3) Now let's comment on this plot.
The level of each dotted line (corresponding to $\text{school}_s$) is drawn at random from a normal law. The student-specific random terms are also drawn at random from a normal law; they correspond to the distance of the points from the dotted line. The resulting value is, for each student, the departure from $\alpha + \text{hours} \beta$, the grade determined by the time spent working. As a result, pupils in the same school are more similar to each other than pupils from different schools, as you stated in your question. The variance matrix for this example In the above simulations we drew separately the school effects $\text{school}_s$ and the individual effects $e_{s,i}$, so the covariance considerations with which I began don't appear clearly here. In fact, we would have obtained similar results by drawing a random normal vector of dimension 50 with block-diagonal covariance matrix $$\left[\begin{matrix} A & 0 & 0 & 0 & 0 \\ 0 & A & 0 & 0 & 0 \\ 0 & 0 & A & 0 & 0 \\ 0 & 0 & 0 & A & 0 \\ 0 & 0 & 0 & 0 & A \end{matrix}\right]$$ where each of the five $10\times 10$ blocks $A$ corresponds to the covariance between the students of a same school: $$A = \left[\begin{matrix} 2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 1 & 2 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 2 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 1 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 1\\ 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 2 \end{matrix}\right].$$
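The induced covariance structure can be checked directly by simulation; here is a Python sketch (with $\tau = \sigma^2 = 1$ as above, and my own variable names) comparing empirical covariances to $\tau$, $0$, and $\tau + \sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(1)
tau, sigma2 = 1.0, 1.0
reps = 20000   # independent replications of a tiny two-school design

# School effects: two students share school A, a third is in school B
school_A = rng.normal(0.0, np.sqrt(tau), reps)
school_B = rng.normal(0.0, np.sqrt(tau), reps)

y1 = school_A + rng.normal(0.0, np.sqrt(sigma2), reps)  # student 1, school A
y2 = school_A + rng.normal(0.0, np.sqrt(sigma2), reps)  # student 2, school A
y3 = school_B + rng.normal(0.0, np.sqrt(sigma2), reps)  # student 3, school B

same = np.cov(y1, y2)[0, 1]   # same school: should be near tau = 1
diff = np.cov(y1, y3)[0, 1]   # different schools: should be near 0
var1 = y1.var(ddof=1)         # should be near tau + sigma2 = 2
```

These are exactly the entries of the block-diagonal matrix above: 1 within a block $A$, 2 on its diagonal, and 0 between blocks.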
Comprehending output from mediation analysis in R
What does it mean that ACME (treated) is 0.0808? 0.0808 is the estimated average increase in the dependent variable among the treatment group that arrives as a result of the mediators rather than 'directly' from the treatment. The dependent variable in this example is the probability of sending a message to a congress member, the mediator is the emotional response generated by the treatment, and the treatment is a framing manipulation. So this number means that of the estimated 0.0949 (the Total Effect) increase in this probability due to framing, an estimated 0.0805 (ACME (average)) is a result of the emotional changes generated by the framing, and the remaining 0.0145 (ADE (average)) is from framing itself. In short: Total Effect = ACME (average) + ADE (average) However, there is no reason that the average mediation effect (ACME) is the same for people in the treatment group and people in the control group, so two mediation effects are estimated: ACME (control) and ACME (treated), which is your 0.0808. The average of these average treatment effects is ACME (average) (which is a bit confusing, I admit). A similar argument holds for the direct effects. The assumption that there is only one mediation effect and one direct effect in this population is called 'no interference' by the authors of the package. It's helpful when interpreting the output to bear in mind the definitions in the accompanying papers and to push your ordinary understanding of regression tables into the background a little bit. One last thing: the proportion of the causal effect of framing that is mediated by emotional response rather than direct would normally be calculated as something like ACME (average) / Total Effect, but here it is not (quite). Some discussion of how to compute this quantity for models where the dependent variable is discrete, as it is here, appears in Appendix G of Imai et al. 2010.
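The decomposition above is easy to verify from the printed estimates; a tiny Python check (values copied from the summary discussed in the answer):

```python
# Estimates from the mediate() summary discussed above
acme_avg, ade_avg, total = 0.0805, 0.0145, 0.0949

# The additive decomposition Total = ACME(average) + ADE(average)
# holds up to the rounding in the printed table
gap = (acme_avg + ade_avg) - total

# The naive "proportion mediated" (see the caveat above about
# discrete outcomes: mediate() does not compute it exactly this way)
prop_mediated = acme_avg / total
print(round(prop_mediated, 3))
```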
Comprehending output from mediation analysis in R
19,834
Correlation between two Decks of cards?
You can measure the relative level of correlation (or more precisely, the increasing level of randomness) using the Shannon entropy of the difference in face value between all pairs of adjacent cards. Here's how to compute it, for a randomly shuffled deck of 52 cards.

You start by looping once through the entire deck, and building a sort of histogram. For each card position $i=1,2,...,52$, calculate the difference in face value $\Delta F_{i} = F_{i+1} - F_{i}$. To make this more concrete, let's say that the card in the $(i+1)$th position is the king of spades, and the card in the $i$th position is the four of clubs. Then we have $F_{i+1} = 51$ and $F_{i} = 3$ and $\Delta F_{i} = 51-3 = 48$. When you get to $i=52$, it's a special case; you loop around back to the beginning of the deck again and take $\Delta F_{52} = F_{1} - F_{52}$. If you end up with negative numbers for any of the $\Delta F$'s, add 52 to bring the face value difference back into the range 1-52.

You will end up with a set of face value differences for 52 pairs of adjacent cards, each one falling into an allowed range from 1-52; count the relative frequency of these using a histogram (i.e., a one-dimensional array) with 52 elements. The histogram records a sort of "observed probability distribution" for the deck; you can normalize this distribution by dividing the counts in each bin by 52. You will thus end up with a series of variables $p_{1}, p_{2}, ... p_{52}$ where each one may take on a discrete range of possible values: {0, 1/52, 2/52, 3/52, etc.} depending upon how many pairwise face value differences ended up randomly in a particular bin of the histogram.

Once you have the histogram, you can calculate the Shannon entropy for a particular shuffle iteration (with the usual convention that empty bins contribute zero) as $$E = \sum_{k=1}^{52} -p_{k} \ln(p_{k})$$

I have written a small simulation in R to demonstrate the result. The first plot shows how the entropy evolves over the course of 20 shuffle iterations.
A value of 0 is associated with a perfectly ordered deck; larger values signify a deck which is progressively more disordered or decorrelated. The second plot shows a series of 20 facets, each containing a plot similar to the one that was originally included with the question, showing shuffled card order vs. initial card order. The 20 facets in the 2nd plot are the same as the 20 iterations in the first plot, and they are also color-coded the same, so that you can get a visual feel for what level of Shannon entropy corresponds to how much randomness in the sort order. The simulation code that generated the plots is appended at the end.

library(ggplot2)

# Number of cards
ncard <- 52
# Number of shuffles to plot
nshuffle <- 20
# Parameter between 0 and 1 to control randomness of the shuffle
# Setting this closer to 1 makes the initial correlations fade away
# more slowly, setting it closer to 0 makes them fade away faster
mixprob <- 0.985

# Make data frame to keep track of progress
shuffleorder <- NULL
startorder <- NULL
iteration <- NULL
shuffletracker <- data.frame(shuffleorder, startorder, iteration)

# Initialize cards in sequential order
startorder <- seq(1, ncard)
shuffleorder <- startorder
entropy <- rep(0, nshuffle)

# Loop over each new shuffle
for (ii in 1:nshuffle) {
    # Append previous results to data frame
    iteration <- rep(ii, ncard)
    shuffletracker <- rbind(shuffletracker,
                            data.frame(shuffleorder, startorder, iteration))

    # Calculate pairwise value difference histogram
    freq <- rep(0, ncard)
    for (ij in 1:ncard) {
        if (ij == 1) {
            idx <- shuffleorder[1] - shuffleorder[ncard]
        } else {
            idx <- shuffleorder[ij] - shuffleorder[ij-1]
        }
        # Impose periodic boundary condition
        if (idx < 1) {
            idx <- idx + ncard
        }
        freq[idx] <- freq[idx] + 1
    }

    # Sum over frequency histogram to compute entropy
    for (ij in 1:ncard) {
        if (freq[ij] == 0) {
            x <- 0
        } else {
            p <- freq[ij] / ncard
            x <- -p * log(p, base=exp(1))
        }
        entropy[ii] <- entropy[ii] + x
    }

    # Shuffle the cards to prepare for the next iteration
    lefthand <- shuffleorder[floor((ncard/2)+1):ncard]
    righthand <- shuffleorder[1:floor(ncard/2)]
    ij <- 0
    ik <- 0
    while ((ij+ik) < ncard) {
        if ((runif(1) < mixprob) & (ij < length(lefthand))) {
            ij <- ij + 1
            shuffleorder[ij+ik] <- lefthand[ij]
        }
        if ((runif(1) < mixprob) & (ik < length(righthand))) {
            ik <- ik + 1
            shuffleorder[ij+ik] <- righthand[ik]
        }
    }
}

# Plot entropy vs. shuffle iteration
iteration <- seq(1, nshuffle)
output <- data.frame(iteration, entropy)
print(qplot(iteration, entropy, data=output, xlab="Shuffle Iteration",
            ylab="Information Entropy", geom=c("point", "line"),
            color=iteration) +
      scale_color_gradient(low="#ffb000", high="red"))

# Plot gradually de-correlating sort order
dev.new()
print(qplot(startorder, shuffleorder, data=shuffletracker, color=iteration,
            xlab="Start Order", ylab="Shuffle Order") +
      facet_wrap(~ iteration, ncol=4) +
      scale_color_gradient(low="#ffb000", high="red"))
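If it helps, the histogram-and-entropy step (though not the riffle-shuffle simulation) can also be sketched in Python; `deck_entropy` is an invented name and this is only a rough translation of the procedure described above:

```python
import math
import random

def deck_entropy(deck):
    """Shannon entropy (in nats) of the adjacent face-value differences,
    with the periodic wrap-around from the last card back to the first."""
    n = len(deck)
    freq = [0] * (n + 1)                          # histogram bins 1..n
    for i in range(n):
        diff = (deck[(i + 1) % n] - deck[i]) % n  # periodic boundary condition
        freq[diff if diff else n] += 1            # map difference 0 to bin n
    return sum(-c / n * math.log(c / n) for c in freq if c)

ordered = list(range(1, 53))          # a perfectly ordered deck
assert deck_entropy(ordered) == 0     # every adjacent difference is 1

rng = random.Random(0)
shuffled = ordered[:]
rng.shuffle(shuffled)
print(deck_entropy(shuffled))         # somewhere between 0 and ln(52) ~= 3.95
```

The ordered deck lands in a single histogram bin, giving entropy 0, which matches the first point of the entropy plot.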
19,835
Correlation between two Decks of cards?
I know that this post is almost 4 years old, but I am a hobbyist cryptanalyst, and have been studying playing card ciphers. As a result, I have come back to this post over and over to explain deck shuffling as a source of entropy for randomly keying the deck. Finally, I decided to verify the answer by stachyra by shuffling the deck by hand, and estimating the deck entropy after each shuffle.

TL;DR, to maximize deck entropy:

For only riffle shuffling, you need 11-12 shuffles.
For cutting the deck first then riffle shuffling, you only need 6-7 cut-and-shuffles.

First off, everything that stachyra mentioned for calculating Shannon entropy is correct. It can be boiled down this way:

1. Numerically assign a unique value to each of the 52 cards in the deck.
2. Shuffle the deck.
3. For n=0 to n=51, record each value of (n - (n+1) mod 52) mod 52
4. Count the number of occurrences of 0, 1, 2, ..., 49, 50, 51
5. Normalize those records by dividing each by 52
6. For i=1 to i=52, calculate -p_i * log(p_i)/log(2)
7. Sum the values

Where stachyra makes one subtle assumption is that implementing a human shuffle in a computer program is going to come with some baggage. With paper-based playing cards, as they get used, oil from your hands transfers to the cards. Over an extended time, due to oil buildup, cards will begin sticking together, and this will end up in your shuffle. The more heavily used the deck, the more likely two or more adjacent cards will stick together, and the more frequently it will happen. Further, suppose the two of clubs and jack of hearts stick together. They may end up stuck together for the duration of your shuffling, never separating. This could be imitated in a computer program, but this isn't the case with stachyra's R routine. Also, stachyra has a manipulation variable "mixprob". Without fully understanding this variable, it is a little bit of a black box. You could incorrectly set it, affecting the results. So, I wanted to make sure his intuition was correct.
So I verified it by hand. I shuffled the deck 20 times by hand, in two different instances (40 total shuffles). In the first instance, I just riffle shuffled, keeping the right and left cuts close to even. In the second instance, I cut the deck deliberately away from the middle of the deck (1/3, 2/5, 1/4, etc.) before doing an even cut for the riffle shuffle. My gut feeling in the second instance was that by cutting the deck before shuffling, and staying away from the middle, I could introduce diffusion into the deck more quickly than stock riffle shuffling.

Here are the results. First, straight riffle shuffling:

And here is cutting the deck combined with riffle shuffling:

It seems that entropy is maximized in about half the time claimed by stachyra. Further, my intuition was correct that cutting the deck deliberately away from the middle first, before riffle shuffling, did introduce more diffusion into the deck. However, after about 5 shuffles, it didn't really matter much anymore. You can see that after about 6-7 shuffles, entropy is maximized, versus the 10-12 claimed by stachyra. Could it be possible that 7 shuffles is sufficient, or am I being blinded? You can see my data at Google Sheets. It is possible that I recorded a playing card or two incorrectly, so I can't guarantee 100% accuracy with the data.

It's important that your findings are also independently verified. Brad Mann, from the Department of Mathematics at Harvard University, studied how many times it would take to shuffle a deck of cards before the position of any card in the deck is completely unpredictable (Shannon entropy is maximized). His results can be found in this 33-page PDF. What's interesting with his findings is that he is actually independently verifying a 1990 New York Times article by Persi Diaconis, who claims that 7 shuffles are sufficient for thoroughly mixing a deck of playing cards via the riffle shuffle.
Brad Mann walks through a few different mathematical models in shuffling, including Markov chains, and comes to the following conclusion:

This is approximately 11.7 for n=52, which means that, according to this viewpoint, we expect on average 11 or 12 shuffles to be necessary for randomizing a real deck of cards. Note that this is substantially larger than 7.

Brad Mann just independently verified stachyra's result, and not mine. So, I looked closer at my data, and I discovered why 7 shuffles is not sufficient. First off, the theoretical maximum Shannon entropy in bits for any card in the deck is log(52)/log(2) ~= 5.7 bits. But my data never really breaks much above 5 bits. Curious, I created an array of 52 elements in Python, and shuffled that array:

>>> import random
>>> r = random.SystemRandom()
>>> d = [x for x in xrange(1, 53)]
>>> r.shuffle(d)
>>> print d
[20, 51, 42, 44, 16, 5, 18, 27, 8, 24, 23, 13, 6, 22, 19, 45, 40, 30, 10, 15,
 25, 37, 52, 34, 12, 46, 48, 3, 26, 4, 1, 38, 32, 14, 43, 7, 31, 50, 47, 41,
 29, 36, 39, 49, 28, 21, 2, 33, 35, 9, 17, 11]

Calculating its entropy-per-card yields about 4.8 bits. Doing this a dozen times or so shows similar results varying between 5.2 bits and 4.6 bits, with 4.8 to 4.9 as the average. So looking at the raw entropy value of my data isn't enough, otherwise I could call it good at 5 shuffles.

When I looked closer at my data, I noticed the number of "zero buckets". These are buckets where there is no data for deltas between card faces for that number. For example, when subtracting the value of two adjacent cards, there is no "15" result after all 52 deltas have been calculated. I see that it eventually settles around 17-18 "zero buckets" around 11-12 shuffles. Sure enough, my shuffled deck via Python averages 17-18 "zero buckets", with a high of 21 and a low of 14. Why 17-18 is the settled result, I can't explain ... yet. But it appears that I want both ~4.8 bits of entropy AND 17 "zero buckets".
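For what it's worth, the ~17-18 figure is roughly what a naive balls-in-bins estimate predicts. The sketch below treats the 52 adjacent-pair deltas as if they fell independently and uniformly into the 51 reachable buckets (a delta of 0 mod 52 is impossible for distinct card values), which is a simplifying assumption, not an exact model:

```python
import random

BINS, DRAWS = 51, 52   # 51 reachable delta values, 52 adjacent pairs

def empty_bins_once(rng):
    """One simulated deal: 52 uniform draws into 51 bins; count empty bins."""
    hit = [0] * BINS
    for _ in range(DRAWS):
        hit[rng.randrange(BINS)] += 1
    return hit.count(0)

rng = random.Random(1)
trials = 2000
avg_empty = sum(empty_bins_once(rng) for _ in range(trials)) / trials
analytic = BINS * (1 - 1 / BINS) ** DRAWS
print(analytic, avg_empty)   # both roughly 18
```

Adding the one structurally unreachable bucket gives about 19, in the same ballpark as the observed 17-18. The gap is not surprising: the deltas in a real permutation are not independent (for one thing, they sum to 0 mod 52), so this is only a heuristic.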
With my stock riffle shuffling, that's 11-12 shuffles. With my cut-and-shuffle, that's 6-7. So, when it comes to games, I would recommend cut-and-shuffles. Not only does this guarantee that the top and bottom cards are getting mixed into the deck on each shuffle, it's also just plain quicker than 11-12 shuffles. I don't know about you, but when I'm playing card games with my family and friends, they're not patient enough for me to perform 12 riffle shuffles.
19,836
Confusion related to elastic net
Suppose two predictors have a strong effect on the response but are highly correlated in the sample from which you build your model. If you drop one from the model it won't predict well for samples from similar populations in which the predictors aren't highly correlated. If you want to improve the precision of your coefficient estimates in the presence of multicollinearity you have to introduce a little bias, off-setting it by a larger reduction in variance. One way is by removing predictors entirely—with LASSO, or, in the old days, stepwise methods—, which is setting their coefficient estimates to zero. Another is by biasing all of the estimates a bit—with ridge regression, or, in the old days, regressing on the first few principal components. A drawback of the former is that it's very unsafe if the model will be used to predict responses for predictor patterns away from those that occurred in the original sample, as predictors tend to get excluded just because they're not much use together with other, nearly collinear, predictors. (Not that extrapolation is ever completely safe.) The elastic net is a mixture of the two, as @user12436 explains, & tends to keep groups of correlated predictors in the model.
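To make the contrast concrete, here is a small self-contained sketch (naive coordinate descent; the function names and data are invented for illustration, and real work should use a tuned library implementation) showing the lasso zeroing out one of two perfectly collinear predictors, while the elastic net keeps both with roughly equal coefficients:

```python
import math
import random

def soft(z, t):
    """Soft-thresholding operator."""
    return math.copysign(max(abs(z) - t, 0.0), z)

def elastic_net_cd(cols, y, lam, alpha, iters=300):
    """Naive coordinate descent for
    (1/2n)||y - Xb||^2 + lam*(alpha*||b||_1 + (1-alpha)/2*||b||_2^2).
    alpha=1 is the lasso; alpha=0 is ridge."""
    n, p = len(y), len(cols)
    b = [0.0] * p
    for _ in range(iters):
        for j in range(p):
            # partial residual with feature j left out
            r = [y[i] - sum(cols[k][i] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(cols[j][i] * r[i] for i in range(n)) / n
            denom = sum(v * v for v in cols[j]) / n + lam * (1 - alpha)
            b[j] = soft(rho, lam * alpha) / denom
    return b

rng = random.Random(0)
x = [rng.gauss(0, 1) for _ in range(200)]
y = [3 * v + 0.1 * rng.gauss(0, 1) for v in x]
cols = [x, x[:]]                                 # two identical predictors

b_lasso = elastic_net_cd(cols, y, lam=0.1, alpha=1.0)
b_enet = elastic_net_cd(cols, y, lam=0.1, alpha=0.5)
print(b_lasso)   # one coefficient carries everything, the other is ~0
print(b_enet)    # both coefficients are nonzero and (here) equal
```

With identical columns, the lasso's choice of which coefficient to keep is an accident of update order, which is exactly the arbitrariness the elastic net's grouping effect avoids.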
19,837
Confusion related to elastic net
But isn't this what we want? I mean, it saves us from the trouble of multicollinearity, doesn't it?

Yes! And no. Elastic net is a combination of two regularization techniques, the L2 regularization (used in ridge regression) and L1 regularization (used in LASSO). Lasso produces naturally sparse models, i.e. most of the variable coefficients will be shrunk to 0 and effectively excluded from the model. So the least significant variables are shrunk away before the others, unlike with ridge, where all variables are shrunk but none of them are really shrunk to 0. Elastic net uses a linear combination of both these approaches.

The specific case mentioned by Hastie when discussing the method was the case of large p, small n: high-dimensional data with relatively few observations. In this case LASSO would (reportedly) only ever select at most n variables, while eliminating all the rest; see the paper by Hastie. It will always depend on the actual dataset, but you can well imagine that you don't always want the upper limit on the number of variables in your model to be equal to or lower than the number of your observations.
19,838
Confusion related to elastic net
Both Lasso and Elastic Net are efficient methods to perform variable or feature selection in high-dimensional data settings (many more variables than patients or samples; e.g., 20,000 genes and 500 tumor samples). It has been shown (by Hastie and others) that Elastic Net can outperform Lasso when the data are highly correlated. Lasso may just select one of the correlated variables and does not care which one is selected. This can be a problem when one wants to validate the selected variables in an independent dataset, since the variable selected by Lasso may not be the best predictor among all the correlated variables. Elastic Net solves this problem by averaging highly correlated variables.
19,839
Difference between replication and repeated measurements
I don't think his second example is replication OR repeated measurements. Any study involves multiple cases (subjects, people, silicon chips, whatever). Repeated measures involves measuring the same cases multiple times. So, if you measured the chips, then did something to them, then measured them again, etc., it would be repeated measures. Replication involves running the same study on different subjects but under identical conditions. So, if you did the study on n chips, then did it again on another n chips, that would be replication.
19,840
Difference between replication and repeated measurements
Unfortunately, terminology varies quite a bit and in confusing ways, especially between disciplines. There will be many people who will use different terms for the same thing, and/or the same terms for different things (this is a pet peeve of mine). I gather the book in question is this. This is design of experiments from the perspective of engineering (as opposed to the biomedical or the social science perspectives). The Wikipedia entry seems to be coming from the biomedical / social science perspective. In engineering, an experimental run is typically thought of as having set up your equipment and run it. This produces, in a sense, one data point. Running your experiment again is a replication; it gets you a second data point. In a biomedical context, you run an experiment and get $N$ data. Someone else replicates your experiment on a new sample with another $N'$ data. These constitute different ways of thinking about what you call an "experimental run". Tragically, they are very confusing. Montgomery is referring to multiple data from the same run as "repeated measurements". Again, this is common in engineering. A way to think about this from outside the engineering context is to think about a hierarchical analysis, where you are interested in estimating and drawing inferences about the level 2 units. That is, treatments are randomly assigned to doctors and every patient (on whom you take a measurement) is a repeated measurement with respect to the doctor. Within the same doctor, those measurements "reflect differences among the wafers [patients] and other sources of variability within that particular furnace run [doctor's care]".
19,841
Difference between replication and repeated measurements
What's going on here is the confusion in terminology. Here in the book, measurements refer to a single experimental trial observation, and the experiment calls for several observations to be made. The term 'repeated measures' refers to measuring subjects in multiple conditions. That is, in a within-subject design (aka crossed design, or repeated measures), you have, say, two conditions: a treatment and a control, and each subject goes through both conditions, usually in a counter-balanced way. This means that you have subjects act as their own control, and this design helps you deal with between-subject variability. One disadvantage of this research design is the problem of carryover effects, where the first condition that the subject goes through adversely influences the other condition. In other words, don't confuse 'repeated measures' and multiple observations under the same experimental condition. See also: Are Measurements made on the same patient independent?
19,842
Difference between replication and repeated measurements
http://blog.gembaacademy.com/2007/05/08/repetitions-versus-replications/ Repetitions versus Replications, May 8, 2007, by Ron. Many Six Sigma practitioners struggle to differentiate between a repetition and a replication. Normally this confusion arises when dealing with Design of Experiments (DOE). Let's use an example to explain the difference. Sallie wants to run a DOE in her paint booth. After some brainstorming and data analysis she decides to experiment with the "fluid flow" and "attack angle" of the paint gun. Since she has 2 factors and wants to test a "high" and "low" level for each factor, she decides on a 2 factor, 2 level full factorial DOE. Now then, Sallie decides to paint 6 parts during each run. Since there are 4 runs she needs at least 24 parts (6 x 4). These 6 parts per run are what we call repetitions. Finally, since this painting process is ultra critical to her company, Sallie decides to do the entire experiment twice. This helps her add some statistical power and serves as a sort of confirmation. If she wanted to she could do the first 4 runs with the day shift staff and the second 4 runs with the night shift staff. Completing the DOE a second time is what we call replication. You may also hear the term blocking used instead of replicating. So there you have it! That is the difference between repetition and replication.
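Sallie's design can be enumerated in a few lines; the factor names and counts below simply restate the blog's example (2 factors at 2 levels, 6 repetitions per run, the whole experiment replicated twice):

```python
from itertools import product

factors = {"fluid_flow": ["low", "high"], "attack_angle": ["low", "high"]}
runs = list(product(*factors.values()))      # 4 treatment combinations (2x2 full factorial)
repetitions = 6                              # parts painted within each run
replications = 2                             # the entire experiment is done twice

design = [
    (rep, run, part)
    for rep in range(1, replications + 1)    # replication: redo all runs (e.g., day/night shift)
    for run in runs                          # one setup of the paint booth per combination
    for part in range(1, repetitions + 1)    # repetition: parts within the same run
]
print(len(design))  # 2 replications x 4 runs x 6 repetitions = 48 parts
```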
19,843
Difference between replication and repeated measurements
Let me add an interesting factor: lot. In the above example, instead of making six tests with the same lot of paint (which, per the above definitions, means six repetitions per combination of conditions) she tests with six different paint lots per combination of conditions, which also means 24 total experiments; does this mean she is doing six replications per combination of conditions? Another example: a liquid pigment is measured for color intensity I. The lab method of analysis has two factors: suspension clarification time T and sample size W. Each factor has two levels, i.e., short and long T, and small and large W. That makes a 2x2 design. Testing the same lot sample under the four different conditions means there are 4 experiments in total, no repetitions. Testing the same lot twice each time means there would be two repetitions per condition, 8 experiments in total. But what if we test samples from six different lots per condition? Does this mean there are six replications per combination of conditions? The number of experiments would be 24. Now, we may want to make the method more precise and ask the lab technician to repeat the test twice (from the same sample) every time he makes a measurement, and report only the average per lot sample. I assume we could use the averages as a single result per lot sample, and for DoE, say a 2-way layout ANOVA with replications, each lot sample result is a replication. Please comment.
19,844
Distance Metrics For Binary Vectors
Seems like you're looking for either the Jaccard distance or the Dice dissimilarity. Jaccard distance: $1 - \frac{|A \cap B|}{|A \cup B|}$ Dice dissimilarity: $1 - \frac{2|A \cap B|}{|A| + |B|}$ These both are equal to zero if $A$ and $B$ are exactly the same, and one if they are completely different. However, Jaccard will "punish" differences more severely. Also note, Dice is not really a metric (doesn't satisfy triangle inequality) so it may not satisfy your needs. The appropriate distance likely depends on the origin of the data and what you are trying to achieve, but those two are likely a good start. Jaccard index Sorensen-Dice index
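A minimal sketch of both measures for 0/1 vectors (the example vectors are arbitrary):

```python
def jaccard_distance(a, b):
    """1 - |A∩B| / |A∪B| for binary vectors a, b."""
    inter = sum(x & y for x, y in zip(a, b))
    union = sum(x | y for x, y in zip(a, b))
    return 1 - inter / union if union else 0.0

def dice_dissimilarity(a, b):
    """1 - 2|A∩B| / (|A| + |B|) for binary vectors a, b."""
    inter = sum(x & y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 1 - 2 * inter / total if total else 0.0

a = [1, 1, 0, 1, 0]
b = [1, 0, 0, 1, 1]
print(jaccard_distance(a, b))    # 1 - 2/4 = 0.5
print(dice_dissimilarity(a, b))  # 1 - 4/6 ≈ 0.333
```

On this pair, Jaccard (0.5) exceeds Dice (≈0.33), illustrating the point that Jaccard "punishes" differences more severely.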
19,845
Distance Metrics For Binary Vectors
In addition to Jaccard and Dice, I've had success with: Cosine Similarity: $\text{cos}(\theta) = \frac{{\bf u} \cdot {\bf v}}{||{\bf u}|| \times ||{\bf v}||}$ Not a metric, only a similarity measure. See Angular similarity if a metric is needed. Rajski's distance: $1 - \frac{H(u;v)}{H(u,v)}$, where $H(u;v)$ is the mutual information (more commonly written $I(u;v)$) and $H(u,v)$ the joint entropy. See this article for a good survey of binary similarity measures and distances.
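For binary vectors, the dot product is the number of common positive bits and each norm is the square root of a vector's positive-bit count, so cosine similarity reduces to $c/\sqrt{ab}$. A small sketch (example vectors are arbitrary):

```python
import math

def cosine_similarity(u, v):
    """cos(theta) = u·v / (||u|| ||v||); for 0/1 vectors this equals c / sqrt(a*b)."""
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(y * y for y in v))
    return dot / (nu * nv)

u = [1, 1, 0, 1, 0]
v = [1, 0, 0, 1, 1]
print(cosine_similarity(u, v))  # 2 / sqrt(3*3) ≈ 0.667
```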
19,846
Distance Metrics For Binary Vectors
Quick summary of metrics I used for a similar problem. Jaccard distance is also useful, as previously cited. Distance metrics are defined over the interval [0,+∞] with 0 = identity, while similarity metrics are defined over [0,1] with 1 = identity. With a = number of positive bits in vector A, b = number of positive bits in vector B, and c = number of common positive bits between vectors A and B, the usual definitions are: Dice similarity $S = 2c/(a+b)$, Tanimoto similarity $S = c/(a+b-c)$, cosine similarity $S = c/\sqrt{ab}$, Manhattan distance $D = a+b-2c$, and Euclidean distance $D = \sqrt{a+b-2c}$. Dice and Tanimoto metrics are monotonic (which means you will get the exact same ordering/ranking of the vectors [B, C, D, ...] you compare to a reference vector A by using these two metrics, although the similarity values may differ). Manhattan and Euclidean metrics are monotonic. Cosine and Tanimoto metrics are always highly correlated but not strictly monotonic. Tanimoto is the reference metric used in the field of drug discovery for problems that can be framed like yours. Its only issue is that it is biased towards low values when your vectors contain very few positive bits.
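The claimed Dice/Tanimoto monotonicity follows from $S_{\text{Dice}} = 2T/(1+T)$, an increasing function of the Tanimoto similarity $T$, so rankings against a reference vector must agree. A quick check with made-up vectors (all values here are illustrative):

```python
def tanimoto(a, b):
    """Tanimoto similarity for binary vectors: c / (a + b - c)."""
    c = sum(x & y for x, y in zip(a, b))
    return c / (sum(a) + sum(b) - c)

def dice(a, b):
    """Dice similarity for binary vectors: 2c / (a + b)."""
    c = sum(x & y for x, y in zip(a, b))
    return 2 * c / (sum(a) + sum(b))

ref = [1, 1, 1, 0, 0, 1]
candidates = [
    [1, 1, 0, 0, 0, 1],
    [1, 0, 1, 1, 0, 0],
    [0, 1, 1, 0, 1, 1],
    [1, 1, 1, 1, 1, 0],
]

# Rank candidates by similarity to the reference under each metric.
rank_t = sorted(range(len(candidates)), key=lambda i: tanimoto(ref, candidates[i]))
rank_d = sorted(range(len(candidates)), key=lambda i: dice(ref, candidates[i]))
print(rank_t == rank_d)  # True: identical ordering, different similarity values
```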
19,847
Do unique visitors to a website follow a power law?
No, unique visitors to a website do not follow a power law. In the last few years, there has been increasing rigor in testing power law claims (e.g., Clauset, Shalizi and Newman 2009). Apparently, past claims often weren't well tested and it was common to plot the data on a log-log scale and rely on the "eyeball test" to demonstrate a straight line. Now that formal tests are more common, many distributions turn out not to follow power laws. The best two references I know that examine user visits on the web are Ali and Scarr (2007) and Clauset, Shalizi and Newman (2009). Ali and Scarr (2007) looked at a random sample of user clicks on a Yahoo website and concluded: Prevailing wisdom is that the distribution of web clicks and pageviews follows a scale-free power law distribution. However, we have found that a statistically significantly better description of the data is the scale-sensitive Zipf-Mandelbrot distribution and that mixtures thereof further enhances the fit. Previous analyses have three disadvantages: they have used a small set of candidate distributions, analyzed out-of-date user web behavior (circa 1998) and used questionable statistical methodologies. Although we cannot preclude that a better fitting distribution may not one day be found, we can say for sure that the scale-sensitive Zipf-Mandelbrot distribution provides a statistically significantly stronger fit to the data than the scale-free power-law or Zipf on a variety of verticals from the Yahoo domain. Here is a histogram of individual user clicks over a month and their same data on a log-log plot, with different models they compared. The data are clearly not on a straight log-log line expected from a scale-free power distribution. Clauset, Shalizi and Newman (2009) compared power law explanations with alternative hypotheses using likelihood ratio tests and concluded both web hits and links "cannot plausibly be considered to follow a power law." Their data for the former were web hits by customers of the America Online Internet service in a single day, and for the latter were links to web sites found in a 1997 web crawl of about 200 million web pages. The below images give the cumulative distribution functions P(x) and their maximum likelihood power-law fits. For both these data sets, Clauset, Shalizi and Newman found that power distributions with exponential cutoffs to modify the extreme tail of the distribution were clearly better than pure power law distributions, and that log-normal distributions were also good fits. (They also looked at exponential and stretched exponential hypotheses.) If you have a dataset in hand and are not just idly curious, you should fit it with different models and compare them (in R: pchisq(2 * (logLik(model1) - logLik(model2)), df = 1, lower.tail = FALSE)). I confess I have no idea offhand how to model a zero-adjusted ZM model. Ron Pearson has blogged about ZM distributions and there is apparently an R package zipfR. Me, I would probably start with a negative binomial model, but I am not a real statistician (and I'd love their opinions). (I also want to second commenter @richiemorrisroe above who points out data are likely influenced by factors unrelated to individual human behavior, like programs crawling the web and IP addresses that represent many people's computers.) Papers mentioned: Clauset, Aaron, Cosma Rohilla Shalizi, and Mark E. J. Newman. "Power-law distributions in empirical data." SIAM Review 51.4 (2009): 661-703. (See also this site.) Ali, Kamal, and Mark Scarr. "Robust methodologies for modeling web click distributions." Proceedings of the 16th International Conference on World Wide Web. ACM, 2007.
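If you want to try the model-comparison idea in Python rather than R, here is a rough numpy-only sketch comparing maximum-likelihood fits of a power law (continuous Pareto with $x_{\min}$ fixed at the sample minimum) and a log-normal on simulated data. This is only the log-likelihood comparison, not the full Clauset et al. procedure (which also estimates $x_{\min}$ and uses a Vuong-style test); the simulated data and parameters are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.lognormal(mean=1.0, sigma=0.8, size=5000)   # data that is truly log-normal
xmin = x.min()
n = len(x)

# Power law p(x) = ((alpha-1)/xmin) * (x/xmin)^(-alpha); MLE for the exponent:
alpha = 1 + n / np.log(x / xmin).sum()
ll_power = n * np.log((alpha - 1) / xmin) - alpha * np.log(x / xmin).sum()

# Log-normal MLE has a closed form on the log-data:
mu, sigma = np.log(x).mean(), np.log(x).std()
ll_lognorm = (-np.log(x) - np.log(sigma) - 0.5 * np.log(2 * np.pi)
              - (np.log(x) - mu) ** 2 / (2 * sigma ** 2)).sum()

# Positive log-likelihood ratio favours the log-normal model.
lr = ll_lognorm - ll_power
print(lr > 0)
```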
19,848
Confidence interval of multivariate gaussian distribution
The quantity $y = (x - \mu)^T \Sigma^{-1} (x-\mu)$ is distributed as $\chi^2$ with $k$ degrees of freedom (where $k$ is the length of the $x$ and $\mu$ vectors). $\Sigma$ is the (known) covariance matrix of the multivariate Gaussian. When $\Sigma$ is unknown, we can replace it by the sample covariance matrix $S = \frac{1}{n-1} \sum_i (x_i-\overline{x})(x_i-\overline{x})^T$, where $\{x_i\}$ are the $n$ data vectors, and $\overline{x} = \frac{1}{n} \sum_i x_i$ is the sample mean. The quantity $t^2 = n(\overline{x} - \mu)^T S^{-1} (\overline{x}-\mu)$ is distributed as Hotelling's $T^2$ distribution with parameters $k$ and $n-1$. An ellipsoidal confidence set with coverage probability $1-\alpha$ consists of all $\mu$ vectors such that $n(\overline{x} - \mu)^T S^{-1} (\overline{x}-\mu) \leq T^2_{k,n-1}(1-\alpha)$. The critical values of $T^2$ can be computed from the $F$ distribution. Specifically, $\frac{n-k}{k(n-1)}t^2$ is distributed as $F_{k,n-k}$, so $T^2_{k,n-1}(1-\alpha) = \frac{k(n-1)}{n-k}F_{k,n-k}(1-\alpha)$. Source: Wikipedia, Hotelling's T-squared distribution
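A rough numeric sketch of the membership test. The simulated data and dimensions are illustrative, and the $F_{2,18}(0.95) \approx 3.555$ value is hard-coded from standard F tables (with scipy available you would compute it via scipy.stats.f.ppf instead):

```python
import numpy as np

def t2_statistic(mu, X):
    """Hotelling's t^2 = n (xbar - mu)^T S^{-1} (xbar - mu)."""
    n, k = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)         # sample covariance, divisor n-1
    d = xbar - mu
    return n * d @ np.linalg.solve(S, d)

rng = np.random.default_rng(0)
n, k = 20, 2
X = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.3], [0.3, 1.0]], size=n)

# 95% critical value: k(n-1)/(n-k) * F_{k,n-k}(0.95), with F_{2,18}(0.95) ≈ 3.555.
crit = k * (n - 1) / (n - k) * 3.555

# mu belongs to the confidence set iff its t^2 statistic is below the critical value.
in_region = t2_statistic(np.array([0.0, 0.0]), X) <= crit
print(in_region)
```

The sample mean itself always lies in the region, since its $t^2$ statistic is exactly zero.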
19,849
One sided Chebyshev inequality for higher moment
For convenience, let $X$ denote a continuous zero-mean random variable with density function $f(x)$, and consider $P\{X \geq a\}$ where $a > 0$. We have $$P\{X \geq a\} = \int_a^{\infty}f(x)\,\mathrm dx = \int_{-\infty}^{\infty}g(x)f(x)\,\mathrm dx = E[g(X)]$$ where $g(x) = \mathbf 1_{[a,\infty)}$. If $n$ is an even integer and $b$ any positive real number, then $$h(x) = \left(\frac{x+b}{a+b}\right)^n \geq g(x), -\infty < x < \infty,$$ and so $$E[h(X)] = \int_{-\infty}^{\infty} h(x)f(x)\,\mathrm dx \geq \int_{-\infty}^{\infty}g(x)f(x)\,\mathrm dx = E[g(X)].$$ Thus we have that for all positive real numbers $a$ and $b$, $$P\{X \geq a\} \leq E\left[\left(\frac{X+b}{a+b}\right)^n\right] = (a+b)^{-n}E[(X+b)^n]\tag{1}$$ where the rightmost expectation in $(1)$ is the $n$-th moment ($n$ even) of $X$ about $-b$. When $n = 2$, the smallest upper bound on $P\{X \geq a\}$ is obtained when $b = \sigma^2/a$ giving the one-sided Chebyshev inequality (or Chebyshev-Cantelli inequality): $$P\{X \geq a\} \leq \frac{\sigma^2}{a^2 + \sigma^2}.$$ For larger values of $n$, minimization with respect to $b$ is messier.
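The $n = 2$ bound is easy to sanity-check numerically; a quick Monte Carlo sketch (a toy example of my own) for a standard normal with $a = 2$:

```python
import random

random.seed(42)
sigma, a, n = 1.0, 2.0, 100_000
samples = [random.gauss(0.0, sigma) for _ in range(n)]
p_hat = sum(x >= a for x in samples) / n        # empirical P(X >= a)

# Chebyshev-Cantelli upper bound: sigma^2 / (a^2 + sigma^2) = 0.2 here,
# comfortably above the true value P(Z >= 2) ~ 0.0228
bound = sigma ** 2 / (a ** 2 + sigma ** 2)
```

The bound is far from tight for the normal, as expected: it must hold for every zero-mean distribution with variance $\sigma^2$.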
19,850
Model Selection: Logistic Regression
This is probably not a good thing to do. Looking at all the individual covariates first, and then building a model with those that are significant is logically equivalent to an automatic search procedure. While this approach is intuitive, inferences made from this procedure are not valid (e.g., the true p-values are different from those reported by software). The problem is magnified the larger the size of the initial set of covariates is. If you do this anyway (and, unfortunately, many people do), you cannot take the resulting model seriously. Instead, you must run an entirely new study, gathering an independent sample and fitting the previous model, to test it. However, this requires a lot of resources, and moreover, since the process is flawed and the previous model is likely a poor one, there is a strong chance it will not hold up--meaning that it is likely to waste a lot of resources. A better way is to evaluate models of substantive interest to you. Then use an information criterion that penalizes model flexibility (such as the AIC) to adjudicate amongst those models. For logistic regression, the AIC is: $$ AIC = -2\times\ln(\text{likelihood}) + 2k $$ where $k$ is the number of estimated parameters in that model (covariates plus the intercept). You want the model with the smallest value for the AIC, all things being equal. However, it is not always so simple; be wary when several models have similar values for the AIC, even though one may be lowest. I include the complete formula for the AIC here, because different software outputs different information. You may have to calculate it from just the likelihood, or you may get the final AIC, or anything in between.
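For instance, a minimal sketch of computing the AIC directly from the fitted probabilities of a logistic model (`logistic_aic` and the toy numbers are my own, not from any particular package):

```python
import math

def logistic_aic(y, p_hat, k):
    """AIC = -2 * log-likelihood + 2k for a fitted logistic model.
    y: 0/1 outcomes, p_hat: fitted probabilities, k: number of fitted parameters."""
    ll = sum(math.log(p if yi == 1 else 1.0 - p) for yi, p in zip(y, p_hat))
    return -2.0 * ll + 2 * k

# two hypothetical models with identical fit; the extra parameter costs 2 AIC
y = [1, 0, 1, 1, 0]
p = [0.8, 0.3, 0.7, 0.9, 0.2]
aic_small = logistic_aic(y, p, k=2)
aic_large = logistic_aic(y, p, k=3)
```

With equal likelihoods, the larger model's AIC is exactly 2 higher, which is the sense in which the criterion penalizes flexibility.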
19,851
Model Selection: Logistic Regression
There are many ways to choose what variables go in a regression model; some are decent, some bad, and some terrible. One may simply browse the publications of Sander Greenland, many of which concern variable selection. Generally speaking, however, I have a few common "rules": automated algorithms, like those that come in software packages, are probably a bad idea; model diagnostic techniques, like gung suggests, are a good means of evaluating your variable selection choices; and you should also be using a combination of subject-matter expertise, literature searches, directed acyclic graphs, etc. to inform your variable selection choices.
19,852
Model Selection: Logistic Regression
How would you choose the "best" model? There isn't enough information provided to answer this question; if you want to get at causal effects on y you'll need to implement regressions that reflect what's known about the confounding. If you want to do prediction, AIC would be a reasonable approach. These approaches are not the same; the context will determine which of the (many) ways of choosing variables will be more/less appropriate.
19,853
Why are Gaussian "discriminant" analysis models called so?
If you mean LDA I would say the name, linear discriminant analysis, can be explained historically dating back at least to Fisher's paper from 1936, which, to the best of my knowledge, precedes the current terminology and distinction in machine learning between a discriminative and a generative model. Not that Fisher called it linear discriminant analysis directly, but he did explicitly ask for a linear function for discrimination. As a curious side remark, Fisher considered discrimination for the famous Iris data set in the paper. Fisher did, by the way, not present the linear method for discrimination in terms of a generative model. He sought a linear combination (for two classes) that maximizes the ratio of the between-group variance to the within-group variance, which does not require a normality assumption. Details, and how it relates to LDA as a Bayes rule for a generative model, can be found in Chapter 3 in Brian Ripley's book "Pattern Recognition and Neural Networks".
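Fisher's criterion is short enough to sketch directly: the discriminating direction is $w \propto S_W^{-1}(\bar{x}_1 - \bar{x}_0)$, where $S_W$ is the within-group scatter. A toy illustration (the data and names are my own):

```python
import numpy as np

rng = np.random.default_rng(2)
X0 = rng.normal([0.0, 0.0], 1.0, (100, 2))   # group 0
X1 = rng.normal([2.0, 1.0], 1.0, (100, 2))   # group 1

# within-group scatter; no normality assumption is needed for this step
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))

# separation of the projected group means along w
sep = w @ (X1.mean(axis=0) - X0.mean(axis=0))
```

This is exactly the maximization of between-group to within-group variance Fisher described, stated without any generative model.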
19,854
Why are Gaussian "discriminant" analysis models called so?
It is simple: in case you have two classes $(Y=0, Y=1)$, GDA makes use of this assumption: $P(X|Y=0) \sim \mathcal{N}(\mu_0,\Sigma_0)$, $P(X|Y=1) \sim \mathcal{N}(\mu_1,\Sigma_1)$, $P(Y=1)=1-P(Y=0)=\Phi$, and then gets the parameters $(\mu_0,\Sigma_0,\mu_1,\Sigma_1,\Phi)$ using maximum likelihood estimation. So it's Gaussian because it uses a Gaussian assumption for the intra-group distribution (you may want to use uniform instead, for example) and discriminant because it aims to separate data into groups. You can find more info here.
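The maximum likelihood estimates have closed forms (class frequency, per-class means and covariances); a minimal sketch with toy data of my own:

```python
import numpy as np

def gda_fit(X, y):
    """Closed-form MLE for the two-class Gaussian model above."""
    X0, X1 = X[y == 0], X[y == 1]
    phi = y.mean()                                   # estimate of P(Y = 1)
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    Sigma0 = np.cov(X0, rowvar=False, bias=True)     # MLE uses divisor n
    Sigma1 = np.cov(X1, rowvar=False, bias=True)
    return phi, mu0, mu1, Sigma0, Sigma1

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 2)), rng.normal(3.0, 1.0, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
phi, mu0, mu1, Sigma0, Sigma1 = gda_fit(X, y)
```

Prediction then follows Bayes' rule: classify $x$ to the class maximizing $P(X=x|Y)\,P(Y)$ under the fitted Gaussians.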
19,855
Are short time series worth modelling?
The small number of data points limits what kinds of models you may fit on your data. However, it does not necessarily mean that it would make no sense to start modelling. With few data points you will only be able to detect associations if the effects are strong and the scatter is weak. It's another question what kind of model suits your data. You used the word 'regression' in the title. The model should to some extent reflect what you know about the phenomenon. This seems to be an ecological setting, so the previous year may be influential as well.
19,856
Are short time series worth modelling?
I've seen ecological datasets with fewer than 11 points, so I would say if you are very careful, you can draw some limited conclusions with your limited data. You could also do a power analysis to determine how small an effect you could detect, given the parameters of your experimental design. You also might not need to throw out the extra variation per year if you do some careful analysis.
19,857
Are short time series worth modelling?
Modeling the data fundamentally (especially for time series) assumes that you have collected data at a sufficient frequency to capture the phenomena of interest. The simplest example is a sine wave: if you are collecting data at a frequency of n*pi where n is an integer, then you will not see anything but zeros and will miss the sinusoidal pattern altogether. There are articles on sampling theory which discuss how often data should be collected.
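The sine-wave point can be checked in a couple of lines (a toy illustration of my own):

```python
import math

# sampling sin(t) only at t = n*pi records nothing but (numerically) zeros,
# while sampling twice as often recovers the oscillation: 0, 1, 0, -1, ...
aliased = [math.sin(n * math.pi) for n in range(8)]
dense = [math.sin(n * math.pi / 2) for n in range(8)]
missed_signal = all(abs(s) < 1e-12 for s in aliased)
```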
19,858
Are short time series worth modelling?
I am not sure I understand this bit: "Unfortunately, since the response is the mean value (without looking at the mean, just regular inter-annual variation will swamp the signal)" With careful modelling, it seems to me you could gain a lot by modelling this as panel data. Depending on the spatial scope of your data, there may be large differences in the temperatures that your data points were exposed to within any given year. Averaging all these variations seems costly.
19,859
Are short time series worth modelling?
I would say that the validity of the test has less to do with the number of data points and more to do with the validity of the assumption that you have the correct model. For example, the regression analysis that is used to generate a standard curve may be based on only 3 standards (low, med, and high) but the result is highly valid since there is strong evidence that the response is linear between the points. On the other hand, even a regression with 1000s of data points will be flawed if the wrong model is applied to the data. In the first case any variation between the model predictions and the actual data is due to random error. In the second case some of the variation between the model predictions and the actual data is due to bias from choosing the wrong model.
19,860
Are short time series worth modelling?
The required number of observations to identify a model depends on the ratio of signal to noise in the data and the form of the model. If I am given the numbers 1, 2, 3, 4, 5, I will predict 6, 7, 8, .... Box-Jenkins model identification is an approach to determine the underlying general term, much like the tests for "numerical intelligence" that we give to children. If the signal is strong then we need fewer observations, and vice versa. If the observed frequency suggests a possible "seasonal structure" then we need repetitions of this phenomenon, e.g. at least 3 seasons (preferably more) as a rule of thumb, to extract (identify) this from the basic descriptive statistics (the acf/pacf).
19,861
Are short time series worth modelling?
Maybe you can try to handle your time series as a linear equation system and solve it by Gauss elimination. Of course, in that case you constrain yourself to the available data, but this is the only price you have to pay.
19,862
Testing the difference in AIC of two non-nested models
Is the question out of curiosity, i.e. are you not satisfied by my answer here? If not... Further investigation of this tricky question showed that there does exist a commonly used rule of thumb, which states that two models are indistinguishable by the $AIC$ criterion if the difference $|AIC_1 - AIC_2| < 2$. The same you will actually read in Wikipedia's article on $AIC$ (note the link is clickable!). Just for those who do not click the links: $AIC$ estimates relative support for a model. To apply this in practice, we start with a set of candidate models, and then find the models' corresponding $AIC$ values. Next, identify the minimum $AIC$ value. The selection of a model can then be made as follows. As a rough rule of thumb, models having their $AIC$ within $1$–$2$ of the minimum have substantial support and should receive consideration in making inferences. Models having their $AIC$ within about $4$–$7$ of the minimum have considerably less support, while models with their $AIC$ more than $10$ above the minimum have essentially no support and might be omitted from further consideration, or at least fail to explain some substantial structural variation in the data. A more general approach is as follows. Denote the $AIC$ values of the candidate models by $AIC_1, AIC_2, \ldots, AIC_R$, and let $AIC_{min}$ denote the minimum of those values. Then $e^{(AIC_{min}-AIC_i)/2}$ can be interpreted as the relative probability that the $i$-th model minimizes the (expected estimated) information loss. As an example, suppose that there were three models in the candidate set, with $AIC$ values $100$, $102$, and $110$. Then the second model is $e^{(100-102)/2} = 0.368$ times as probable as the first model to minimize the information loss, and the third model is $e^{(100-110)/2} = 0.007$ times as probable as the first model to minimize the information loss. In this case, we might omit the third model from further consideration and take a weighted average of the first two models, with weights $1$ and $0.368$, respectively. Statistical inference would then be based on the weighted multimodel. A nice explanation and useful suggestions, in my opinion. Just don't be afraid of reading what is clickable! In addition, note once more that $AIC$ is less preferable for large-scale data sets. In addition to $BIC$, you may find it useful to apply the bias-corrected version of the $AIC$ criterion, $AICc$ (you may use this R code or the formula $AICc = AIC + \frac{2p(p+1)}{n-p-1}$, where $p$ is the number of estimated parameters). The rule of thumb will be the same though.
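The relative-probability calculation and the $AICc$ correction are a few lines each; here is a sketch reproducing the worked example (the function names are mine):

```python
import math

def akaike_relative_likelihoods(aics):
    """exp((AIC_min - AIC_i)/2): relative probability that model i
    minimizes the (expected estimated) information loss."""
    m = min(aics)
    return [math.exp((m - a) / 2.0) for a in aics]

def aicc(aic, p, n):
    """Bias-corrected AIC: AIC + 2p(p+1)/(n - p - 1),
    with p estimated parameters and sample size n."""
    return aic + 2.0 * p * (p + 1) / (n - p - 1)

rel = akaike_relative_likelihoods([100, 102, 110])   # ~ [1, 0.368, 0.007]
```

Normalizing `rel` to sum to one gives the usual Akaike weights for model averaging.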
19,863
Testing the difference in AIC of two non-nested models
I think this may be an attempt to get what you don't really want. Model selection is not a science. Except in rare circumstances, there is no one perfect model, or even one "true" model; there is rarely even one "best" model. Discussions of AIC vs. AICc vs BIC vs. SBC vs. whatever leave me somewhat nonplussed. I think the idea is to get some GOOD models. You then choose among them based on a combination of substantive expertise and statistical ideas. If you have no substantive expertise (rarely the case; much more rarely than most people suppose) then choose the lowest AIC (or AICc or whatever). But you usually DO have some expertise - else why are you investigating these particular variables?
19,864
Understanding AIC and Schwarz criterion
It is quite difficult to answer your question in a precise manner, but it seems to me you are comparing two criteria (information criteria and p-values) that don't give the same information. For all information criteria (AIC, or Schwarz criterion), the smaller they are, the better the fit of your model is (from a statistical perspective), as they reflect a trade-off between the lack of fit and the number of parameters in the model; for example, the Akaike criterion reads $-2\log(\ell)+2k$, where $k$ is the number of parameters. However, unlike AIC, SC is consistent: the probability of incorrectly choosing a bigger model converges to 0 as the sample size increases. They are used for comparing models, but you can well observe a model with significant predictors that provides a poor fit (large residual deviance). If you can achieve a different model with a lower AIC, this is suggestive of a poor model. And, if your sample size is large, $p$-values can still be low, which doesn't give much information about model fit. At least, check whether the AIC shows a significant decrease when comparing the model with an intercept only and the model with covariates. However, if your interest lies in finding the best subset of predictors, you definitely have to look at methods for variable selection. I would suggest looking at penalized regression, which allows one to perform variable selection to avoid overfitting issues. This is discussed in Frank Harrell's Regression Modeling Strategies (p. 207 ff.), or Moons et al., Penalized maximum likelihood estimation to directly adjust diagnostic and prognostic prediction models for overoptimism: a clinical example, J Clin Epid (2004) 57(12). See also the Design (lrm) and stepPlr (step.plr) R packages, or the penalized package. You may browse related questions on variable selection on this SE.
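Both criteria are simple functions of the maximized log-likelihood; a minimal sketch (the log-likelihood values and sample size below are invented, purely for illustration):

```python
import math

def aic(loglik, k):
    # Akaike criterion: -2 log(l) + 2k, with k estimated parameters
    return -2 * loglik + 2 * k

def sc(loglik, k, n):
    # Schwarz criterion (BIC): -2 log(l) + k log(n); the penalty grows
    # with sample size n, which is what makes it consistent
    return -2 * loglik + k * math.log(n)

# invented example: intercept-only fit vs. a fit with 3 covariates, n = 200
aic_null, aic_full = aic(-120.0, 1), aic(-112.0, 4)
sc_null, sc_full = sc(-120.0, 1, 200), sc(-112.0, 4, 200)
```

Comparing `sc_full` with `aic_full` shows how SC penalizes extra parameters harder than AIC once $\log(n) > 2$, i.e. for $n > e^2 \approx 7.4$.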
19,865
Understanding AIC and Schwarz criterion
Grouping SC and AIC together IS WRONG. They are very different things, even though people heavily misuse them. AIC is meaningful when you are predicting things; using SC in this scenario can lead (not all the time) to wrong results. Similarly, if you are interested in doing model selection with the principle of parsimony (Occam's Razor), SC is better. I don't want to go into the theoretical details, but in a nutshell: SC -- good for parsimonious models when you want something equivalent to the simplest possible model to explain your data; AIC -- when you want to predict. AIC doesn't assume that your true model lies in the model space, whereas SC does. Secondly, using p-values and information criteria together can also be misleading, as explained by chl.
19,866
Pseudo-random number generation algorithms
In R, the default settings for random number generation are: For U(0,1), use the Mersenne-Twister algorithm For Gaussian numbers, use numerical inversion of the standard normal distribution function. You can easily check this, viz. > RNGkind() [1] "Mersenne-Twister" "Inversion" It is possible to change the default generator to other PRNGs, such as Super-Duper, Wichmann-Hill, Marsaglia-Multicarry or even a user-supplied PRNG. See ?RNGkind for further details. I have never needed to change the default PRNG. The C GSL library also uses the Mersenne-Twister by default.
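Python's random module happens to use the Mersenne-Twister as well, so the same two-step recipe — Mersenne-Twister uniforms pushed through the inverse normal CDF — can be sketched there, with NormalDist.inv_cdf standing in for R's qnorm (seed and sample size are arbitrary):

```python
import random
from statistics import NormalDist

rng = random.Random(42)              # Mersenne-Twister under the hood
std_normal = NormalDist(mu=0.0, sigma=1.0)

# inversion method: push U(0,1) draws through the inverse normal CDF
samples = [std_normal.inv_cdf(rng.random()) for _ in range(10_000)]

# sample moments should be close to 0 and 1
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
```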
19,867
Pseudo-random number generation algorithms
The Mersenne Twister is one I've come across and used before now.
19,868
Pseudo-random number generation algorithms
The Xorshift PRNG designed by George Marsaglia. Its period ($2^{128}-1$) is much shorter than the Mersenne-Twister's, but the algorithm is very simple to implement and lends itself to parallelization. It performs well on many-core architectures such as DSP chips and Nvidia's Tesla.
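Marsaglia's 32-bit xorshift128 variant is short enough to sketch in full. A pure-Python version (the default seed words are the commonly used constants from Marsaglia's paper; this is a sketch, not a production RNG):

```python
def xorshift128(x=123456789, y=362436069, z=521288629, w=88675123):
    """Generator yielding 32-bit integers via Marsaglia's xorshift128.
    State is four 32-bit words, not all zero; period is 2**128 - 1."""
    mask = 0xFFFFFFFF
    while True:
        t = (x ^ (x << 11)) & mask   # mask keeps the shift within 32 bits
        x, y, z = y, z, w
        w = (w ^ (w >> 19)) ^ (t ^ (t >> 8))
        yield w

g = xorshift128()
draws = [next(g) for _ in range(5)]
```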
19,869
Pseudo-random number generation algorithms
At http://prng.di.unimi.it/ you can find a shootout of several random number generators tested using TestU01, the modern test suite for pseudorandom number generators that replaced diehard and dieharder. You can pick and choose.
19,870
Proper regression for determining correlations between derivatives of functions
If your measurements are at equidistant intervals, you can try converting this into an ARIMAX(1,1,0) model, which can be estimated with OLS. Despite the similarity to OLS that you described in the question, the improvement is that here you only need the first difference. Avoiding the second difference helps with stability. I assume the time sampling $\Delta t=1$. You can always change the unit of measurement of time to make it so. First, expand the discrete derivatives $$\Delta x_t-\Delta x_{t-1}=\gamma+\alpha\Delta x_t+\beta u_t$$ $$\Delta x_t=\frac 1 {1-\alpha}(\gamma+\Delta x_{t-1}+\beta u_t)$$ $$\Delta x_t=\underbrace{\frac 1 {1-\alpha}\gamma}_{\beta_0}+\underbrace{\frac 1 {1-\alpha}}_{\beta_1}\Delta x_{t-1}+\underbrace{\frac 1 {1-\alpha}\beta}_{\beta_2} u_t$$ This looks like an ARIMAX(1,1,0) model, which is an ARIMA model with exogenous covariates: $$\Delta x_t=\beta_0+\beta_1\Delta x_{t-1}+\beta_2 u_t+\varepsilon_t$$ The problem is the errors: ideally, it would be great to write this as $x_t=s_t+\varepsilon_t$, i.e. with true displacement and a measurement error component, and the same for $u_t$. Also, there are not only measurement errors, but perhaps noise entering displacement and voltage, which will impact the next measurement. Then the true model is probably one with MA terms. However, this makes everything more complicated. Therefore, I'd start with the simple equation above, and see where it leads. Maybe it will be good enough. The simplified version above can be easily estimated by OLS in lagged first differences, i.e. $y=X\beta+e$, where $y_t=\Delta x_t$ and $X_t=(1,\Delta x_{t-1},u_t)$. And once you have the coefficients, the original parameters can be backed out with simple algebra: $$\alpha=1-1/\beta_1\\ \gamma=(1-\alpha)\beta_0\\ \beta=(1-\alpha)\beta_2$$ You can estimate this with statistical packages, such as MATLAB's arima, but that sounds like overkill to me. 
If you use R or Python rather than MATLAB, be careful; see Hyndman's post here: https://robjhyndman.com/hyndsight/arimax/ There's a lot of confusion about the ARIMAX model, and major packages estimate regression with ARIMA errors in their ARIMAX functions!
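The OLS-in-first-differences recipe can be sketched end to end in pure Python. Everything below (true parameter values, noise level, sample size) is invented for illustration; the point is only that the reduced-form coefficients are recovered by OLS on $(1, \Delta x_{t-1}, u_t)$ and the structural parameters then backed out algebraically:

```python
import random

random.seed(0)

# hypothetical true parameters of x'' = gamma + alpha * x' + beta * u
alpha, gamma, beta = -0.5, 0.2, 1.0
b0 = gamma / (1 - alpha)     # reduced-form coefficients from the derivation
b1 = 1 / (1 - alpha)
b2 = beta / (1 - alpha)

# simulate delta x_t = b0 + b1 * delta x_{t-1} + b2 * u_t + noise
n = 5000
u = [random.gauss(0, 1) for _ in range(n)]
dx = [0.0]
for t in range(1, n):
    dx.append(b0 + b1 * dx[t - 1] + b2 * u[t] + random.gauss(0, 0.05))

# OLS y = X b with X_t = (1, delta x_{t-1}, u_t), via the normal equations
y = dx[1:]
X = [(1.0, dx[t - 1], u[t]) for t in range(1, n)]
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]

# tiny Gauss-Jordan elimination for the 3x3 system
A = [row[:] + [v] for row, v in zip(XtX, Xty)]
for i in range(3):
    p = max(range(i, 3), key=lambda r: abs(A[r][i]))
    A[i], A[p] = A[p], A[i]
    for r in range(3):
        if r != i:
            f = A[r][i] / A[i][i]
            A[r] = [a - f * b for a, b in zip(A[r], A[i])]
b_hat = [A[i][3] / A[i][i] for i in range(3)]

# back out the structural parameters with the algebra given above
alpha_hat = 1 - 1 / b_hat[1]
gamma_hat = (1 - alpha_hat) * b_hat[0]
beta_hat = (1 - alpha_hat) * b_hat[2]
```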
19,871
Proper regression for determining correlations between derivatives of functions
If you have a long-enough time series you can try to tackle it in the Fourier domain. If you do that, it may be a good idea to apply a window function, which would force your signal $x=x\left(t\right)$ to zero at the edges of the window, and then make your signal periodic by repeating the gated signal. For what is to follow I will assume that $x\left(t\right)$ is known for $t=-\infty...\infty$. More subtle choices can be incorporated into the analysis as well. Let: $$ x=x\left(t\right)=\frac{1}{\sqrt{2\pi}}\int d\nu \exp\left(-i2\pi\cdot\nu\cdot t\right)\,\tilde{x}\left(\nu\right) $$ You can extract $\tilde{x}\left(\nu\right)$ using the Fourier Transform; the FFT will give you a discrete analogue, which requires quite similar treatment. I will stay continuous for now. $$ \tilde{x}\left(\nu\right) = \frac{1}{\sqrt{2\pi}}\int dt \exp\left(i2\pi\cdot\nu\cdot t\right)\,x\left(t\right) $$ Taking a FT of your equation of motion then gives: $$ \left[4\pi^2\nu^2 - i2\pi\nu\cdot\alpha\right]\tilde{x} + \beta\cdot\tilde{u}+\gamma\sqrt{2\pi}\delta\left(\nu\right)=0 $$ This is now an algebraic equation, well-suited for regression. The delta function in the last term will not appear in a discrete treatment (it will be something better behaved). Of course, other types of orthogonal basis functions are available. Perhaps it would make more sense to use wavelets; it really depends on your signal. Fourier methods are a good place to start though, IMHO.
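The key fact exploited above — differentiation becomes multiplication by frequency in the Fourier domain, so the ODE turns algebraic — can be sanity-checked with a naive $O(n^2)$ DFT. Note this sketch uses the standard sign convention (forward transform with $e^{-i2\pi\nu t}$), the mirror image of the convention above, so the derivative multiplier is $+i2\pi\nu$:

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

n = 64
ts = [t / n for t in range(n)]            # one period of unit length
x = [math.sin(2 * math.pi * t) for t in ts]

X = dft(x)
freqs = [k if k < n // 2 else k - n for k in range(n)]
freqs[n // 2] = 0                          # zero out the Nyquist bin
dX = [2j * math.pi * nu * Xk for nu, Xk in zip(freqs, X)]
dx = [v.real for v in idft(dX)]            # should equal 2*pi*cos(2*pi*t)
```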
19,872
Proper regression for determining correlations between derivatives of functions
The differential equation can be rewritten $$x_{t+1}=(\alpha+1)x_t+\beta\int^t u(x)\,\mathrm{d}x+\gamma t+C.$$ This eliminates many of the problematic aspects of the direct formulation and of computing numerical derivatives, but it still leaves one untouched: because all the $x_t$ may be subject to measurement error, the explanatory variable $x_t$ on the right-hand side might be subject to enough measurement error to bias the estimates. There is a quick fix: fit the model using the data $(x_t, U_t, t, x_{t+1})$ (where $U(t) = \int^t u(x)\,\mathrm{d}x$) and then refit the model after replacing $x_t$ by the predicted values at those times. Here, for example, is a dataset of $x_t$ represented by the $50$ dots on the left. On the right, the derivative $\dot{x}_t$ is shown in blue and the second derivative $\ddot{x}_t$ in red. Such data were independently generated $500$ times (with unit measurement variance), fit, and re-fit. To assess these fits, I plotted histograms of the parameter estimates. In each histogram a vertical red line marks the true parameter value, while the vertical dashed black line indicates the mean of the estimated values. In this example there is appreciable bias in evidence: the red lines are out in the tails of the distributions. After refitting, the situation has improved, but at a substantial cost in uncertainty: the estimates are much more spread out. If you are limited to code for ordinary least squares solutions, as indicated in comments, you will have to pick your poison: precise biased estimates or much less precise unbiased estimates. Perhaps an initial study of simulated data will suggest how much bias and imprecision are present in your setting and help you choose.
19,873
Unbiased estimator of exponential of measure of a set?
Suppose that you have the following resources available to you: You have access to an estimator $\hat{\lambda}$. $\hat{\lambda}$ is unbiased for $\lambda ( S )$. $\hat{\lambda}$ is almost surely bounded above by $C$. You know the constant $C$, and You can form independent realisations of $\hat{\lambda}$ as many times as you'd like. Now, note that for any $u > 0$, the following holds (by the Taylor expansion of $\exp x$): \begin{align} e^{-\alpha \lambda ( S ) } &= e^{-\alpha C} \cdot e^{\alpha \left( C - \lambda ( S ) \right)} \\ &= e^{- \alpha C} \cdot \sum_{k \geqslant 0} \frac{ \left( \alpha \left[ C - \lambda ( S ) \right] \right)^k}{ k! } \\ &= e^{- \alpha C} \cdot e^u \cdot \sum_{k \geqslant 0} \frac{ e^{-u} \cdot \left( \alpha \left[ C - \lambda ( S ) \right] \right)^k}{ k! } \\ &= e^{u -\alpha C} \cdot \sum_{k \geqslant 0} \frac{ u^k e^{-u} }{ k! } \left(\frac{ \alpha \left[ C - \lambda ( S ) \right]}{u} \right)^k \end{align} Now, do the following: Sample $K \sim \text{Poisson} ( u )$. Form $\hat{\lambda}_1, \cdots, \hat{\lambda}_K$ as iid unbiased estimators of $\lambda(S)$. Return the estimator $$\hat{\Lambda} = e^{u -\alpha C} \cdot \left(\frac{ \alpha }{u} \right)^K \cdot \prod_{i = 1}^K \left\{ C - \hat{\lambda}_i \right\}.$$ $\hat{\Lambda}$ is then a non-negative, unbiased estimator of $e^{-\alpha \lambda ( S )}$. 
This is because \begin{align} \mathbf{E} \left[ \hat{\Lambda} | K \right] &= e^{u -\alpha C} \cdot \left(\frac{ \alpha }{u} \right)^K \mathbf{E} \left[ \prod_{i = 1}^K \left\{ C - \hat{\lambda}_i \right\} | K \right] \\ &= e^{u -\alpha C} \cdot \left(\frac{ \alpha }{u} \right)^K \prod_{i = 1}^K \mathbf{E} \left[ C - \hat{\lambda}_i \right] \\ &= e^{u -\alpha C} \cdot \left(\frac{ \alpha }{u} \right)^K \prod_{i = 1}^K \left[ C - \lambda ( S ) \right] \\ &= e^{u -\alpha C} \cdot \left(\frac{ \alpha }{u} \right)^K \left[ C - \lambda ( S ) \right]^K \end{align} and thus \begin{align} \mathbf{E} \left[ \hat{\Lambda} \right] &= \mathbf{E}_K \left[ \mathbf{E} \left[ \hat{\Lambda} | K \right] \right] \\ &= \mathbf{E}_K \left[ e^{u -\alpha C} \cdot \left(\frac{ \alpha }{u} \right)^K \left[ C - \lambda ( S ) \right]^K \right] \\ &= e^{u -\alpha C} \cdot \sum_{k \geqslant 0} \mathbf{P} ( K = k ) \left(\frac{ \alpha }{u} \right)^k \left[ C - \lambda ( S ) \right]^k \\ &= e^{u -\alpha C} \cdot \sum_{k \geqslant 0} \frac{ u^k e^{-u} }{ k! } \left(\frac{ \alpha \left[ C - \lambda ( S ) \right]}{u} \right)^k \\ &= e^{-\alpha \lambda ( S ) } \end{align} by the earlier calculation.
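A quick Monte Carlo sanity check of the construction, with a made-up example where $\lambda(S) = 0.5$, $C = 1$, $\alpha = u = 1$, and the unbiased, bounded estimator $\hat{\lambda}$ is simply $C$ times a Bernoulli(1/2) draw (the Poisson sampler is Knuth's algorithm, since the standard library has none):

```python
import math
import random

random.seed(1)

alpha, C, u = 1.0, 1.0, 1.0
lam = 0.5                                # true lambda(S), unknown in practice

def poisson(mu):
    # Knuth's algorithm for a Poisson(mu) draw
    L = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def lam_hat():
    # unbiased for lam and almost surely bounded above by C
    return C if random.random() < lam else 0.0

def estimator():
    # the Poisson-randomized estimator described above
    K = poisson(u)
    prod = 1.0
    for _ in range(K):
        prod *= C - lam_hat()
    return math.exp(u - alpha * C) * (alpha / u) ** K * prod

n = 200_000
est = sum(estimator() for _ in range(n)) / n
# est should be close to exp(-alpha * lam) = exp(-0.5)
```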
19,874
Unbiased estimator of exponential of measure of a set?
The answer is in the negative. A sufficient statistic for a uniform sample is the count $X$ of points observed to lie in $S.$ This count has a Binomial$(n,\lambda(S)/\lambda(B))$ distribution. Write $p=\lambda(S)/\lambda(B)$ and $\alpha^\prime = \alpha\lambda(B).$ For a sample size of $n,$ let $t_n$ be any (unrandomized) estimator of $\exp(-\alpha \lambda(S)) = \exp(-(\alpha\lambda(B)) p) = \exp(-\alpha^\prime p).$ The expectation is $$E[t_n(X)] = \sum_{x=0}^n \binom{n}{x}p^x (1-p)^{n-x}\, t_n(x),$$ which equals a polynomial of degree at most $n$ in $p.$ But if $\alpha^\prime p \ne 0,$ the exponential $\exp(-\alpha^\prime p)$ cannot be expressed as a polynomial in $p.$ (One proof: take $n+1$ derivatives. The result for the expectation will be zero but the derivative of the exponential, which itself is an exponential in $p,$ cannot be zero.) The demonstration for randomized estimators is nearly the same: upon taking expectations, we again obtain a polynomial in $p.$ Consequently, no unbiased estimator exists.
19,875
gamma parameter in xgboost
As you correctly note, gamma is a regularisation parameter. In contrast with min_child_weight and max_depth, which regularise using "within tree" information, gamma works by regularising using "across trees" information. In particular, by observing the typical size of the loss changes we can adjust gamma so that we instruct our trees to add nodes only if the associated gain is larger than or equal to $\gamma$. The well-known 2014 XGBoost presentation by Chen (p. 33) refers to $\gamma$ as the "complexity cost by introducing additional leaf".

Now, a typical situation where we would tune gamma is when we use shallow trees as we try to combat over-fitting. The obvious way to combat overfitting is to use shallower trees (i.e. lower max_depth), and therefore the context where tuning gamma becomes relevant is "when you want to use shallow (low max_depth) trees". That is a bit of a tautology, but realistically, if we expect deeper trees to be beneficial, tuning gamma, while still effective in regularising, will also unnecessarily burden our learning procedure. On the other hand, if we wrongly use deeper trees, then unless we regularise very aggressively we might accidentally end up in a local minimum from which $\gamma$ cannot save us. Therefore, $\gamma$ is indeed more relevant for "shallow-tree situations". :)

A great blog post on tuning $\gamma$ can be found here: xgboost: “Hi I’m Gamma. What can I do for you?” - and the tuning of regularization.

Final word of caution: do notice that $\gamma$ is strongly dependent on the actual estimated parameters and the (training) data. That is because the scale of the response variable effectively dictates the scale of the loss function, and hence which reductions in the loss (i.e. values of $\gamma$) we consider meaningful.
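To make the "add a node only if the gain is at least $\gamma$" rule concrete, here is a small sketch (in Python, not using XGBoost itself, and with a simplified unregularised squared-error gain rather than XGBoost's exact formula): the same candidate split is accepted or rejected depending on where $\gamma$ sits relative to the achievable loss reduction, and rescaling the response rescales the gain, which is the scale-dependence warned about above.

```python
import numpy as np

def best_split_gain(y):
    # largest reduction in squared error from a single split of a 1-D response
    base = ((y - y.mean()) ** 2).sum()
    best = 0.0
    for i in range(1, len(y)):
        left, right = y[:i], y[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        best = max(best, base - sse)
    return best

rng = np.random.default_rng(0)
y = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])

gain = best_split_gain(y)
for gamma in (0.0, gain / 2, 2 * gain):
    print(f"gamma={gamma:.1f}: {'split' if gain >= gamma else 'no split'}")

# the gain depends on the scale of the response: multiplying y by 10
# multiplies every squared-error reduction by 100
assert np.isclose(best_split_gain(10 * y), 100 * gain)
```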
19,876
gamma parameter in xgboost
Gamma causes shallower trees (or, at least, trees with fewer leaves), by restricting when splits will be made. I think the tutorial isn't entirely clear/accurate. For example, the bullet point immediately before the one you're questioning states that gamma penalizes large coefficients, which is not the case (alpha and lambda penalize coefficients, gamma just penalizes the number of leaves). And further down, alpha is the L1 penalty on weights, but the weights are those at each leaf, not on individual features, so alpha does not perform feature selection in the same way as Lasso. (I suppose, though I haven't seen it discussed, that it could force a split candidate's leaf coefficient to zero, causing the algorithm to pass over splitting that feature, perhaps in the long run skipping the feature altogether?)
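For reference, the gain that XGBoost compares against $\gamma$ (Eq. 7 of Chen & Guestrin, 2016) can be sketched as a minimal Python transcription; here g and h are the sums of first and second loss derivatives over the candidate left and right children. Note that lambda shrinks the leaf-score terms, while gamma is a flat cost subtracted once per extra leaf: it penalises the number of leaves, not the size of the coefficients.

```python
def split_gain(g_l, h_l, g_r, h_r, lam=1.0, gamma=0.0):
    # structure-score gain of turning one leaf into two (XGBoost, Eq. 7):
    # lam (lambda) regularises the leaf weights, gamma prices the extra leaf
    def score(g, h):
        return g * g / (h + lam)
    return 0.5 * (score(g_l, h_l) + score(g_r, h_r)
                  - score(g_l + g_r, h_l + h_r)) - gamma

base = split_gain(-4.0, 5.0, 3.0, 5.0)                   # gamma = 0
print(base)                                              # positive: the split helps
print(split_gain(-4.0, 5.0, 3.0, 5.0, gamma=base + 1))   # negative: split rejected
```

Increasing gamma shifts every candidate's gain down by the same constant, which is exactly why it prunes splits whose gain is small.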
19,877
What is the relationship between ANOVA to compare means of several groups and ANOVA to compare nested models?
In my understanding, the abstract intuition of ANOVA is the following: one decomposes the sources of variance of the observed variable in various directions and investigates the respective contributions. To be more precise, one decomposes the identity map into a sum of projections and investigates which projections/directions make an important contribution to explaining the variance and which do not. The theoretical basis is Cochran's theorem. To be less abstract, I cast the second form mentioned by the OP into the framework just described, and subsequently interpret the first form as a special case of the second one. Let us consider a regression model with $K$ explanatory variables (the full model) and compare it to the restricted model with $K-J$ variables. WLOG, the last $J$ variables of the full model are not included in the restricted model. The question answered by ANOVA is "Can we explain significantly more variance in the observed variable if we include $J$ additional variables?" This question is answered by comparing the variance contributions of the first $K-J$ variables, the next $J$ variables, and the remainder/unexplained part (the residual sum of squares). This decomposition (obtained e.g. from Cochran's theorem) is used to construct the F-test: one analyses the reduction in the residual sum of squares of the restricted model (which corresponds to the $H_0:$ all coefficients pertaining to the last $J$ variables are zero) achieved by including the additional variables, and obtains the F-statistic $$ \frac{ \frac{RSS_{restr} - RSS_{full}}{J} }{ \frac{RSS_{full}}{N-K} }$$ If the value is large enough, then the variance explained by the additional $J$ variables is significant. Now, the first form mentioned by the OP is interpreted as a special case of the second form. Consider three different groups A, B, and C with means $\mu_A$, $\mu_B$, and $\mu_C$.
The $H_0: \mu_A = \mu_B = \mu_C$ is tested by comparing the variance explained by the regression on an intercept (the restricted model) with the variance explained by the full model containing an intercept, a dummy for group A, and a dummy for group B. The resulting F-statistic $$ \frac{ \frac{RSS_{intercept} - RSS_{dummies}}{2} }{ \frac{RSS_{dummies}}{N-3} }$$ is equivalent to the ANOVA-test on Wikipedia. The denominator is equal to the variation within the groups, the numerator is equal to the variation between the groups. If the variation between the groups is larger than the variation within the groups, one rejects the hypothesis that all means are equal.
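The equivalence of the two F-statistics can be checked numerically. A short sketch (in Python, with simulated groups): the F computed from the nested-model form, $RSS_{intercept}$ versus $RSS_{dummies}$, coincides exactly with the classic between/within ratio.

```python
import numpy as np

rng = np.random.default_rng(1)
groups = [rng.normal(m, 1.0, 30) for m in (0.0, 0.5, 1.0)]
y = np.concatenate(groups)
N, k = len(y), len(groups)

# restricted model: intercept only
rss_restricted = ((y - y.mean()) ** 2).sum()
# full model: one mean per group (intercept + group dummies)
rss_full = sum(((g - g.mean()) ** 2).sum() for g in groups)

F_nested = ((rss_restricted - rss_full) / (k - 1)) / (rss_full / (N - k))

# classic one-way ANOVA form: variation between groups / variation within groups
between = sum(len(g) * (g.mean() - y.mean()) ** 2 for g in groups) / (k - 1)
within = rss_full / (N - k)

print(F_nested, between / within)  # identical
```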
19,878
What is the relationship between ANOVA to compare means of several groups and ANOVA to compare nested models?
If you are doing one-way ANOVA to test if there is a significant difference between groups, then implicitly you are comparing two nested models (so there is only one level of nesting, but it is still nesting). Those two models are:

Model 1: The values $y_{ij}$ (with $i$ the sample number and $j$ the group number) are modeled by the estimated mean $\hat{\beta}_0$ of the entire sample. $$ y_{ij} = \hat{\beta}_0 + \epsilon_{ij} $$

Model 2: The values are modeled by the estimated means of the groups. (If we represent the model by the between-group variations $\hat{\beta}_j$, then model 1 is nested inside model 2.) $$ y_{ij} = \hat{\beta}_0 + \hat{\beta}_j + \epsilon_{ij} $$

An example of comparing means and equivalence to nested models: let's take the sepal length (cm) from the iris data set (if we used all four variables we could actually be doing LDA or MANOVA, as Fisher did in 1936). The observed total and group means are: $$\begin{array} \\ \mu_{total} &= 5.84\\ \mu_{setosa} &= 5.01\\ \mu_{versicolor} &= 5.94\\ \mu_{virginica} &= 6.59\\ \end{array}$$ which in model form is: $$\begin{array}\\ \text{model 1: }& y_{ij} = 5.84 + \epsilon_{ij}\\ \text{model 2: }& y_{ij} = 5.01 + \begin{bmatrix} 0 \\ 0.93 \\ 1.58 \end{bmatrix}_j + \epsilon_{ij}\\ \end{array}$$

The $\sum{\epsilon_{ij}^2} = 102.1683$ in model 1 represents the total sum of squares. The $\sum{\epsilon_{ij}^2} = 38.9562$ in model 2 represents the within-group sum of squares. The ANOVA table will look as follows (it implicitly calculates the difference, namely the between-group sum of squares, which is the 63.212 in the table with 2 degrees of freedom):

> model1 <- lm(Sepal.Length ~ 1 + Species, data=iris)
> model0 <- lm(Sepal.Length ~ 1, data=iris)
> anova(model0, model1)

Analysis of Variance Table

Model 1: Sepal.Length ~ 1
Model 2: Sepal.Length ~ 1 + Species
  Res.Df     RSS Df Sum of Sq      F    Pr(>F)
1    149 102.168
2    147  38.956  2    63.212 119.26 < 2.2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

with $$F = \frac{\frac{RSS_{difference}}{DF_{difference}}}{\frac{RSS_{new}}{DF_{new}}} = \frac{\frac{63.212}{2}}{\frac{38.956}{147}} = 119.26$$

Data set used in the example: sepal length (cm) for three different species of Iris flowers

Iris setosa  Iris versicolor  Iris virginica
5.1  7.0  6.3
4.9  6.4  5.8
4.7  6.9  7.1
4.6  5.5  6.3
5.0  6.5  6.5
5.4  5.7  7.6
4.6  6.3  4.9
5.0  4.9  7.3
4.4  6.6  6.7
4.9  5.2  7.2
5.4  5.0  6.5
4.8  5.9  6.4
4.8  6.0  6.8
4.3  6.1  5.7
5.8  5.6  5.8
5.7  6.7  6.4
5.4  5.6  6.5
5.1  5.8  7.7
5.7  6.2  7.7
5.1  5.6  6.0
5.4  5.9  6.9
5.1  6.1  5.6
4.6  6.3  7.7
5.1  6.1  6.3
4.8  6.4  6.7
5.0  6.6  7.2
5.0  6.8  6.2
5.2  6.7  6.1
5.2  6.0  6.4
4.7  5.7  7.2
4.8  5.5  7.4
5.4  5.5  7.9
5.2  5.8  6.4
5.5  6.0  6.3
4.9  5.4  6.1
5.0  6.0  7.7
5.5  6.7  6.3
4.9  6.3  6.4
4.4  5.6  6.0
5.1  5.5  6.9
5.0  5.5  6.7
4.5  6.1  6.9
4.4  5.8  5.8
5.0  5.0  6.8
5.1  5.6  6.7
4.8  5.7  6.7
5.1  5.7  6.3
4.6  6.2  6.5
5.3  5.1  6.2
5.0  5.7  5.9
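As a quick check of the arithmetic, the F value in the table follows directly from the two residual sums of squares (a trivial Python sketch):

```python
rss_restricted, rss_full = 102.168, 38.956   # residual df 149 and 147
df_diff, df_full = 2, 147

f = ((rss_restricted - rss_full) / df_diff) / (rss_full / df_full)
print(round(f, 2))  # 119.26
```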
19,879
What is the relationship between ANOVA to compare means of several groups and ANOVA to compare nested models?
Using ANOVA to compare several models means testing whether at least one of the coefficients present in the higher-order model (and absent in the lower-order model) is significantly different from zero. That is equivalent to saying that the residual sum of squares of the higher-order model is significantly less than that of the lower-order model. It is about two models, since the basic statistic used is MSM/MSE, where MSM is the mean square for the model, i.e., the reduction in the squared residuals relative to the lower-order model (the lowest-order model being just the mean of the target variable, i.e., an intercept), and MSE is the mean squared error of the higher-order model. (http://www.stat.yale.edu/Courses/1997-98/101/anovareg.htm) You can read through similar topics on CV, like How to use anova for two models comparison?
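A small sketch of such a comparison in Python (simulated data, using `np.polyfit` for the fits): a linear model is tested against a quadratic one through the reduction in the residual sum of squares. Since these data are truly linear, the reduction from the extra coefficient is typically small.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 40)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, x.size)   # the true relation is linear

def rss(degree):
    # residual sum of squares of a polynomial fit of the given degree
    coef = np.polyfit(x, y, degree)
    return ((y - np.polyval(coef, x)) ** 2).sum()

rss_lower, rss_higher = rss(1), rss(2)           # linear vs quadratic
F = ((rss_lower - rss_higher) / 1) / (rss_higher / (x.size - 3))
print(F)   # usually small here, because the quadratic term is unnecessary
```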
19,880
What is the relationship between ANOVA to compare means of several groups and ANOVA to compare nested models?
From what I've learned, you can use ANOVA tables to determine whether your explanatory variables actually have a significant effect on the response variable, and thus fit the appropriate model. For example, suppose you have 2 explanatory variables $x_1$ and $x_2$, but you are not sure whether $x_2$ actually has an effect on $Y$. You can compare ANOVA tables of the two models: $$ y=\beta_0 + \beta_1x_1 + \beta_2x_2 + \epsilon $$ vs $$ y=\beta_0 + \beta_1x_1 + \epsilon $$ You perform a hypothesis test on the extra residual sum of squares, using the F-test, to determine whether the reduced model with just $x_1$ is adequate. In an ANOVA comparison from a project I am working on in R, where I tested two models (one with the variable Days, and one without it), the corresponding p-value from the F-test was 0.13, which is greater than 0.05. Thus, we cannot reject the null hypothesis that Days has no effect on $Y$, so I choose model 1 over model 2.
19,881
Jeffreys prior for binomial likelihood
Let $\phi = g(\theta)$, where $g$ is a monotone function of $\theta$, and let $h$ be the inverse of $g$, so that $\theta = h(\phi)$. We can obtain the Jeffreys prior distribution $p_{J}(\phi)$ in two ways:

Start with the Binomial model (1) \begin{equation} \label{original} p(y | \theta) = \binom{n}{y} \theta^{y} (1-\theta)^{n-y} \end{equation} reparameterize the model with $\phi = g(\theta)$ to get $$ p(y | \phi) = \binom{n}{y} h(\phi)^{y} (1-h(\phi))^{n-y} $$ and obtain the Jeffreys prior distribution $p_{J}(\phi)$ for this model.

Obtain the Jeffreys prior distribution $p_{J}(\theta)$ from the original Binomial model (1) and apply the change-of-variables formula to obtain the induced prior density on $\phi$ $$ p_{J}(\phi) = p_{J}(h(\phi)) \left|\frac{dh}{d\phi}\right|. $$

To be invariant to reparameterisations means that the densities $p_{J}(\phi)$ derived in both ways should be the same. The Jeffreys prior has this characteristic. [Reference: A First Course in Bayesian Statistical Methods by P. Hoff.]

To answer your comment: to obtain the Jeffreys prior distribution $p_{J}(\theta)$ from the likelihood of the Binomial model $$ p(y | \theta) = \binom{n}{y} \theta^{y} (1-\theta)^{n-y} $$ we must calculate the Fisher information, by taking the logarithm $l$ of the likelihood and computing its second derivative: \begin{align*} l := \log(p(y | \theta)) &\propto y \log(\theta) + (n-y) \log(1-\theta) \\ \frac{\partial l }{\partial \theta} &= \frac{y}{\theta} - \frac{n-y}{1-\theta} \\ \frac{\partial^{2} l }{\partial \theta^{2}} &= -\frac{y}{\theta^{2}} - \frac{n-y}{ (1-\theta)^{2} } \end{align*} so the Fisher information is \begin{align*} I(\theta) &= -E\left(\frac{\partial^{2} l }{\partial \theta^{2}} \,\Big|\, \theta\right) \\ &= \frac{n\theta}{\theta^{2}} + \frac{n - n \theta}{(1-\theta)^{2}} \\ &= \frac{n}{\theta ( 1- \theta)} \\ &\propto \theta^{-1} (1-\theta)^{-1}. \end{align*} The Jeffreys prior for this model is therefore \begin{align*} p_{J}(\theta) &= \sqrt{I(\theta)} \\ &\propto \theta^{-1/2} (1-\theta)^{-1/2}, \end{align*} which is the $\texttt{beta}(1/2, 1/2)$ distribution.
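The invariance can be verified numerically for a concrete reparameterisation. Taking $\phi = g(\theta) = \operatorname{logit}(\theta)$ as an example (a Python sketch): for this choice the Fisher information of the reparameterised binomial works out to $I(\phi) = n\,\theta(1-\theta)$, and the prior obtained directly from it matches the one obtained by pushing $p_J(\theta) = \sqrt{n/(\theta(1-\theta))}$ through the change-of-variables formula with $|d\theta/d\phi| = \theta(1-\theta)$.

```python
import math

def sigmoid(phi):
    return 1.0 / (1.0 + math.exp(-phi))

n = 10
for phi in (-1.5, 0.0, 0.8):
    theta = sigmoid(phi)
    # route 1: Jeffreys prior computed directly in the phi parameterisation
    # (for phi = logit(theta), the Fisher information is n * theta * (1 - theta))
    direct = math.sqrt(n * theta * (1 - theta))
    # route 2: Jeffreys prior in theta, pushed through the change of variables
    pushed = math.sqrt(n / (theta * (1 - theta))) * theta * (1 - theta)
    assert math.isclose(direct, pushed)
print("both routes agree (up to the common normalising constant)")
```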
19,882
How can we get a normal distribution as $n \to \infty$ if the range of values of our random variable is bounded?
Here is what you are missing: the asymptotic distribution is not that of $\bar{X}_n$ (the sample mean), but of $\sqrt{n}(\bar{X}_n - \theta)$, where $\theta$ is the mean of $X$.

Let $X_1, X_2, \dots$ be iid random variables such that $a < X_i <b$ and $X_i$ has mean $\theta$ and variance $\sigma^2$. Thus $X_i$ has bounded support. The CLT says that $$\sqrt{n}(\bar{X}_n - \theta) \overset{d}{\to} N(0, \sigma^2), $$ where $\bar{X}_n$ is the sample mean. Now \begin{align*} a < &X_i <b\\ a < & \bar{X}_n <b\\ a-\theta < &\bar{X}_n - \theta < b - \theta\\ \sqrt{n}(a - \theta) < & \sqrt{n}(\bar{X}_n - \theta) < \sqrt{n}(b - \theta).\\ \end{align*} As $n \to \infty$, the lower bound and the upper bound tend to $-\infty$ and $\infty$ respectively, and thus as $n \to \infty$ the support of $\sqrt{n}(\bar{X}_n - \theta)$ is exactly the whole real line. Whenever we use the CLT in practice, we say $\bar{X}_n \approx N(\theta, \sigma^2/n)$, and this will always be an approximation.

EDIT: I think part of the confusion comes from a misinterpretation of the Central Limit Theorem. You are correct that the sampling distribution of the sample mean is $$\bar{X}_n \approx N(\theta, \sigma^2/n). $$ However, the sampling distribution is a finite sample property. Like you said, we want to let $n \to \infty$; once we do that, the $\approx$ sign becomes an exact result. However, if we let $n \to \infty$, we can no longer have an $n$ on the right hand side (since $n$ is now $\infty$). So the following statement is incorrect: $$ \bar{X}_n \overset{d}{\to} N(\theta, \sigma^2/n) \text{ as } n \to \infty.$$ [Here $\overset{d}{\to}$ stands for convergence in distribution.] We want to write the result down accurately, so the $n$ is not on the right hand side. Here we use properties of random variables to get $$ \sqrt{n}(\bar{X}_n - \theta) \overset{d}{\to} N(0, \sigma^2).$$ To see how the algebra works out, look at the answer here.
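A simulation makes the distinction visible (a Python sketch with Bernoulli(0.5) draws, so $a = 0$, $b = 1$, $\theta = 0.5$ and $\sigma = 0.5$): the attainable range of $\sqrt{n}(\bar{X}_n - \theta)$ stretches out like $\pm\sqrt{n}/2$ as $n$ grows, while its standard deviation stays near $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, sigma = 0.5, 0.5            # Bernoulli(0.5): mean 0.5, sd 0.5

for n in (10, 100, 10_000):
    xbar = rng.binomial(n, theta, size=50_000) / n   # 50,000 sample means
    z = np.sqrt(n) * (xbar - theta)
    # the attainable range of z is +/- sqrt(n)/2, widening without bound,
    # while its spread settles at sigma, exactly as the CLT says
    print(n, z.min(), z.max(), z.std())
```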
How can we get a normal distribution as $n \to \infty$ if the range of values of our random variable
Here is what you are missing. The asymptotic distribution is not of $\bar{X}_n$ (the sample mean), but of $\sqrt{n}(\bar{X}_n - \theta)$, where $\theta$ is the mean of $X$. Let $X_1, X_2, \dots$ be ii
How can we get a normal distribution as $n \to \infty$ if the range of values of our random variable is bounded? Here is what you are missing. The asymptotic distribution is not of $\bar{X}_n$ (the sample mean), but of $\sqrt{n}(\bar{X}_n - \theta)$, where $\theta$ is the mean of $X$. Let $X_1, X_2, \dots$ be iid random variables such that $a < X_i <b$ and $X_i$ has mean $\theta$ and variance $\sigma^2$. Thus $X_i$ has bounded support. The CLT says that $$\sqrt{n}(\bar{X}_n - \theta) \overset{d}{\to} N(0, \sigma^2), $$ where $\bar{X}_n$ is the sample mean. Now \begin{align*} a < &X_i <b\\ a < & \bar{X}_n <b\\ a-\theta < &\bar{X}_n - \theta < b - \theta\\ \sqrt{n}(a - \theta) < & \sqrt{n}(\bar{X}_n - \theta) < \sqrt{n}(b - \theta).\\ \end{align*} As $n \to \infty$, the lower bound and the upper bound tend to $-\infty$ and $\infty$ respectively, and thus as $n \to \infty$ the support of $\sqrt{n}(\bar{X}_n - \theta)$ is exactly the whole real line. Whenever we use the CLT in practice, we say $\bar{X}_n \approx N(\theta, \sigma^2/n)$, and this will always be an approximation. EDIT: I think part of the confusion is from the misinterpretation of the Central Limit Theorem. You are correct that the sampling distribution of the sample mean is $$\bar{X}_n \approx N(\theta, \sigma^2/n). $$ However, the sampling distribution is a finite sample property. Like you said, we want to let $n \to \infty$; once we do that the $\approx$ sign will be an exact result. However, if we let $n \to \infty$, we can no longer have an $n$ on the right hand side (since $n$ is now $\infty$). So the following statement is incorrect $$ \bar{X}_n \overset{d}{\to} N(\theta, \sigma^2/n) \text{ as } n \to \infty.$$ [Here $\overset{d}{\to}$ stands for convergence in terms of distribution]. We want to write the result down accurately, so the $n$ is not on the right hand side. 
Here we now use properties of random variables to get $$ \sqrt{n}(\bar{X}_n - \theta) \overset{d}{\to} N(0, \sigma^2)$$ To see how the algebra works out, look at the answer here.
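The claim that the support bound of $\sqrt{n}(\bar{X}_n - \theta)$ grows like $\sqrt{n}$ while its spread stays at $\sigma$ can be checked with a quick simulation (a sketch using only the Python standard library; Bernoulli(0.5), with $\theta = \sigma = 0.5$, is just one convenient bounded example):

```python
import math
import random
import statistics

random.seed(0)

def scaled_deviations(n, reps=2000, theta=0.5):
    """Draw reps samples of size n from Bernoulli(theta), return sqrt(n)*(xbar - theta)."""
    out = []
    for _ in range(reps):
        xbar = sum(random.random() < theta for _ in range(n)) / n
        out.append(math.sqrt(n) * (xbar - theta))
    return out

for n in (16, 100):
    z = scaled_deviations(n)
    # The spread of sqrt(n)(xbar - theta) stabilizes near sigma = 0.5 ...
    print(n, round(statistics.pstdev(z), 3))
    # ... while the support bound sqrt(n)*(b - theta) = 0.5*sqrt(n) keeps growing.
    print("support bound:", 0.5 * math.sqrt(n))
```

Both sample sizes give an empirical standard deviation near 0.5, even though the interval that $\sqrt{n}(\bar{X}_n - \theta)$ can occupy is much wider for the larger $n$.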
19,883
How can we get a normal distribution as $n \to \infty$ if the range of values of our random variable is bounded?
If you're referring to a central limit theorem, note that one proper way to write it out is $\left( \frac{\bar x - \mu} {\sigma} \right) \sqrt n \rightarrow_d N(0,1)$ under the usual conditions ($\mu, \sigma$ being the mean and standard deviation of $x_i$). With this formal definition, you can see right away that the left hand side can take on values outside any fixed finite range, given a large enough $n$. To connect this to the informal idea that "a mean approaches a normal distribution for large $n$", we need to realize that "approaches a normal distribution" means that the CDFs get arbitrarily close to a normal distribution as $n$ gets large. But as $n$ gets large, the standard deviation of this approximate distribution shrinks, so the probability of an extreme tail of the approximating normal also goes to 0. For example, suppose $X_i \sim \text{Bern}(p = 0.5)$. Then you could use the informal approximation to say that $\bar X \dot \sim N\left(p, \frac{p(1-p)}{n}\right)$. So while it is true that for any finite $n$, $P\left(N\left(p, \frac{p(1-p)}{n}\right) < 0\right) >0$ (implying the approximation is clearly never perfect), as $n \rightarrow \infty$, $P\left(N\left(p, \frac{p(1-p)}{n}\right) < 0\right) \rightarrow 0$. So the discrepancy between the actual distribution and the approximate distribution disappears, as is supposed to happen with approximations.
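To make the Bernoulli example concrete, here is a short computation (a sketch using only the standard library) of $P\left(N\left(p, \frac{p(1-p)}{n}\right) < 0\right)$ for growing $n$, with the normal CDF built from `math.erf`:

```python
import math

def norm_cdf(x, mu=0.0, sd=1.0):
    """CDF of N(mu, sd^2), computed from the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))

p = 0.5
for n in (4, 25, 100):
    sd = math.sqrt(p * (1 - p) / n)
    # Probability the approximating normal puts mass below 0 -- the "impossible" region.
    tail = norm_cdf(0.0, mu=p, sd=sd)
    print(n, tail)
```

The printed tail probabilities shrink rapidly toward 0, which is exactly the sense in which the discrepancy between the bounded sample mean and its normal approximation vanishes.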
19,884
Difference is summary statistics: Gini coefficient and standard deviation
Two things to consider: (1) the Gini is scale independent, whereas the SD is in the original units; (2) suppose we have a measure bounded above and below: the SD takes on its maximum value if half the measurements are at each bound, whereas the Gini takes on its maximum when one measurement is at one bound and all the rest are at the other.
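A quick numerical check of the second point (a sketch; the Gini here is computed from the mean absolute difference, and the data are bounded in [0, 1]):

```python
import statistics

def gini(x):
    """Gini coefficient via the mean absolute difference: sum|xi - xj| / (2 n^2 mean)."""
    n = len(x)
    mean = sum(x) / n
    mad = sum(abs(a - b) for a in x for b in x)
    return mad / (2 * n * n * mean)

half_half = [0] * 5 + [1] * 5   # half at each bound: SD is maximal (0.5)
one_high  = [0] * 9 + [1]       # one at a bound, rest at the other: Gini is nearly 1

for data in (half_half, one_high):
    print(statistics.pstdev(data), gini(data))
```

The half-and-half sample maximizes the SD (0.5) but gives a moderate Gini (0.5), while the one-extreme sample gives a small SD (0.3) but a Gini of 0.9, illustrating how the two measures reward different shapes of dispersion.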
19,885
Difference is summary statistics: Gini coefficient and standard deviation
The Gini coefficient is invariant to scale and is bounded; the standard deviation is invariant to a shift, and unbounded, so they are difficult to compare directly. Now you can define a scale-invariant version of the standard deviation by dividing by the mean (the coefficient of variation). However, the Gini index is based on absolute values, while the coefficient of variation is based on squared values, so you can expect the second one to be more influenced by outliers (excessively low or high values). This can be found in Income inequality measures, F De Maio, 2007: This measure of income inequality is calculated by dividing the standard deviation of the income distribution by its mean. More equal income distributions will have smaller standard deviations; as such, the CV will be smaller in more equal societies. Despite being one of the simplest measures of inequality, use of the CV has been fairly limited in the public health literature and it has not featured in research on the income inequality hypothesis. This may be attributed to important limitations of the CV measure: (1) it does not have an upper bound, unlike the Gini coefficient, making interpretation and comparison somewhat more difficult; and (2) the two components of the CV (the mean and the standard deviation) may be exceedingly influenced by anomalously low or high income values. In other words, the CV would not be an appropriate choice of income inequality measure if a study's income data did not approach a normal distribution. So the coefficient of variation is less robust, and still unbounded. To take a further step, you can remove the mean and divide by the absolute deviation instead ($\ell_1(x-m)=\sum |x_n -m|$). Up to a factor, you end up with an $\ell_1/\ell_2$ norm ratio, which can be bounded, since, for an $N$-point vector, $\ell_2(x)\le \ell_1(x)\le \sqrt{N}\ell_2(x)$. Now you have, with the Gini index and the $\ell_1/\ell_2$ norm ratio, two interesting measures of distribution sparsity, scale-invariant and bounded.
They are compared in Comparing Measures of Sparsity, 2009. Tested against different natural sparsity properties (Robin Hood, Scaling, Rising Tide, Cloning, Bill Gates, and Babies), the Gini index stands out as the best. But its shape makes it difficult to use as a loss function, and regularized versions of the $\ell_1/\ell_2$ ratio can be used in this context. So, unless you want to characterize a nearly Gaussian distribution: if you want to measure sparsity, use the Gini index; if you want to promote sparsity among different models, you can try such a norm ratio. Additional reading: The GMD (Gini's Mean Difference): A Superior Measure of Variability for Non-Normal Distributions, Shlomo Yitzhaki, 2002, whose abstract may be of interest: Of all measures of variability, the variance is by far the most popular. This paper argues that Gini's Mean Difference (GMD), an alternative index of variability, shares many properties with the variance, but can be more informative about the properties of distributions that depart from normality.
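A small check (a sketch; the sample vector is arbitrary) that the Gini index and the $\ell_1/\ell_2$ ratio are scale-invariant while the standard deviation is not:

```python
import math
import statistics

def gini(x):
    """Gini coefficient via the mean absolute difference."""
    n = len(x)
    return sum(abs(a - b) for a in x for b in x) / (2 * n * n * (sum(x) / n))

def l1_over_l2(x):
    """l1/l2 norm ratio, a bounded, scale-invariant sparsity measure."""
    return sum(abs(v) for v in x) / math.sqrt(sum(v * v for v in x))

x = [1.0, 2.0, 3.0, 10.0]
y = [100.0 * v for v in x]   # same distribution of shares, different scale

print(gini(x), gini(y))                            # identical
print(l1_over_l2(x), l1_over_l2(y))                # identical
print(statistics.pstdev(x), statistics.pstdev(y))  # differs by the factor 100
```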
19,886
Difference is summary statistics: Gini coefficient and standard deviation
The standard deviation has a scale (say, °K, meters, mmHg, ...). Usually, this influences our judgement of its magnitude, so we tend to prefer the coefficient of variation, or even better (on finite samples) the standard error. The Gini coefficient is constructed from (scaleless) percentage values and thus has no unit of its own (like, e.g., the Mach number). Use the Gini coefficient if you want to compare the equality of shares of something common (shares of 100%). Note that for this application the standard deviation could also be used, so I think your question about comparing advantages and disadvantages only applies to this kind of application. In this case, the standard deviation would also be bounded to $[0,1]$. Both indicators depend on the number of (non-negative) shares, but in opposite directions: the Gini increases as the number increases, the standard deviation decreases.
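The last point can be checked numerically (a sketch for shares summing to 1, in the extreme case where a single share holds everything):

```python
import statistics

def gini(x):
    """Gini coefficient via the mean absolute difference."""
    n = len(x)
    return sum(abs(a - b) for a in x for b in x) / (2 * n * n * (sum(x) / n))

for n in (2, 5, 20):
    shares = [1.0] + [0.0] * (n - 1)   # one share holds the whole 100%
    # Gini -> 1 as n grows; the SD of the shares shrinks toward 0.
    print(n, gini(shares), statistics.pstdev(shares))
```

With 2, 5 and 20 shares, the Gini rises (0.5, 0.8, 0.95) while the standard deviation of the shares falls (0.5, 0.4, ~0.22), illustrating the opposite dependence on the number of shares.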
19,887
Explaining The Variance of a Regression Model
I will try to explain this in simple terms. The regression model focuses on the relationship between a dependent variable and a set of independent variables. The dependent variable is the outcome, which you're trying to predict, using one or more independent variables. Assume you have a model like this: Weight_i = 3.0 + 35 * Height_i + ε Now one of the obvious questions is: how well does this model work? In other words, how accurately does the height of a person predict – or explain – the weight of that person? Before we answer this question, we first need to understand how much fluctuation we observe in people's weights. This is important, because what we are trying to do here is to explain the fluctuation (variation) in weights across different people by using their heights. If people's height is able to explain this variation in weight, then we have a good model. The variance is a good metric to use for this purpose, as it measures how far a set of numbers is spread out (from their mean value). This helps us rephrase our original question: how much variance in a person's weight can be explained by his/her height? This is where the "% variance explained" comes from. By the way, for regression analysis, it equals R-squared, the square of the correlation coefficient. For the model above, we might be able to make a statement like: "Using regression analysis, it was possible to set up a predictive model using the height of a person that explains 60% of the variance in weight." Now, how good is 60%? It's hard to make an objective judgement about this. But if you have other competing models – say, another regression model that uses the age of a person to predict his/her weight – you can compare different models based on how much variance is explained by them and decide which model is better. (There are some caveats to this, see 'Interpreting and Using Regression' -- Christopher H. Achen http://www.sagepub.in/books/Book450/authors)
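As a toy illustration (a sketch with invented height/weight data, using only the standard library), here is a simple regression of weight on height and the fraction of variance its fitted values explain:

```python
heights = [1.50, 1.60, 1.65, 1.70, 1.80, 1.85]
weights = [52.0, 61.0, 60.0, 68.0, 75.0, 80.0]

n = len(heights)
hbar = sum(heights) / n
wbar = sum(weights) / n

# Least-squares slope and intercept for weight = a + b * height.
b = sum((h - hbar) * (w - wbar) for h, w in zip(heights, weights)) / \
    sum((h - hbar) ** 2 for h in heights)
a = wbar - b * hbar
fitted = [a + b * h for h in heights]

ss_tot = sum((w - wbar) ** 2 for w in weights)  # total variation in weight
ss_exp = sum((f - wbar) ** 2 for f in fitted)   # variation captured by the fit
r2 = ss_exp / ss_tot
print(f"height explains {r2:.0%} of the variance in weight")
```

The ratio `r2` is exactly the "% variance explained" discussed above; for competing models on the same outcome, the one with the larger `r2` explains more of the variation.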
19,888
Explaining The Variance of a Regression Model
The authors are referring to the $R^2$ value for the model which is given by the formula $$ \frac{\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2} $$ where $y_i$ is the observed value, $\hat{y}_i$ the least squares fitted value for the $i^\text{th}$ data point and $\bar{y}$ is the overall mean. We sometimes think of $R^2$ as a proportion of variation explained by the model because of the total sum of squares decomposition $$ \sum_{i=1}^{n} (y_i - \bar{y})^2 = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2 + \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 , $$ the latter term being residual error that is not accounted for by the model. The $R^2$ basically tells us how much of the overall variation has been "absorbed into" the fitted values.
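The decomposition can be verified numerically for any least-squares fit with an intercept (a sketch with arbitrary made-up data):

```python
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.8]

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
yhat = [a + b * xi for xi in x]

ss_tot = sum((yi - ybar) ** 2 for yi in y)
ss_reg = sum((fi - ybar) ** 2 for fi in yhat)
ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, yhat))

# The cross term vanishes for least squares, so the identity holds exactly:
print(ss_tot, ss_reg + ss_res)   # equal up to floating-point rounding
print("R^2 =", ss_reg / ss_tot)
```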
19,889
Calculating F-Score, which is the "positive" class, the majority or minority class?
I think you've discovered that the F-score is not a very good way to evaluate a classification scheme. From the Wikipedia page you linked, there is a simplification of the formula for the F-score: $$ {F1} = \frac {2 {TP}} {2 {TP} + {FP} + {FN}} $$ where $TP,FP,FN$ are numbers of true positives, false positives, and false negatives, respectively. You will note that the number of true negative cases (equivalently, the total number of cases) is not considered at all in the formula. Thus you can have the same F-score whether you have a very high or a very low number of true negatives in your classification results. If you take your case 1, "# of predicted healthy patients over # of actual healthy patients", the "true negatives" are those who were correctly classified as having cancer yet that success in identifying patients with cancer doesn't enter into the F-score. If you take case 2, "# of predicted cancer patients over # of actual cancer patients," then the number of patients correctly classified as not having cancer is ignored. Neither seems like a good choice in this situation. If you look at any of my favorite easily accessible references on classification and regression, An Introduction to Statistical Learning, Elements of Statistical Learning, or Frank Harrell's Regression Modeling Strategies and associated course notes, you won't find much if any discussion of F-scores. What you will often find is a caution against evaluating classification procedures based simply on $TP,FP,FN,$ and $TN$ values. You are much better off focusing on an accurate assessment of likely disease status with an approach like logistic regression, which in this case would relate the probability of having cancer to the values of the predictors that you included in your classification scheme. Then, as Harrell says on page 258 of Regression Modeling Strategies, 2nd edition: If you make a classification rule from a probability model, you are being presumptuous. 
Suppose that a model is developed to assist physicians in diagnosing a disease. Physicians sometimes profess to desiring a binary decision model, but if given a probability they will rightfully apply different thresholds for treating different patients or for ordering other diagnostic tests. A good model of the probability of being a member of a class, in this case of having cancer, is thus much more useful than any particular classification scheme.
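The point above about true negatives can be made concrete (a sketch with invented counts): since TN never appears in the simplified formula, two classification results that differ only in the number of true negatives get the same F1 score.

```python
def f1(tp, fp, fn):
    """F1 from the simplified formula 2TP / (2TP + FP + FN); TN never appears."""
    return 2 * tp / (2 * tp + fp + fn)

# Whether the same TP/FP/FN counts come with TN = 30 or TN = 99890,
# the F1 score is identical -- TN is not even an argument.
score = f1(tp=80, fp=20, fn=10)
print(score)
```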
19,890
Calculating F-Score, which is the "positive" class, the majority or minority class?
Precision is what fraction actually has cancer out of the total number that you predict positive: precision = (number of true positives) / (number of positives predicted by your classifier). Recall (or true positive rate) is what fraction of all actual positives your classifier accurately identified: true positive rate = true positives / (true positives + false negatives). Coming to the F-score, it is a measure of the trade-off between precision and recall. Let's assume you set the threshold for predicting a positive very high, say predicting positive if h(x) >= 0.8 and negative if h(x) < 0.8: you have high precision but low recall. You have a precision of (15)/(15+20) = 42.8% (15 is the number of true positives: 20 actually cancerous minus the 5 that are wrongly predicted). If you want to have a high recall [or true positive rate], it means you want to avoid missing positive cases, so you predict a positive more easily: predict positive if h(x) >= 0.3, else predict negative. Basically, having a high recall means you are avoiding a lot of false negatives. Here your true positive rate is (15 / (15+5)) = 75%. Having a high recall for a cancer classifier can be a good thing; you absolutely need to avoid false negatives here. But of course this comes at the cost of precision. The F-score measures this trade-off between precise prediction and avoiding false negatives. Its definition could in principle be arbitrary; suppose it were defined as the plain average of precision and true positive rate. That would not be a very good F-score measure, because you could have a huge recall value and very low precision (e.g., predicting all cases positive) and still end up with the same F-score as when your precision and recall are well balanced. So define the F-score as: 2 * (Precision * Recall) / (Precision + Recall). Why? If you have very low precision or recall or both, your F-score falls, and you'll know that something is wrong.
I would advise you to calculate the F-score, precision, and recall for the case in which your classifier predicts all negatives, and then with the actual algorithm. If it is a skewed set, you might want more training data. Also note that it is a good idea to measure the F-score on the cross-validation set. It is also known as the F1-score. http://arxiv.org/ftp/arxiv/papers/1503/1503.06410.pdf https://www.google.co.in/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=a+probabilistic+theory+of+precision+recall+and+f+score
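A sketch of why the harmonic-mean form beats a plain average: a classifier that predicts every case positive on imbalanced data (10% prevalence is an invented example here) gets perfect recall but a poor F1.

```python
def f1(precision, recall):
    """Harmonic-mean F-score: 2PR / (P + R)."""
    return 2 * precision * recall / (precision + recall)

# "Predict everything positive" on 1000 cases with 10% prevalence:
tp, fp, fn = 100, 900, 0
precision = tp / (tp + fp)   # 0.10
recall = tp / (tp + fn)      # 1.00

print("plain average:", (precision + recall) / 2)  # looks deceptively decent
print("F1 (harmonic):", f1(precision, recall))     # much lower, flags the problem
```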
Calculating F-Score, which is the "positive" class, the majority or minority class?
Precision is what fraction actually has cancer out of the total number that you predict positive, precision = ( number of true positives ) / (number of positives predicted by your classifier) Recall
Calculating F-Score, which is the "positive" class, the majority or minority class? Precision is what fraction actually has cancer out of the total number that you predict positive, precision = ( number of true positives ) / (number of positives predicted by your classifier) Recall (or true positive rate) is, what fraction of all predicted by your classifier were accurately identified. true positive rate = true positives / ( True positive + False negative) Coming to F-score, it is a measure of trade-off between precision and recall. Lets assume you set the thresh-hold for predicting a positive as very high. Say predicting positive if h(x) >= 0.8, and negative if h(x) < 0.8 you have huge precision but low recall. You have a precision of (15)/(15+20) = 42.8% (15 is the number of true positives 20 total cancerous, subtracted 5 which are wrongly predicted) If you want to have a high recall [or true positive rate], it means you want to avoid missing positive cases, so you predict a positive more easily. Predict positive if h(x) >= 0.3 else predict negative. Basically having a high recall means you are avoiding a lot of false negatives. Here your true positive rate is ( 15 / (15+5) )= 75% Having a high recall for cancer classifiers can be a good thing, you totally need to avoid false negatives here. But of course this comes at the cost of precision. F-score measures this trade-off between precise prediction vs avoiding false negatives. Its definition can be arbitrary depending upon your classifier, lets assume it is defined as the average between precision and true positive rate. This is not a very good F-score measure because you can have huge recall value, and very low precision [eg predicting all cases positive] and you will still end up with an F-score which is same that when your precision and recall are well balanced. Define F score as : 2 * (Precision * Recall) / (Precision + Recall) Why? 
If you have very low precision or recall or both, your F-score falls; and you'll know that something is wrong. I would advise you to calculate F-score, precision and recall, for the case in which your classifier predicts all negatives, and then with the actual algorithm. If it is a skewed set you might want more training data. Also note that it is a good idea to measure F score on the cross-validation set. It is also known as F1-score. http://arxiv.org/ftp/arxiv/papers/1503/1503.06410.pdf https://www.google.co.in/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=a+probabilistic+theory+of+precision+recall+and+f+score
19,891
Calculating F-Score, which is the "positive" class, the majority or minority class?
In addition to the former answers, note that the F1 score can also be solved as being: $$F_1 = \frac{2}{\frac{1}{P}+\frac{1}{R}}$$ where P = precision and R = recall = true positive rate (TPR). This offers the advantage of referencing P and R a single time each when solving for the F1 score.
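A quick numerical sanity check (with made-up P and R values) that this harmonic-mean form agrees with the usual 2PR/(P+R) expression:

```python
def f1_harmonic(p, r):
    # F1 as the harmonic mean of precision and recall
    return 2.0 / (1.0 / p + 1.0 / r)

def f1_standard(p, r):
    # the more common 2PR / (P + R) form
    return 2.0 * p * r / (p + r)

# The two forms are algebraically identical, so they agree for any P, R > 0.
for p, r in [(0.9, 0.1), (0.5, 0.5), (0.75, 0.6)]:
    assert abs(f1_harmonic(p, r) - f1_standard(p, r)) < 1e-12
```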
19,892
Plotting results having only mean and standard deviation
Standard deviation on bar graphs can be illustrated by including error bars in them. The visualization (source) below is an example of such a visualization: From a discussion in the comments below, showing only the error whiskers, instead of the full bars-plus-whiskers setup, seems a better way to visualize such data. So, the graph can look somewhat like this:
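As a sketch, a bar chart with standard-deviation whiskers for the two groups in the question (means 37 and 21, SDs 8 and 6) could be drawn with matplotlib, assuming it is available:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt

labels = ["Control", "Experimental"]
means = [37, 21]   # group means from the question
sds = [8, 6]       # standard deviations, drawn as whiskers via yerr

fig, ax = plt.subplots()
ax.bar(labels, means, yerr=sds, capsize=10,
       color="lightgray", edgecolor="black")
ax.set_ylabel("Recall score")
fig.savefig("recall_error_bars.png")
```

Dropping the bars and keeping only the whisker markers (e.g. `ax.errorbar` with `fmt="o"`) gives the whiskers-only variant mentioned above.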
19,893
Plotting results having only mean and standard deviation
I'd suggest a dot plot: Although there is still some room for improvement (perhaps dimming the edges of the big rectangle surrounding the data), almost all of the ink is being used to display information.
19,894
Plotting results having only mean and standard deviation
Perhaps the best way to visualise the kind of data that gives rise to those sorts of results is to simulate a data set of a few hundred or a few thousand data points where one variable (control) has mean 37 and standard deviation 8 while the other (experimental) has mean 21 and standard deviation 6. The simulation is simple enough in a spreadsheet or your favourite stats package. You can then graph the two distributions to get an impression of the extent to which the two sets of recall scores vary. With a simulated data set you can also easily construct summary graphs like box plots or histograms with error bars.
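For example, a minimal numpy sketch of the suggested simulation (normality is an assumption here; the question only gives means and SDs):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1000  # a few hundred to a few thousand points, as suggested

control = rng.normal(loc=37, scale=8, size=n)        # mean 37, sd 8
experimental = rng.normal(loc=21, scale=6, size=n)   # mean 21, sd 6

# The sample summaries recover the published values approximately.
for name, x in [("control", control), ("experimental", experimental)]:
    print(f"{name}: mean={x.mean():.1f} sd={x.std(ddof=1):.1f}")
```

The resulting arrays can then be fed straight into histograms or box plots.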
19,895
Does a correlation matrix of two variables always have the same eigenvectors?
Algebraically, the correlation matrix for two variables looks like this: $$\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}.$$ Following the definition of an eigenvector, it is easy to verify that $(1, 1)$ and $(-1, 1)$ are the eigenvectors irrespective of $\rho$, with eigenvalues $1+\rho$ and $1-\rho$. For example: $$\begin{pmatrix} 1 & \rho \\ \rho & 1 \end{pmatrix}\begin{pmatrix}1\\1\end{pmatrix}=(\rho+1)\begin{pmatrix}1\\1\end{pmatrix}.$$ Normalizing these two eigenvectors to unit length yields $(\sqrt{2}/2, \sqrt{2}/2)$ and $(-\sqrt{2}/2, \sqrt{2}/2)$, as you observed. Geometrically, if the variables are standardized, then the scatter plot will always be stretched along the main diagonal (which will be the 1st PC) if $\rho>0$, whatever the value of $\rho$ is. Regarding TLS, you might want to check my answer in this thread: How to perform orthogonal regression (total least squares) via PCA? As should be pretty obvious from the figure above, if both your $x$ and $y$ are standardized, then the TLS line is always a diagonal, so it hardly makes sense to perform TLS at all! However, if the variables are not standardized, then you should be doing PCA on their covariance matrix (not on their correlation matrix), and the regression line can have any slope. For a discussion of the case of three dimensions, see here: https://stats.stackexchange.com/a/19317.
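This is easy to verify numerically for any $\rho$; a small numpy check:

```python
import numpy as np

# The diagonal unit vectors are eigenvectors of every 2x2 correlation matrix,
# with eigenvalues 1 + rho and 1 - rho respectively.
v = np.array([1.0, 1.0]) / np.sqrt(2)    # first eigenvector
w = np.array([-1.0, 1.0]) / np.sqrt(2)   # second eigenvector

for rho in (-0.9, -0.3, 0.2, 0.7):
    R = np.array([[1.0, rho], [rho, 1.0]])
    assert np.allclose(R @ v, (1 + rho) * v)
    assert np.allclose(R @ w, (1 - rho) * w)
```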
19,896
Does a correlation matrix of two variables always have the same eigenvectors?
As your first eigenvector is $(\sqrt{2}/2, \sqrt{2}/2)$, the other eigenvector is uniquely (we're in 2D) up to a factor of $\pm 1$ the vector $(\sqrt{2}/2, -\sqrt{2}/2)$. So you get your diagonalizing orthogonal matrix as $$\frac{\sqrt{2}}{2}\left[ \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right]$$ Now we can reconstruct the covariance* matrix to have the shape $$\left[ \begin{array}{cc} a+b & a-b \\ a-b & a+b \end{array} \right] $$ where $a$ and $b$ are half the eigenvalues of this matrix. I would suggest looking closely at your model or the origin of the data. Then you might find a reason why your data may be distributed as $X_1=X_a + X_b$ and $X_2 = X_a - X_b$, where $Var(X_a)=a$ and $Var(X_b)=b$ and $X_a$ and $X_b$ are independent. If your data follow a continuous multivariate distribution, it is almost sure that your correlation matrix follows from this sum/difference relation. If the data follow a discrete distribution, it is still very likely that the model $X_1=X_a + X_b$ and $X_2 = X_a - X_b$ describes your data properly. In this case, you don't need a PCA. But it is generally better to infer such relations from sound insight into the nature of the data than by estimation procedures like PCA. *Say correlation matrix, if $a+b=1$.
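A simulation sketch of this sum/difference model (the variances $a$ and $b$ here are made-up values with $a+b=1$, so the result is a correlation matrix):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 0.75, 0.25          # assumed Var(X_a), Var(X_b); note a + b = 1
n = 200_000

xa = rng.normal(scale=np.sqrt(a), size=n)
xb = rng.normal(scale=np.sqrt(b), size=n)
x1, x2 = xa + xb, xa - xb  # the sum/difference construction

cov = np.cov(x1, x2)
# Expected covariance matrix: [[a+b, a-b], [a-b, a+b]]
assert np.allclose(cov, [[a + b, a - b], [a - b, a + b]], atol=0.02)
```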
19,897
Discrete uniform random variable(?) taking all rational values in a closed interval
This "random variable" is similar to the idea of having a flat prior on the entire real line (your second example). To show that there can be no random variable $X$ such that $P(X=q)=c$ for all $q\in \mathbb{Q}\cap[0,1]$ and constant $c$, we use the $\sigma$-additive property of random variables: the countable union of disjoint events has probability equal to the (possibly infinite) sum of probabilities of the events. So, if $c=0$, the probability $P(X\in\mathbb{Q}\cap[0,1])=0$, as it is the sum of countably many zeros. If $c>0$, then $P(X\in\mathbb{Q}\cap[0,1])=\infty$. However a proper random variable taking values in $\mathbb{Q}\cap[0,1]$ must be such that $P(X\in\mathbb{Q}\cap[0,1])=1$, so there is no such random variable. The key here, as you may already be aware, is that if the space is composed of finitely many points, then we can use $c>0$ and have no problem with the sum, and if the space has uncountably many points you can have $c=0$ and the $\sigma$-additivity isn't violated when integrating over the space because it is a statement about countable things. However, you're going to run into problems when you want a uniform distribution over a countably infinite set. In the context of a Bayesian prior, though, you can of course just say that $P(X=q)\propto 1$ for all $q\in \mathbb{Q}\cap[0,1]$ if you're willing to use the improper prior.
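A toy numeric illustration of the $\sigma$-additivity obstruction, summing a constant mass $c$ over the first $n$ terms of some fixed enumeration of the rationals (purely a sketch):

```python
from fractions import Fraction

def partial_mass(c, n):
    # Total probability assigned to the first n rationals
    # when every point gets the same mass c.
    return c * n

# c = 0: the partial sums stay at zero forever, so P(X in Q ∩ [0,1]) = 0.
assert partial_mass(Fraction(0), 10**9) == 0

# c > 0: the partial sums grow without bound, so the total mass diverges
# instead of converging to 1.
masses = [partial_mass(Fraction(1, 1000), n) for n in (10**3, 10**6, 10**9)]
assert masses[0] < masses[1] < masses[2]
print(masses)  # [1, 1000, 1000000]
```

No choice of $c$ makes the limit equal to 1, which is the whole argument in miniature.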
19,898
Discrete uniform random variable(?) taking all rational values in a closed interval
A more positive fact is the following. If you drop the requirement that the probability measure be countably additive, and only require, instead, that it be finitely additive (just for the sake of this question), then for the rational numbers the answer is "yes". The rational numbers are an additive group since one can add two rational numbers, there is a neutral element, zero, and any $z\in\mathbb{Q}$ has an additive inverse $-z\in\mathbb{Q}$. Now, one can equip the rational numbers with the discrete topology so that they are a discrete group. (This is important because in other contexts it is more convenient not to do so and put another topology on them.) Viewed as a discrete group, they are even a countable discrete group because there are only countably many rational numbers. Also, they are an abelian group because $z +y =y+z$ for any pair of rational numbers. Now, the rational numbers, viewed as a countable discrete group, are an amenable group. See here for the definition of an amenable discrete group. Here it is shown that every countable abelian discrete group is amenable. In particular, this applies to the group of rational numbers. Therefore, by the very definition of an amenable discrete group, there exists a finitely additive probability measure $\mu$ on the rational numbers that is translation invariant, meaning that $\mu(z + A) = \mu(A)$ for any subset $A\subset\mathbb{Q}$ and any rational number $z\in\mathbb{Q}$. This property encompasses the intuitive way of defining "uniformity". $\mu$ necessarily vanishes on all finite subsets: $\mu(\{z\})=0$ for all $z\in\mathbb{Q}$. If you seek a random variable instead of a probability measure, then just consider the identity function on the probability space $(\mathbb{Q}, \mu)$. This gives such a required random variable. Therefore, if you relax your definition of probability measure a bit, you end up with a positive answer for the rational numbers. 
Perhaps, the existence of $\mu$ seems a bit counter-intuitive. One can get a better idea of $\mu$ by taking into account that a direct consequence of the translation-invariance is that the measure of all rational numbers whose floor is even, is one half; also, the measure of those with odd floor is one half, and so on. That measure $\mu$ that we just showed to exist, also necessarily vanishes on all bounded subsets (as one can show with a similar argument), in particular on the unit interval. Therefore, $\mu$ does not immediately give an answer for the rational numbers in the unit interval. One would have thought that the answer is easier to give for the rational numbers in the unit interval instead of all rational numbers, but it seems to be the other way around. (However, it also seems that one can cook up a probability measure on the rational numbers in the unit interval with similar properties, but the answer would then require a more precise definition of "uniformity" - maybe something along the lines of "translation-invariant whenever translation does not lead outside the unit interval".) UPDATE: You immediately obtain a measure on the unit interval rationals that is uniform in that sense, by considering the push-forward measure of the one on the rationals, that we constructed, along the map from the rationals to the unit interval rationals that maps each rational to its fractional part. Therefore, after relaxing the requirement to finite additivity, you obtain such measures in both cases you mentioned.
19,899
Question about a normal equation proof
One is tempted to be glib and point out that because the quadratic form $$\beta \to (Y - X\beta)'(Y - X\beta)$$ is positive semi-definite, there exists a $\beta$ for which it is minimum and that minimum is found (by setting the gradient with respect to $\beta$ to zero) with the normal equations $$X'X(Y - X\beta) = 0,$$ whence there must be at least one solution regardless of the rank of $X'X$. However, this argument does not seem to be in the spirit of the question, which appears to be a purely algebraic statement. Perhaps it's of interest to understand why such an equation must have a solution and under precisely what conditions. So let's start over and pretend we don't know the connection with least squares. It all comes down to the meaning of $X'$, the transpose of $X$. This will turn out to be a matter of a simple definition, appropriate notation, and the concept of a nondegenerate sesquilinear form. Recall that $X$ is the "design matrix" of $n$ rows (one for each observation) and $p$ columns (one for each variable, including a constant if any). It therefore represents a linear transformation from the vector space $\mathbb V = \mathbb{R}^p$ to $\mathbb W = \mathbb{R}^n$. The transpose of $X$, thought of as a linear transformation, is a linear transformation of the dual spaces $X': \mathbb{W}^* \to \mathbb{V}^*$. In order to make sense of a composition like $X'X$, then, it is necessary to identify $\mathbb{W}^*$ with $\mathbb{W}$. That's what the usual inner product (sum of squares) on $\mathbb{W}$ does. There are actually two inner products $g_V$ and $g_W$ defined on $\mathbb V$ and $\mathbb W$ respectively. These are real-valued bilinear symmetric functions that are non-degenerate. The latter means that $$g_W(u, v) = 0\ \forall u\in \mathbb W \implies v = 0,$$ with analogous statements for $g_V$. Geometrically, these inner products enable us to measure length and angle. 
The condition $g(u,v)=0$ can be thought of as $u$ being "perpendicular" to or "orthogonal to" $v$. ("Perpendicular" is a geometric term while "orthogonal" is the more general algebraic term. See Michael J. Wichura (2006), The Coordinate-Free Approach to Linear Models at p. 7.) Nondegeneracy means that only the zero vector is perpendicular to the entire vector space. (This generality means that the results obtained here will apply to the generalized least squares setting, for which $g_W$ is not necessarily the usual inner product given as the sum of products of components, but is some arbitrary nondegenerate form. We could dispense with $g_V$ altogether, defining $X':\mathbb W\to\mathbb V^*$, but I expect many readers to be unfamiliar or uncomfortable with dual spaces and so choose to avoid this formulation.) With these inner products in hand, the transpose of any linear transformation $X: \mathbb V \to \mathbb W$ is defined by $X': \mathbb W \to \mathbb V$ via $$g_V(X'(w), v) = g_W(w, X(v))$$ for all $w\in \mathbb W$ and $v\in \mathbb V$. That there actually exists a vector $X'(w) \in \mathbb V$ with this property can be established by writing things out with bases for $\mathbb V$ and $\mathbb W$; that this vector is unique follows from the non-degeneracy of the inner products. For if $v_1$ and $v_2$ are two vectors for which $g_V(v_1,v)=g_V(v_2,v)$ for all $v\in\mathbb V$, then (from the linearity in the first component) $g_V(v_1-v_2,v)=0$ for all $v$ implying $v_1-v_2=0$. When $\mathbb U \subset \mathbb W,$ write $\mathbb{U}^\perp$ for the set of all vectors perpendicular to every vector in $\mathbb U$. Also as a matter of notation, write $X(\mathbb V)$ for the image of $X$, defined to be the set $\{X(v) | v \in \mathbb V\} \subset \mathbb W$. A fundamental relationship between $X$ and its transpose $X'$ is $$X'(w) = 0 \iff w \in X(\mathbb V)^\perp.$$ That is, $w$ is in the kernel of $X'$ if and only if $w$ is perpendicular to the image of $X$. 
This assertion says two things: If $X'(w) = 0$, then $g_W(w, X(v)) = g_V(X'(w),v) = g_V(0,v)=0$ for all $v\in\mathbb V$, which merely means $w$ is perpendicular to $X(V)$. If $w$ is perpendicular to $X(\mathbb V)$, that only means $g_W(w, X(v)) = 0$ for all $v\in\mathbb V$, but this is equivalent to $g_V(X'(w), v) = 0$ and nondegeneracy of $g_V$ implies $X'(w)=0$. We're actually done now. The analysis has shown that $\mathbb W$ decomposes as a direct product $\mathbb W = X(\mathbb V) \oplus X(\mathbb V)^\perp$. That is, we can take any arbitrary $y \in \mathbb W$ and write it uniquely as $y = y_0 + y^\perp$ with $y_0\in X(\mathbb V)$ and $y^\perp \in X(\mathbb V)^\perp$. That means $y_0$ is of the form $X(\beta)$ for at least one $\beta\in\mathbb V$. Notice, then, that $$y - X\beta = (y_0 + y^\perp) - y_0 = y^\perp \in X(\mathbb V)^\perp$$ The fundamental relationship says that is the same as the left hand side being in the kernel of $X'$: $$X'(y - X\beta) = 0,$$ whence $\beta$ solves the normal equations $X'X\beta = X'y.$ We are now in a position to give a brief geometric answer to the question (along with some revealing comments): the normal equations have a solution because any $n$-vector $y\in\mathbb W$ decomposes (uniquely) as the sum of a vector $y_0$ in the range of $X$ and another vector $y^\perp$ perpendicular to $y_0$ and $y_0$ is the image of at least one $p$-vector $\beta\in\mathbb V$. The dimension of the image $X(\mathbb V)$ (its rank) is the dimension of the identifiable parameters. The dimension of the kernel of $X$ counts the nontrivial linear relations among the parameters. All parameters are identifiable when $X$ is a one-to-one map from $\mathbb V$ to its image in $\mathbb W$. It is ultimately useful to dispense with the space $\mathbb V$ altogether and work entirely with the subspace $\mathbb U = X(\mathbb V)\subset\mathbb W$, the "column space" of the matrix $X$. The normal equations amount to orthogonal projection onto $\mathbb U$. 
That frees us conceptually from being tied to any particular parameterization of the model and shows that least-squares models have an intrinsic dimension independent of how they happen to be parameterized. One interesting outcome of this abstract algebraic demonstration is that we can solve the normal equations in arbitrary vector spaces. The result holds, say, for complex spaces, for spaces over finite fields (where minimizing a sum of squares makes little sense), and even over infinite-dimensional spaces that support suitable sesquilinear forms.
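A quick numerical illustration (a numpy sketch with a made-up rank-deficient design) that the normal equations stay consistent even when $X'X$ is singular:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
X = np.column_stack([X, X[:, 0] + X[:, 1]])  # duplicate relation: rank 3, not 4
y = rng.normal(size=10)

# X'X is singular, yet least squares still produces a solution beta.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

assert np.linalg.matrix_rank(X.T @ X) == 3
assert np.allclose(X.T @ X @ beta, X.T @ y)  # the normal equations hold
```

Here `beta` is not unique (the kernel of $X$ is nontrivial, reflecting the unidentifiable parameterization), but a solution always exists, exactly as the decomposition argument guarantees.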
Question about a normal equation proof
One is tempted to be glib and point out that because the quadratic form $$\beta \to (Y - X\beta)'(Y - X\beta)$$ is positive semi-definite, there exists a $\beta$ for which it is minimum and that minim
Question about a normal equation proof One is tempted to be glib and point out that because the quadratic form $$\beta \to (Y - X\beta)'(Y - X\beta)$$ is positive semi-definite, there exists a $\beta$ for which it is minimum and that minimum is found (by setting the gradient with respect to $\beta$ to zero) with the normal equations $$X'X(Y - X\beta) = 0,$$ whence there must be at least one solution regardless of the rank of $X'X$. However, this argument does not seem to be in the spirit of the question, which appears to be a purely algebraic statement. Perhaps it's of interest to understand why such an equation must have a solution and under precisely what conditions. So let's start over and pretend we don't know the connection with least squares. It all comes down to the meaning of $X'$, the transpose of $X$. This will turn out to be a matter of a simple definition, appropriate notation, and the concept of a nondegenerate sesquilinear form. Recall that $X$ is the "design matrix" of $n$ rows (one for each observation) and $p$ columns (one for each variable, including a constant if any). It therefore represents a linear transformation from the vector space $\mathbb V = \mathbb{R}^p$ to $\mathbb W = \mathbb{R}^n$. The transpose of $X$, thought of as a linear transformation, is a linear transformation of the dual spaces $X': \mathbb{W}^* \to \mathbb{V}^*$. In order to make sense of a composition like $X'X$, then, it is necessary to identify $\mathbb{W}^*$ with $\mathbb{W}$. That's what the usual inner product (sum of squares) on $\mathbb{W}$ does. There are actually two inner products $g_V$ and $g_W$ defined on $\mathbb V$ and $\mathbb W$ respectively. These are real-valued bilinear symmetric functions that are non-degenerate. The latter means that $$g_W(u, v) = 0\ \forall u\in \mathbb W \implies v = 0,$$ with analogous statements for $g_V$. Geometrically, these inner products enable us to measure length and angle. 
The condition $g(u,v)=0$ can be thought of as $u$ being "perpendicular" to, or "orthogonal to," $v$. ("Perpendicular" is a geometric term while "orthogonal" is the more general algebraic term; see Michael J. Wichura (2006), The Coordinate-Free Approach to Linear Models, at p. 7.) Nondegeneracy means that only the zero vector is perpendicular to the entire vector space.

(This generality means that the results obtained here will apply to the generalized least squares setting, for which $g_W$ is not necessarily the usual inner product given as the sum of products of components, but some arbitrary nondegenerate form. We could dispense with $g_V$ altogether, defining $X':\mathbb W\to\mathbb V^*$, but I expect many readers to be unfamiliar or uncomfortable with dual spaces and so choose to avoid this formulation.)

With these inner products in hand, the transpose of any linear transformation $X: \mathbb V \to \mathbb W$ is defined by $X': \mathbb W \to \mathbb V$ via $$g_V(X'(w), v) = g_W(w, X(v))$$ for all $w\in \mathbb W$ and $v\in \mathbb V$. That there actually exists a vector $X'(w) \in \mathbb V$ with this property can be established by writing things out in bases for $\mathbb V$ and $\mathbb W$; that this vector is unique follows from the non-degeneracy of the inner products. For if $v_1$ and $v_2$ are two vectors for which $g_V(v_1,v)=g_V(v_2,v)$ for all $v\in\mathbb V$, then (from linearity in the first component) $g_V(v_1-v_2,v)=0$ for all $v$, implying $v_1-v_2=0$.

When $\mathbb U \subset \mathbb W,$ write $\mathbb{U}^\perp$ for the set of all vectors perpendicular to every vector in $\mathbb U$. Also as a matter of notation, write $X(\mathbb V)$ for the image of $X$, defined to be the set $\{X(v) \mid v \in \mathbb V\} \subset \mathbb W$. A fundamental relationship between $X$ and its transpose $X'$ is $$X'(w) = 0 \iff w \in X(\mathbb V)^\perp.$$ That is, $w$ is in the kernel of $X'$ if and only if $w$ is perpendicular to the image of $X$.
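Both the defining property of the transpose and the fundamental relationship can be checked numerically on a small example. A minimal sketch with NumPy, using an arbitrary made-up matrix and vectors and the standard dot products for $g_V$ and $g_W$:

```python
import numpy as np

# A small made-up design matrix X: a linear map from R^3 (V) to R^5 (W).
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))
w = rng.standard_normal(5)
v = rng.standard_normal(3)

# Defining property of the transpose: g_V(X'(w), v) = g_W(w, X(v)).
lhs = (X.T @ w) @ v
rhs = w @ (X @ v)
assert np.isclose(lhs, rhs)

# Fundamental relationship: X'(w) = 0 iff w is perpendicular to X(V).
# Take the component of w perpendicular to the column space of X ...
Q, _ = np.linalg.qr(X)          # orthonormal basis of X(V)
w_perp = w - Q @ (Q.T @ w)
# ... and check that X' annihilates it.
assert np.allclose(X.T @ w_perp, 0)
```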
This assertion says two things:

If $X'(w) = 0$, then $g_W(w, X(v)) = g_V(X'(w),v) = g_V(0,v)=0$ for all $v\in\mathbb V$, which merely means $w$ is perpendicular to $X(\mathbb V)$.

If $w$ is perpendicular to $X(\mathbb V)$, that only means $g_W(w, X(v)) = 0$ for all $v\in\mathbb V$; but this is equivalent to $g_V(X'(w), v) = 0$, and nondegeneracy of $g_V$ implies $X'(w)=0$.

We're actually done now. The analysis has shown that $\mathbb W$ decomposes as a direct sum $\mathbb W = X(\mathbb V) \oplus X(\mathbb V)^\perp$. That is, we can take any $y \in \mathbb W$ and write it uniquely as $y = y_0 + y^\perp$ with $y_0\in X(\mathbb V)$ and $y^\perp \in X(\mathbb V)^\perp$. That means $y_0$ is of the form $X(\beta)$ for at least one $\beta\in\mathbb V$. Notice, then, that $$y - X\beta = (y_0 + y^\perp) - y_0 = y^\perp \in X(\mathbb V)^\perp.$$ The fundamental relationship says this is the same as the left hand side being in the kernel of $X'$: $$X'(y - X\beta) = 0,$$ whence $\beta$ solves the normal equations $X'X\beta = X'y.$

We are now in a position to give a brief geometric answer to the question (along with some revealing comments): the normal equations have a solution because any $n$-vector $y\in\mathbb W$ decomposes (uniquely) as the sum of a vector $y_0$ in the range of $X$ and another vector $y^\perp$ perpendicular to $y_0$, and $y_0$ is the image of at least one $p$-vector $\beta\in\mathbb V$.

The dimension of the image $X(\mathbb V)$ (its rank) is the number of identifiable parameters. The dimension of the kernel of $X$ counts the nontrivial linear relations among the parameters. All parameters are identifiable when $X$ is a one-to-one map from $\mathbb V$ to its image in $\mathbb W$.

It is ultimately useful to dispense with the space $\mathbb V$ altogether and work entirely with the subspace $\mathbb U = X(\mathbb V)\subset\mathbb W$, the "column space" of the matrix $X$. The normal equations amount to orthogonal projection onto $\mathbb U$.
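The point that a solution exists regardless of the rank of $X'X$ can be illustrated numerically. A minimal sketch with NumPy and toy data, where the third column is deliberately collinear so that $X'X$ is singular:

```python
import numpy as np

# Toy data: X has three columns but rank 2 (third column = col0 + col1),
# so X'X is singular, yet the normal equations remain consistent.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 2))
X = np.column_stack([A, A[:, 0] + A[:, 1]])
y = rng.standard_normal(6)

# np.linalg.lstsq returns a least-squares solution even for deficient rank.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# The residual y - X beta lies in X(V)^perp: X'(y - X beta) = 0 ...
assert np.allclose(X.T @ (y - X @ beta), 0)
# ... hence beta solves the normal equations X'X beta = X'y.
assert np.allclose(X.T @ X @ beta, X.T @ y)

# The solution is not unique here: adding any kernel vector of X works too,
# reflecting the nontrivial linear relation among the parameters.
k = np.array([1.0, 1.0, -1.0])   # X @ k = 0 by construction
assert np.allclose(X @ k, 0)
assert np.allclose(X.T @ X @ (beta + k), X.T @ y)
```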
That frees us conceptually from being tied to any particular parameterization of the model and shows that least-squares models have an intrinsic dimension independent of how they happen to be parameterized.

One interesting outcome of this abstract algebraic demonstration is that we can solve the normal equations in arbitrary vector spaces. The result holds, say, for complex spaces, for spaces over finite fields (where minimizing a sum of squares makes little sense), and even over infinite-dimensional spaces that support suitable sesquilinear forms.
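The complex case, for instance, can be sketched numerically: the sesquilinear form is the Hermitian inner product and the role of the transpose is played by the conjugate transpose $X^H$. A toy NumPy example with made-up complex data:

```python
import numpy as np

# Toy complex design matrix and response.
rng = np.random.default_rng(2)
X = rng.standard_normal((5, 2)) + 1j * rng.standard_normal((5, 2))
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# lstsq handles complex input; the analogue of X' is the conjugate
# transpose X^H with respect to the Hermitian form <u, v> = u^H v.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Residual orthogonal to the column space under the Hermitian form ...
assert np.allclose(X.conj().T @ (y - X @ beta), 0)
# ... so the normal equations X^H X beta = X^H y hold.
assert np.allclose(X.conj().T @ X @ beta, X.conj().T @ y)
```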
Question about a normal equation proof
This is an old question, but I wanted to expand on a slightly more direct approach for getting to the same conclusion as in whuber's answer. It follows from the same observation about what $X^{T}$ actually means: it is the matrix representing $X$ acting on the left (or contravariant) term in the inner product.

Rather than work with the equation $$ X^{T}X\beta = X^{T}Y, $$ we multiply on the left by the transpose of a vector $\gamma$ to get $$ \gamma^{T}X^{T}X\beta = \gamma^{T} X^{T} Y. $$ Note that the first equation is true if and only if the second equation holds for all $\gamma$, so we are thinking of $\gamma$ as an arbitrary vector in $\mathbb{R}^{p}$.

Now, using the observation, we can rewrite this as $$ \left< X \gamma, X \beta \right> = \left< X \gamma, Y \right>, $$ where the brackets indicate the inner product (in this case, just the standard dot product). This is clearer to interpret, since we can think of $X \gamma$ as the predictions or fitted values of the model with parameter vector $\gamma$. It becomes even clearer if we subtract the left term from both sides and rearrange, to get $$ \left< X \gamma, Y - X\beta \right> = 0. $$

This is finally really interpretable: it says that the residuals under the model given by $\beta$ are orthogonal to the predictions under $\gamma$, for all $\gamma$. This is equivalent to saying that the residuals under $\beta$ are orthogonal to the entire column space of $X$.

As such, we reach the same conclusion: any solution $\beta$ must decompose $Y$ into a component in the column space of $X$ and a component in its orthogonal complement. In other words, to convince ourselves that a solution exists (and that the fitted values $X\beta$ are unique, even when $\beta$ itself is not), we need only show that there is a unique orthogonal projection onto the column space of $X$.
Now, this is surprisingly nontrivial, and whuber's answer discusses it nicely, but I wanted to add a direct, if crude, way of showing it through the Gram-Schmidt process. The GS process lets you construct an orthonormal basis of a space iteratively from the inner product: at each step you have part of a basis, and given a new vector not in the span of that partial basis, it shows you how to 'project out' the non-orthogonal parts and then normalize.

Here you would first find an orthonormal basis for the column space of $X$, and then extend it to an orthonormal basis for all of $\mathbb{R}^{n}$. The vectors added after constructing the basis of the column space form a basis for the orthogonal complement. From there it is direct to define the orthogonal projection by how it acts on these basis vectors (it fixes those in the column space and sends the others to zero). Applying this projection to $Y$ then gives $\hat{Y} = X\beta$, and solving the system $(X^{T}X)\beta = X^{T}\hat{Y}$ gives you $\beta$.
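This recipe can be sketched in a few lines of NumPy. The Gram-Schmidt routine below is the naive classical version (fine for illustration, not numerically robust), and the matrix and data are made up:

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Orthonormalize a list of vectors, dropping near-dependent ones."""
    basis = []
    for v in vectors:
        w = v - sum((q @ v) * q for q in basis)   # project out current basis
        n = np.linalg.norm(w)
        if n > tol:
            basis.append(w / n)
    return basis

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 3))   # full column rank with probability 1
y = rng.standard_normal(6)

# Orthonormal basis of the column space of X via Gram-Schmidt.
Q = np.column_stack(gram_schmidt(list(X.T)))

# Orthogonal projection of y onto the column space: y_hat = Q Q^T y.
y_hat = Q @ (Q.T @ y)

# Solving (X^T X) beta = X^T y_hat recovers coefficients with X beta = y_hat,
# and the residual y - X beta is orthogonal to the column space.
beta = np.linalg.solve(X.T @ X, X.T @ y_hat)
assert np.allclose(X @ beta, y_hat)
assert np.allclose(X.T @ (y - X @ beta), 0)
```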