When has a Bayesian approach been critical to addressing a theory, hypothesis or problem?
In response to my own question, an article was just published in the journal Ecology titled "Density estimation in tiger populations: combining information for strong inference" by Gopalaswamy et al. They used a Bayesian model that combined information from tiger studies with different methodologies to improve the accuracy of their estimate of the density of tigers in a nature preserve. On their own, the two separate studies indicated that there were ~12 +/- 1.95 tigers/100km2 (posterior mean +/- SD) or 6.7 +/- 2.37 tigers/100km2. The combined Bayesian model provided an estimate of 8.5 +/- 1.95 tigers/100km2.
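The paper's hierarchical model shares information at the likelihood level and is more involved than this, but as a rough illustration of why combining studies tightens a posterior, here is simple inverse-variance (precision) weighting of two independent normal estimates, using the numbers quoted above. The pooled values will not match the paper's 8.5 +/- 1.95 exactly; the point is only that the combined uncertainty is smaller than either input's.

```python
import math

def pool_normal(estimates):
    """Inverse-variance weighting of independent normal (mean, sd) estimates.

    Returns the pooled mean and pooled standard deviation."""
    precisions = [1.0 / sd**2 for _, sd in estimates]
    total_prec = sum(precisions)
    mean = sum(m * p for (m, _), p in zip(estimates, precisions)) / total_prec
    return mean, math.sqrt(1.0 / total_prec)

# The two single-study estimates quoted above (tigers / 100 km^2).
mean, sd = pool_normal([(12.0, 1.95), (6.7, 2.37)])
print(f"pooled: {mean:.2f} +/- {sd:.2f}")
```

Note that the pooled SD (about 1.5 here) is below both 1.95 and 2.37, which is the qualitative effect the combined tiger model also shows.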
Common weak learners for Adaboost
The most basic and most common weak learner is a decision stump, which is basically a single-level decision tree. That is, if the points in your dataset are N-dimensional (i.e., have N features), a decision stump is a threshold on a single dimension: below the threshold is one class, above it the other. Pretty much any classifier could be used as a weak classifier; I have seen papers where people use more complicated (multi-level) decision trees or support vector machines as weak learners.
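A decision stump of this kind takes only a few lines to sketch (a minimal illustration, not a reference implementation; in AdaBoost it would be refit with updated sample weights each round):

```python
import numpy as np

class DecisionStump:
    """A one-level decision tree: threshold one feature, predict +/-1."""

    def fit(self, X, y, sample_weight):
        n, d = X.shape
        best_err = np.inf
        for j in range(d):                      # try each feature
            for thr in np.unique(X[:, j]):      # and each observed threshold
                for sign in (1, -1):            # and both orientations
                    pred = sign * np.where(X[:, j] >= thr, 1, -1)
                    err = sample_weight[pred != y].sum()  # weighted error
                    if err < best_err:
                        best_err, self.j, self.thr, self.sign = err, j, thr, sign
        return self

    def predict(self, X):
        return self.sign * np.where(X[:, self.j] >= self.thr, 1, -1)

# A stump can separate data whose classes split on one dimension.
X = np.array([[0.1, 5.0], [0.2, -3.0], [0.9, 4.0], [0.8, -2.0]])
y = np.array([-1, -1, 1, 1])       # class depends only on feature 0
w = np.full(len(y), 0.25)          # uniform weights, as at AdaBoost's first round
stump = DecisionStump().fit(X, y, w)
print(stump.j, stump.predict(X))   # feature 0 is chosen; predictions match y
```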
Is this an unbiased estimator for standard deviation of normal distribution?
The proposed estimator is not unbiased, at least if we indeed know the true mean, $\mu$, and if we are dealing with a normal sample as the title says, where the distribution is symmetric and unimodal and the mean equals the median. Informally, knowing the true mean makes the mean absolute deviation equal in value to the probability limit of the same expression with $\bar X$ instead of $\mu$. We have $$ y=\frac{1}{n} \sum_{i=1}^n |X_i - \mu| = \frac{1}{n}\left[\sum_{X_i\geq \mu} (X_i - \mu)+\sum_{X_j< \mu} (\mu - X_j)\right]$$ Denote $m_1$ the count for the first sum, and $m_2$ the count for the second sum (both are random variables). Then, using also Wald's equation, $$E(y) = \frac{1}{n}\Big[E(m_1)E(X\mid X\geq \mu) - E(m_1)\mu + E(m_2)\mu - E(m_2)E(X\mid X\leq \mu)\Big]$$ Since we have the true mean, which is equal to the median, we get $E(m_1)=E(m_2) = n/2$, so the two middle terms cancel, while substituting for the expected values of the counts, taking common factors and simplifying, we arrive at $$E(y) = \frac{1}{2}\Big[E(X\mid X\geq \mu) - E(X\mid X\leq \mu)\Big]$$ For the truncated normal distribution, these expected values are $$E(X\mid X\geq \mu) = \mu + \sigma \frac{\phi(0)}{1-\Phi(0)} = \mu +\sigma\sqrt{2/\pi}$$ $$E(X\mid X\leq \mu) = \mu - \sigma \frac{\phi(0)}{\Phi(0)} = \mu -\sigma\sqrt{2/\pi}$$ So $$E(y)=\frac{1}{2}\Big[\mu +\sigma\sqrt{2/\pi} - \mu +\sigma\sqrt{2/\pi}\Big] = \sigma\sqrt{2/\pi}$$ So the correction factor in $\tilde \sigma$ should be $\sqrt{\pi/2}$ only, for it to be unbiased. I note that since $X_i - \bar X = (1-1/n)X_i - (1/n)\sum_{j\neq i}X_j$ one suspects that we should examine the case where we do not know $\mu$ and we use the sample mean instead, which I may find the time to do later.
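A quick Monte Carlo check (the parameter values are arbitrary) agrees with the derivation: the mean absolute deviation about the known mean concentrates at $\sigma\sqrt{2/\pi}$, so multiplying by $\sqrt{\pi/2}$ removes the bias.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, reps = 5.0, 2.0, 50, 20000

# Mean absolute deviation about the *known* mean, one value per sample.
X = rng.normal(mu, sigma, size=(reps, n))
y = np.abs(X - mu).mean(axis=1)

print(y.mean())                       # ~ sigma * sqrt(2/pi) ~= 1.596
print(np.sqrt(np.pi / 2) * y.mean())  # ~ sigma = 2 after the correction factor
```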
When is a covariance `degenerate in some direction`?
Yes. This figure shows the situation for $p=2$ where the span of $\{\mathbf{x}_1, \ldots, \mathbf{x}_K\}$ is one-dimensional, shown as a red line through the origin, and the orthogonal space--the kernel of the covariance matrix--also is one-dimensional, shown as a dashed gray line through the origin. Data are shown as points on the red line. Evidently, the data exhibit no variation in directions parallel to the orthogonal space. When $\mathbf{z}|y$ is a random variable, a similar picture and the same interpretation hold. Now, any realization of $\mathbf{z}|y$ must lie on the red line. No two realizations can differ by any nonzero element of the orthogonal space.
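A small numerical sketch of the same picture (assuming numpy; the particular direction vector is arbitrary): a rank-1 covariance has a zero eigenvalue, and every realization lies on the "red line", with no component in the kernel direction.

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.array([3.0, 4.0])        # direction of the red line (p = 2)
Sigma = np.outer(v, v)          # rank-1 covariance: degenerate off the line

eigvals, eigvecs = np.linalg.eigh(Sigma)
print(eigvals)                  # one zero eigenvalue -> singular covariance

# Every draw lies on the line spanned by v: no variation in the kernel direction.
Z = rng.multivariate_normal([0.0, 0.0], Sigma, size=1000)
kernel_dir = eigvecs[:, 0]      # eigenvector with eigenvalue 0
print(np.abs(Z @ kernel_dir).max())  # ~ 0 up to floating-point noise
```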
When is a covariance `degenerate in some direction`?
In this case the covariance matrix is singular, i.e. it is not of full rank. The directions associated with the zero eigenvalues are degenerate.
Estimating prediction error
If you have done cross-validation very carefully (there are many ways to make mistakes that can lead to overly optimistic results) then, if your new data is drawn from the same population as the training data, the cross-validation result should be about right. In technical terms cross-validation should return an unbiased estimate of the error, so even though the test result may vary from expectations, it should be just as likely to be better as it is to be worse. For a good guide to cross-validation, see chapter 7 of Elements of Statistical Learning. A common mistake in cross-validation is failing to ensure that any choices you make in developing the model, such as tuning parameters, deciding which variables are useful, and even which algorithm to use, are evaluated via cross-validation. However, the key assumption is that the test set is from the same population as the training set. In many real-world applications of statistical models, the system being modelled is likely to change over time, even if only in subtle ways such as changes in how samples are taken. Any change will degrade the performance of the model. For this reason, in practical terms the cross-validation error on a static training set might be optimistic compared with how the system performs in the real world. The details will depend entirely on the nature of the data, so there is no single quantitative answer to your question.
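As a concrete illustration of the "choices must happen inside cross-validation" point (a sketch using scikit-learn, which the answer itself does not mention): selecting features on the full dataset before cross-validating leaks information from the held-out folds and produces exactly the overly optimistic results warned about above, even when the features are pure noise.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1000))   # pure noise features
y = rng.integers(0, 2, size=100)   # labels independent of X

# Right: feature selection runs inside each fold, so the score stays near chance.
pipe = make_pipeline(SelectKBest(f_classif, k=10), LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5).mean()

# Wrong: selecting on the full data first leaks the test labels into training.
X_leaky = SelectKBest(f_classif, k=10).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=5).mean()

print(f"honest CV accuracy: {honest:.2f}  (near chance)")
print(f"leaky  CV accuracy: {leaky:.2f}  (overly optimistic)")
```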
Estimating prediction error
Let me add to Bogdanovist's excellent answer that cross-validation is unbiased for what it measures: the predictive abilities of "surrogate" models with respect to the data at hand ("drawn from the same population"). The often-stated pessimistic bias arises in situations where the surrogate models are on average worse than the real model, usually because of the smaller training sample size (even if the drawn-from-the-same-population assumption is true). This paper stresses the "drawn from the same population" problems, particularly drift over time: Esbensen and Geladi, "Principles of Proper Validation: use and abuse of re-sampling for validation", Journal of Chemometrics, Volume 24, Issue 3-4, pages 168-187, March-April 2010.
Are there references for plotting binary time series?
Kedem and Fokianos in their book "Regression Models for Time Series Analysis" have a whole chapter (Chapter 2) on binary time series models with many examples of plotted series and periodograms. In response to whuber's request I am adding some description of the plots in the chapter. Fig 2.3, page 63: this figure is in the section on logistic autoregression. A logistic autoregression with a sinusoidal component is given by the formula $\text{logit}(\pi_t(\beta)) = \beta_1 + \beta_2 \cos(2\pi t/12) + \beta_3 Y_{t-1}$. They plot $Y_t$ with the time series plotted below it, where the particular function is $\text{logit}(\pi_t(\beta)) = 0.3 + 0.75 \cos(2\pi t/12) + Y_{t-1}$. Fig 2.4, page 62, is similar but for a different series. Fig 2.5 shows the sample autocorrelation for 4 such logistic autoregressions with sinusoidal components. Fig 2.9, page 70, plots the level of precipitation at Mount Washington, NH over a 107-day period together with the binary time series $Y_t$ (rain, yes or no). Fig 2.14 (looking at logistic models for sleep data, $Y_t$ = awake vs. asleep) provides the cumulative periodogram for raw residuals and for Pearson residuals from the model. Fig 2.15 shows the observed series for the logistic model of the sleep data, with the model's prediction of the series below it.
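The particular model in Fig 2.3 is easy to simulate; here is a sketch (the function name and defaults are mine, not from the book) that generates a binary series of that form:

```python
import numpy as np

def simulate_logistic_ar(T, beta=(0.3, 0.75, 1.0), period=12, seed=0):
    """Simulate Y_t in {0,1} with
    logit(pi_t) = b1 + b2*cos(2*pi*t/period) + b3*Y_{t-1}."""
    rng = np.random.default_rng(seed)
    b1, b2, b3 = beta
    Y = np.zeros(T, dtype=int)
    for t in range(1, T):
        eta = b1 + b2 * np.cos(2 * np.pi * t / period) + b3 * Y[t - 1]
        p = 1.0 / (1.0 + np.exp(-eta))   # inverse logit
        Y[t] = rng.random() < p
    return Y

Y = simulate_logistic_ar(200)
print(Y[:24])   # a 0/1 series with a 12-step seasonal rhythm
print(Y.mean()) # above 1/2 here, since the intercept and AR coefficient are positive
```

Plotting `Y` as a line, with the fitted probabilities below it, mimics the layout of the figures described above.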
Are there references for plotting binary time series?
As always, it depends on the purpose of the plot: what is it intended to communicate to whom? In any event, cumulative plots tend to be interesting and informative. The NY Times has lately been producing many nice examples. Some examples of similar plots appear on the "Edward Tufte forum". This combination of "sparklines" (cumulative plots without labeled axes), tabular data, and the raw time series provides a lot of information in one place: Note the subtleties of design, such as positioning the table rows and the righthand plots (just binary time series plots) at heights corresponding to the final standings; and using consistent colors across the sparklines, the table, and the time series plots. In looking these over, I would be tempted to redesign them slightly: either scale one or both plots by time, rather than game index, to introduce chronological information; or--perhaps better--put gaps between the individual series of games. (Baseball is usually played in series of three or four games between pairs of teams. This structure can be important in understanding the data.) Even better: at the right, color-code each series according to the opposing team (or perhaps just the strength of the opposing team) rather than using monochromatic series. These recommendations follow principles enunciated by Tufte in his first book on the topic, The Visual Display of Quantitative Information, in which he advocates increasing the data-ink ratio through erasing (here, putting gaps in the data to show the series) and modifying the graphical modes of representation (here, replacing an uninformative single color by changes of color).
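For anyone who wants to try the cumulative-plot idea, here is a minimal sketch (hypothetical win/loss data; matplotlib assumed) of the "games above .500" series that such sparklines track:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")              # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(4)
wins = rng.random(162) < 0.55      # hypothetical 162-game season: True = win

# The cumulative plot: games above .500 after each game played.
above_500 = np.cumsum(np.where(wins, 1, -1))

fig, ax = plt.subplots(figsize=(8, 2))
ax.plot(above_500, lw=1)
ax.axhline(0, color="gray", lw=0.5)
ax.set_xlabel("game")
ax.set_ylabel("games above .500")
fig.savefig("season.png", dpi=100)
```

Scaling the x-axis by date rather than game index, and breaking the line between series of games, would implement the redesign suggested above.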
Are there references for plotting binary time series?
Just to follow up on this issue, I didn't find any other resources on plotting binary series and wound up going with the original line-plots that I dismissed initially. (The plots of the observed series in the book M. Chernick refers to also plot the original data just as lines, which I discovered after making my choice.) Tufte's tick plots require a bit more space to be legible, and the benefit of being able to count wins/losses in a row seems small: accurate counting is difficult, and if the length of the largest winning or losing streak is important it could be presented on its own, just as he does for the min/max in more traditional sparklines. Here's the result so far: the last column gives wins and losses for games played, plus predictions from a fixed effects model for remaining games. The other columns are kind of beside the point, but there's a description available here for anyone interested. I'm happy to hear other suggestions, but anything extensive might warrant opening another question. And let me know if adding this follow-up answer is inappropriate.
Online reference for review of introductory statistics material
Take a look at these documents: http://onlinestatbook.com/Online_Statistics_Education.pdf http://www.micquality.com/downloads/ref-primer.pdf And at this site, for more materials: http://onlinestatbook.com/ Hope this helps.
Online reference for review of introductory statistics material
These are not PDFs, but there are quite a few good videos at the Khan Academy.
Online reference for review of introductory statistics material
There are so many good possibilities and your vague description makes it difficult to narrow it down to just a couple. But here is a short list.
1. Humourous but also clear and accurate: Gonick, "The Cartoon Guide to Statistics", 1993. http://www.amazon.com/Cartoon-Guide-Statistics-Larry-Gonick/dp/0062731025/ref=sr_1_1?s=books&ie=UTF8&qid=1341601837&sr=1-1&keywords=the+cartoon+guide+to+statistics
2. Clearly written, in the style of David Moore: "The Basic Practice of Statistics", 5th Edition, 2010. http://www.amazon.com/Basic-Practice-Statistics-David-Moore/dp/1429201215/ref=sr_1_2?s=books&ie=UTF8&qid=1341601954&sr=1-2&keywords=the+basic+practice+of+statistics
3. Good college-level book: Hogg and Tanis, "Probability and Statistical Inference", 8th Edition, 2009. Now published by Prentice-Hall; it was published by Macmillan when I studied out of it in the 1970s. The authors were Hogg and Craig then and the title was different too: I had it as "Introduction to Mathematical Statistics", 3rd Edition, 1970. http://www.amazon.com/Probability-Statistical-Inference-Robert-Hogg/dp/0321584759/ref=sr_1_1?s=books&ie=UTF8&qid=1341601657&sr=1-1&keywords=hogg+tanis
4. The classic: Mood, Graybill and Boes, "Introduction to the Theory of Statistics", 1974. http://www.amazon.com/Introduction-Theory-Statistics-3rd-Edition/dp/0070854653/ref=la_B002880BCE_1_1?ie=UTF8&qid=1341601600&sr=1-1
5. Very modern first-year undergraduate introductory text, and one of my favorites because it includes resampling methods: Chihara and Hesterberg, "Mathematical Statistics with Resampling and R", 2011. http://www.amazon.com/Mathematical-Statistics-Resampling-Laura-Chihara/dp/1118029852/ref=sr_1_1?s=books&ie=UTF8&qid=1341602206&sr=1-1&keywords=Chihara+and+Hesterberg
6. The only good one that is concise "pocketbook" size: Silvey, "Statistical Inference", 1975. http://www.amazon.com/Statistical-Inference-Monographs-Statistics-Probability/dp/0412138204/ref=sr_1_1?s=books&ie=UTF8&qid=1341602312&sr=1-1&keywords=silvey+s+d
Online reference for review of introductory statistics material
I think these two free PDFs are very good for this purpose. This one is a more "conceptual" introduction, good for a refresher: http://www.stat-help.com/intro.pdf And this one is a more "complete" introduction: http://www.openintro.org/stat/down/OpenIntroStatFirst.pdf
Online reference for review of introductory statistics material
David Colquhoun's book "Lectures on Biostatistics" covers most of the material that you mention and is available as a free pdf from the author's website http://www.dcscience.net/ It is slightly idiosyncratic in parts (which will not surprise any who know the author) and quite entertaining (the test for pureness of heart is wonderful). You can't go wrong.
Online reference for review of introductory statistics material
The NIST/SEMATECH e-Handbook of Statistical Methods, also known as the Engineering Statistics Handbook is a great, and authoritative, reference. It is continuously supported by the National Institute of Standards and Technology (U.S. tax dollars at work!) It is available in pdf at http://www.itl.nist.gov/div898/handbook/toolaids/pff/index.htm The organization is a bit unusual, but there is a good search function.
What is a stationary Gaussian field?
For time series, stationarity means that the joint distribution of variables in the sequence depends only on their separation in time, not on the actual time. This implies that the mean and variance are constant, and that the covariance between the variables at two time points depends only on the difference in time between the points. With spatial data it means that the joint distribution of a set of points on a grid depends only on how they are separated. So if you shift a set of points $k$ units in the x direction and $m$ units in the y direction, their joint distribution will not change.
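A quick numerical illustration of this (a sketch of my own in Python, not part of the original answer): draw many realizations of a zero-mean Gaussian vector whose covariance depends only on the separation |i - j|, then check that the empirical covariance at a fixed lag is the same at different positions on the grid.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 20, 200_000

# A stationary covariance function: depends only on the separation |i - j|
t = np.arange(n)
cov = np.exp(-np.abs(t[:, None] - t[None, :]))

# Many realizations of the zero-mean Gaussian field on this grid
x = rng.multivariate_normal(np.zeros(n), cov, size=reps)

# Empirical covariance at lag 3, measured at two different positions;
# stationarity says both should be close to exp(-3)
c_a = np.cov(x[:, 0], x[:, 3])[0, 1]
c_b = np.cov(x[:, 10], x[:, 13])[0, 1]
print(c_a, c_b)
```

The two estimates agree (up to Monte Carlo error) even though they are computed at different locations, which is exactly the shift-invariance described above.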
Understanding the linear mixed effects model equation and fitting a random effects model with weights in R
I'll answer each of your questions one at a time.

May I ask if the following model is a random-intercept model? 1. There is a common beta for all N individuals. 2. Each group has a different within group regression line (same slope but different intercepts). 3. The regression line within each group crosses the "cloud" consisting of the group members. And the individual residuals scatter around the regression line, within each group.

Conditions (2) and (3) are in conflict with each other. If each group has the same slope but different intercepts, then the within-group regression line will not pass through the cloud of group observations unless the truth is that every group has the exact same slope. You would need both a random intercept and a random slope in every predictor to guarantee that condition (3) is satisfied.

However, how do I explicitly write out the equation?

The familiar formula for the random effects model, as you pointed out, is $$ {\bf Y}_i = {\bf X}_i {\boldsymbol \beta} + {\bf Z}_{i} {\bf b}_i + {\boldsymbol \varepsilon}_{i}, $$ where $${\bf Y}_i = \left( \begin{array}{c} y_{i1} \\ y_{i2} \\ \vdots \\ y_{i n_{i}} \end{array} \right) $$ is the vector of responses in group $i$, $n_{i}$ is the number of observations in group $i$ ($n_i$ can be $1$ for some groups, but not all groups), and $${\bf X}_i = \left( \begin{array}{ccccc} 1 & x_{i11} & x_{i12} & \cdots & x_{i1p} \\ 1 & x_{i21} & x_{i22} & \cdots & x_{i2p} \\ \vdots & \vdots & \vdots & \vdots & \vdots \\ 1 & x_{i n_{i} 1} & x_{i n_{i} 2} & \cdots & x_{i n_{i} p} \\ \end{array} \right) $$ is the matrix of the $p$ predictor variables for each observation in group $i$, with corresponding $p$-length fixed effects regression coefficient vector ${\boldsymbol \beta}$.
$${\bf b}_i = \left( \begin{array}{c} b_{i1} \\ b_{i2} \\ \vdots \\ b_{im} \end{array} \right) $$ is the $m$-length vector of random effects, $${\bf Z}_i = \left( \begin{array}{cccc} z_{i11} & z_{i12} & \cdots & z_{i1m} \\ z_{i21} & z_{i22} & \cdots & z_{i2m} \\ \vdots & \vdots & \vdots & \vdots \\ z_{in_{i} 1} & z_{i n_{i} 2} & \cdots & z_{i n_{i} m} \\ \end{array} \right) $$ is the random effects design matrix for group $i$, and $$ {\boldsymbol \varepsilon}_i = \left( \begin{array}{c} \varepsilon_{i1} \\ \varepsilon_{i2} \\ \vdots \\ \varepsilon_{i n_i} \end{array} \right) $$ is the vector of errors.

So, for example, if you just had a random intercept and a random slope in the first predictor, then $${\bf b}_i = \left( \begin{array}{c} b_{i0} \\ b_{i1} \end{array} \right), \qquad {\bf Z}_i = \left( \begin{array}{cc} 1 & x_{i11} \\ 1 & x_{i21} \\ \vdots & \vdots \\ 1 & x_{i n_{i} 1} \\ \end{array} \right), $$ where $b_{i0}$ is the random intercept and $b_{i1}$ is the random slope. If you only had a random intercept and nothing else, then ${\bf b}_i$ would be a scalar and ${\bf Z}_{i}$ would just be a vector of $1$s.

In your particular example, you have a categorical predictor (say with $k$ levels), which, for modeling, is effectively like having $k-1$ dummy variables that are $1$ if the variable takes on that value and $0$ otherwise. So your ${\bf X}_{i}$ matrix will have $k+2$ columns: one column of $1$s, two columns with the values of the quantitative predictors, and $k-1$ columns that are $0/1$ indicators of which level the categorical predictor takes. If you are going to include random slopes in every predictor, then ${\bf Z}_{i}$ will be exactly the same as ${\bf X}_{i}$. As mentioned above, if you only want a random intercept in the model, then ${\bf Z}_{i}$ is just a column of $1$s.

how do I set up the weights in LME in R?

This depends on what you mean by "weights". Usually this means that certain observations are weighted (e.g.
inverse-probability weights to correct for unequal probability sampling) and the criterion function that is being optimized to produce your estimates (probably the likelihood function in this case) is a weighted sum. For example, if the groups were sampled with unequal probability, the function $$ {\bf L} = \sum_{i=1}^{K} w_i L_i, $$ where $L_i$ is the group-$i$ log-likelihood, may be optimized instead of the unweighted sum. In terms of point estimation, this is effectively the same as replicating group $i$ in the data set a number of times equal to $w_i$, which can be accomplished by doing exactly that: expand the data set based on the weights and fit the model to this expanded data set. I'm not sure if there is functionality in lme to do this automatically, so you may need to do this yourself.

Regarding weighting within a group (i.e. at the individual level), I do not recommend this in the context of random effects modeling. To see why, consider the fact that by weighting within a cluster, you're effectively creating exact copies of individuals within the cluster. Therefore, there will be pockets within the group that are perfectly correlated with each other, so the estimates of the random effect variances will be biased upward, since the model will think that members of a group are more correlated than they are.

Comments related to the Edit: The only way adding a random intercept would make it so that each group's regression line passes through the "cloud" is if each group's "cloud" was just a vertical shift of every other group's; that is, the slopes are exactly the same but the intercepts are different. More generally, the linear least squares line requires both a slope and an intercept. If you don't let the slopes vary, the random intercept will go wherever is "best" (i.e. maximizes the posterior mode, if you're trying to estimate random effects), so I don't think it could possibly appear all the way on one side or another of the "cloud".
The central point within each group would presumably be the group's sample means, although this would require some more thought, as would the comments made in (1), since we aren't fitting this model by least squares (although it is closely related to least squares, since it involves the Gaussian likelihood). The only way I can see to visualize this is to plot the points for a given group, along with the fitted line within the group, the same way you would with ordinary regression. You can extract posterior estimates of the random effects using the ranef function with the argument being the lmer model fit, and you can extract the fixed effects in the usual way.
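To make the matrix notation above concrete, here is a small sketch (my own illustration with made-up data, in Python rather than R) that builds $X_i$ and $Z_i$ for one group with two quantitative predictors, a random intercept, and a random slope on the first predictor:

```python
import numpy as np

rng = np.random.default_rng(1)
n_i = 5                          # observations in group i
x = rng.normal(size=(n_i, 2))    # two quantitative predictors

# Fixed-effects design matrix: a column of 1s plus the predictors
X_i = np.column_stack([np.ones(n_i), x])        # shape (n_i, 3)

# Random intercept plus a random slope on the first predictor only,
# so Z_i is just the first two columns of X_i
Z_i = X_i[:, :2]                                # shape (n_i, 2)

# Simulate one group's responses: Y_i = X_i beta + Z_i b_i + eps_i
beta = np.array([1.0, 2.0, -0.5])                               # fixed effects
b_i = rng.multivariate_normal([0.0, 0.0], np.diag([0.5, 0.1]))  # this group's random effects
y_i = X_i @ beta + Z_i @ b_i + rng.normal(scale=0.3, size=n_i)
print(X_i.shape, Z_i.shape, y_i.shape)  # (5, 3) (5, 2) (5,)
```

With a random slope in every predictor, `Z_i` would simply equal `X_i`; with only a random intercept, it would be the single column of ones, matching the description above.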
Comparing two classifiers on separate pairs of train and test datasets
First of all, before testing you need to define a couple of things: do all classification errors have the same "cost"? Then you choose a single measurement parameter; I usually choose MCC for binary data and Cohen's kappa for k-category classification. Next, it is very important to define the minimal difference that is significant in your domain. When I say "significant" I don't mean statistically significant (i.e. p<1e-9), but practically significant. Most of the time an improvement of 0.01% in classification accuracy means nothing, even if it has a nice p-value.

Now you can start comparing the methods. What are you testing: the predictor sets, the model-building process, or the final classifiers? In the first two cases I would generate many bootstrap models using the training set data and test them on bootstrap samples from the testing set data. In the last case I would use the final models to predict bootstrap samples from the testing set data. If you have a reliable way to estimate noise in the data parameters (predictors), you may also add this to both training and testing data. The end result will be two histograms of the measurement values, one for each classifier. You may now test these histograms for mean value, dispersion, etc.

Two last notes: (1) I'm not aware of a way to account for model complexity when dealing with classifiers, so better apparent performance may be a result of overfitting. (2) Having two separate data sets is a good thing, but as I understand from your question, you used both sets many times, which means that the testing set information "leaks" into your models. Make sure you have another, validation data set that will be used only once, after you have made all the decisions.

Clarifications following notes: In your notes you said that "previous papers usually present such kind [i.e. 1%] of improvements". I'm not familiar with this field, but the fact that people publish 1% improvements in papers does not mean the improvement is significant :-) Regarding the t-test, I think it would be a good choice, provided that the data is normally distributed (or transformed to normality), or that you have enough data samples, which you most probably will.
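The bootstrap comparison of final classifiers described above can be sketched like this (a Python sketch with simulated predictions standing in for real classifier output, and plain accuracy instead of MCC to keep it short):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the held-out test labels and the two classifiers'
# predictions on them (in practice these come from your test set)
n = 500
y_true = rng.integers(0, 2, size=n)
pred_a = np.where(rng.random(n) < 0.85, y_true, 1 - y_true)  # ~85% accurate
pred_b = np.where(rng.random(n) < 0.80, y_true, 1 - y_true)  # ~80% accurate

# Bootstrap the test set: one performance difference per resample
B = 2000
diffs = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, size=n)
    acc_a = np.mean(pred_a[idx] == y_true[idx])
    acc_b = np.mean(pred_b[idx] == y_true[idx])
    diffs[b] = acc_a - acc_b

lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"accuracy difference: {diffs.mean():.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```

The histogram of `diffs` is exactly the kind of distribution the answer suggests inspecting; if the interval excludes both zero and your domain's "practically significant" threshold on the wrong side, the improvement is worth reporting.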
The only spherical and independent density is normal!
This is a standard calculus derivation: spherical symmetry tells you that $f_1(x)$ is a function of $x^2$, i.e. $$f_1(x)=g_1(x^2).$$ Independence plus spherical symmetry tell you that $$g_1(u)g_1(0)=g_2(u) \quad\text{and}\quad g_1(u)g_1(v)=g_2(u+v)\propto g_1(u+v).$$ Therefore, rescaling $g_1$ into $h_1$ so that the above becomes an equality, we derive the identity $$h_1(u)h_1(v)=h_1(u+v),$$ for which the only (measurable) solution is of the form $$ h_1(u) = \exp \{\alpha u\},\qquad \alpha\in\mathbb{R}. $$ Thus, $$f_1(x) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\{-x^2/2\sigma^2\},\qquad \sigma\in\mathbb{R}_+,$$ since only negative factors $\alpha$ lead to densities.
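A quick numerical sanity check of the result (my own illustration, not part of the derivation): with the normal density, the product $f_1(x)f_1(y)$ depends only on $x^2+y^2$, whereas an otherwise symmetric choice such as the Laplace density fails this test.

```python
import math

def normal_pdf(x, sigma=1.0):
    # the density derived above
    return math.exp(-x**2 / (2 * sigma**2)) / math.sqrt(2 * math.pi * sigma**2)

def laplace_pdf(x):
    # symmetric in each coordinate, but not Gaussian
    return 0.5 * math.exp(-abs(x))

# Two points on the same circle x^2 + y^2 = 25
p = normal_pdf(3.0) * normal_pdf(4.0)
q = normal_pdf(5.0) * normal_pdf(0.0)
print(abs(p - q) < 1e-15)   # True: the Gaussian product is spherically symmetric

r = laplace_pdf(3.0) * laplace_pdf(4.0)
s = laplace_pdf(5.0) * laplace_pdf(0.0)
print(abs(r - s) < 1e-15)   # False: the Laplace product is not
```

This is exactly the uniqueness claim: among independent coordinate densities, only the Gaussian yields a spherically symmetric joint density.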
error in getting predictions from a lme object
Thanks for providing the data so that I could perform some diagnostics. Actually, this is an epic bug of predict.lme. Your factors have more levels in your initial data (for example, you have more than 4 countries) than in your new data. A line of code specifically causes the unused levels to be discarded, so you end up with matrices of different dimensions, whence the "non-conformable arguments" error. I removed that line and put the code here. In R you can do:
library(nlme)
source("http://lab.thegrandlocus.com/static/code/predict.lme_patched.txt")
This registers a new function predict.lme that will be invoked instead of the one from the package nlme, and you can run your code. At least it worked for me. Warning: the posted code and the method are neither a replacement nor a real bug fix for the package. The patched function has not been tested beyond its ability to run the bit of code of the OP.
How to specify pulses/level-shifts in data when creating ARIMA in R?
If you want to empirically identify Pulses, Seasonal Pulses, Level (Step) Shifts and/or Local Time Trends, you might want to look at "How do I detect shifts in sales mix?", "Detect changes in time series", or "Outlier detection for generic time series". Some commercial packages offer Intervention Analysis, which does not include Intervention Detection, which is what you are pursuing.
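In R the usual mechanism is to pass deterministic intervention regressors to arima() via its xreg= argument. The idea itself can be sketched language-agnostically; here is a minimal Python sketch on synthetic data (the shift and pulse dates are made up for the example), regressing on a step dummy and a pulse dummy:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
t = np.arange(n)

# Synthetic series: noise plus a level shift of +5 at t = 60 and a pulse of +8 at t = 30
y = rng.normal(size=n)
y[60:] += 5.0
y[30] += 8.0

# Deterministic intervention regressors (these would go into arima(..., xreg=) in R)
step = (t >= 60).astype(float)   # level shift
pulse = (t == 30).astype(float)  # additive outlier / pulse
X = np.column_stack([np.ones(n), step, pulse])

# Plain least squares recovers the intervention sizes; a full ARIMA fit
# would additionally model the error structure
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(coef, 1))  # roughly [0, 5, 8]
```

Detection (finding the dates automatically) is the harder problem the answer points to; once the dates are known, specifying the interventions is just adding these dummy columns.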
Methods of measuring strength of arbitrary non-linear relationships between two variables?
Plain old linear regression has a nice non-parametric interpretation as the average linear trend across all pairs of observations; see Berman 1988, "A theorem of Jacobi and its generalization". So, the data doesn't have to look linear in order to use it; any (broadly) monotonic trend could be summarized this way. You could also use the Spearman rank correlation... and probably much else besides.
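The "average linear trend across all pairs" interpretation can be checked numerically: the OLS slope equals the weighted average of all pairwise slopes $(y_j-y_i)/(x_j-x_i)$ with weights $(x_j-x_i)^2$. A small sketch of my own:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=30)
y = 2.0 * x + rng.normal(size=30)

# Ordinary least-squares slope
ols_slope = np.polyfit(x, y, 1)[0]

# Weighted average of all pairwise slopes, with weights (x_j - x_i)^2
i, j = np.triu_indices(len(x), k=1)
dx, dy = x[j] - x[i], y[j] - y[i]
pairwise_slope = np.sum(dx**2 * (dy / dx)) / np.sum(dx**2)

print(np.isclose(ols_slope, pairwise_slope))  # True
```

Replacing the weighted mean of pairwise slopes by their median gives the robust Theil-Sen estimator, which is one more non-parametric option in the same spirit.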
Methods of measuring strength of arbitrary non-linear relationships between two variables?
The "amount of relationship" between two discrete variables $X$ and $Y$ is formally measured by their mutual information, $I(X,Y)$. While covariance/correlation is roughly the amount of linear relationship, mutual information is roughly the amount of (any kind of) relationship. [Figure: illustration from Wikipedia's mutual information page.] For continuous variables, the information-theoretic concepts are also defined, but they are less manageable and maybe less meaningful, so I won't bother with them for the moment. Let's stick to discrete variables; anyway, it makes sense to approximate continuous variables by discrete ones (using slices), especially in information-theoretic approaches. The problem with information-theoretic concepts is often their impracticability. Being able to approximate the mutual information between $X$ and $Y$ is the same as being able to find an arbitrary non-linear relationship between them: you need statistical power (quantity of data) most often far beyond what is reasonable, since for any possible value of $x$ you need many (say 1000) samples to estimate each $P(Y=y|X=x)$. This is not possible in most machine learning or statistical analysis problems. It is kind of logical: if you allow a model to express "any possibility", then it can only be trained by an amount of data covering every possibility several times. But maybe such an approach is possible for low-dimensional variables if you enforce low precision: decompose the domains of $X$ and $Y$ into a number of slices small enough that it is OK for your data. Anyway, I think this requires some research.
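As a sketch of the slicing approach described above (my own illustration): discretize both variables with a 2-D histogram and plug the cell frequencies into the definition of $I(X,Y)$. A nonlinear, zero-correlation relationship then shows up clearly:

```python
import numpy as np

def mutual_info(x, y, bins=10):
    # Plug-in estimate of I(X, Y) in nats from a 2-D histogram ("slices")
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(4)
x = rng.normal(size=50_000)
indep = rng.normal(size=50_000)              # unrelated to x
dep = x**2 + 0.1 * rng.normal(size=50_000)   # strongly related, yet uncorrelated

print(round(np.corrcoef(x, dep)[0, 1], 2))   # near 0: correlation misses it
print(round(mutual_info(x, indep), 2))       # near 0
print(round(mutual_info(x, dep), 2))         # clearly positive
```

Note the statistical-power caveat above still bites: with too many slices or too little data the plug-in estimate is badly biased upward, which is why the answer recommends coarse slicing.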
36,125
Methods of measuring strength of arbitrary non-linear relationships between two variables?
Ultimately, the most general model is an arbitrary function $f(x) = y$, and you can use a discretized version of such a function as a model for your data. The problem then reduces to estimating the expected $y$ for separate regions $a<x<b$. The method is not powerful because of the high number of degrees of freedom in the model, although that is inherent to the problem, which demands a high degree of freedom (and generality) in the type of functions that can describe the data. For more specific cases, improvements can be made.
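The slice-mean idea can be made concrete in a few lines; a Python sketch with simulated data (the sine relationship and the bin count are invented for illustration):

```python
# A minimal binned-function model: estimate E[y | x in slice] on a grid
# of equal-width slices and use the slice means as a nonparametric fit.
import math, random
random.seed(1)

xs = [random.uniform(0, 10) for _ in range(5000)]
ys = [math.sin(x) + random.gauss(0, 0.2) for x in xs]

def binned_means(xs, ys, k=20):
    """Mean of y within each of k equal-width slices of x."""
    lo, hi = min(xs), max(xs)
    sums, counts = [0.0] * k, [0] * k
    for x, y in zip(xs, ys):
        i = min(int((x - lo) / (hi - lo) * k), k - 1)
        sums[i] += y
        counts[i] += 1
    return [s / c if c else None for s, c in zip(sums, counts)]

fit = binned_means(xs, ys)
# each slice mean should sit near sin evaluated at the slice midpoint,
# e.g. fit[3] approximates sin(1.75)
```

The bin count k is the degrees-of-freedom knob: more slices mean more flexibility but fewer points per slice.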
36,126
Methods of measuring strength of arbitrary non-linear relationships between two variables?
"Needs to be a method that is quick to calculate, similar to correlation, but can detect quadratic relationships for example." The Spearman correlation, which was mentioned in another answer, fits the bill. It is calculated by simply converting the data to ranks and then finding the Pearson correlation of the ranks, and it can detect any monotonic association. There's also the Kendall correlation. The Kendall correlation has a nice interpretation as (a rescaled version of) the probability that ranking cases on one variable will agree with ranking them on the other. The Spearman correlation, by contrast, is a bit opaque: who thinks about data in terms of linear relationships between ranks? The Kendall correlation is not quite as "quick to calculate" (naively $O(n^2)$, or $O(n \log n)$ with Knight's algorithm, while Spearman needs only an $O(n \log n)$ sort plus a linear Pearson pass), but it requires no human judgment to compute, it's already implemented in a lot of statistics software, and on a modern machine the asymptotic complexity is unlikely to matter except with the very largest datasets.
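Both coefficients are straightforward to compute from scratch; a Python sketch on a monotone but strongly nonlinear relationship (tie handling is omitted, so this is Kendall's tau-a):

```python
# Spearman (Pearson on ranks) and Kendall tau from scratch, checked on a
# monotone but very nonlinear relationship y = exp(x).
import math
from itertools import combinations

x = [0.5, 1.1, 1.7, 2.0, 2.6, 3.3, 4.0, 4.9]
y = [math.exp(v) for v in x]

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order):
        r[i] = rank + 1
    return r

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    return cov / math.sqrt(sum((p - ma) ** 2 for p in a)
                           * sum((q - mb) ** 2 for q in b))

spearman = pearson(ranks(x), ranks(y))
# +1 for each concordant pair, -1 for each discordant pair
score = sum(1 if (xi - xj) * (yi - yj) > 0 else -1
            for (xi, yi), (xj, yj) in combinations(zip(x, y), 2))
kendall = score / math.comb(len(x), 2)
print(spearman, kendall)  # 1.0 1.0: perfect monotone association
```

A Pearson correlation on the raw x and y would be well below 1 here, while both rank measures are at their maximum.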
36,127
Methods of measuring strength of arbitrary non-linear relationships between two variables?
It's not totally clear to me what you are trying to measure, but I'll try to give you information that might help. There are measures like Cronbach's alpha that can be used to assess the internal consistency/relationship among a set of variables. You could also use things like generalized additive models (GAMs) to test whether the functional estimate is constant; a constant estimate would imply no relationship between your variables. See the answer here for a discussion on this: How do I test a nonlinear association?
36,128
Methods of measuring strength of arbitrary non-linear relationships between two variables?
You may try the maximal information coefficient (MIC). It outperforms a selection of other methods in the original paper (Reshef et al., 2011) and works well in detecting nonlinear relationships between two random variables.
36,129
Methods of measuring strength of arbitrary non-linear relationships between two variables?
I cannot comment, thus I have to post an answer. Have a look at Dynamic Time Warping, a simple algorithm that can detect and compare patterns between two time series, even when they have different granularity. https://en.wikipedia.org/wiki/Dynamic_time_warping
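For reference, the core DTW recurrence is a few lines of dynamic programming; a Python sketch:

```python
# Classic O(nm) dynamic-programming DTW distance between two 1-D series.
def dtw(a, b):
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch a
                                 d[i][j - 1],      # stretch b
                                 d[i - 1][j - 1])  # match
    return d[n][m]

s1 = [0, 1, 2, 3, 2, 1, 0]
s2 = [0, 0, 1, 2, 3, 2, 1, 0, 0]   # same shape, different granularity
print(dtw(s1, s2))  # 0.0: warping aligns the two shapes exactly
```

Note that DTW yields a dissimilarity, not a bounded correlation-like coefficient, so it is usually compared across candidate pairs rather than read on an absolute scale.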
36,130
Test of multicollinearity among independent variables in logistic regression
You can use whatever method you would use for ordinary regression. The dependent variable is irrelevant to multicollinearity issues, so it doesn't matter if you used logistic regression or regular regression or whatever.
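As a concrete instance of an ordinary-regression diagnostic: with just two predictors, the VIF reduces to $1/(1-r^2)$, where $r$ is their correlation. A Python sketch with simulated collinear predictors (the data-generating values are invented for illustration):

```python
# With two predictors the VIF reduces to 1/(1 - r^2), where r is their
# sample correlation, computed here from scratch.
import math, random
random.seed(3)

x1 = [random.gauss(0, 1) for _ in range(1000)]
x2 = [0.9 * v + random.gauss(0, 0.3) for v in x1]   # strongly collinear

def corr(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((p - ma) * (q - mb) for p, q in zip(a, b))
    return cov / math.sqrt(sum((p - ma) ** 2 for p in a)
                           * sum((q - mb) ** 2 for q in b))

r = corr(x1, x2)
vif = 1.0 / (1.0 - r * r)
print(round(vif, 1))  # large: the two predictors are nearly collinear
```

With more than two predictors the same idea applies, except each $r^2$ comes from regressing one predictor on all the others.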
36,131
Test of multicollinearity among independent variables in logistic regression
You can also take the condition index as a reference: a value greater than 30 indicates a near dependency in most cases. You can then examine the correlation matrix, or the variance decomposition proportions, to locate the variables involved. (The Durbin-Watson test, sometimes mentioned in this context, tests for serial correlation of residuals, not for multicollinearity.)
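For a design matrix with just two columns, the condition index can be computed by hand from the $2\times 2$ matrix $X'X$; a Python sketch with simulated near-dependent columns:

```python
# Condition index of a two-column design matrix: sqrt(lambda_max/lambda_min)
# of X'X, with the 2x2 eigenvalues obtained in closed form.
import math, random
random.seed(4)

x1 = [random.gauss(0, 1) for _ in range(500)]
x2 = [v + random.gauss(0, 0.02) for v in x1]   # near-dependent columns

# entries of the symmetric 2x2 cross-product matrix X'X
a = sum(v * v for v in x1)
b = sum(p * q for p, q in zip(x1, x2))
c = sum(v * v for v in x2)

# closed-form eigenvalues of a symmetric 2x2 matrix
tr, det = a + c, a * c - b * b
disc = math.sqrt(tr * tr - 4.0 * det)
lam_max, lam_min = (tr + disc) / 2.0, (tr - disc) / 2.0
kappa = math.sqrt(lam_max / lam_min)   # condition index
print(round(kappa))  # well above 30, flagging a near dependency
```

In practice the columns are usually scaled to unit length first, and with more columns the eigenvalues come from a singular value decomposition rather than a closed form.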
36,132
Test of multicollinearity among independent variables in logistic regression
You could construct a correlation matrix and look for high values. An alternative would indeed be the VIF values, as already mentioned. Both are quite arbitrary and rely on rules of thumb; for example, what is the threshold for a correlation to be 'dangerous'? You could try using factor scores for the correlated variables and checking whether your results (estimates) are robust/sensitive to this issue. Good luck!
36,133
Test of multicollinearity among independent variables in logistic regression
Examining a correlation matrix is helpful, but it is not a sufficient check, since variables may be correlated when taken together but not pairwise. I recommend examining tolerance or Variance Inflation Factor diagnostics from a weighted linear regression, where the weights are set equal to $\hat{p}(1-\hat{p})$ and $\hat{p}$ are the predicted values obtained from the logistic regression model fit with the same variables.
36,134
What are good references on calculating confidence intervals using subsampling or the delete-d jackknife?
The practice in the field seems to be to rely on asymptotic normality of the estimate and to use the jackknife just as an estimate of variance to help calculate standard errors, which are then plugged into a hoped-for normal distribution of the estimate. If there's reason to think the estimate doesn't have a normal distribution, I think a bootstrap of some sort is more appropriate, e.g. for asymmetric confidence intervals. Some references on using the delete-d jackknife to estimate variance include articles by Messer and Gamst, Shao and Wu, and Xiquan Shi. If you are interested in the delete-a-group jackknife (basically a stratified version), there is a series of articles by Kott, including (from the help files for Zardetto's EVER library): Kott, Phillip S. (1998) "Using the Delete-A-Group Jackknife Variance Estimator in NASS Surveys", RD Research Report No. RD-98-01, USDA, NASS: Washington, DC. Kott, Phillip S. (1999) "The Extended Delete-A-Group Jackknife". Bulletin of the International Statistical Institute, 52nd Session, Contributed Papers, Book 2, pp. 167-168. Kott, Phillip S. (2001) "The Delete-A-Group Jackknife". Journal of Official Statistics, Vol. 17, No. 4, pp. 521-526.
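For concreteness, the Shao-Wu delete-d variance estimator is simple to write down directly; a brute-force Python sketch, only feasible for small $n$ since it enumerates all $\binom{n}{d}$ deletion patterns:

```python
# Delete-d jackknife variance estimate of a statistic, following Shao & Wu:
#   v = (n - d) / (d * C(n, d)) * sum_S (theta_S - theta_bar)^2,
# where theta_S is the statistic recomputed with subset S of size d deleted.
from itertools import combinations

def delete_d_jackknife_var(data, stat, d):
    n = len(data)
    vals = [stat([data[i] for i in range(n) if i not in set(s)])
            for s in combinations(range(n), d)]
    m = sum(vals) / len(vals)
    return (n - d) / (d * len(vals)) * sum((v - m) ** 2 for v in vals)

data = [2.3, 1.1, 4.7, 3.3, 5.9, 2.8, 4.1, 0.6]
mean = lambda xs: sum(xs) / len(xs)
v = delete_d_jackknife_var(data, mean, d=2)
# for the sample mean this reproduces s^2 / n exactly, whatever d is
```

That last property (exactness for linear statistics) is a handy sanity check before applying the estimator to anything more complicated.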
36,135
What are good references on calculating confidence intervals using subsampling or the delete-d jackknife?
For subsampling you can look at the material in my book, or in Efron and Tibshirani. But the best reference, which has been overlooked so far, is the book by Politis, Romano and Wolf. The original work on this predates the bootstrap and is due mainly to Hartigan in his 1969 paper in JASA. Here are links to the books: Subsampling; Bootstrap Methods: A Guide for Practitioners and Researchers; An Introduction to the Bootstrap.
36,136
Fitting the parameters of a stable distribution
As suggested in the comments, you can use fitdistr, with the density function from fBasics/stabledist.

# Sample data
x <- rt(100, df=4)

# Density (I reparametrize it to remove the constraints
# on the parameters)
library(fBasics)
library(stabledist)
f <- function(u, a, b, c, d) {
  cat(a, b, c, d, "\n")  # Some logging (it is very slow)
  dstable(u, 2*exp(a)/(1+exp(a)), 2*exp(b)/(1+exp(b))-1, exp(c), d)
}

# Fit the distribution
library(MASS)
r <- fitdistr(x, f, list(a=1, b=0, c=log(mad(x)), d=median(x)))
r

# Graphical check
plot(
  qstable(ppoints(100),
    2*exp(r$estimate[1])/(1+exp(r$estimate[1])),
    2*exp(r$estimate[2])/(1+exp(r$estimate[2]))-1,
    exp(r$estimate[3]),
    r$estimate[4]
  ),
  sort(x)
)
abline(0, 1)
36,137
Fitting the parameters of a stable distribution
@Vincent's answer sounds good, but here is another approach: Since you know the Fourier transform of the distribution, take the appropriate Fourier transformation of the data, and find parameters that give the best fit in Fourier space. I think this method should work just as well in theory, and in practice would avoid lots of numerical integration to get the form of the stable distributions. I am not coding up the test now, sorry. Anyone have any insight on this?
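A toy Python version of this idea for the Gaussian special case ($\alpha = 2$ stable), where the characteristic function is known in closed form; the empirical characteristic function is evaluated at a single small $t$ (chosen small to avoid phase wrapping), and both parameters drop out:

```python
# For Gaussian data, |phi(t)| = exp(-sigma^2 t^2 / 2) and arg phi(t) = mu*t,
# so one evaluation of the empirical characteristic function (ECF) yields
# estimates of both mu and sigma.
import cmath, math, random
random.seed(5)

mu, sigma = 1.5, 2.0
xs = [random.gauss(mu, sigma) for _ in range(20000)]

t = 0.3                                                   # small frequency
phi = sum(cmath.exp(1j * t * x) for x in xs) / len(xs)    # ECF at t
sigma_hat = math.sqrt(-2.0 * math.log(abs(phi))) / t
mu_hat = cmath.phase(phi) / t
print(round(mu_hat, 2), round(sigma_hat, 2))  # close to the true (1.5, 2.0)
```

For general stable laws one would match the ECF at several points and minimize a weighted discrepancy, which is essentially the Kogon-Williams approach mentioned in another answer.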
36,138
Fitting the parameters of a stable distribution
One way to fit the $\alpha$ parameter is via the Nagaev transform described by Okoneshnikov. An alternative is the 'probability of return' method of Mantegna and Stanley, which is considerably easier. Edit: the other 'classical' method is that of Kogon & Williams (S.M. Kogon, Douglas B. Williams, "On Characteristic Function Based Stable Distribution Parameter Estimation Techniques"); see also a Matlab implementation of K&W.
36,139
Introduction to Kalman filters
The most human-readable intro with examples I have found so far is the SIGGRAPH course pack (Welch and Bishop, "An Introduction to the Kalman Filter").
36,140
Introduction to Kalman filters
[Reposting a comment by @Vincent-Zoonekynd from Estimate in presence of missing observations]: Here is a very simple introduction to the Kalman filter, to estimate the position of a robot (think of the position as the parameter you are trying to estimate): sites.google.com/site/udacitymirrorcs373/cs-373/unit-2 (you may want to skip part of the beginning, which is irrelevant, and check the previous and next lectures, which present non parametric alternatives to the Kalman filter: histogram filter and particle filter).
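To get a feel for the recursion, here is a minimal one-dimensional sketch in Python (the position, noise level, and prior are toy values, not taken from the linked lecture):

```python
# One-dimensional Kalman filter tracking a constant position from noisy
# measurements; with a static state it reduces to a recursive mean.
import random
random.seed(7)

true_pos = 10.0
meas = [true_pos + random.gauss(0, 1.0) for _ in range(200)]

x, p = 0.0, 1000.0       # initial estimate and (deliberately vague) variance
r = 1.0                  # measurement noise variance
for z in meas:
    k = p / (p + r)      # Kalman gain: how much to trust the new measurement
    x = x + k * (z - x)  # pull the estimate toward the measurement
    p = (1 - k) * p      # posterior variance shrinks with each observation

print(round(x, 1))  # near the true position, 10.0
```

A moving state only adds a predict step (propagate x and inflate p by the process noise) before each update; the update equations stay the same.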
36,141
Introduction to Kalman filters
Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook by Bruce Gibbs is liberally sprinkled with examples. One book I'm not fond of is A Kalman Filter Primer.
36,142
Introduction to Kalman filters
The Kalman filter finally clicked for me after reading 'Kalman Filter for Beginners with MATLAB Examples' by Phil Kim: http://books.google.co.uk/books?id=W8u_XwAACAAJ&dq=kalman+filter+phil+kim&source=bl&ots=N-I0YhBX_U&sig=pcfeeEGHYmYDr7bockF5kSIMM_s&hl=en&sa=X&ei=ir5xUM3gM8Op0QWI8YDwDQ&ved=0CC4Q6AEwAA The book starts with basic ideas like recursion, moving averages, and low-pass filters, then moves on to an implementation of the Kalman filter. There are MATLAB examples you can try yourself, and no method or derivation is left unclear. The book treats the Kalman filter from a practical point of view; all the heavy mathematics is left for more advanced books. Maybe after this book you will not be an expert, but you will certainly know how to start becoming one, and how to use the Kalman filter straight away.
36,143
Introduction to Kalman filters
The book An Introduction To State Space Time Series Analysis, by Commandeur and Koopman, is small and fairly readable. It uses Ssfpack and STAMP to implement things, which made it harder for me to transfer the knowledge.
36,144
To use Discrete Fourier Transform to invert a covariance matrix
A circulant is a matrix whose first column is a vector $x$ and whose subsequent columns are obtained by rotating it one element to the right. Here is R code to produce any circulant from its first column x:

rotate <- function(x,k) {c(tail(x,-k), head(x,k))}
circulant <- function(x) {
  n = length(x)
  apply(matrix(0:(n-1),1,n), 2, function(k) rotate(x,n-k))
} # Returns the circulant matrix of which x is the first column

For example,

> circulant(c(2,3,5,7))
     [,1] [,2] [,3] [,4]
[1,]    2    7    5    3
[2,]    3    2    7    5
[3,]    5    3    2    7
[4,]    7    5    3    2

It is inverted by changing to an eigenbasis. The diagonal elements are the entries of the Fourier Transform of x, so we invert them individually and change back to the original basis:

reciprocal <- function(x) {i <- which(x!=0); x[i] <- 1/x[i]; x}
inverse.circulant <- function(x) {
  n <- length(x)                   # x is the first column of the circulant
  i <- (0:(n-1)) %o% (1:n)         # Powers of exp(2 Pi I/n) in the eigenbasis q
  q <- matrix(exp(complex(real=-log(n)/2, imaginary=2*pi*i / n)), n, n)
  w <- reciprocal(fft(x))          # Reciprocals of nonzero eigenvalues
  Re(t(q) %*% diag(w) %*% Conj(q)) # Convert back to the original basis
} # Returns a generalized inverse to circulant(x)

For example, we demonstrate this works by multiplying its output by the original circulant and checking that the identity matrix is obtained (up to negligible floating point error):

> a <- c(2,3,5,7)
> zapsmall(inverse.circulant(a) %*% circulant(a))
     [,1] [,2] [,3] [,4]
[1,]    1    0    0    0
[2,]    0    1    0    0
[3,]    0    0    1    0
[4,]    0    0    0    1

Be aware that ill-conditioning will still plague this approach due to floating point roundoff in fft. That is why I have implemented a reciprocal function: it refuses to compute $1/x$ when $x=0$. As such, inverse.circulant computes a generalized inverse, exactly as in MASS::ginv:

# The following determines a nonsingular but ill-conditioned circulant:
> (a <- c(1, -200000/200001, -2500000/500001, 5000000/1000003))
[1]  1.000000 -0.999995 -4.999990  4.999985
> 1 / rcond(circulant(a)) # HUGE condition number!
[1] 5.404306e+16
> library(MASS)
> inverse.circulant(a) - ginv(circulant(a))
             [,1]          [,2]          [,3]          [,4]
[1,] 6.938894e-18 -2.081668e-17 -4.163336e-17  2.775558e-17
[2,] 1.387779e-17 -6.938894e-18 -3.469447e-18  1.387779e-17
[3,] 1.387779e-17  4.163336e-17 -6.938894e-18  1.040834e-17
[4,] 3.469447e-17 -2.775558e-17 -1.387779e-17 -2.081668e-17
36,145
To use Discrete Fourier Transform to invert a covariance matrix
Have you tried a correction by adding a small $\epsilon$ perturbation to the diagonal of the matrix you are trying to invert? This is a standard processing routine used to defer the singularity issue when inverting a covariance matrix or a Hessian.
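A minimal NumPy sketch of this diagonal perturbation, on a deliberately near-singular covariance matrix (the data and $\epsilon$ value are made up for illustration):

```python
import numpy as np

# A nearly singular 3x3 covariance matrix: the third variable is
# (almost) a linear combination of the first two.
rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 2))
x = np.column_stack([z, z @ [1.0, 1.0] + 1e-8 * rng.standard_normal(1000)])
cov = np.cov(x, rowvar=False)

eps = 1e-6
regularized = cov + eps * np.eye(cov.shape[0])  # the epsilon perturbation

inv = np.linalg.inv(regularized)  # now inverts reliably
# Regularization drastically improves the conditioning:
print(np.linalg.cond(regularized) < np.linalg.cond(cov))  # prints True
```

The price is a small bias: you are inverting $\Sigma + \epsilon I$ rather than $\Sigma$, so $\epsilon$ should be kept small relative to the diagonal entries.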
36,146
To use Discrete Fourier Transform to invert a covariance matrix
Is the Wikipedia article on circulant matrices clear? This is something that in a time series context is discussed, for instance, in the first pages of Hannan's Time Series book. The eigenvalues of a circulant matrix are given by the Fourier transform of what (again in a time series context) would be the autocovariances. So to invert the matrix you have to take the reciprocals of the eigenvalues and pre- and post-multiply by the matrix whose columns are the eigenvectors.
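A minimal NumPy sketch of that recipe (assuming the matrix really is circulant): the eigenvalues are the DFT of the first column, and the first column of the inverse is the inverse DFT of their reciprocals, since the inverse of a circulant is itself circulant.

```python
import numpy as np

def circulant(c):
    """Circulant matrix with first column c: C[i, j] = c[(i - j) mod n]."""
    n = len(c)
    return np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])

c = np.array([2.0, 3.0, 5.0, 7.0])   # first column (e.g. autocovariances)
C = circulant(c)

eigvals = np.fft.fft(c)              # eigenvalues of the circulant = DFT of c
inv_first_col = np.fft.ifft(1.0 / eigvals).real
C_inv = circulant(inv_first_col)     # the inverse is itself circulant

print(np.allclose(C_inv @ C, np.eye(4)))  # prints True
```

This costs O(n log n) via the FFT instead of the O(n^3) of a general matrix inversion, which is the point of exploiting the circulant structure.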
36,147
Multilayer neural networks for multivariate temporal data
I also agree with Ran: most deep learning techniques are tested on image data sets. Please check out this research paper; it talks about audio classification using deep learning techniques.
36,148
Multilayer neural networks for multivariate temporal data
There are multiple papers by Hinton et al. which deal with temporal data and also audio (http://www.cs.toronto.edu/~hinton/papers.html). For example:

Acoustic Modeling using Deep Belief Networks, 2012.
Learning a better Representation of Speech Sound Waves using Restricted Boltzmann Machines, 2011.
Deep Belief Networks using Discriminative Features for Phone Recognition, 2011.
The Recurrent Temporal Restricted Boltzmann Machine, 2009.
Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style, 2009.

I haven't read the more recent papers, but the 2009 papers should give you a good sense of how temporal data can be modeled using RBMs and DBNs.
36,149
Multilayer neural networks for multivariate temporal data
I can't post comments yet, but I have the following remark. With multivariate data, I usually think of different "types" of data, e.g. a mixture of boolean, multi-selection, or floating-point data (here it is called mixed-variate). As I see it, the input of different audio signals is therefore not multivariate but multidimensional, because you probably have the same data type (real-valued data) for all channels. I think most basic RNNs can handle high-dimensional time-series input. See e.g. here. Maybe a combination of the two links above will lead to a reasonable algorithm.
36,150
Bootstrapping stratified sample that is weighted to population - reweighting during the bootstrap?
In R I would tell you to see whether the functions related to "bootweights" in the survey package suit you in any way. But since you have already gone over that package, I don't think you will find many alternatives... I also looked for a similar thing a couple of weeks ago and ended up implementing my own code. For a discussion of bootstrapping and survey weights in general, you can find some references in this presentation, which also contains references to an implementation of a bootstrapping procedure for complex survey designs in Stata.
36,151
Bootstrapping stratified sample that is weighted to population - reweighting during the bootstrap?
References on the reweighting method

When you reweight your data to match known population totals (using raking, post-stratification, or some other form of calibration), it has long been common practice to repeat the reweighting procedure for each replicate sample. This practice and its justification are described clearly in the following classical references on variance estimation for surveys:

Rust, K., & Rao, J. (1996). "Variance estimation for complex surveys using replication techniques." Statistical Methods in Medical Research, 5(3), 283–310. https://doi.org/10.1177/096228029600500305

Dippo, C., Fay, R., & Morganstein, D. (1984). "Computing Variances from Complex Samples with Replicate Weights." Proceedings of the Section on Survey Research Methods, 489–494. http://www.asasrms.org/Proceedings/papers/1984_094.pdf

Packages in R

The 'survey' and 'svrep' packages both provide a few different methods for bootstrapping with survey data. This vignette from the 'svrep' package provides guidance on how to choose an appropriate bootstrap method and number of bootstrap replicates: https://cran.r-project.org/web/packages/svrep/vignettes/bootstrap-replicates.html

When you use the 'survey' package's functions (rake(), postStratify(), or calibrate()) to reweight data with bootstrap replicate weights, the package will automatically repeat the reweighting procedure separately for each bootstrap replicate. Below is example R code for how to implement this:

library(survey)
library(svrep)

# Load example survey data ----
data('lou_vax_survey', package = 'svrep')

# Create bootstrap weights ----
vax_survey_design <- svydesign(data = lou_vax_survey,
                               ids = ~ 1,
                               prob = ~ SAMPLING_WEIGHT)

boot_design <- as_bootstrap_design(
  design = vax_survey_design,
  replicates = 500
)

# Define control totals (i.e. known population values) ----
control_totals <- list(
  'RACE_ETHNICITY' = data.frame(
    'RACE_ETHNICITY' = c(
      "Black or African American alone, not Hispanic or Latino",
      "Hispanic or Latino",
      "Other Race, not Hispanic or Latino",
      "White alone, not Hispanic or Latino"),
    'TOTAL' = c(119041, 27001, 27633, 423027)
  ),
  'SEX' = data.frame(
    'SEX' = c("Male", "Female"),
    'TOTAL' = c(283688, 313014)
  )
)

# Reweight the data to match control totals, using raking ----
raked_boot_design <- rake(
  design = boot_design,
  sample.margins = list(~ RACE_ETHNICITY, ~ SEX),
  population.margins = control_totals,
  control = list(maxit = 20, epsilon = 0.0001)
)

# Check the resulting estimates ----
raked_boot_design |> svytable(formula = ~ RACE_ETHNICITY)
#> RACE_ETHNICITY
#> Black or African American alone, not Hispanic or Latino
#>                                                  119041
#>                                      Hispanic or Latino
#>                                                   27001
#>                      Other Race, not Hispanic or Latino
#>                                                   27633
#>                     White alone, not Hispanic or Latino
#>                                                  423027

raked_boot_design |> svytable(formula = ~ SEX)
#> SEX
#> Female   Male
#> 313014 283688
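For intuition about what rake() is doing under the hood, here is a minimal iterative-proportional-fitting sketch in Python. This is a toy two-margin example with made-up targets, not the survey package's implementation:

```python
import numpy as np

# Toy sample: weights for respondents cross-classified by sex (2 rows)
# and race (3 columns); start from uniform sampling weights.
weights = np.ones((2, 3))

row_targets = np.array([60.0, 40.0])        # known population margin for sex
col_targets = np.array([50.0, 30.0, 20.0])  # known population margin for race

# Raking alternately rescales rows and columns until both margins match.
for _ in range(50):
    weights *= (row_targets / weights.sum(axis=1))[:, None]
    weights *= col_targets / weights.sum(axis=0)

print(np.allclose(weights.sum(axis=1), row_targets))  # prints True
print(np.allclose(weights.sum(axis=0), col_targets))  # prints True
```

In replicate-weight variance estimation, this fitting loop is rerun separately on each bootstrap replicate's weights, which is exactly what the survey package automates.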
36,152
How to specify Bayesian mixed effects model in BUGS
You are (were) almost there. Just a few comments: you don't have to make the prior for the beta[,1:2] parameters a joint MV normal; you can make the prior such that beta[i,1] and beta[i,2] are independent, which simplifies things (for example, no prior covariance need be specified). Note that doing so doesn't mean they will be independent in the posterior. Other comments: since you have a constant term - alpha - in the regression, the components beta[,1] should have zero mean in the prior. Also, you don't have a prior for alpha in the code. Here's a model with hierarchical intercept and slope terms; I've tried to keep to your priors and notation where possible, given the changes:

model {
  for (i in 1:n) {
    mu.y[i] <- alpha + beta0[s[i]] + beta1[s[i]]*(j[i]-jbar)
    demVote[i] ~ dnorm(mu.y[i], tau)
  }
  alpha ~ dnorm(0, 0.001)  ## prior on alpha; parameters just made up for illustration
  sigma ~ dunif(0, 20)     ## prior on standard deviation
  tau <- pow(sigma, -2)    ## convert to precision

  ## hierarchical model for each state's intercept & slope
  for (p in 1:120) {
    beta0[p] ~ dnorm(0, tau0)
    beta1[p] ~ dnorm(mu1, tau1)
  }

  ## Priors on hierarchical components; parameters just made up for illustration
  mu1 ~ dnorm(0, 0.001)
  sigma0 ~ dunif(0, 20)
  sigma1 ~ dunif(0, 20)
  tau0 <- pow(sigma0, -2)
  tau1 <- pow(sigma1, -2)
}

A very useful resource for hierarchical models, including some "tricks" to speed up convergence, is Gelman and Hill. (A little late with the answer, but may be helpful to some future questioner.)
36,153
Setting the threshold p-value as part of hypothesis generation
If you intend to use the Neyman-Pearson approach then you definitely cannot set the cutoff for significance after the data has been analysed. However, that is not the only approach to statistical inference, and in many cases it is not the best approach. N-P is certainly not well matched to a task that you specify as hypothesis generation.

N-P allows you to specify a maximally acceptable rate of false positive results, the alpha level that is most often unthinkingly set to 0.05. The N-P approach mostly deals with decisions about what to do next (significant, discard the null; not significant, accept the null) rather than dealing directly with the evidential meaning of the results.

Fisher's approach is incompatible with N-P and treats the data as evidence: it yields a p value that is an index of evidence against the null hypothesis. It is far more often compatible with the needs of scientific experimentation than the N-P approach, in my opinion, in so far as it allows the evidence from an experiment to be considered in light of any other information before any decision is made about what to do next. In contrast to the all-or-none results of an N-P analysis, it encourages experiments to be repeated or refined.

Specify the exact p values that you obtained from the experiment and interpret the results thoughtfully. If an interesting finding comes from the data rather than a pre-experiment hypothesis then the results should be taken as preliminary and, if sufficiently interesting, it may be worth repeating the experiment. (You should note that it is fairly common to see statistical analyses and interpretations that are a hybrid of N-P and Fisher: the hybrid is always inappropriate.)

To answer your specific questions, I will do so (obliquely) as a pharmacologist: it is unlikely that all of thousands of chemicals will affect cell growth at low concentrations, but certain that all chemicals will do so at a high concentration. Paracelsus famously said (in Greek, I assume) "All drugs are poisons, dose determines effect." If your doses are large then it is not scientifically interesting to find that they are toxic. Perhaps you should test them at a wide range of concentrations (geometrical spacing of concentrations is efficient). The concentration at which a chemical has biological effects is at least as interesting as the magnitude of the effect, and much more interesting than the significance level obtained in an experiment. Make sure that you don't convert a biochemical and experimental design question into a question about statistical significance.
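The geometric spacing of concentrations mentioned above can be generated in one line; a Python sketch, with a made-up range of 1 nM to 100 µM:

```python
import numpy as np

# Eight test concentrations, geometrically spaced from 1e-9 to 1e-4 molar.
concentrations = np.geomspace(1e-9, 1e-4, num=8)

# Consecutive concentrations differ by a constant ratio, so the doses are
# evenly spaced on a log axis -- efficient for tracing a dose-response curve.
ratios = concentrations[1:] / concentrations[:-1]
print(np.allclose(ratios, ratios[0]))  # prints True
```

Evenly spaced log-doses put points on both the flat and steep parts of a sigmoidal dose-response curve, which is why this spacing is efficient.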
36,154
Setting the threshold p-value as part of hypothesis generation
I suggest you try a different approach -- the False Discovery Rate (FDR). The FDR for any given P value cutoff is the expected fraction of those comparisons (with P less than your cutoff) where the null hypothesis is actually true (while 1.0 - FDR is the fraction of those comparisons where you expect the null hypothesis to be false). You call all comparisons with a P value less than your cutoff "a discovery", and the FDR is the fraction of those discoveries that are expected to be false (false positive findings). You can either choose an FDR and find out what P value cutoff to use, or you can compute the FDR for any P value cutoff you choose.
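One common way to turn a chosen FDR into a P value cutoff is the Benjamini-Hochberg procedure; a minimal Python sketch (the p values below are made up for illustration):

```python
def bh_cutoff(p_values, fdr=0.05):
    """Benjamini-Hochberg: return the largest p value declared a discovery,
    controlling the false discovery rate at the chosen level."""
    m = len(p_values)
    cutoff = 0.0
    # Find the largest rank k (1-based) with p_(k) <= (k/m) * fdr.
    for k, p in enumerate(sorted(p_values), start=1):
        if p <= k / m * fdr:
            cutoff = p
    return cutoff

p_values = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.21, 0.5]
cut = bh_cutoff(p_values, fdr=0.05)
discoveries = [p for p in p_values if p <= cut]
print(cut, len(discoveries))  # prints 0.008 2
```

Everything at or below the returned cutoff is called a discovery, and roughly an `fdr` fraction of those calls are expected to be false positives.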
36,155
Setting the threshold p-value as part of hypothesis generation
I think you're really wondering about $p$-value correction. Bonferroni's is the simplest. You should use one if you have multiple post-hoc tests. This is what you were discussing, except that people typically consider this an adjustment to the $p$-value rather than an adjustment to $\alpha$. Also, since your sample sizes are large, it's quite reasonable that you are getting low $p$-values. But if you still think that your sample size is "too large", I would guess that you are violating an assumption of independence of observations. For instance, you would violate this assumption if you observe cells multiple times but do not account for how observations within the same cell are correlated.
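For concreteness, the Bonferroni adjustment to the $p$-values is just multiplication by the number of tests, capped at 1 (the values here are hypothetical):

```python
def bonferroni(pvals):
    """Bonferroni correction: multiply each p-value by the number of
    tests, capping at 1; compare the adjusted values to the usual alpha."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

adj = bonferroni([0.01, 0.04, 0.8])  # three post-hoc tests
```

Equivalently, one can leave the $p$-values alone and test against $\alpha/m$; the two views reject exactly the same hypotheses.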
Estimating the intersection of two lines
One straightforward way is to obtain the maximum likelihood estimator of $x^{*}$ directly. Using the first subscript to designate the line ($1$ or $2$), the model is $$y_{ij} = m_i(x_{ij} - x^{*}) + y^{*} + \varepsilon_{ij},$$ $1 \le j \le n_i$, $\varepsilon_{ij} \sim \text{Normal}(0, \sigma_i / \omega_{ij})$ and independent. The $n_i$ count the data for each line. The parameters are the point of intersection $(x^{*}, y^{*})$, the slopes $m_1$ and $m_2$, and (as nuisance parameters) the scale factors $\sigma_1$ and $\sigma_2$. The $\omega_{ij}$ are specified weights (not parameters). The standard ML machinery will provide a confidence interval for any of the parameters, including $x^{*}$.

This illustration shows two linear fits from $10$ independent samples of two lines, each passing through $(7,10)$, with slopes $1$ and $1/3$. The colored bands are $95\%$ confidence bands around the fits (using least squares). The estimated point of intersection, shown as a large black dot, occurs at $(6.3, 9.4)$. A $95\%$ confidence interval for its x-coordinate is portrayed as a dashed black segment: it extends from $4.4$ to $8.3$.

Some details (added in response to comments)

This model is identical to performing two separate weighted least squares regressions. It merely combines their 3 + 3 = 6 parameters (intercept, slope, and scale) in a way that isolates the x-coordinate of the point of intersection. Therefore the parameter estimates will be the same. The point is that in fitting the combined model, any maximum likelihood procedure will report standard errors (equivalently, confidence intervals) for the parameters, and this is how we solve the original problem. I would also like to point out that there is some freedom in choosing the weights (which can be helpful in persuading an optimization routine to behave well). The foregoing analysis shows it suffices to examine the case of fitting a single line.
Writing $\omega_i$ for the weights the negative log likelihood is equal to $$-\log(\Lambda) = \frac{1}{2}\sum_i{\log(2 \pi (\sigma/\omega_i)^2) + \frac{(m(x_i-x^*) + y^* - y_i)^2}{(\sigma/\omega_i)^2}}.$$ Removing additive constants (which don't affect the ML procedure) this simplifies somewhat to $$\sum_i{\log(\sigma) + \frac{u_i^2\omega_i^2}{2\sigma^2}}$$ where $u_i^2 = (m(x_i-x^*) + y^* - y_i)^2$. To estimate $\sigma$ we equate the derivative with $0$: $$0 = \frac{\partial(-\log(\Lambda))}{\partial \sigma} = \sum_i^n{\frac{1}{\sigma} - \frac{u_i^2\omega_i^2}{\sigma^3}},$$ implying $$(\hat{\sigma})^2 = \frac{1}{n}\sum_i^n{u_i^2 \omega_i^2}.$$ This shows that the estimate of $\sigma^2$ is directly proportional to the scale of the weights. This is clear: if we multiply all weights by a positive value $\lambda$, then we get exactly the same model by dividing $\sigma$ by $\lambda$. Consequently, we are free to normalize the weights as we wish. A good choice is to make $\sum{\omega_i^2}=n$, because this is what we would get for an unweighted model. In particular, note that the actual value of $\sigma$ is meaningless except in comparison to the $L^2$ norm of the weights. Accordingly, the standard error of $\hat{\sigma}$ is also meaningless except in comparison to the weights.
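As a quick numerical check of the setup (without the confidence-interval machinery above, which requires the joint ML fit), the point estimate of $(x^{*}, y^{*})$ can be obtained by fitting each line separately by ordinary least squares and intersecting the fits; the noiseless data below reuse the lines from the illustration, with slopes $1$ and $1/3$ through $(7, 10)$:

```python
def ols_line(xs, ys):
    """Unweighted least-squares slope and intercept for one line."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, my - m * mx

def intersection(xs1, ys1, xs2, ys2):
    """Point estimate of (x*, y*) from two separate OLS fits."""
    m1, b1 = ols_line(xs1, ys1)
    m2, b2 = ols_line(xs2, ys2)
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# noiseless check: y = x + 3 and y = x/3 + 23/3 both pass through (7, 10)
xs = list(range(11))
x_star, y_star = intersection(xs, [x + 3 for x in xs],
                              xs, [x / 3 + 23 / 3 for x in xs])
```

This only reproduces the point estimate; for the standard error of $x^{*}$, fit the combined model as described in the answer.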
Why does t statistic increase with the sample size?
A little change in notation might help in answering your question: What you call $\mu$ is often called $\mu_{0}$ because it is the population mean under the null hypothesis, whereas $\mu$ is the actual population mean - which is unknown because we don't know whether the null hypothesis is true. Also, what you call $\sigma$ is usually called $s$, following the convention that population parameters get Greek letters and sample parameters are denoted by Latin letters. Note that $s / \sqrt{n}$ is the standard error of the mean $\bar{X}$ (SEM) - an estimate of the variability of $\bar{X}$, where $\bar{X}$ is understood as a random variable. So we have $t = \frac{\bar{X} - \mu_{0}}{s / \sqrt{n}}$. Now, for a given sample, you have a fixed empirical $\bar{X}_{emp}$, and thus a fixed difference $d_{emp} = \bar{X}_{emp} - \mu_{0}$. Part of the confusion seems to be related to the idea that "a bigger sample size (higher $n$) should give a sample mean closer to the population mean". This should be rephrased to: conditional on the null hypothesis being true ($\mu = \mu_{0}$), the probability of observing a difference $d = \bar{X} - \mu_{0}$ that is at least as large as the already observed $d_{emp}$ becomes smaller when $n$ increases. This is because the "accuracy" of our estimator $\bar{X}$ then increases (variability decreases). I guess the main point is that you already have a fixed $\bar{X}_{emp}$ and thus $d_{emp}$, and $t$ just tells you how "big that difference is", measured in (estimated) units of variability of $\bar{X}$. When the units become smaller in absolute numbers, the same absolute difference $d_{emp}$ will "be worth more units" and will thus count as "more surprisingly high" (= less likely to occur) if $\mu = \mu_{0}$.
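The scaling can be checked directly: holding $d_{emp}$ and $s$ fixed, $t$ grows like $\sqrt{n}$, so quadrupling the sample size doubles the statistic (the numbers below are arbitrary):

```python
import math

def t_stat(d_emp, s, n):
    """t = (X̄ - mu0) / (s / sqrt(n)) for a fixed observed difference d_emp."""
    return d_emp / (s / math.sqrt(n))

t25 = t_stat(0.5, 2.0, 25)    # same d_emp and s ...
t100 = t_stat(0.5, 2.0, 100)  # ... with four times the sample size
# t25 = 1.25, t100 = 2.5
```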
Why does t statistic increase with the sample size?
Someone else can probably give a more rigorous answer, but: For any given (fixed) difference between $\bar{X}$ and $\mu$, the difference is more meaningful if n is high. Increasing n will result in a sample mean closer to the population mean, but only in the case that your sample is not different from the population. So when n is high and $\bar{X}$ still differs from $\mu$, that reinforces the rejection of the null hypothesis.
Why does t statistic increase with the sample size?
Your last sentence seems to encapsulate the confusion. You wrote: But this formula seems counter-intuitive to me as bigger sample size (higher n) should give sample mean closer to population mean. But this is true only if the sample is from a population that has the same mean as the population it is being compared to. The word "population" is being used to refer to two different populations.
What is the expected value of the sample variance under a linear regression with omitted variables of an AR(2) process?
A nice introductory book on different aspects of time series models is Introduction to Time Series and Forecasting by Brockwell and Davis, among many others. Roughly speaking, the characteristic of the autoregressive process of order $p$ is linked to the partial autocorrelation function. Estimating the $AR(p)$ process $$X_t = \sum_{j=1}^p\phi_jX_{t-j} + \varepsilon_t,$$ one common solution is to apply the Durbin-Levinson method (wiki on the math of Levinson recursion), where the residual sum of squares of the $AR(p)$ model $$RSS_p = \mathbb{E} \varepsilon^2= \mathbb{E}\left(X_t - \sum_{j=1}^p\phi_jX_{t-j}\right)^2$$ is linked to $RSS_{p-1}$ as $$RSS_p = RSS_{p-1}(1-\varphi_{pp}^2),$$ with $\varphi_{pp}$ being the partial autocorrelation, i.e. the last component of $$\Gamma_p^{-1}\gamma_p = {([\gamma(i-j)]}_{i,j=1}^p)^{-1}[\gamma(1),\gamma(2),\dots,\gamma(p)]^\prime,$$ and $\gamma(\cdot)$ being the autocovariance function. Thus fitting the wrong order of autoregression will, in theory, cost you the factor $(1 - \varphi_{pp}^2)$; note that in practice the estimation error also adds here. In small samples it may happen that a smaller model ($AR(1)$) is a better predictor than the true model $AR(2)$ (as the parameters have to be estimated and are not known!). This is also known as the parsimony property of a smaller model.
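A sketch of the Durbin-Levinson recursion, which yields both the partial autocorrelations $\varphi_{pp}$ and the recursion $RSS_p = RSS_{p-1}(1-\varphi_{pp}^2)$ (here as one-step prediction variances). It is checked on the autocovariances of an AR(1) with $\phi = 0.5$ and unit innovation variance, for which the lag-2 partial autocorrelation is exactly 0:

```python
def durbin_levinson(gamma, p):
    """Durbin-Levinson recursion.
    gamma: autocovariances gamma[0], ..., gamma[p] (p >= 1).
    Returns (pacf, v): partial autocorrelations phi_11, ..., phi_pp and the
    one-step prediction variances v_1, ..., v_p with v_n = v_{n-1}(1 - phi_nn^2)."""
    phi = {1: {1: gamma[1] / gamma[0]}}
    pacf = [phi[1][1]]
    vs = [gamma[0] * (1 - phi[1][1] ** 2)]
    for n in range(2, p + 1):
        num = gamma[n] - sum(phi[n - 1][k] * gamma[n - k] for k in range(1, n))
        phinn = num / vs[-1]
        phi[n] = {k: phi[n - 1][k] - phinn * phi[n - 1][n - k] for k in range(1, n)}
        phi[n][n] = phinn
        pacf.append(phinn)
        vs.append(vs[-1] * (1 - phinn ** 2))
    return pacf, vs

# AR(1), phi = 0.5, sigma^2 = 1: gamma(h) = 0.5**h / (1 - 0.25)
gam = [4 / 3, 2 / 3, 1 / 3]
pacf, vs = durbin_levinson(gam, 2)
# pacf ~ [0.5, 0.0]: the PACF of an AR(1) cuts off after lag 1
```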
Machine learning techniques for time series estimation - forecasting price
An ARMAX model might be a good place to start.
Machine learning techniques for time series estimation - forecasting price
Recurrent neural networks: no assumptions on the distributions of the $f_i$; the distribution of $x_t$ can be modelled via an adequate loss function (sum of squares for Gaussian, sum of absolute differences for Laplace, cross-entropy, Kullback-Leibler divergence, ...); rather difficult to implement (advanced techniques such as Hessian-free optimization or Long Short-Term Memory are needed to work well). PyBrain has an LSTM implementation.
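A toy illustration (nowhere near a practical RNN, and the weights are made up) of the recurrence itself and of pairing the net with a sum-of-squares loss, the "Gaussian" choice above:

```python
import math

def rnn_forecast(series, w_in, w_rec, w_out):
    """One-unit recurrent net: h_t = tanh(w_in*x_t + w_rec*h_{t-1}),
    prediction y_t = w_out*h_t for the next value x_{t+1}.
    Returns the predictions and the sum-of-squares loss (the
    'Gaussian' loss choice)."""
    h = 0.0
    preds, loss = [], 0.0
    for t in range(len(series) - 1):
        h = math.tanh(w_in * series[t] + w_rec * h)
        y = w_out * h
        preds.append(y)
        loss += (y - series[t + 1]) ** 2
    return preds, loss

preds, loss = rnn_forecast([1.0, 0.0, 0.0], w_in=1.0, w_rec=0.5, w_out=1.0)
```

Swapping the squared error for an absolute error gives the Laplace case; training the weights is where the cited advanced techniques come in.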
Machine learning techniques for time series estimation - forecasting price
Critically, what's your data frequency? Is this high or low frequency news? Also critically, you want to forecast returns, not prices. Your question is sufficiently broad that every supervised learning/regression technique can be listed legitimately. For example, your news could be high frequency news, meaning the response is basically an inhomogeneous time series and a discrete process. Whereas if it is monthly data it is much more Gaussian but is also much more efficiently priced and you have no sample size to test your model's ability to generalise. Data frequency, the market's liquidity, microstructure and other domain specific issues will completely change the statistical model chosen.
Interpretation of intercept term in poisson model with offset and covariates
I think that you want offset(log(population)) in your models above. The offset is just a term included in the model without estimating a coefficient for it (fixing the coefficient at 1). Since the standard transformation in poisson regression is log, you can think of including the offset of log(population) as a rough equivalent (though mathematically better) of using log( cases/population ) as the response variable. So it is adjusting for differences in population sizes. This means that the intercept (with the offset included) is predicting the average when log(population) is 0, or in other words, when you have a population of 1. The slope in the second model would then be the increase for a population of size 1. You could also use an offset like offset(log(population/1000)) and then the interpretations would be for a population of size 1,000 (change the 1,000 to whatever value is meaningful for you); this makes it easier to visualize. For most models beyond the simplest it is often easier to interpret predictions from the model rather than individual coefficients. The Predict.Plot and TkPredict functions in the TeachingDemos package may help.
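The intercept interpretation can be checked in the simplest (intercept-only) case, where the Poisson MLE has a closed form: with offset log(population), exp(intercept) is the pooled rate per unit of population, and rescaling the offset to log(population/1000) just shifts the intercept by log(1000). The counts below are hypothetical:

```python
import math

cases = [10, 40, 25]
population = [1000, 5000, 2500]

# intercept-only Poisson model with offset log(population):
# the MLE satisfies exp(b0) = total cases / total population
b0 = math.log(sum(cases) / sum(population))

# fitted expected cases per area: population * exp(b0)
expected = [p * math.exp(b0) for p in population]

# with offset log(population/1000), the intercept shifts by log(1000),
# so exp(b0_per_1000) is the rate per 1,000 people
b0_per_1000 = b0 + math.log(1000)
```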
Estimating effect of latent variable in regression
One answer is "no." Another is, "of course."

No

To simplify notation, let $\lambda(x) = 1/(1 + \exp(-x))$, the inverse logit. Because $\lambda(x) = 1 - \lambda(-x)$, $$\beta_0 + \beta_1 \lambda(x) = (\beta_0 + \beta_1) - \beta_1 \lambda(-x).$$ Therefore it is impossible to distinguish the parameters $(\beta_0, \beta_1, \beta_2, \beta_3)$ from $(\beta_0+\beta_1, -\beta_1, -\beta_2, -\beta_3)$.

Of course

Let us stipulate that the first nonzero element of $(\beta_1, \beta_2, \beta_3)$ must be positive. That resolves the indeterminacy. We still need a model for the errors. If we suppose, for instance, that $Y - \left(\beta_0 + \beta_1 \lambda(\beta_2 X_2 + \beta_3 X_3)\right)$ has a Normal distribution and the various $Y$'s are independent, then we can use least squares to estimate the parameters. There is no exact solution to this nonlinear optimization problem, but it is straightforward to do numerically. This graphic shows 50 points generated with standard Normal values for $X_2$ and $X_3$, parameter $\beta = (1,2,1/2,-1)$, with iid Normal errors of standard deviation 1/2. The surface shows the fit, $\hat{\beta} = (2.68, -1.23, -0.89, 1.75) \sim (1.45, 1.23, 0.89, -1.75)$. Least squares is the maximum likelihood with iid Normal errors. With another error distribution, use MLE directly. You can obtain asymptotic confidence intervals for the parameters in the standard ways.
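The "No" part can be verified numerically: the original parameters and the sign-flipped ones produce identical regression surfaces (the parameter values below are the ones from the answer):

```python
import math

def lam(x):
    """Inverse logit."""
    return 1.0 / (1.0 + math.exp(-x))

def surface(beta, x2, x3):
    b0, b1, b2, b3 = beta
    return b0 + b1 * lam(b2 * x2 + b3 * x3)

beta = (1.0, 2.0, 0.5, -1.0)
flipped = (beta[0] + beta[1], -beta[1], -beta[2], -beta[3])

# identical predictions at every (x2, x3), since lam(x) = 1 - lam(-x)
diffs = [abs(surface(beta, x2, x3) - surface(flipped, x2, x3))
         for x2 in (-2.0, 0.0, 3.0) for x3 in (-1.0, 1.0)]
```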
Using an SVM for feature selection
As I understand them, SVMs have built-in regularization because they tend to penalize large weights of predictors, which amounts to favoring simpler models. They're often used with recursive feature elimination (in neuroimaging paradigms, at least). About R specifically, there's the kernlab package, by Alex Smola who co-authored Learning with Kernels (2002, MIT Press), which implements SVM (in addition to e1071). However, if you are after a dedicated framework, I would warmly recommend the caret package.
Using an SVM for feature selection
For Recursive Feature Elimination (SVM-RFE), the packages e1071 and kernlab do not implement it, I think. The Weka SVMAttributeEval package is for Java, I think, but the question was about R. The best way is to implement SVM-RFE yourself using e1071 and the LIBSVM library. I found a good paper on this here.
36,168
Using an SVM for feature selection
The Weka SVMAttributeEval package allows you to do feature selection using SVM. It should be pretty easy to dump your R data frame to a csv file, import that into Weka, do the feature selection, and then pull it back into R.
36,169
How to understand moments for a random variable?
If you have a linear rod, the center of gravity is the first moment (the expected value), and the moment of rotational inertia about the center of gravity is the variance. (A rod with centrally located mass will have less inertia than a rod with heavy concentrations of mass at the tips.)
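The rod analogy can be made concrete with a toy computation (the mass values below are made up): the center of mass is the first moment, and the moment of inertia about it plays the role of the variance.

```python
def center_of_mass(positions, masses):
    total = sum(masses)
    return sum(p * m for p, m in zip(positions, masses)) / total

def inertia_about_com(positions, masses):
    # second moment about the center of mass, normalized by total mass
    c = center_of_mass(positions, masses)
    total = sum(masses)
    return sum(m * (p - c) ** 2 for p, m in zip(positions, masses)) / total

pos = [-1.0, 0.0, 1.0]
central = [1.0, 8.0, 1.0]   # mass concentrated in the middle
tips    = [4.0, 2.0, 4.0]   # mass concentrated at the tips

# tip-heavy rod has larger rotational inertia (larger "variance")
assert inertia_about_com(pos, tips) > inertia_about_com(pos, central)
```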
36,170
How to understand moments for a random variable?
Moments give information about a statistical distribution. We judge one dataset against another based on their moments (e.g. the difference between the means (1st moments) of the two datasets).
36,171
Market mix modelling with R
Marketing Mix Modelling is regression analysis with two differences. Variables are transformed to incorporate the memory effect of advertising, i.e. adstock effect, as well as diminishing returns of advertising. I have created a simple tutorial on how to do Marketing Mix Modelling "MMM" in R: https://analyticsartist.wordpress.com/2014/08/17/marketing-mix-modeling-explained-with-r/
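The adstock transformation mentioned above is just a geometric carry-over of past advertising. A minimal sketch follows (the decay rate of 0.5 is a hypothetical choice; the linked tutorial's own R code covers the details):

```python
def adstock(spend, rate):
    # each period carries over a fraction `rate` of the previous adstock
    out = []
    carry = 0.0
    for x in spend:
        carry = x + rate * carry
        out.append(carry)
    return out

# a single burst of 100 units of advertising decays geometrically
print(adstock([100, 0, 0, 0], 0.5))  # [100.0, 50.0, 25.0, 12.5]
```

Diminishing returns are then usually handled by a further concave transform (e.g. a log or power) of the adstocked series before it enters the regression.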
36,172
Market mix modelling with R
Regarding the R part, which was not covered in the comments above: the bayesm R package, published nearly one year after this question was posted. Title: Bayesian Inference for Marketing/Micro-econometrics. Author: Peter Rossi.
36,173
How can I demonstrate non-linearity without categorising a predictor?
Converting a continuous variable into a categorical one may be a bad idea, but it may be a good idea as well; it depends on the problem. When the relationships of the variable can best be described using thresholds, categorisation may be one of the best options. You wrote that in different categories of X1 the correlation between Y and X2 is very different. This is a clear indication of a non-linear relationship between Y, X1 and X2, so multiple linear regression is probably not the best method to use here. In any case, I suggest you visualize your data (perhaps using a circles plot or a coloured scatterplot). You may then continue with machine learning or modelling methods that suit what you know about your data.
36,174
How can I demonstrate non-linearity without categorising a predictor?
You could fit a Generalized Additive Model (GAM) which could uncover nonlinear covariate effects quite easily. In R you can use the gam or mgcv packages. Here is the canonical reference
36,175
Introduction to maths for a junior in epidemiology
Jeff Gill has a good book, Essential Mathematics for Social and Political Research: http://www.amazon.com/Essential-Mathematics-Political-Research-Analytical/dp/052168403X/ref=sr_1_2?ie=UTF8&s=books&qid=1301047912&sr=8-2 I found it quite useful for getting a good overview of linear algebra and calculus. He only assumes knowledge of basic algebra (i.e. x+y=2, etc.). Despite the name, it's a good read for anyone interested in bringing their maths up to the level required for reading journal articles and multivariate textbooks.
36,176
Introduction to maths for a junior in epidemiology
As far as learning the information on slide 6 in the slideshow you linked to, I would suggest A Mathematical Primer for Social Statistics by John Fox (not free but cheap, Google book link). All of those sage green books are aimed at individuals with only a very brief statistics background. If you are interested in taking that specific class I would also suggest you ask the professor for a syllabus and maybe some example problems. Although the professor did not state any preferred reference mathematical book I would imagine if pressed they could give some recommendations.
36,177
Logistic regression for bounds different from 0 and 1
To begin with, I think we have to distinguish between logistic regression and the (generalized) logistic function. The latter may be viewed as a special case of the former, taking time as the only explanatory variable. It is then straightforward to see that the fitted process will follow an $S$-shaped path that goes to its upper (or, possibly, lower) limit as $t \rightarrow \infty$. Movement along the $S$ curve is therefore the influence of the other covariates (not time) that change up and down with time (in consumption structures these are income, tastes, prices, etc.). So there could be jumps or whatever, because nothing restricts the regression to go only up or only down towards the $S$ curve's bounds. Since you are working with structures, $0$ and $1$ are natural limits. You can never be sure that other bounds won't be hit, higher or lower, in the future when your conclusions are based only on historical data analysis, and arguing that the process never did so is not appropriate reasoning. Therefore your fears are not justified: logistic regression (but not fitting the logistic curve! that comes as the solution of a deterministic differential equation!) will work just fine here. Pay attention to the fact that there could be several categories that sum up to 1, in which case you need a multinomial logit model to fit the structure. Among the alternatives there could be any model that can be applied to discrete choice; commonly used candidates are probit and logit models. Even if you think there is no decision in your model, actually all structures in the world are the result of decision processes solved either by humans, nature or the aliens ^_^.
36,178
Logistic regression for bounds different from 0 and 1
You are looking for the wrong keywords. Logistic regression is for 0-1 outcomes, where the probability of being 1 is modeled with an S-shaped (logistic) function, not the actual data points themselves. Look for nonlinear regression, specifically the four-parameter logistic model: $y = A + (B-A)/(1+\exp(-(a+bx))) + \epsilon$ (it can show up with other parametrizations). You will find lots of ELISA-related material if you search for this model. There is specialized software for fitting this, and of course the major statistical packages can also handle it. Unfortunately, if there is a lot of variability, estimation might be difficult, and the fitting process might "not converge".
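To make the four-parameter logistic concrete, here is a sketch (parameter values are arbitrary) showing that A and B are the lower and upper asymptotes of the curve:

```python
import math

def fpl(x, A, B, a, b):
    # four-parameter logistic: y -> A as a + b*x -> -inf, y -> B as a + b*x -> +inf
    return A + (B - A) / (1.0 + math.exp(-(a + b * x)))

A, B, a, b = 0.1, 2.0, -3.0, 1.5
assert abs(fpl(-100, A, B, a, b) - A) < 1e-9   # lower asymptote
assert abs(fpl(100, A, B, a, b) - B) < 1e-9    # upper asymptote
assert A < fpl(2.0, A, B, a, b) < B            # S-shaped in between
```

Fitting A, B, a and b to data is the nonlinear least-squares problem the answer refers to.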
36,179
Which correlation measure should be used with a large gap (missing data)?
Create a scatterplot to check whether it makes any sense to suppose that a single correlation coefficient is an adequate description of the association between the variables. For example, in these (simulated) data the correlation for ages 6-20 is 90%, for ages 50+ it's -70%, and overall it's 15%. In such a situation reporting a single correlation coefficient would be as deceptive as reporting that the average number of legs among household pets is four when half of the pets are fish and the other half are spiders... The choice of how to express correlation is of secondary concern and rests on other aspects of the dataset.
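The phenomenon described (strong subgroup correlations of opposite sign, small pooled correlation) can be reproduced with made-up numbers:

```python
def pearson(xs, ys):
    # plain Pearson correlation coefficient
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

# young group: perfectly increasing; old group: perfectly decreasing
young_x, young_y = [6, 10, 14, 18], [1, 2, 3, 4]
old_x, old_y = [50, 60, 70, 80], [4, 3, 2, 1]

r_young = pearson(young_x, young_y)                     # +1.0
r_old = pearson(old_x, old_y)                           # -1.0
r_all = pearson(young_x + old_x, young_y + old_y)

assert r_young > 0.99 and r_old < -0.99
assert abs(r_all) < 0.5   # the pooled correlation hides both patterns
```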
36,180
Generate random strings based on regular expressions in R
While generating random data from regular expressions would be a convenient interface, it is not directly supported in R. You could try one level of indirection though: generate random numbers and convert them into strings. For example, to convert a number into a character, you could use the following: > rawToChar(as.raw(65)) [1] "A" By carefully selecting the range of the random number to draw, you can restrict yourself to a desired set of ASCII characters that might correspond to a regular expression, e.g., to the character class [a-zA-Z]. Clearly, this is neither an elegant nor an efficient solution, but it is at least native and could give you the desired effect with some boilerplate.
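For comparison, the same indirection in Python, where chr plays the role of rawToChar(as.raw(...)); the character class [a-zA-Z0-9] below is just an example:

```python
import random
import string

assert chr(65) == "A"  # the analogue of rawToChar(as.raw(65))

def random_string(length, charset=string.ascii_letters + string.digits):
    # draw each character uniformly from the allowed class
    return "".join(random.choice(charset) for _ in range(length))

random.seed(0)  # reproducible for the example
s = random_string(8)
assert len(s) == 8
assert all(c in string.ascii_letters + string.digits for c in s)
```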
36,181
Generate random strings based on regular expressions in R
Still not a perfect answer; however, Mark Heckmann has suggested using a random string generator which partially solves this problem:
GenRandomString <- function(n=1, length=12) {
  randomString <- c(1:n)  # initialize vector
  for (i in 1:n) {
    randomString[i] <- paste(sample(c(0:9, letters, LETTERS), length, replace=TRUE), collapse="")
  }
  return(randomString)
}
GenRandomString(5, 8)
Output: five random strings, 8 characters long
[1] "l42DjAtc" "jW6TdRZw" "5aAvMuDL" "iC3xOvst" "gqgSzE83"
This can be used for various cases, e.g. generating keys, names, simulations, etc.
36,182
Can one use Cohen's Kappa for two judgements only?
The "chance correction" in Cohen's $\kappa$ estimates probabilities with which each rater chooses the existing categories. The estimation comes from the marginal frequencies of the categories. When you only have 1 judgement for each rater, this means that $\kappa$ assumes the category chosen for this single judgement in general has a probability of 1. This obviously makes no sense since the number of judgements (1) is too small to reliably estimate the base rates of all categories. An alternative might be a simple binomial model: without additional information, we might assume that the probability of agreement between two raters for one judgement is 0.5 since judgements are binary. This means that we implicitly assume that both raters pick each category with probability 0.5 for all criteria. The number of agreements expected by chance over all criteria then follows a binomial distribution with $p=0.5$.
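The binomial alternative sketched above can be computed directly; the numbers of criteria and agreements here are hypothetical:

```python
from math import comb

def p_at_least(k, n, p=0.5):
    # probability of k or more chance agreements out of n binary judgements
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n = 10          # criteria judged once by each of the two raters
observed = 9    # observed agreements

print(n * 0.5)                   # 5.0 agreements expected by chance
print(p_at_least(observed, n))   # 11/1024, about 0.011
```

A small tail probability like this suggests more agreement than the coin-flip model would produce.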
36,183
Can one use Cohen's Kappa for two judgements only?
I find caracal's answer convincing, but I also believe Cohen's Kappa can only account for part of what constitutes interrater reliability. The simple % of ratings in agreement accounts for another part, and the correlation between ratings, a third. It takes all three methods to gain a complete picture. For details please see http://pareonline.net/getvn.asp?v=9&n=4 : "[...] the general practice of describing interrater reliability as a single, unified concept is at best imprecise, and at worst potentially misleading."
36,184
Test for independence, when I'm missing one bin of a 2x2 contingency
If the missing count were close to 22*12/17, the table would appear independent. This is consistent with your observations. If the missing count is far from this value, the table would exhibit a strong lack of independence. This, too, is consistent with your observations. Evidently your data cannot discriminate between the two cases: independence and its absence are unidentifiable. Therefore, your only hope is to adopt additional assumptions, such as a prior for the missing count (equivalently, for the total number of emitted particles).
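Numerically (assigning the known cells hypothetically as a = 17, b = 22 and c = 12, with d the missing count), the odds ratio equals 1 exactly when d = 22*12/17:

```python
def odds_ratio(a, b, c, d):
    # an odds ratio of 1 corresponds to an independent 2x2 table
    return (a * d) / (b * c)

a, b, c = 17.0, 22.0, 12.0
d_independent = b * c / a        # 22*12/17, about 15.5

assert abs(odds_ratio(a, b, c, d_independent) - 1.0) < 1e-12

# any other value of d implies dependence in one direction or the other
assert odds_ratio(a, b, c, 5.0) < 1.0
assert odds_ratio(a, b, c, 40.0) > 1.0
```

Since d is unobserved, every odds ratio is attainable, which is exactly the unidentifiability described above.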
36,185
Use Empirical CDF vs Distribution CDF?
Personally, I'd favour instead showing the fit of the theoretical to the empirical distribution using a set of P-P plots or Q-Q plots.
36,186
Use Empirical CDF vs Distribution CDF?
The empirical CDF needs to be treated with care at the end points of the data, and in other places where there is "sparse" data. This is because they tend to make weak structural assumptions about what goes on "in between" each data point. It would also be a good idea to have "dots" for the empirical CDF plot rather than lines, or have the dots superimposed over the lines, so that it is easier to see where most of the data actually is. Another alternative is to put the "dots" for the data over the fitted CDF plot, although there may be too much going on in the plot. Maybe it's a plotting difficulty, but the empirical CDF should look like a staircase or step function (horizontal lines with "jumps" at the observed values). The empirical plots above do not look this way; they appear "smoothed". Maybe they are a "non-parametric" CDF using some kind of plot smoother? If it is a "non-parametric" CDF then you are basically comparing two models: the negative binomial and the non-parametric one. My advice: have a separate plot for each dataset (each colour on a new graph), and then put the empirical CDF as "dots" where the data was observed, and the fitted negative binomial CDF as a smooth line on the same plot. This would look similar to a regression-style scatter plot with a fitted line. An example of the kind of plot I am talking about (which has R code to create it) is here: How to present the gain in explained variance thanks to the correlation of Y and X?
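The staircase shape is easy to verify by computing the empirical CDF directly (a pure-Python sketch): the function is piecewise constant, jumping only at observed values.

```python
def ecdf(data):
    xs = sorted(data)
    n = len(xs)
    def F(t):
        # proportion of observations <= t: flat between data points, jumps at them
        return sum(1 for x in xs if x <= t) / n
    return F

F = ecdf([1, 2, 2, 3])
print(F(0.5), F(1), F(2), F(2.5), F(3))  # 0.0 0.25 0.75 0.75 1.0
```

Note F(2) = F(2.5): the curve is flat between observations, which is why a smooth-looking "empirical" curve usually signals a plot smoother at work.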
36,187
Questions about antithetic variate method
Yes. There's no simple condition. When $f$ is monotonic, $f(X_1)$ and $f(Y_1)$ will still be negatively correlated. When $f$ is not monotonic, all bets are off. For example, let $F$ be a uniform distribution on $[-1,1]$ and let $f(x) = x^2$. Then $X_1 = -Y_1$, whence $f(X_1) = f(Y_1)$, implying $f(X_1)$ and $f(Y_1)$ are perfectly correlated: you gain no additional information about the expectation from $(X_1, Y_1)$ than you do from $X_1$ alone. The cost of using the antithetic method in this extreme case is to double the sample size in order to achieve a given estimation variance. A practical example of the problem with non-monotonic $f$ appears here. Yes, in some cases. Use the antithetic method on the components of $X$ separately. This ought to work provided the components are not strongly correlated or when $F$ is symmetric. Provided $f(X_1)$ and $f(Y_1)$ are negatively correlated, you get smaller estimation variance with the antithetic method. As an extreme example of this, consider the case where $F$ is uniform on $[-1,1]$ and $f$ is the identity. Then for any single sample, $Y_1 = -X_1$ and their mean $(X_1+Y_1)/2 = 0$ estimates the mean of $F$ exactly; whereas the mean of two independent samples $(X_1, X_2)$ has a variance of $1/6$. This technique seems to be related, at least in spirit, to Latin hypercube sampling.
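Both extreme cases described here are easy to verify numerically. The following Python sketch (an illustration, not part of the original answer) takes $F$ uniform on $[-1,1]$, so that $Y_1 = -X_1$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=10_000)   # X_i ~ F = Uniform[-1, 1]
y = -x                                    # antithetic draws: Y_i = -X_i

# Monotonic f (the identity): each pair mean (x + (-x)) / 2 is exactly 0,
# the mean of F, so the antithetic estimator has zero variance here.
pair_means = (x + y) / 2

# Non-monotonic f(x) = x^2: f(X_1) and f(Y_1) coincide, hence are perfectly
# (positively) correlated, and the second draw adds no information.
corr = np.corrcoef(x ** 2, y ** 2)[0, 1]
```

The correlation for the squared case comes out as exactly 1, matching the argument that $(X_1, Y_1)$ carries no more information about the expectation than $X_1$ alone.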
36,188
Can we estimate the size of a subset X of a set A, by randomly sampling subsets of A?
OK, try reading the wikipedia page for Monte Carlo integration. You'll see they mention a stratified version. Stratification is the technical term in statistics for what you attempt: subdividing in subsets (subsamples). I guess the references can help you further.
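To make the pointer concrete, here is a hedged Python sketch (the target subset $X=[0,0.37]$ inside $A=[0,1]$ and the stratum count are illustrative choices, not from the answer) comparing crude and stratified Monte Carlo estimates of a subset's measure:

```python
import numpy as np

rng = np.random.default_rng(42)
CUT = 0.37   # X = [0, 0.37] inside A = [0, 1]; true measure ratio is 0.37

def plain_mc(n=1000):
    # Crude Monte Carlo: fraction of uniform samples from A landing in X.
    return float(np.mean(rng.uniform(0, 1, n) < CUT))

def stratified_mc(n=1000, strata=10):
    # Split A into equal strata and spend n/strata samples in each; strata
    # lying entirely inside or outside X contribute zero variance.
    edges = np.linspace(0, 1, strata + 1)
    per = n // strata
    parts = [np.mean(rng.uniform(lo, hi, per) < CUT)
             for lo, hi in zip(edges[:-1], edges[1:])]
    return float(np.mean(parts))

plain = [plain_mc() for _ in range(200)]
strat = [stratified_mc() for _ in range(200)]
```

Both estimators are unbiased, but the stratified one has visibly smaller spread, since only the single stratum straddling the boundary of $X$ contributes any variance.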
36,189
Can we estimate the size of a subset X of a set A, by randomly sampling subsets of A?
For any subset $Y$ of $A$, let $\pi(Y)$ be the probability you will select it in your sampling. You have described a random variable $$f(Y) = |Y \cap X|.$$ The total of $f$ in the population of subsets of $A$ is $$\tau(X) = \sum_{Y \subset A}|Y \cap X| = 2^{|A|-1}|X|,$$ because each element of $X$ belongs to exactly $2^{|A|-1}$ subsets of $A$. From a sample (with replacement) of subsets of $A$, say $Y_1, Y_2, \ldots, Y_m$, the Hansen-Hurwitz Estimator obtains an unbiased estimate of this total as $$\hat{f}_\pi = \frac{1}{m}\sum_{i=1}^{m} \frac{|Y_i \cap X|}{\pi(Y_i)} .$$ Dividing this by $2^{|A|-1}|A|$ therefore estimates $|X|/|A|$. The variance of $\hat{f}_\pi$ is $$\text{Var}(\hat{f}_\pi) = \frac{1}{m} \sum_{Y \subset A} \pi(Y) \left( \frac{|Y \cap X|}{\pi(Y)} - 2^{|A|-1}|X| \right)^2\text{.}$$ Dividing this by $2^{2(|A|-1)}|A|^2$ yields the sampling variance of the estimate of $|X|/|A|$. Given $A$, $X$, and a proposed sampling procedure (which specifies $\pi(Y)$ for all $Y \subset A$), choose a value of $m$ (the sample size) for which the estimation variance becomes acceptably small.
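A numerical sanity check of the Hansen-Hurwitz approach (a sketch: the tiny universe $A$, the subset $X$, and the uniform subset-sampling scheme with $\pi(Y)=2^{-|A|}$ are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
size_A = 6                 # A = {0, ..., 5}
X = [0, 1]                 # |X| = 2, so the target |X|/|A| = 1/3

m = 20_000
# Uniform sampling over all 2^|A| subsets: include each element of A
# independently with probability 1/2, so pi(Y) = 2**(-|A|) for every Y.
membership = rng.integers(0, 2, size=(m, size_A)).astype(bool)
f = membership[:, X].sum(axis=1)        # |Y_i intersect X| for each sampled subset
pi = 2.0 ** (-size_A)

tau_hat = np.mean(f / pi)               # Hansen-Hurwitz estimate of the total
frac_hat = tau_hat / (2 ** (size_A - 1) * size_A)   # estimates |X|/|A|
```

With this sampling scheme the true total is $2^{|A|-1}|X| = 32 \cdot 2 = 64$, and the rescaled estimate converges to $1/3$.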
36,190
Can we estimate the size of a subset X of a set A, by randomly sampling subsets of A?
I assume your measure is finite; WLOG it can be a probability. The first procedure you mention is the good old empirical probability estimate: $\hat{P}(Y\in X)= | \{ x_i \in X\} | /n $ (the Monte Carlo estimate of an integral is also a good interpretation). In high dimension it does not work, since $\{x_i\in X\}$ is likely to be empty for typical $X$. As you have noticed, you need regularization, and how sophisticated a regularization you need is related to the dimension of your space. One idea is to enlarge $X$, or even to give a weight to each $x_i$ that is not in $X$ according to its distance to $X$; this is what I would call a kernel probability estimate (by analogy with the kernel density estimate): $\hat{P}(Y\in X)= 1/(c(k) n)\sum_{i} K(d(x_i,X)/k) $ where $K$ is a kernel that integrates to $1$ (in your case it can be $K(x)=1\{x\leq 1\}$, but the Gaussian kernel has good properties) and $c(k)$ is a well-chosen normalization constant (i.e. such that $\hat{P}(Y\in A)=1$).
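The failure of the plain empirical estimate in high dimension is easy to see numerically. In this sketch (the cube-shaped $X$ is an illustrative assumption, not from the answer), the same small set $X$ is estimated in one and in ten dimensions:

```python
import numpy as np

rng = np.random.default_rng(3)

def empirical_prob(d, n=1000, half_width=0.05):
    """Plain empirical estimate of P(Y in X) for a small cube
    X = [0.5 - h, 0.5 + h]^d inside A = [0, 1]^d."""
    pts = rng.uniform(0, 1, size=(n, d))
    inside = np.all(np.abs(pts - 0.5) <= half_width, axis=1)
    return inside.mean()

p1 = empirical_prob(d=1)    # true probability 0.1: the estimate is usable
p10 = empirical_prob(d=10)  # true probability 1e-10: no sample lands in X
```

With $n=1000$ points, the $d=10$ estimate is (almost surely) exactly zero, which is precisely the situation the distance-based kernel weighting is meant to repair.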
36,191
Averaging time series with different sampling interval
The zoo package is very good at that (as is xts, which extends it). The zoo vignettes have e.g. this example:

zr3 <- zooreg(rnorm(9), start=as.yearmon(2000), frequency=12)
zr3
aggregate(zr3, as.yearqtr, mean)

A (regular) series is created with monthly frequency and then averaged by quarter. It works the very same way for POSIXct objects at much higher granularity; see the vignette. I suspect that the R-SIG-Finance list archives have plenty of related examples too.
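For readers working in Python rather than R, the same monthly-to-quarterly aggregation pattern is available in pandas (a rough analogue of the zoo example, sketched with arbitrary data):

```python
import numpy as np
import pandas as pd

# A regular monthly series, analogous to
# zooreg(rnorm(9), start=as.yearmon(2000), frequency=12).
idx = pd.date_range("2000-01-01", periods=9, freq="MS")  # month starts
s = pd.Series(np.arange(9, dtype=float), index=idx)

# Aggregate the monthly values to quarterly means,
# like aggregate(zr3, as.yearqtr, mean).
quarterly = s.resample("QS").mean()
```

As with zoo, the same `resample` call works unchanged at finer timestamp granularities.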
36,192
Video/Audio online material for getting into Bayesian analysis and logistic-regressions
I would go straight to VideoLectures.net. This is by far the best source--whether free or paid--I have found for very-high-quality (both w/r/t the video quality and w/r/t the presentation content) video lectures and tutorials on statistics, forecasting, and machine learning. The target audience for these video lectures ranges from beginner (some lectures are specifically tagged as "tutorials") to expert; most of them seem to be somewhere in the middle. All of the lectures and tutorials are taught to highly experienced professionals and academics, and in many instances the lecturer is the leading authority on the topic he/she is lecturing on. The site is also 100% free. The one disadvantage is that you cannot download the lectures and store them in e.g. iTunes; however, nearly every lecture has a set of slides which you can download (or, conveniently, you can view them online as you watch the presentation). YouTube might have more, but even if you search Y/T through a specific channel, I am sure the signal-to-noise ratio on VideoLectures.net is far higher--every lecture I've viewed there has been outstanding, and if you scan the viewer reviews, you'll find that's the consensus opinion on the entire collection. A few that I've watched and can recommend highly: Basics of Probability and Statistics; Introduction to Machine Learning; Gaussian Process Basics; Graphical Models; k-Nearest Neighbor Models
36,193
Video/Audio online material for getting into Bayesian analysis and logistic-regressions
I've only had a little look at this lecture series on Machine Learning, but it looks good. http://academicearth.org/courses/machine-learning Lecture 11 covers Bayesian Statistics and Regularization.
36,194
Video/Audio online material for getting into Bayesian analysis and logistic-regressions
try the machine learning summer school La Palma 2012 http://www.youtube.com/channel/UCHhbDEKA7BP58mq1wfTBQNQ?feature=watch impressive indeed
36,195
Video/Audio online material for getting into Bayesian analysis and logistic-regressions
Coursera is offering a wide range of online lectures. The Machine Learning lecture by Andrew Ng covers logistic regression and regularization in the beginning. Furthermore, Probabilistic Graphical Models by Daphne Koller might be of interest to you as well.
36,196
Prove $Y=X$ almost surely given they have the same distribution and $Y$ is an increasing function of $X$
The idea is clear in a picture: if we were to sketch the graph of the distribution function $F_X$ of $X,$ we may conceive of $g$ as locally shifting all horizontal points various amounts (but always consistently, never allowing any overlapping), thereby distorting the graph of $F_X$ into the graph of $F_Y$. The condition $F_X=F_Y$ means that this distortion is purely horizontal: the height $F_X(t)$ at any point $t$ must remain the same as the height $F_Y(t).$ Thus, if $g(t)\ne t,$ $(t,F_X(t))$ and $(g(t),F_X(g(t)))$ are always part of a horizontal line segment in the graph of $F_X$: but over that segment, $X$ has zero probability (because its distribution function $F_X$ does not change over that segment). The only real issue is showing that it's legitimate to sum these zero probabilities over potentially infinitely many such segments. This is related to a basic property of real numbers. Let's reason from the definitions. $X$ is a real-valued random variable. Let $F_X(x) = \Pr(X\le x)$ be its distribution function. We are given $g:\mathbb R\to \mathbb R$ where $s\lt t$ implies $g(s)\lt g(t)$ ($g$ is increasing) and $Y = g\circ X$ is also a random variable. The condition in the statement, that $F_X = F_Y,$ therefore means that for all numbers $t,$ $$\Pr(X\le t) = F_X(t) = F_Y(t) = \Pr(Y\le t) = \Pr(g(X)\le t).\tag{*}$$ To adopt an economical notation, when $a$ and $b$ are real numbers, $(a,b)$ is the open interval with endpoints at $a$ and $b$ (even when $ b\lt a$). When $a=b,$ this is the empty set. 
Condition $(*)$ implies that for all numbers $t$ where $g(t)\le t,$ $$\Pr(X\in (g(t), t]) = \Pr(X\in (-\infty, t] \setminus (-\infty, g(t)]) = F_X(t) - F_Y(t) = 0.\tag{**}$$ The same result obtains when $g(t)\gt t.$ Thus, writing $\mathcal A$ for the event $X\ne Y,$ we may characterize it as $$\mathcal A = \bigcup_{t\in\mathbb R}\, (t, g(t)).$$ This is an uncountable union of open intervals: no axioms or theorems of probability permit us to draw any conclusion about its probability directly. The key is to recall that $\mathbb R$ is locally compact: this implies that on any bounded closed interval, such as $[-n,n]$ for positive integers $n,$ a finite number of real numbers $t_{n,i},$ $i=1,2,\ldots, N(n),$ can be found for which $$\mathcal A \cap [-n,n] = \bigcup_{i=1}^{N(n)}\, (t_{n,i}, g(t_{n,i})).$$ (See the Heine-Borel theorem.) Therefore the probability of this event is not greater than the sum of the probabilities of the intervals of which it is comprised, and since by $(**)$ each of those intervals is contained within a zero-probability event, $$\Pr(\mathcal A \cap [-n,n]) = 0.$$ Taking the countable union of these sets for $n=1,2,3,\ldots,$ and applying the sigma-additivity property of probability shows $$\Pr(\mathcal A) = 0 = \Pr(X\ne Y),$$ QED.
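A small numerical illustration of the result (not part of the proof): an increasing $g$ satisfying the hypothesis can differ from the identity only where $F_X$ is flat, so $g(X)=X$ for every realization. Here $X$ is uniform on $[0,1]\cup[2,3]$ and $g$ is deformed only on the zero-probability gap $(1,2)$; the choice of $g$ is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(11)

# X uniform on [0,1] union [2,3]: its CDF is flat on the gap (1, 2).
u = rng.uniform(0, 1, 100_000)
x = np.where(rng.uniform(size=u.size) < 0.5, u, u + 2)

def g(t):
    """Increasing on all of R, equal to the identity except on (1, 2),
    where the distribution function of X does not change."""
    t = np.asarray(t, dtype=float)
    gap = (t > 1) & (t < 2)
    return np.where(gap, 1 + (t - 1) ** 2, t)

y = g(x)   # Y = g(X) has the same distribution as X -- indeed Y == X pointwise
```

Even though $g(1.5)=1.25\ne 1.5$, every simulated $Y$ equals its $X$, consistent with $\Pr(X\ne Y)=0$.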
36,197
Radial axis transformation in polar kernel density estimate
Consider any density $f$ for the circular parameter $\theta.$ The relevant integrals are of the form $$\Pr(\mathcal A) = \int_\mathcal{A}f(\theta)\,\mathrm d\theta$$ where $\mathcal A\subset[0,2\pi)$ is any circular event. Ordinarily we would plot them in Cartesian coordinates, as in this example: Now, if you wish to represent these integrals as circular areas, perhaps you are thinking of plotting the graph of some related functions $g$ and $h$ in polar coordinates, given by the region $$\{(\theta, r)\mid g(\theta)\le r \le h(\theta);\ 0\le \theta\lt 2\pi\}.$$ The area on the plot itself therefore is $$\int_\mathcal{A}\int_{g(\theta)}^{h(\theta)} r\,\mathrm dr\,\mathrm d\theta = \int_{\mathcal A}\frac{h(\theta)^2 - g(\theta)^2}{2}\,\mathrm d\theta.$$ Consequently, if you pick any nonnegative functions for which $h(\theta)^2 - g(\theta)^2 = f(\theta)$ the right side works out to the desired probability. Two natural choices are $$(g(\theta), h(\theta)) = (0, \sqrt{2 f(\theta)}),$$ the "filled" version and $$(g(\theta), h(\theta)) = (\sqrt{f(\theta)}/\lambda, \lambda\sqrt{f(\theta)})$$ where $\lambda = \sqrt{1 + \sqrt{2}},$ the "symmetric" version. Other choices are possible. For instance, you could enclose everything within a disk provided $f$ is bounded.
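Both choices can be checked numerically. This sketch uses an arbitrary circular density $f(\theta)=(1+\cos\theta)/(2\pi)$ (an illustrative assumption) and verifies that the polar area between $g$ and $h$ recovers the total probability:

```python
import numpy as np

theta = np.linspace(0, 2 * np.pi, 200_001)
f = (1 + np.cos(theta)) / (2 * np.pi)   # a valid density on [0, 2*pi)
dtheta = theta[1] - theta[0]

def polar_area(g, h):
    # Riemann sum of (h^2 - g^2)/2 dtheta: the area between the two curves
    # when they are drawn in polar coordinates.
    integrand = (h ** 2 - g ** 2) / 2
    return float(np.sum(integrand[:-1]) * dtheta)

# "Filled" version: g = 0, h = sqrt(2 f).
area_filled = polar_area(np.zeros_like(theta), np.sqrt(2 * f))

# "Symmetric" version with lambda = sqrt(1 + sqrt(2)).
lam = np.sqrt(1 + np.sqrt(2))
area_symmetric = polar_area(np.sqrt(f) / lam, lam * np.sqrt(f))
```

Both areas come out as 1, the total probability, because in each case $h(\theta)^2-g(\theta)^2=f(\theta)$ pointwise.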
36,198
How to model the probability of a truth claim given an arrangement of eyewitness accounts supporting specific instances of that claim?
TL;DR The paper Holder, Hume on Miracles: Bayesian Interpretation, Multiple Testimony, and the Existence of God deals with Bayesian updating based on witness reports from multiple events. Holder only considers the case of two events, so I will adapt his calculations to the general case. I expected, intuitively, that (other things being equal) a single event reported by 100 witnesses has a higher posterior than 100 events reported by a single witness each. This is not what happens in general. The picture is more complicated, and the answer depends on a certain inequality between the reliability of the witnesses and the Bayesian prior. When there are sufficiently many reports, the witnesses are reasonably reliable, and the prior is very low (as with alien abductions and miracles), the all-in-one distribution of testimonies over events is better, but with less reliable witnesses and/or higher priors this reverses. It looks like for large $n$ the posteriors of intermediate distributions line up in between the all-in-one and one-in-all, but I did not study this closely.

Assumptions I assume that all events $E_i$ are equally probable and independent with the prior probability $p=p(E_i)$. Each one is reported by one or more witnesses, and the testimonies are denoted $T_j$. I also assume that the testimonies are independent of each other and of events they are not testimonies for.

Single event When there is a single event $E$ and a single testimony $T$ the probability is updated according to the standard formula $$p':=p(E\mid T)=\frac1{1+\frac{p(E^c)}{p(E)}\cdot\frac{p(T\mid E^c)}{p(T\mid E)}}=\frac1{1+\frac{1-p}{p}\cdot\frac{p(T\mid E^c)}{p(T\mid E)}}.$$ Let us denote $a:=\frac{p(E^c)}{p(E)}=\frac{1-p}{p}$, the prior odds against the event, and $r:=\frac{p(T\mid E^c)}{p(T\mid E)}$ the ratio of the witness's reliability (or rather unreliability): if $r>1$ then the witness is more likely to report the event if it did not happen than if it did. 
By simple calculation, $a':=\frac{1-p'}{p'}=ar$ are the updated odds against. So after $n$ updates (based on independent testimonies with identical statistical characteristics) we get $a^{(n)}=ar^n$ and $$p^{(n)}=\frac1{1+ar^n}.\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \text{(1)}$$ Note that for unreliable witnesses ($r>1$) this posterior decreases rather than grows with $n$, converging to $0$ when $n\to\infty$. This is because for $r>1$ their reporting actually makes the event less likely. Multiple events Following Holder, the probability we are looking for here is $p(E_1\cup\dots\cup E_m\mid T_1,\dots,T_n)$ — that at least one of the reported events happened. This is not the probability of $X$ conditioned on $T_j$, but the leftover probability (of $X$ without any confirming events) is the same whether we are updating based on one or multiple events. So comparing to $p(E\mid T_1,\dots,T_n)$ gets us what we want. By the complement rule, and taking into account independence of $E_i$: $$p(E_1\cup\dots\cup E_m\mid T_1,\dots,T_n)=1-\prod_{i=1}^mp(E_i^c\mid T_1,\dots,T_n).$$ Suppose we have $n_1$ testimonies for $E_1$, $n_2$ for $E_2$, and so on, $n=n_1+\dots+n_m$. Assuming again that all statistical characteristics are identical, and each testimony influences the probability of its event only, we have $$p(E_1\cup\dots\cup E_m\mid T_1,\dots,T_n) =1-\prod_{i=1}^m(1-p^{(n_i)})\\ =1-\frac{a^mr^n}{\prod_{i=1}^m(1+ar^{n_i})}.\ \ \ \ \ \ \ \ \ \ \text{(2)}$$ When $m=1,n_1=n$ we recover the single event formula. Comparison Since comparing posteriors across the full range of $m,n,a,r$ looks hairy I will restrict to the extreme cases, $m=1$, $m=n$ (single event reported by $n$ witnesses, and $n$ events reported by a single witness each), and consider only the case of large $n$, which is arguably where the results become meaningful. 
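The closed forms $(1)$ and $(2)$ are easy to check numerically. A minimal Python sketch (the helper names are my own, not from the paper), which also verifies $(1)$ against step-by-step odds updating:

```python
# Sketch of formulas (1) and (2) from the text; helper names are my own.
# a = prior odds against each event, r = p(T|E^c)/p(T|E).

def posterior_single(a, r, n):
    """Formula (1): posterior of one event after n independent testimonies."""
    return 1.0 / (1.0 + a * r**n)

def posterior_multiple(a, r, counts):
    """Formula (2): P(at least one event | testimonies), where counts
    gives the number of testimonies (n_1, ..., n_m) per event."""
    prob_all_false = 1.0
    for n_i in counts:
        # 1 - p^(n_i) = a r^(n_i) / (1 + a r^(n_i))
        prob_all_false *= 1.0 - posterior_single(a, r, n_i)
    return 1.0 - prob_all_false

# Check (1) against sequential updating: each testimony multiplies
# the odds against by r.
a, r = 99.0, 0.5            # prior p = 0.01, reasonably reliable witnesses
odds_against = a
for _ in range(10):
    odds_against *= r
assert abs(1.0 / (1.0 + odds_against) - posterior_single(a, r, 10)) < 1e-12
```

With `counts=[n]` this recovers the single-event formula, and with `counts=[1]*n` it recovers formula $(3)$ below.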
The posterior for the latter reduces to $$\widetilde{p}^{(n)} =1-\left(\frac{ar}{1+ar}\right)^n.\ \ \ \ \ \ \ \ \ \ \text{(3)}$$ In contrast to $p^{(n)}$, this posterior converges to $1$ for any values of $a,r>0$, even when the witnesses are unreliable with $r>1$. This already tells us that for large $n$ this posterior is closer to $1$ when $r>1$. When $r<1$, we have from calculus that $p^{(n)}=\frac1{1+ar^n}\sim1-ar^n$ for large $n$, so which one of $(1),\ (3)$ is larger for large $n$ is determined by the direction of the inequality between the bases of the exponents, $r$ and $\frac{ar}{1+ar}$. In particular, for $p^{(n)}$ to dominate we need $a(1-r)>1$. This will be the case if the prior odds against our events are high ($a\gg1$, which is, presumably, the case for alien abductions and miracles), and the witnesses favor what actually happened ($r\ll1$). Discussion In the case of multiple events with single reports even anti-witnesses (with $r>1$), who drove the probability down to $0$ for a single event, will now drive it up to $1$. This is simply the consequence of the independence of $E_i$ and the fact that even anti-witnesses leave the posterior of each event positive. Looking at $(2)$, it seems that the all-in-one and one-in-all distributions are the optimal ones for large $n$ (due to exponential dichotomy), but I did not prove this rigorously. One has the maximal posterior (other things being equal), the other the minimal, and the rest line up in between. Which is which is determined by the inequality $a(1-r)>1$. Independence across the board is assumed above to make the calculations tractable, and is not realistic. For example, Holder calls assuming independence of $E_i$ "too simplistic" because "if we know that one miracle has occurred then our reasoning to the intrinsic improbability of miracles in general is wrong, and we should instead assume that they are likely". 
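The comparison, including the threshold $a(1-r)>1$, can be illustrated in a few lines. A sketch with my own helper names and parameter values chosen purely for illustration:

```python
# Illustrative sketch (parameter values are my own): comparing the
# all-in-one posterior (1) with the one-in-all posterior (3) for large n.

def all_in_one(a, r, n):
    """Formula (1): one event, n testimonies."""
    return 1.0 / (1.0 + a * r**n)

def one_in_all(a, r, n):
    """Formula (3): n events, one testimony each."""
    return 1.0 - (a * r / (1.0 + a * r))**n

n = 100

# High odds against, reliable witnesses: a*(1-r) = 999*0.9 >> 1,
# so the all-in-one distribution dominates.
a, r = 999.0, 0.1
assert all_in_one(a, r, n) > one_in_all(a, r, n)

# Unreliable witnesses (r > 1): all-in-one decays to 0, but
# one-in-all still converges to 1.
a, r = 9.0, 1.5
assert all_in_one(a, r, n) < one_in_all(a, r, n)
```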
Assuming dependence will reduce the posterior for multiple events because each will contribute less to it, so in conditions that make all-in-one distribution optimal it will remain so. However, interdependence of testimonies for the same event is also more than likely, and would reduce that posterior. How all of this balances out in the end depends on how these dependencies are quantified, which is hard for me to guess. Another confounding factor is evidence against. For example, from testimonies that confirm the validity of natural laws (in case of miracles) or debunk alleged encounters (in case of alien abductions). I suppose some of that goes into the inscrutable prior, but the quantitative effect on the posterior is hard to assess. I suspect it is a large part of why miracles and alien abductions are not widely believed despite the posteriors approaching $1$ for large $n$.
You use an expression like 'P(E1 is true | N eyewitness accounts for E1)', but I assume that the question is not whether E1 is true but whether X is true. You might instead consider the probability that $X$ is true given the numbers of eyewitnesses $N_1, N_2, \dots$ $$P(\text{$X$ true} | N_1, N_2, \dots, N_k)$$ This can be expressed with Bayes' rule $$P(\text{$X$ true} | N_1, N_2, \dots, N_k) = \frac{P(N_1, N_2, \dots, N_k | \text{$X$ true})}{P(N_1, N_2, \dots, N_k ) } P(\text{$X$ true})$$ This expresses the posterior probability or belief (given the eyewitnesses) in terms of the prior probability or belief (without the eyewitnesses). How much $P(\text{$X$ true})$ changes into $P(\text{$X$ true} | N_1, N_2, \dots, N_k)$ depends on the probability of the eyewitness counts given that $X$ is true versus given that $X$ is false. You can also express it as a ratio with the probability $P(\text{$X$ false})$ $$\frac{P(\text{$X$ true} | N_1, N_2, \dots, N_k) }{P(\text{$X$ false} | N_1, N_2, \dots, N_k) } = \frac{P(\text{$X$ true})}{ P(\text{$X$ false})} \cdot \frac{P(N_1, N_2, \dots, N_k | \text{$X$ true})}{P(N_1, N_2, \dots, N_k | \text{$X$ false})} $$ The odds for $X$ being true or false change depending on whether the eyewitness counts are more or less probable given $X$ being true or false. The problem in cases of 'esoteric theories' like alien abductions is that the prior odds are very low. "Extraordinary claims require extraordinary evidence", and observations like eyewitness reports do not change the odds much. When alien abductions are real, it is very probable that there are eyewitnesses. But when alien abductions are not real, it is also very probable that there are eyewitnesses. (That the claim is false does not mean there won't be eyewitnesses who in some way believe they saw or experienced an abduction.) Also problematic is that these mathematical formulations don't capture the entire situation very well.
Aside from the number of eyewitnesses, the quality of the eyewitnesses also matters. How did people experience and witness the events? For instance, during the broadcast of The War of the Worlds in 1938, many people thought that aliens were landing on Earth. But they all saw (listened to) the same event, and the event itself was fake. The number of witnesses is of little use if the underlying event is not informative. It also matters whether the claimed event contradicts other prior beliefs (for instance, do the spaceships and aliens disobey known physical laws or not?). Still, for some less extreme theories it may be possible to fill in some of the terms. An example could be finding the landing place of a crashing airplane that went undetected by radar. Then the theories to consider are not whether the plane crashed, but rather how it had been flying. In that case observations together with triangulation may narrow the distribution of theories and pinpoint the crash site. For events like meteors and earthquakes such eyewitness accounts are in fact gathered. (I am not sure how useful this is for meteors, since a camera usually captures them now; in the case of earthquakes it can help to learn about the spread and impact of the quake.)
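To put a number on "extraordinary claims require extraordinary evidence", here is a small sketch of the odds-ratio form above (the numbers are made up for illustration): with a tiny prior and eyewitness reports nearly as likely under "false" as under "true", the posterior barely moves.

```python
# Numeric sketch of the odds-ratio update above; all numbers are
# illustrative, not estimates of anything real.

prior_odds = 1e-6            # P(X true) / P(X false): very low prior

# Likelihood ratio P(testimonies | X true) / P(testimonies | X false).
# Eyewitness reports arise easily either way, so the ratio is close to 1.
likelihood_ratio = 2.0

posterior_odds = prior_odds * likelihood_ratio
posterior_prob = posterior_odds / (1.0 + posterior_odds)
# posterior_prob is still around 2e-6: the evidence moved the odds
# by a factor of 2, nowhere near the factor of ~1e6 needed.
```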
There’s a trade-off based on the correlations among observers, with a possible quantitative argument and a clearer qualitative case. A quantitative approach might consider $n$ observers in $k$ correlated groups, where the correlations relate the probability of an event observed by both A and B to the probability of an event observed by A times the probability of an event observed by B. You can then answer the question numerically if you can estimate the correlations within groups and between groups. More qualitatively, suppose many people report a UFO at a particular time, but all from one neighborhood, all wearing similar eyewear, all consuming the same news source that mentions certain topics, all subject to similar reactions from friends, family, and colleagues. In that case, even a large number of reports might not be convincing; reports from a more diverse group would be more persuasive. By this logic, should you be more persuaded that an event happened if A and B report the same event, or if they report different events? You may consider A and B so correlated that you find an event roughly equally likely whether one or both report it; in that case you should be more convinced an event occurred when they report separate events. Or you may consider A and B so different that their agreement helps overcome your background skepticism; in that case you should be more convinced when they report the same event. The optimal number of groups in the question likewise depends on your evaluation of the different observers and their correlations.
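The extreme case of this quantitative approach can be sketched in a few lines (the model and numbers are my own simplification, not from the answer above): if witnesses within a group are perfectly correlated, each group contributes only one effective independent report, so with $k$ groups the odds against update by $r^k$ rather than $r^n$, using $a$ and $r$ as defined in the first answer.

```python
# Sketch of the perfectly-correlated-groups extreme; model and
# parameter values are illustrative assumptions, not from the text.
# a = prior odds against the event, r = p(T|E^c)/p(T|E) < 1.

def posterior_with_groups(a, r, k):
    """Posterior for one event given k fully correlated witness groups,
    each group counted as a single effective independent testimony."""
    return 1.0 / (1.0 + a * r**k)

a, r, n = 99.0, 0.5, 100
one_group   = posterior_with_groups(a, r, 1)    # 100 witnesses, 1 group
ten_groups  = posterior_with_groups(a, r, 10)   # 10 groups of 10
independent = posterior_with_groups(a, r, n)    # fully independent witnesses

# Diversity strengthens the evidence: more independent groups,
# higher posterior (for r < 1).
assert one_group < ten_groups < independent
```

Real witnesses sit between the two extremes, which is where estimating the within- and between-group correlations becomes necessary.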