Boxplot for several distributions?
Beanplots

Possibly the coolest plots ever: these are basically a small-multiples implementation of violin plots. Violin plots have a massive advantage over boxplots: they can show a lot more detail for distributions that aren't normal (e.g. they can show bi-modal distributions really well). Because they're usually based on Gaussian smoothing (or similar), they won't work really well for distributions with hard endpoints (like exponential distributions), but then, neither will boxplots. Beanplots can be achieved very easily in R - just install the beanplot package:

library(beanplot)
# Sampling code from Greg Snow's answer:
my.dat <- lapply(1:20, function(x) rnorm(x + 10, sample(10, 1), sample(3, 1)))
beanplot(my.dat)

The beanplot function has tons of options, so you can customise it to your heart's desire. There's also a way to do beanplots in ggplot2 (you need a recent version; note that melt() comes from the reshape2 package):

library(ggplot2)
library(reshape2)
my.dat <- lapply(1:20, function(x) rnorm(x + 10, sample(10, 1), sample(3, 1)))
my.df <- melt(my.dat)  # gives columns 'value' and 'L1' (the list index)
ggplot(my.df, aes(x = L1, y = value, group = L1)) +
  geom_violin(trim = FALSE) +
  geom_segment(aes(x = L1 - 0.1, xend = L1 + 0.1, y = value, yend = value), colour = 'white')
Boxplot for several distributions?
Here is some sample R code for a couple of ways to do it; you will probably want to expand on this (include labels etc.) and maybe turn it into a function:

my.dat <- lapply(1:20, function(x) rnorm(x + 10, sample(10, 1), sample(3, 1)))
tmp <- boxplot(my.dat, plot = FALSE, range = 0)

# box and median only
plot(range(tmp$stats), c(1, length(my.dat)), xlab = '', ylab = '', type = 'n')
segments(tmp$stats[2, ], seq_along(my.dat), tmp$stats[4, ])
points(tmp$stats[3, ], seq_along(my.dat))

# whiskers and implied box
plot(range(tmp$stats), c(1, length(my.dat)), xlab = '', ylab = '', type = 'n')
segments(tmp$stats[1, ], seq_along(my.dat), tmp$stats[2, ])
segments(tmp$stats[4, ], seq_along(my.dat), tmp$stats[5, ])
points(tmp$stats[3, ], seq_along(my.dat))
Boxplot for several distributions?
You have decided to use a boxplot to summarize each of the 20 distributions. A boxplot is a five-number summary: it shows the minimum, lower quartile, median, upper quartile, and maximum. It has advantages (it's simple and well-known) as well as disadvantages (how should we do boxplots with small samples?). There are other ways to visualize a sample from a distribution and, depending on your actual data, one of the alternatives may work better than a boxplot: if there are only a few observations from each distribution, you can show the raw data in a Wilkinson dot plot (aka a stacked dot plot); if the samples are larger (at least ~30 points per distribution), you can make a histogram (with 10-12 bars, since there are 20 histograms to show) or a kernel density estimate (a smooth histogram). There is also the question of how to put together the individual representations of the 20 distributions. A small-multiples graphic, with several panels arranged in a grid, can be very effective. In your case there will be 20 panels, one for each distribution, so it's necessary to pare down the details in each panel. However, the juxtaposition with common x- and y-axes makes comparisons between distributions very intuitive. Here is an illustration. I draw 50 observations from each of 20 distributions: 16 distributions are Normal(0,1) and 4 are Normal(1,2). I estimate and plot the density of each sample and use color to highlight the "unusual" distributions. You can use color to indicate different experimental conditions or the levels of a categorical variable, if relevant. This example is inspired by David MacKay's Information Theory, Inference, and Learning Algorithms. This book is full of amazing graphics, all without color. Chapter 21 has several small-multiples plots which represent different combinations of the mean $\mu$ and variance $\sigma^2$ of the Normal distribution, in a mixture of two Normals.

[1] D. J. MacKay. Information Theory, Inference, and Learning Algorithms (2003). Available free online.
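The illustration described above can be sketched roughly as follows (the seed, panel layout, and colors are my own choices, not from the original figure):

```r
# Small multiples of kernel density estimates: 16 samples from N(0,1),
# 4 "unusual" samples from N(1, sd = 2), highlighted in red.
set.seed(1)
unusual <- sample(20, 4)  # which panels get the shifted distribution
dat <- lapply(1:20, function(i) {
  if (i %in% unusual) rnorm(50, mean = 1, sd = 2) else rnorm(50)
})
op <- par(mfrow = c(4, 5), mar = c(1, 1, 1, 1))  # 4x5 grid, minimal margins
for (i in 1:20) {
  plot(density(dat[[i]]), main = '', xlab = '', ylab = '', axes = FALSE,
       xlim = c(-6, 8), ylim = c(0, 0.5),  # common axes for comparability
       col = if (i %in% unusual) 'red' else 'black')
  box()
}
par(op)
```

Because every panel shares the same x- and y-limits, the four shifted, wider densities stand out immediately.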
Machine Learning VS Statistical Learning vs Statistics [duplicate]
Statistics is a mathematical science that studies the collection, analysis, interpretation, and presentation of data. Statistical/Machine Learning is the application of statistical methods (mostly regression) to make predictions about unseen data. Statistical Learning and Machine Learning are broadly the same thing. The main distinction between them is in the culture.
Machine Learning VS Statistical Learning vs Statistics [duplicate]
Think of the set of questions one can ask about data as living on a simplex, where the vertices represent confirmatory questions, exploratory questions, and predictive questions about the data. Here is a visual aid I've taken from a course my supervisor has taught. Included are some questions that could be asked about data concerning how many people are in our university's gym at a given time on a given day. One way to think about statistics vs ML is by partitioning the simplex like so. I think this is a good way to think about the difference between statistics and ML. As for statistical learning, I would put it somewhere in the purple region: methods for prediction or data mining which seem to be motivated by traditional statistical tools. The definition is highly variable and depends on who you ask. Consequently, the distinction is of little practical importance.
Machine Learning VS Statistical Learning vs Statistics [duplicate]
Does this image clear it up? Source: https://www.datasciencecentral.com/profiles/blogs/machine-learning-vs-statistics-in-one-picture
Machine Learning VS Statistical Learning vs Statistics [duplicate]
I have not studied this but as far as I can tell statistical learning and machine learning are the same thing. One can make inferences and predictions in both statistical learning and inferential statistics, but the goal in statistical learning tends to be prediction over inference, whereas the reverse is true of inferential statistics. The main difference between statistical learning and inferential statistics as I understand it is the method. In statistical learning data are split into a training set and test set, and the model learns from the training set how to maximise the accuracy of prediction of the test set. This may involve cross validation etc. In inferential statistics there is no splitting of data and training of models in any formal way.
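The split-and-evaluate workflow just described can be sketched with made-up toy data and a plain linear model (all names and numbers here are illustrative, not from the answer):

```r
# Illustrative train/test split: fit on the training set,
# measure predictive accuracy (RMSE) on the held-out test set.
set.seed(42)
n <- 200
x <- runif(n)
y <- 2 + 3 * x + rnorm(n)               # toy data with a known linear relationship
train_idx <- sample(n, size = 0.7 * n)  # 70/30 split
train <- data.frame(x = x, y = y)[train_idx, ]
test  <- data.frame(x = x, y = y)[-train_idx, ]
fit  <- lm(y ~ x, data = train)         # "learn" from the training set only
pred <- predict(fit, newdata = test)
rmse <- sqrt(mean((test$y - pred)^2))   # out-of-sample prediction error
rmse
```

The key point is that `fit` never sees the test rows, so `rmse` estimates how well the model predicts unseen data, which is exactly the emphasis that distinguishes the statistical-learning workflow from classical inference.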
What's the interpretation of ranef, fixef, and coef in mixed effects model using lmer?
fixef() is relatively easy: it is a convenience wrapper that gives you the fixed-effect parameters, i.e. the same values that show up in summary(). Unless you are specifying your model in a very particular way, these are not the "mean values corresponding to what treatment was given" as suggested in your question; rather they are contrasts among treatments. Using R's default setup, the first parameter ("Intercept") is the mean response for the first treatment level, while the remaining parameters are the differences between the mean responses for levels 2 and higher and the mean response for level 1. (From Jake Westfall in comments: "Another way of explaining fixef() is that it returns essentially the same thing as when you call coef() on an lm regression object -- that is, it returns the (mean) regression coefficients.")

ranef() gives the conditional modes, that is, the difference between the (population-level) average predicted response for a given set of fixed-effect values (treatment) and the response predicted for a particular individual. You can think of these as individual-level effects, i.e. how much does any individual differ from the population? They are also, roughly, equivalent to the modes of the Bayesian posterior densities for the deviations of the individual group effects from the population means (but note that in most other ways lme4 is not giving Bayesian estimates). It's not that easy to give a non-technical summary of where the conditional modes come from; technically, they are the solutions to a penalized weighted least-squares estimation procedure. Another way of thinking of them is as shrinkage estimates: they are a compromise between the observed value for a particular group (which is what we would estimate if the among-group variance were infinite, i.e. we treated groups as fixed effects) and the population-level average (which is what we would estimate if the among-group variance were 0, i.e. we pooled all groups), weighted by the relative proportions of variance that are within vs among individuals. For further information, you can search for a non-technical explanation of best linear unbiased predictions (or "BLUPs"), which are equivalent to the conditional modes in this (simple linear mixed model) case.

coef() gives the predicted effects for each individual; in the simple example you give, coef() is basically just the value of fixef() applicable to each individual plus the value of ranef(). I agree with comments that it would be wise to look for some more background material on mixed models: Gelman and Hill's Data Analysis Using Regression and Multilevel/Hierarchical Models; Pinheiro and Bates, Mixed-Effects Models in S and S-PLUS; various books by Zuur et al.; McElreath's Statistical Rethinking; and (shameless plug) chapter 13 in Fox et al., Ecological Statistics.
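As a concrete illustration of the fixef()/ranef()/coef() relationship, here is a short sketch using lme4's built-in sleepstudy data (my own choice of example dataset, not from the original answer):

```r
# Illustrating coef() = fixef() + ranef() with lme4's built-in
# sleepstudy data (random intercept for each Subject).
library(lme4)
m <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
fixef(m)                 # population-level (fixed-effect) coefficients
head(ranef(m)$Subject)   # conditional modes: per-subject deviations
head(coef(m)$Subject)    # per-subject coefficients
# Check: each subject's intercept in coef() is the fixed intercept
# plus that subject's conditional mode.
all.equal(coef(m)$Subject[, "(Intercept)"],
          fixef(m)["(Intercept)"] + ranef(m)$Subject[, "(Intercept)"],
          check.attributes = FALSE)
```

Note also how the per-subject intercepts in coef() are pulled toward the population intercept relative to the raw subject means: that is the shrinkage described above.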
What's the interpretation of ranef, fixef, and coef in mixed effects model using lmer?
The model fitted with lmer(response ~ treatment + (1|Person)) may be expressed in matrix form as
$$ y = X\beta + Zu + e $$
where $\beta$ is the fixed-effect vector, $u$ the random-effect vector, and $e$ the vector of error terms. The function getME() in lme4 may be used to extract both $X$ and $Z$ for a model. For the model in discussion, we apply the typical assumptions of normality, zero means for the random elements, and
$$ \mathrm{Var}(u) = \sigma_u^2 I, \quad \mathrm{Var}(e) = \sigma_e^2 I, \quad \mathrm{Cov}(u, e') = 0. $$
Therefore,
$$ \mathrm{Var}(y) = \sigma_u^2 ZZ' + \sigma_e^2 I. $$
Since the question was specifically about how the "estimated" random effects $u$ are calculated, I assume there is an understanding of how $\hat\beta$, $\hat\sigma_u^2$, and $\hat\sigma_e^2$ are obtained by restricted maximum likelihood (REML), which is the default for lmer(). As mentioned by others, $u$, being random, can be predicted by the BLUP method. Conceptually, the key is to find an estimator not of $u$ but of the conditional mean of $u$ given $y$, denoted $E(u|y)$. To do this, observe that
$$ \begin{pmatrix} y \\ u \end{pmatrix} = \begin{pmatrix} X\beta \\ 0 \end{pmatrix} + \begin{pmatrix} Z & I \\ I & 0 \end{pmatrix} \begin{pmatrix} u \\ e \end{pmatrix} \sim N\!\left( \begin{pmatrix} X\beta \\ 0 \end{pmatrix}, \begin{pmatrix} \sigma_u^2 ZZ' + \sigma_e^2 I & \sigma_u^2 Z \\ \sigma_u^2 Z' & \sigma_u^2 I \end{pmatrix} \right). $$
Therefore, by the properties of the multivariate normal distribution,
$$ E(u|y) = Z'\left( ZZ' + \frac{\sigma_e^2}{\sigma_u^2} I \right)^{-1} (y - X\beta). $$
It then seems natural to plug the REML estimators $\hat\beta$, $\hat\sigma_u^2$, and $\hat\sigma_e^2$ into this formula for $E(u|y)$ as a predictor of the unobserved $u$. Theoretical work has established that this estimator is indeed "the" BLUP of $u$. I do not know the internals of lmer() well enough to claim that it uses this algorithm; however, upon testing, the algorithm did produce numbers consistent with ranef().
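The plug-in formula for $E(u|y)$ can be checked numerically. This sketch (my own, using lme4's built-in sleepstudy data as an example random-intercept model) extracts the pieces with getME() and compares the hand-computed conditional modes to ranef():

```r
# Verifying E(u|y) = Z'(ZZ' + (s2e/s2u) I)^{-1} (y - X beta)
# numerically against ranef(), for a random-intercept model.
library(lme4)
m    <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
X    <- getME(m, "X")
Z    <- as.matrix(getME(m, "Z"))
y    <- getME(m, "y")
beta <- fixef(m)
s2u  <- as.numeric(VarCorr(m)$Subject)  # sigma_u^2 (REML estimate)
s2e  <- sigma(m)^2                      # sigma_e^2 (REML estimate)
u_hat <- t(Z) %*% solve(Z %*% t(Z) + (s2e / s2u) * diag(length(y)),
                        y - X %*% beta)
# The hand-computed conditional modes should match ranef():
cbind(manual = u_hat[1:3], ranef = ranef(m)$Subject[1:3, 1])
```

With a single grouping factor, the columns of $Z$ are ordered by the levels of Subject, so the rows of `u_hat` line up with the rows of `ranef(m)$Subject`.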
When was importance sampling first stated?
Edit: Looks like sk1ll3r is correct that Kahn introduced the method a year earlier than this answer suggests. I haven't checked if the term "importance sampling" appears in the earlier references, though. Original answer: It looks like the term was first proposed in Kahn, H. and Harris, T. E. (1951). Estimation of particle transmission by random sampling. National Bureau of Standards applied mathematics series, 12:27–30. https://dornsifecms.usc.edu/assets/sites/520/docs/kahnharris.pdf Here's the relevant section (from the first column of the first page):
When was importance sampling first stated?
I also found an article "Use of different Monte Carlo sampling techniques" from Herman Kahn, which is dated 30 November 1955 and which enumerates variance reduction techniques; Importance sampling is one of them. (https://www.rand.org/content/dam/rand/pubs/papers/2008/P766.pdf)
When was importance sampling first stated?
One earlier reference is the Hammersley & Handscomb (1964) monograph on Monte Carlo methods (http://www.worldcat.org/title/monte-carlo-methods/oclc/312077), which details the technique on page 57 and following, naming it "importance sampling". This reference was used in a 1976 article by Siegmund (http://www.jstor.org/stable/2958179). Unfortunately, Hammersley & Handscomb do not give any reference for the method, so another earlier precedent, possibly a paper, might exist. You can find a PDF of this monograph quite easily, as it has been uploaded to several educational sites. EDIT: Following Adela's answer, I found another reference: a Kahn and Marshall paper from 1953: http://www.jstor.org/stable/166789 ("importance sampling" is explained from page 269 onwards). Again, the way it's described suggests there is some earlier precedent.
When was importance sampling first stated?
Art Owen attributes importance sampling to Kahn's papers from 1950 [1, 2] in Chapter 9's end notes of his Monte Carlo book [3].

[1] Herman Kahn. Random sampling (Monte Carlo) techniques in neutron attenuation problems, I. Nucleonics, 6(5):27–37, 1950a.
[2] Herman Kahn. Random sampling (Monte Carlo) techniques in neutron attenuation problems, II. Nucleonics, 6(6):60–65, 1950b.
[3] Art B. Owen. Monte Carlo theory, methods and examples. 2013.
These 1949 references all contain mentions of importance sampling:

Goertzel, G. 1949. "Quota Sampling and Importance Functions in Stochastic Solution of Particle Problems." Technical Report ORNL-434, 21 June 1949. Oak Ridge National Laboratory, Oak Ridge, TN.
Goertzel, G., and H. Kahn. 1949. "Monte Carlo Methods for Shield Computation." Technical Report ORNL-429, December 1949. Oak Ridge National Laboratory, Oak Ridge, TN.
Kahn, H. 1949. "Stochastic (Monte Carlo) Attenuation Analysis." Technical Report R-163, 14 June 1949. The Rand Corporation, Santa Monica, CA.
Kahn, H., and T. E. Harris. 1949 (?). "Estimation of Particle Transmission by Random Sampling." Proceedings of a Symposium Held June 29, 30, and July 1, 1949, in Los Angeles, California, under the sponsorship of the Rand Corporation and the National Bureau of Standards, with the cooperation of the Oak Ridge National Laboratory; later published in 1951 in Monte Carlo Method, Volume 12 of Applied Mathematics Series, 27–30: National Bureau of Standards.
How to implement dummy variable using n-1 variables?
In practice, one usually lets one's software of choice handle creating and manipulating the dummy variables. There are several ways it might be handled; here are several common possibilities for a data set with four observations, one at each level of A, B, C, and D. These are different parameterizations; they result in exactly the same model fit, but with different interpretations of the parameters. One can easily convert from one to another using basic algebra; note they are all linear combinations of each other; in fact, any linear combination can be used.

Use differences from the first level (default in R):

A 0 0 0
B 1 0 0
C 0 1 0
D 0 0 1

Use differences from the last level (default in SAS):

A 1 0 0
B 0 1 0
C 0 0 1
D 0 0 0

Use "sum" contrasts:

A  1  0  0
B  0  1  0
C  0  0  1
D -1 -1 -1

Use "Helmert" contrasts:

A -1 -1 -1
B  1 -1 -1
C  0  2 -1
D  0  0  3
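The claim that these parameterizations give exactly the same fit can be checked numerically: with an intercept column added, the design matrices span the same column space, so they share the same projection ("hat") matrix and hence the same fitted values for any response. A minimal sketch in Python/NumPy (the level assignment here is made up for illustration):

```python
import numpy as np

# Contrast matrices from above (rows correspond to levels A, B, C, D)
treatment = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])      # R default
sum_c     = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, -1, -1]])   # "sum" contrasts

levels = np.array([0, 0, 1, 1, 2, 2, 3, 3])  # two observations per level

def design(C, levels):
    # intercept column plus the coded factor columns
    return np.hstack([np.ones((len(levels), 1)), C[levels]])

def hat(X):
    # projection matrix onto the column space of X
    return X @ np.linalg.pinv(X)

# Same column space => identical fitted values, whichever coding is used
print(np.allclose(hat(design(treatment, levels)), hat(design(sum_c, levels))))  # True
```

Only the interpretation of the individual coefficients changes between codings; the fit does not.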
Let us assume your variable levels are A, B, C, and D. If you have a constant term in the regression, you need to use three dummy variables, otherwise, you need to have all four. There are many mathematically equivalent ways you can implement the dummy variables. If you have a constant term in the regression, one way is to pick one of the levels as the "baseline" level and compare the other three to it. Let us say, for concreteness, that the baseline level is A. Then your first dummy variable takes on the value 1 whenever the level is B and 0 otherwise; the second takes on the value 1 whenever the level is C and 0 otherwise, and the third takes on the value 1 whenever the level is D and 0 otherwise. Because your constant term is equal to 1 all the time, the first dummy variable's estimated coefficient will be the estimate of the difference between level B and A, and similarly for the other dummy variables. If you don't have a constant term, you can just use four dummy variables, constructed as in the previous example, just adding one for the A level.
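A quick numerical check of this interpretation, sketched in Python/NumPy with made-up data (two observations per level, A as the baseline): the intercept comes out as the mean of A, and each dummy coefficient as that level's mean minus the mean of A.

```python
import numpy as np

levels = np.array([0, 0, 1, 1, 2, 2, 3, 3])          # A,A,B,B,C,C,D,D
y      = np.array([1, 3, 4, 6, 2, 2, 5, 7], float)   # group means: 2, 5, 2, 6

# Constant term plus dummies for B, C, D (A is the baseline)
X = np.column_stack([
    np.ones_like(y),
    (levels == 1).astype(float),   # B
    (levels == 2).astype(float),   # C
    (levels == 3).astype(float),   # D
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# intercept = mean(A) = 2; coefficients = mean(B)-mean(A), mean(C)-mean(A), mean(D)-mean(A)
print(beta)
```

With these numbers the estimates are exactly (2, 3, 0, 4), matching the group-mean differences.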
In R, define the variable as a factor and it will implement it for you:

x <- as.factor(sample(LETTERS[1:4], 20, replace = TRUE))
y <- rnorm(20)
lm(y ~ x)

which returns

Call:
lm(formula = y ~ x)

Coefficients:
(Intercept)           xB           xC           xD
     1.0236      -0.6462      -0.9466      -0.4234

The documentation for 'lm', 'factor', and 'formula' in R fills in some of the details.
whuber told you in the comments that using a 0-3 or 1-4 coding instead of creating dummy variables isn't what you want. I am going to try to explain what you would be doing with that model and why it is wrong.

If you code a variable X such that if A then X=1, if B then X=2, if C then X=3, if D then X=4, then when you do the regression you'll only get one parameter. Let's say it ended up being that the estimated parameter associated with X was 2. This would tell you that the expected difference between the mean of B and the mean of A is 2. It also tells you that the expected difference between the mean of C and the mean of B is 2, and the same for D and C. You would be forcing the differences in the means for these groups to follow this very strict pattern: that one parameter tells you exactly how all of your group means relate to each other. So if you did this kind of coding you would need to assume not only that you got the ordering correct (because in this case if you expect an increase from A to B then you need to expect an increase from B to C and from C to D) but also that the difference is the same!

If instead you do the dummy coding that has been suggested, you're allowing each group to have its own mean - no restrictions. This model is much more sensible and answers the questions you want.
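To see the restriction concretely, here is a small Python/NumPy sketch (data made up for illustration): regressing y on a single numeric predictor coded 1-4 forces the fitted group means to be equally spaced, no matter what the true pattern of means is.

```python
import numpy as np

levels = np.array([1, 1, 2, 2, 3, 3, 4, 4], float)   # A=1, B=2, C=3, D=4
y      = np.array([1, 3, 4, 6, 2, 2, 5, 7], float)   # true group means: 2, 5, 2, 6

# One numeric predictor => a single slope, so every step A->B, B->C, C->D
# gets the same fitted difference.
X = np.column_stack([np.ones_like(y), levels])
b, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted_means = b[0] + b[1] * np.array([1, 2, 3, 4])
print(fitted_means)         # equally spaced, unlike the true means 2, 5, 2, 6
print(np.diff(fitted_means))  # constant step = the single slope b[1]
```

The dummy-coded model, by contrast, reproduces the four group means exactly.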
What is standard error used for?
Error bars in general are there to convince the plot reader that the differences she/he sees on the plot are statistically significant. As an approximation, you may imagine a small Gaussian whose $\pm1\sigma$ range is shown as this error bar -- "visual integration" of the product of two such Gaussians gives more or less the chance that the two values are really equal. In this particular case, one can see that both the difference between the red and violet bars and the difference between the gray and green bars are not too significant.
In general, the standard error tells you how uncertain you are that the true value of the top of the bar is where the bar says it is. When there are multiple bars, it can also enable comparisons between bars, in the sense of a statistical test. However, interpreting them in this way requires some assumptions, shown graphically below. If you are really interested in comparing the bars to see if the differences are statistically significant, then you should run tests on the data and display which tests were significant, like this. In addition, I would suggest using confidence intervals rather than standard errors.

This paper is well worth the read: Cumming and Finch. "Inference by Eye: Confidence Intervals and How to Read Pictures of Data." Am Psych. Vol. 60, No. 2, 170–180. Their overall conclusion is: "Seek bars that relate directly to effects of interest, be sensitive to experimental design, and interpret the intervals."

For independent samples, using confidence intervals, half overlap of the CIs means the difference is statistically significant. For independent samples using standard error bars instead, the following graph shows you how to figure out statistical significance:
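As a rough sketch of the kind of comparison the error bars invite (illustrative numbers only, and assuming independent groups): the standard error of the difference between two means combines the two standard errors, and the difference divided by that combined SE gives an approximate z statistic.

```python
import math

# Hypothetical summaries for two bars: mean, sd, n
m1, s1, n1 = 10.0, 2.0, 25
m2, s2, n2 = 11.5, 2.5, 25

se1 = s1 / math.sqrt(n1)                # standard error of each mean
se2 = s2 / math.sqrt(n2)
se_diff = math.sqrt(se1**2 + se2**2)    # SE of the difference of independent means
z = (m2 - m1) / se_diff
print(round(se1, 2), round(se2, 2), round(z, 2))  # 0.4 0.5 2.34
```

This is why simple overlap of $\pm 1$ SE bars is too conservative a criterion: bars can overlap while the difference is still significant.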
As mbq says, error bars are a way of letting your readers get a feel for whether the differences between two groups are significant - i.e. whether the variation within each of your groups is small enough for you to believe that the difference you've found between the group means is real. All else being equal, larger error bars mean more within-group variation, but it looks like the y-axis of your plot is log-transformed, so the lower groups aren't quite on the same scale as the higher ones.

You should be aware that many of your readers won't understand what error bars represent, even if you explicitly explain it! Often you can achieve the same effect with a jittered dot-plot or a boxplot (or both together).
Plenty of researchers have trouble interpreting these graphs. See http://scienceblogs.com/cognitivedaily/2008/07/31/most-researchers-dont-understa-1/ for a more detailed elaboration.
What other normalizing transformations are commonly used beyond the common ones like square root, log, etc.?
The Box-Cox transformation includes many of the ones you cited. See this answer for some details: How should I transform non-negative data including zeros? UPDATE: These slides provide a pretty good overview of Box-Cox transformations.
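For reference, the Box-Cox family itself is simple to write down: $y^{(\lambda)} = (y^\lambda - 1)/\lambda$ for $\lambda \neq 0$, and $\log y$ for $\lambda = 0$. A hand-rolled Python sketch (in practice you would let a library estimate $\lambda$, e.g. by maximum likelihood):

```python
import numpy as np

def boxcox(x, lam):
    # Box-Cox power transform for positive data
    x = np.asarray(x, dtype=float)
    if lam == 0:
        return np.log(x)
    return (x**lam - 1.0) / lam

x = np.array([1.0, 2.0, 4.0, 8.0])
print(boxcox(x, 0))     # log transform
print(boxcox(x, 1))     # x - 1, essentially no transformation
print(boxcox(x, 0.5))   # square-root family
```

Varying $\lambda$ continuously interpolates between the familiar log, square-root, and identity transformations (and, for negative $\lambda$, reciprocal-type transformations).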
The first step should be to ask why your variables are non-normally distributed. This can be illuminating. Common findings from my experience:

Ability tests (e.g., exams, intelligence tests, admission tests) tend to be negatively skewed when there are ceiling effects and positively skewed when there are floor effects. Both findings suggest that the difficulty level of the test is not optimised for the sample, being either too easy or too difficult to optimally differentiate ability. It also implies that the latent variable of interest could still be normally distributed, but that the structure of the test is inducing a skew in the measured variable.

Ability tests often have outliers in terms of low scorers. In short, there are many ways to do poorly on a test. In particular, this can sometimes be seen on exams where a small percentage of students have some combination of lack of aptitude and lack of effort that creates very low test scores. This implies that the latent variable of interest probably has a few outliers.

In relation to self-report tests (e.g., personality, attitude tests, etc.), skew often occurs when the sample is inherently high on the scale (e.g., distributions of life satisfaction are negatively skewed because most people are satisfied) or when the scale has been optimised for a sample different to the one the test is being applied to (e.g., applying a clinical measure of depression to a non-clinical sample).

This first step may suggest design modifications to the test. If you are aware of these issues ahead of time, you can even design your test to avoid them, if you see them as problematic.

The second step is to decide what to do in the situation where you have non-normal data. Note that transformations are but one possible strategy. I'd reiterate the general advice from a previous answer regarding non-normality:

Many procedures that assume normality of residuals are robust to modest violations of that assumption.
Bootstrapping is generally a good strategy.
Transformations are another good strategy. Note that from my experience the kinds of mild skew that commonly occur with ability and self-report psychological tests can usually be fairly readily transformed to a distribution approximating normality using a log, sqrt, or inverse transformation (or the reversed equivalent).
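To illustrate the last point, a small simulation in Python/NumPy: a log transform takes a positively skewed sample (here, lognormal for concreteness) to approximate normality, as measured by sample skewness.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)  # strongly positively skewed

def skewness(v):
    # standardized third central moment of a sample
    v = np.asarray(v, dtype=float)
    return np.mean((v - v.mean())**3) / v.std()**3

print(round(skewness(x), 2))          # large and positive
print(round(skewness(np.log(x)), 2))  # near zero after the log transform
```

The same kind of before/after check (skewness, a Q-Q plot) is a quick way to see whether a log, sqrt, or inverse transformation has done its job on real data.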
John Tukey systematically discusses transformations in his book on EDA. In addition to the Box-Cox family (affinely scaled power transformations) he defines a family of "folded" transformations for proportions (essentially powers of x/(1-x)) and "started" counts (adding a positive offset to counted data before transforming them). The folded transformations, which essentially generalize the logit, are especially useful for test scores. In a completely different vein, Johnson & Kotz in their books on distributions offer many transformations intended to convert test statistics to approximate normality (or to some other target distribution), such as the cube-root transformation for chi-square. This material is a great source of ideas for useful transformations when you anticipate your data will follow some specific distribution.
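As a concrete sketch of the folded family (my reading of Tukey's definitions; the function name is made up): the folded power transform of a proportion is $x^p - (1-x)^p$, which is antisymmetric about $1/2$, and the folded logarithm $\log x - \log(1-x)$ is exactly the logit.

```python
import numpy as np

def folded(x, p):
    # Tukey-style "folded" power transform for proportions in (0, 1);
    # p = 0 is taken as the folded logarithm, i.e. the logit
    x = np.asarray(x, dtype=float)
    if p == 0:
        return np.log(x) - np.log(1.0 - x)
    return x**p - (1.0 - x)**p

x = np.array([0.1, 0.5, 0.9])
print(folded(x, 0.5))   # folded square root, antisymmetric about 0.5
print(folded(x, 0))     # logit
```

The antisymmetry is the point: a proportion of 0.9 is transformed to minus the value for 0.1, so the scale treats the two tails of a test-score proportion symmetrically.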
A simple option is to use sums of scores instead of the scores themselves. The sum of many independent scores tends toward normality (by the central limit theorem). For example, in Education you could add a student's scores over a series of tests. Another option, of course, is to use techniques that do not assume normality, which are underestimated and underused.
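A quick simulation of this point in Python/NumPy (the score distribution is made up for illustration): individual right-skewed scores have large sample skewness, while totals over 20 such scores are much closer to symmetric.

```python
import numpy as np

rng = np.random.default_rng(1)
# 20 right-skewed "test scores" per student, 10,000 students
scores = rng.exponential(scale=1.0, size=(10_000, 20))
totals = scores.sum(axis=1)

def skewness(v):
    v = np.asarray(v, dtype=float)
    return np.mean((v - v.mean())**3) / v.std()**3

print(round(skewness(scores[:, 0]), 2))  # single score: strongly skewed
print(round(skewness(totals), 2))        # sum of 20 scores: much nearer zero
```

The totals are not exactly normal with only 20 summands, but they are already far better behaved than any individual score.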
For skewed and heavy-tailed data I use (and developed) the Lambert W x F distribution framework. Skewed and heavy-tailed Lambert W x F distributions are based on a non-linear transform of an input random variable (RV) $X \sim F$ to an output $Y \sim \text{Lambert } W \times F$, which is similar to $X$ but skewed and/or heavy-tailed (see the papers for detailed formulas). This works in general for any continuous RV, but in practice we are mostly interested in Gaussian $X \sim N(\mu, \sigma^2)$. For heavy-tailed Lambert W x F distributions the transform is bijective, and its parameter vector $\theta = (\mu_x, \sigma_x, \delta, \alpha)$ can be estimated from the data using your favorite estimator (MLE, method of moments, Bayesian analysis, ...). For $\alpha \equiv 1$ and Gaussian $X$ it reduces to Tukey's h distribution. As a data transformation this becomes interesting because the transformation is bijective (almost bijective in the skewed case) and the inverse can be obtained explicitly using Lambert's W function (hence the name Lambert W x F). This means we can remove skewness from data and also remove heavy tails (bijectively!). You can try it out using the LambertW R package; the manual shows many examples of how to use it. For applications see these posts: "What's the distribution of these data?" (a full illustration of how to transform data to normality in R using the LambertW package) and "Looking for a distribution where: Mean=0, variance is variable, Skew=0 and kurtosis is variable".
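For the heavy-tail (Tukey h) special case with Gaussian input, the forward transform and its explicit Lambert-W inverse can be sketched directly. This Python version (an illustration, not the LambertW R package) uses `scipy.special.lambertw` with a fixed $\delta$ rather than estimating $\theta$ from data:

```python
import numpy as np
from scipy.special import lambertw
from scipy import stats

rng = np.random.default_rng(2)

delta = 0.2                            # tail parameter (fixed here, not estimated)
u = rng.normal(size=20000)             # latent Gaussian input U
y = u * np.exp(0.5 * delta * u**2)     # heavy-tailed output (Tukey's h)

# Bijective inverse, written with the principal branch of Lambert's W:
# W_delta(z) = sign(z) * sqrt( W(delta * z^2) / delta )
u_back = np.sign(y) * np.sqrt(lambertw(delta * y**2).real / delta)
```

The round trip recovers the Gaussian input exactly, and the excess kurtosis of `y` is large while that of `u_back` is near zero — the heavy tails have been removed bijectively.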
27,228
Operations research versus statistical analysis?
Those are entire academic disciplines, so I do not think you can expect much more here than pointers to further, and more extensive, documentation such as the Wikipedia articles on Operations Research and Statistics. Let me try a personal definition which may be grossly simplifying: Operations Research is concerned with process modeling and optimisation. Statistical Modeling is concerned with describing the so-called 'data generating process': find a model that describes something observed, and then do estimation, inference and possibly prediction.
27,229
Operations research versus statistical analysis?
Operations Research (OR), sometimes called "Management Science", consists of three main topics: Optimization, Stochastic Processes, and Process and Production Methodologies. OR uses statistical analysis in many contexts (for example, discrete event simulations), but the two should not be considered the same. Additionally, one of the main topics in OR is optimization (linear and nonlinear), which makes it clearer why these two fields should be considered distinct. There is another Stack Exchange website for OR if you are interested.
27,230
Operations research versus statistical analysis?
Operations Research began during wartime in the 1940s with scientists and others addressing problems in Radar operations, Anti-Submarine Warfare (ASW), and air operations. It is really a methodology to help decision makers choose a course of action by using an analytic framework that includes statistics, linear and non-linear programming, game theory, decision theory, etc. Statistics is one of many tools it uses.
27,231
Operations research versus statistical analysis?
Operations research is the process of optimizing a problem to get as close as possible to a best value; for example, the transportation method in OR can yield several different answers. Statistics is the process of collection, presentation, analysis, and interpretation of data. Operations research also uses statistical tools such as random number generation, for example in simulation models.
27,232
I've already used my entire dataset in a regression, should I not use that as a prediction model?
With so few cases, train/test splits aren't helpful. You then lose power in training the model and precision in testing it. What you've done so far is fine. You could go on to estimate how well the model is likely to work for prediction by repeating the modeling on multiple bootstrap samples of the data and evaluating performance of those models on the full data set. That's an accepted way to evaluate the performance of your modeling process. One caution: "whether they'll return for a follow-up visit" might not be an all-or-none result. If you deliberately restricted consideration to returning during a fixed period of time like 1 year that could be OK, but in general you might also be interested in how soon they return and you might also want to take advantage of information from individuals who haven't yet been followed up for that fixed period of time. For those sorts of things you would need to use a survival model instead of logistic regression.
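The bootstrap evaluation described above can be sketched in code. This Python/scikit-learn version is a hedged illustration on simulated stand-in data (the predictors, coefficients, and sample size are assumptions, not the actual clinic data): refit the model on bootstrap resamples, score each refit on the full data set, and subtract the average optimism from the apparent performance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)

# Simulated stand-in: 600 "patients", 10 predictors, binary outcome.
X = rng.normal(size=(600, 10))
beta = np.array([1.0, -0.8, 0.5] + [0.0] * 7)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta)))

def fit(Xa, ya):
    return LogisticRegression(max_iter=1000).fit(Xa, ya)

# Apparent performance: fit and evaluate on the full data set.
apparent = roc_auc_score(y, fit(X, y).predict_proba(X)[:, 1])

# Bootstrap optimism: refit on resamples, score each model on the full data.
optimism = []
for _ in range(50):
    idx = rng.integers(0, len(y), size=len(y))
    m = fit(X[idx], y[idx])
    boot_apparent = roc_auc_score(y[idx], m.predict_proba(X[idx])[:, 1])
    full_data = roc_auc_score(y, m.predict_proba(X)[:, 1])
    optimism.append(boot_apparent - full_data)

corrected = apparent - float(np.mean(optimism))   # optimism-corrected AUC
```

The corrected AUC estimates how the *modeling process* (not just this one fitted model) would perform on new patients, without sacrificing any cases to a held-out set.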
27,233
I've already used my entire dataset in a regression, should I not use that as a prediction model?
In my opinion the best course of action, if possible, is to collect more data and then use that data to check your current model as well as, say, the top 5 of the previous models you tried. Continuing with the child-learning-multiplication example from the comment you referenced - each of your models is a different child. You set up a procedure which ranks children according to how well they perform multiplication on the data they have already seen. This procedure is biased towards those who memorised the table. The child who best learned how to multiply might rank (e.g.) third from the top or even lower. So the only way to select models that will perform well outside of the data you have so far is to put them to the test using a new set of data (suitably named "testing" data). If getting more data is impossible you can always do cross-validation. But here you will have to re-estimate your models. You can learn more about cross-validation by looking at the relevant answers on this site, but the idea is to simulate training/testing splits while still using all of the training data. If you cannot even adjust the original analysis then the next best thing might be to select a well-enough-performing model that uses the smallest number of variables. For example, if one model reaches 76% accuracy using 30 variables, and another one reaches 72% while using only 10 - it is less likely for the smaller model to have "memorized" the data, so we would expect it to perform better on new patients.
27,234
I've already used my entire dataset in a regression, should I not use that as a prediction model?
With something like a dozen variables to start with, several tries at which model works best, but only 600 data points on a binary outcome, you have a severe risk of overfitting. That is, your model works very well on the data you have, but its predictive power for new patients may not be very good. What you can do with splitting the data is get a feel for how much of an issue that is for your specific data. I would not throw away what you have, but if you have programmed this in R it should be relatively easy to split the data and check whether you have overfitting. So split the data randomly into, say, 500 patients in training and 100 in testing, and then look at the following: is the best model on this set of 500 patients the same as on the whole set of 600? How much worse does it perform on the test cases than on the training cases? How much better is your complicated model relative to a model that only uses the single variable that is the best predictor? Repeat this with different random choices of the split into training and testing. The goal is to gain an understanding of whether your model is only a good fit on your existing data or whether it is actually a good tool to predict the behavior of future patients.
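The repeated-split check can be sketched as follows. This is a Python/scikit-learn illustration on simulated data (the original analysis is in R and is not shown, so all names and numbers here are assumptions): compare training and test accuracy over many random splits and look at the average gap.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
X = rng.normal(size=(600, 10))                        # stand-in predictors
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X[:, 0])))   # stand-in outcome

gaps = []
for seed in range(20):                 # different random train/test splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=100, random_state=seed)
    m = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    gaps.append(accuracy_score(y_tr, m.predict(X_tr))
                - accuracy_score(y_te, m.predict(X_te)))

# A large positive mean gap would signal overfitting; near zero is reassuring.
mean_gap = float(np.mean(gaps))
```

Averaging over many splits matters: any single split of 600 cases is noisy, so the distribution of the gap is more informative than one number.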
27,235
I've already used my entire dataset in a regression, should I not use that as a prediction model?
I disagree with the consensus that this is fine. I think it's not, because I can construct a better model than you did: a hash table that remembers the entries. Performance estimation: Having a model created from data is fine; it's the first step. But by itself this is worthless: we need to assess the performance of the model (otherwise, a random model may be better). To assess the performance of the model (and distinguish it from random guessing), we need some data. Overfitting: Now you could just use the same data you already used to assess the performance. But this can lead to a bias: your model could have overfitted to the data and, in an extreme case, "remembered" (hash table) the data. (Remark: this is indeed less of an issue with less powerful models such as logistic regression.) Low statistics: cross-validation: As mentioned, the way out is to use resampling methods that do not really reduce the sample size used: this can be done with either bootstrap methods or cross-validation. They have their own advantages and disadvantages, but they tend to perform similarly in most real-world cases. I would suggest cross-validation to get an estimate, but really, any of these techniques is fine. Why you need this: For a paper where you claim to have developed a model, it seems crucial to provide an unbiased estimate of the performance of the model (for completeness: or a very strong theoretical motivation). I understand that this means you would need to redo some things, but it should not actually be a lot. It may also be worth contacting someone who understands more of this: if you say you just tried things out a bit, there are many, many more pitfalls (and possible improvements) to be aware of. Data scientists are a thing these days ;)
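As a concrete sketch of the cross-validation option (a Python/scikit-learn illustration on simulated stand-in data, not the poster's actual analysis):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.normal(size=(600, 10))
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(X[:, 0] - 0.5 * X[:, 1]))))

# 10-fold CV: every observation is used for fitting and, exactly once,
# for testing -- no data is thrown away, but each test fold is unseen
# by the model that is scored on it.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                         cv=10, scoring="roc_auc")
cv_auc = float(scores.mean())          # near-unbiased performance estimate
```

Note that the honest version cross-validates the *entire* model-selection process, not just the final model: if several candidate models were compared, that comparison must be repeated inside each fold.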
27,236
What is the expected number of marked fishes after 7 times?
Let $X_t$ denote the number of marked fish after $t$ rounds. Clearly, given $n=10$ fishes in total, and $X_t$ fishes marked after round $t$, you catch an already marked fish with probability $X_t/n$ and an unmarked fish with probability $1-X_t/n$, such that the conditional expectation of $X_{t+1}$ is \begin{align} E(X_{t+1}|X_t) &=\frac {X_t} n \cdot X_t+\left(1-\frac {X_t} n\right)(X_t+1) \\&=X_t+1-\frac {X_t} n \\&=\left(1-\frac1n\right)X_t+1 \end{align} Using the law of total expectation, the unconditional expectation of $X_t$ satisfies \begin{align} E (X_{t+1}) &= E(E (X_{t+1}|X_t)) \\&=E\left(\left(1-\frac1n\right)X_t+1\right) \\&=\left(1-\frac1n\right)E X_t+1. \end{align} With the initial condition $X_0=0$, the solution of this first order linear non-homogeneous difference equation is $$ E X_t=\left[1 - \left(1-\frac1n\right)^t \right]n. $$ Thus, for $n=10$ fishes, the expected number of fish marked after $t=7$ rounds would be $$ E X_7=(1-0.9^7)10=5.217031. $$
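The closed-form solution is easy to check numerically. This Python sketch evaluates the formula and compares it with a Monte Carlo estimate (catch $t$ fish with replacement, count the distinct fish caught):

```python
import numpy as np

rng = np.random.default_rng(6)
n, t = 10, 7

# Closed form from the recursion: E X_t = [1 - (1 - 1/n)^t] * n
expected = (1.0 - (1.0 - 1.0 / n) ** t) * n

# Monte Carlo: each row is one run of t catches with replacement.
draws = rng.integers(0, n, size=(200_000, t))
simulated = float(np.mean([np.unique(row).size for row in draws]))
```

Both values come out near 5.217, agreeing with the derivation above.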
27,237
What is the expected number of marked fishes after 7 times?
Just in case you do not want to do the analytical math, we can use numeric simulation in R to answer this approximately. If I name your ten fish using the values 1 to 10, then each time we catch only one fish, mark it, and return it to the pool. That is sampling with replacement. So we sample the pool 7 times and count the number of unique values:

length(unique(sample(10, 7, replace = TRUE)))

We can simulate this process 100000 times to estimate the expectation:

set.seed(1)
count <- vector()
for (i in 1:100000) {
  count[i] <- length(unique(sample(10, 7, replace = TRUE)))
}
mean(count)

The result is about 5.2.
27,238
What is the expected number of marked fishes after 7 times?
To answer your question about which thought process is correct: the AND one is; the other (presented first) is wrong. That can be seen by looking critically at your result that after 7 marking rounds the expected number of marked fish is supposed to be 7, when that is actually the maximum number you can hope for. Generally, if you have two events $A$ and $B$ and know $P(A)$ and $P(B)$, you need different kinds of assumptions to be able to calculate $P(A \cup B)$ and $P(A \cap B)$ just from $P(A)$ and $P(B)$. In order for $P(A \cup B) = P(A) + P(B)$ to hold, you need to know that the events $A$ and $B$ are disjoint, meaning they can't both happen. That's why the OR approach is incorrect: you can mark the same fish in the first and the third turn, for example, so the events whose probabilities you added were not disjoint. In order for $P(A \cap B) = P(A)P(B)$ to hold, the events must be independent. That's why the AND approach works: knowing whether a given fish is (un)marked on turn 1 does not tell you anything about whether it is (un)marked on turn 2 or any other turn.
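The disjoint-versus-independent distinction is easy to demonstrate numerically. In this Python sketch (an illustration, with fish labelled 0-9), $A$ and $B$ are the events "fish #0 is the one caught on turn 1 / on turn 2":

```python
import numpy as np

rng = np.random.default_rng(7)

# Which of the 10 fish gets caught on turn 1 and on turn 2 (independent draws).
marks = rng.integers(0, 10, size=(100_000, 2))
a = marks[:, 0] == 0           # fish #0 caught (marked) on turn 1
b = marks[:, 1] == 0           # fish #0 caught (marked) on turn 2

p_or_naive = a.mean() + b.mean()   # P(A) + P(B): overcounts the overlap
p_or_true = np.mean(a | b)         # P(A or B): smaller, since A, B are not disjoint
p_and = np.mean(a & b)             # close to P(A) * P(B): the turns are independent
```

The naive sum exceeds the true union probability by exactly the overlap $P(A \cap B)$, while the product rule for the intersection holds because the two turns are independent.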
27,239
What are the current state-of-the-art convolutional neural networks?
The best suggestion is from shimao: typically any new paper which claims good or state-of-the-art performance on a task will have a fairly comprehensive results table comparing with previous results, which can be a good way to keep track. Any leaderboard will soon become useless, because they're basically always maintained by (undergrad/grad) students, who stop updating them as soon as they get their degree/land a job. Anyway, if CIFAR-10 and CIFAR-100 are good enough for you, this is pretty good: https://github.com/arunpatala/cifarSOTA This one is more general (it includes ImageNet) and it has more recent results: https://github.com/Lextal/SotA-CV This is the one I used to use, but the owner has stopped updating it, as often happens: https://github.com/RedditSota/state-of-the-art-result-for-machine-learning-problems/ Finally, you may be interested in this Jupyter notebook released just today by Ali Rahimi, based on data scraped from LSVRC and the COCO website. One last note: if you're looking for the latest results because you want to compare your results to the SotA, great. However, if your goal is applying the "best" architecture on ImageNet to an industrial application using transfer learning, you should know (if you don't already) that the latest architectures are worse, in terms of translation invariance, than the older ones. This is a risk if your dataset doesn't have photographer bias or if you don't have enough compute and data to retrain the architecture on a more useful image distribution. See the excellent preprint Azulay & Weiss, 2018, "Why do deep convolutional networks generalize so poorly to small image transformations?"
27,240
What are the current state-of-the-art convolutional neural networks?
A "leaderboard" of sorts is maintained at this website, "Classification datasets results". The maintainers attempt to keep track of published results of various neural network architectures. The leaderboard is not solely restricted to CNNs per se -- any network is admissible. But because all of the tasks tracked on the leaderboard are image tasks (as of this writing), it's likely that many of the networks will be CNNs since they are very effective at image tasks.
27,241
What are the current state-of-the-art convolutional neural networks?
DenseNet is a generic successor to ResNet and achieves 3.46% error on CIFAR-10 and 17.18 on C-100. Compare to 3.47 and 24.28 mentioned on the leaderboard. Shake-shake, Shake-drop and possibly other variants are regularization techniques which can be used with any ResNet-like architecture, and achieve 2.86/2.31% error on C-10 and 15.85/12.19 on C-100 (shake-shake/shake-drop). These techniques work only on multi-branch architectures, which is why I mention them even though they are not strictly architectures in themselves. Efficient Neural Architecture Search (using reinforcement learning to search for architectures) finds a network which achieves 2.89% error on C-10, using the Cutout regularization technique. Performance is 3.54% without Cutout. In summary: DenseNet and possibly some ENAS-produced networks may perform slightly better than ResNet, but the use of sophisticated regularization techniques makes the comparison admittedly difficult. I don't know of any leaderboard which is really up to date, but typically any new paper which claims good or state-of-the-art performance on any task will have a fairly comprehensive results table comparing with previous results, which can be a good way to keep track.
27,242
What are the current state-of-the-art convolutional neural networks?
For checking state-of-the-art neural network architectures (and other machine learning models) in various application domains, there is now a page called paperswithcode.
27,243
Is low bias in a sample a synonym for high variance?
No. You can have both high or both low at the same time. Here is an illustrative example. picture and article source I also recommend you to read the article where this picture comes from. The reason you have such an impression is that in the "early days" of machine learning, there was a concept called the bias-variance trade-off (as @Kodiologist mentioned, this concept is still true and a fundamental concept of tuning models today): when model complexity increases, variance is increased and bias is reduced; when the model is regularized, bias is increased and variance is reduced. In Andrew Ng's recent Deep Learning Coursera lecture, he mentioned that in recent deep learning frameworks (with huge amounts of data), people talk less about the trade-off. Instead, there are ways to reduce only the variance without increasing the bias (for example, increasing the training data size), and vice versa.
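A quick Monte Carlo sketch (with made-up numbers) makes the point concrete: with enough data, an estimator such as the sample mean has low bias and low variance at the same time, so the two are not forced to trade off.

```python
import random
import statistics

random.seed(0)
TRUE_MEAN = 5.0

def sample_mean(n):
    """One experiment: the mean of n draws from N(5, 1)."""
    return statistics.fmean(random.gauss(TRUE_MEAN, 1.0) for _ in range(n))

# Repeat the experiment many times to measure the estimator's bias and variance.
estimates = [sample_mean(400) for _ in range(2000)]
bias = statistics.fmean(estimates) - TRUE_MEAN
variance = statistics.pvariance(estimates)

print(f"bias ~ {bias:+.4f}, variance ~ {variance:.4f}")  # both near zero
```

Here the variance of the estimator is about 1/400 and the bias is essentially zero, i.e. both are low simultaneously.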
27,244
Is low bias in a sample a synonym for high variance?
The difference between bias and variance is the same as between accuracy and precision: The accuracy of a measurement system is how close it gets to a quantity's actual (true) value. (≈ bias) The precision of a measurement system is the degree to which repeated measurements give the same results. (≈ variance)
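The analogy is easy to simulate with two hypothetical instruments measuring a true value of 10: one is accurate but imprecise, the other precise but inaccurate.

```python
import random
import statistics

random.seed(1)
TRUE_VALUE = 10.0

# Instrument A: accurate but imprecise (unbiased, noisy readings).
a = [random.gauss(TRUE_VALUE, 2.0) for _ in range(5000)]
# Instrument B: precise but inaccurate (consistent readings, offset by +1).
b = [random.gauss(TRUE_VALUE + 1.0, 0.1) for _ in range(5000)]

bias_a = statistics.fmean(a) - TRUE_VALUE
bias_b = statistics.fmean(b) - TRUE_VALUE
sd_a, sd_b = statistics.pstdev(a), statistics.pstdev(b)

print(f"A: bias={bias_a:+.2f}, sd={sd_a:.2f}")  # small bias, large sd
print(f"B: bias={bias_b:+.2f}, sd={sd_b:.2f}")  # large bias, small sd
```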
27,245
How does Tensorflow `tf.train.Optimizer` compute gradients?
It's not numerical differentiation, it's automatic differentiation. This is one of the main reasons for tensorflow's existence: by specifying operations in a tensorflow graph (with operations on Tensors and so on), it can automatically follow the chain rule through the graph and, since it knows the derivatives of each individual operation you specify, it can combine them automatically. If for some reason you want to override that piecewise, it's possible with gradient_override_map.
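To see the distinction, here is a toy forward-mode autodiff using dual numbers. This is not how TensorFlow is implemented (it applies reverse mode over the graph), but it illustrates the same core idea: derivatives are propagated exactly through the chain rule, not approximated by finite differences.

```python
import math

class Dual:
    """A number carrying its derivative; arithmetic applies the chain rule exactly."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val, self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule for sin: d sin(u) = cos(u) du
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x * sin(x) + x] at x = 2, computed exactly
x = Dual(2.0, 1.0)   # seed derivative dx/dx = 1
y = x * sin(x) + x
print(y.dot)          # equals sin(2) + 2*cos(2) + 1
```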
27,246
How does Tensorflow `tf.train.Optimizer` compute gradients?
It uses automatic differentiation: it applies the chain rule while going backward through the graph, assigning gradients. Let's say we have a tensor C. C is produced by a series of operations (adding, multiplying, going through some nonlinearity, etc.), so if C depends on some set of tensors Xk, we need to get the gradients. Tensorflow always tracks the path of operations, i.e. the sequential behavior of the nodes and how data flows between them; that is what the graph records. If we need the derivatives of the cost w.r.t. the inputs X, Tensorflow first loads the path from X to the cost by extending the graph, then starts in reverse order, distributing the gradients with the chain rule (the same as backpropagation). If you read the source code behind tf.gradients(), you can see that Tensorflow does this gradient distribution in a clean way. While backtracking through the graph in the backward pass, TF visits the different nodes. Inside these nodes are the operations we call "ops" (matmul, softmax, relu, batch_normalization, etc.). What TF does is automatically add new nodes to the graph that compute the partial derivatives of these ops (get_gradient()). Into each of these newly added nodes, TF feeds two things: the derivative accumulated so far, and the inputs to the corresponding "op" from the forward pass. By the chain rule, this gives the same calculation as the backpropagation algorithm. So Tensorflow always works with the order of the graph in order to do automatic differentiation. Since the gradients need forward-pass values, the intermediate values must also be stored in tensors, which adds to memory use. For many operations TF knows how to calculate the gradients and distribute them.
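A minimal reverse-mode sketch in plain Python (not TensorFlow's actual code) of the mechanism described above: each node records its op's inputs as local partial derivatives, and the backward pass distributes gradients with the chain rule.

```python
class Node:
    """Tiny reverse-mode autodiff node. Each op records, for every input,
    the local partial derivative of the op w.r.t. that input."""
    def __init__(self, val, parents=()):
        self.val, self.parents, self.grad = val, parents, 0.0
    def __mul__(self, other):
        out = Node(self.val * other.val)
        # d(uv)/du = v, d(uv)/dv = u
        out.parents = ((self, other.val), (other, self.val))
        return out
    def __add__(self, other):
        out = Node(self.val + other.val)
        out.parents = ((self, 1.0), (other, 1.0))
        return out

def backward(out):
    """Distribute gradients backward through the graph (chain rule).
    A full implementation would process nodes in reverse topological order."""
    out.grad = 1.0
    stack = [out]
    while stack:
        node = stack.pop()
        for parent, local in node.parents:
            parent.grad += local * node.grad  # accumulate via chain rule
            stack.append(parent)

x, y = Node(3.0), Node(4.0)
z = x * y + x          # dz/dx = y + 1 = 5, dz/dy = x = 3
backward(z)
print(x.grad, y.grad)
```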
27,247
What is the name of the distribution with a probability density like $1/(1+\exp(x))$?
This is identical to a common distribution in physics called the Fermi-Dirac distribution, which describes a situation called Fermi-Dirac statistics. In a certain setting in physics, the average number of particles with an energy $\epsilon$ is $$ \bar{n}_\epsilon = \frac{1}{e^{(\epsilon -\mu)/kT}+1} $$ where $\mu$, $k$, and $T$ are physical parameters that probably aren't so important to you (the chemical potential, Boltzmann's constant, and the temperature). It's trivial to reinterpret this as a probability density function for the energy of a particle.
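A quick numerical illustration of the shape (the parameter values here are arbitrary): at $\epsilon = \mu$ the mean occupation is exactly 1/2 for any temperature, and as $kT \to 0$ the curve approaches a step function around $\mu$.

```python
import math

def fermi_dirac(eps, mu, kT):
    """Mean occupation number for energy eps (Fermi-Dirac form)."""
    return 1.0 / (math.exp((eps - mu) / kT) + 1.0)

# At eps == mu the occupation is exactly 1/2, whatever the temperature:
for kT in (0.1, 1.0, 10.0):
    print(fermi_dirac(2.0, 2.0, kT))   # 0.5 each time

# At low temperature it approaches a step function around mu:
print(fermi_dirac(1.9, 2.0, 0.01))    # near 1
print(fermi_dirac(2.1, 2.0, 0.01))    # near 0
```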
27,248
What is the name of the distribution with a probability density like $1/(1+\exp(x))$?
The normalizing constant for the first should be $\frac{1}{\ln(2)}$ (not that it really matters for the present question). I'm not aware of either having a name. The first (without the $\ln(2)$ normalizing constant) is the survivor function for a truncated logistic distribution, but I haven't seen it used for a density function (though I expect that it has probably been named several times ... that's often the case with simple functional forms that are not in very wide use, where people "reinvent" such things without encountering previous ideas, which are often in different application areas*). If you were to try to name it, then because of the logistic-type functional form you'd probably want to squeeze the word "logistic" in there somewhere, but the difficulty would be in choosing a name that would distinguish it sufficiently from the logistic density. * and jwimberly's answer offers one such application area. The name "Fermi-Dirac distribution" seems a perfectly reasonable choice if you don't have a name in the application area you're working in.
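The claim that $\frac{1}{\ln(2)}$ normalizes the density on $[0,\infty)$ is easy to check numerically, e.g. with the trapezoidal rule:

```python
import math

def g(x):
    """Unnormalized density 1/(1 + e^x)."""
    return 1.0 / (1.0 + math.exp(x))

# Trapezoidal rule on [0, 40]; the tail beyond 40 is smaller than e^(-40).
n, hi = 200_000, 40.0
h = hi / n
area = h * (0.5 * g(0.0) + sum(g(i * h) for i in range(1, n)) + 0.5 * g(hi))

print(area, math.log(2))  # area equals ln(2), about 0.6931
# Note: 1/(1+e^x) is also 1 - F(x) for the standard logistic CDF F(x) = 1/(1+e^(-x)),
# which is the survivor-function connection mentioned above.
```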
27,249
What is the name of the distribution with a probability density like $1/(1+\exp(x))$?
A density that integrates to unity over $[0,\infty)$ would be $$f_X(x) = \frac {\theta}{\ln 2}\frac{1}{1+e^{\theta x}},\;\;\; \theta >0$$ Raw moments are given by $$E(X^k) = \frac {(1-2^{-k})}{\ln 2} \frac {1}{\theta^{k}}\cdot \Gamma(k+1) \cdot \zeta(k+1)$$ where $\Gamma()$ is the Gamma function and $\zeta()$ is the Riemann zeta function. So $$E(X) = \frac {\pi^2}{12\cdot \ln 2}\theta^{-1} \approx 1.1866 \cdot \theta^{-1}$$ $$E(X^2) \approx \frac {7.212}{4\cdot \ln 2}\theta^{-2} \approx 2.601 \cdot \theta^{-2} $$ leading to $$\text{Var}(X) \approx 1.193 \cdot \theta^{-2}$$ Numerical calculations verify these.
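For instance, the mean can be verified by direct numerical integration for $\theta = 1$ (the general case just rescales by $\theta$); a sketch:

```python
import math

THETA = 1.0

def f(x):
    """The density above with theta = 1."""
    return THETA / (math.log(2) * (1.0 + math.exp(THETA * x)))

# E(X) by numerical integration of x * f(x) on [0, 60]; the tail is negligible
# and the integrand vanishes at both endpoints.
n, hi = 200_000, 60.0
h = hi / n
mean = h * sum((i * h) * f(i * h) for i in range(1, n))

expected = math.pi**2 / (12 * math.log(2))  # the closed form above
print(mean, expected)                        # both about 1.1866
```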
27,250
Why is type I error not affected by different sample size - hypothesis testing? [duplicate]
If you're using standard hypothesis testing, then you are setting the significance level $\alpha$ and then comparing the test p-value to it. In this case the sample size will not impact the probability of a type I error, because the significance level $\alpha$ is the probability of a type I error, pretty much by definition. In other words, you set the probability of a Type I error by choosing the significance level. The probability of a type I error is affected only by your choice of the significance level and nothing else.
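A simulation sketch (two-sided z-test with known variance, numbers chosen for illustration) shows the rejection rate under a true null sitting at $\alpha$ for very different sample sizes:

```python
import random
import math

random.seed(42)

def reject_rate(n, reps=4000):
    """Simulate H0-true data (mean 0, sigma 1) and count how often a
    two-sided z-test at alpha = 0.05 rejects."""
    z_crit = 1.959964  # 97.5% quantile of the standard normal
    hits = 0
    for _ in range(reps):
        xs = [random.gauss(0.0, 1.0) for _ in range(n)]
        z = (sum(xs) / n) * math.sqrt(n)  # sigma = 1 known
        hits += abs(z) > z_crit
    return hits / reps

rate_small, rate_large = reject_rate(10), reject_rate(200)
print(rate_small, rate_large)  # both close to 0.05, regardless of n
```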
27,251
Why is type I error not affected by different sample size - hypothesis testing? [duplicate]
This is a question that is not asked often enough. In frequentist statistics we tend to fix $\alpha$ by convention. Then as $n\rightarrow\infty$ the type II error $\rightarrow 0$ (i.e., power $\rightarrow 1$) even though we also have the luxury for large $n$ of not allowing so many false positives had we chosen differently. The result of this convention is that when $n$ is "large", one can detect trivial differences, and when there are many hypotheses there is a multiplicity problem. By contrast, the likelihood school of inference tends to deal with the total of type I and type II errors, and lets type I error $\rightarrow 0$ as $n \rightarrow\infty$. This solves many of the problems of the frequentist paradigm. Ironically, the frequentist performance characteristics of the likelihood method are also quite good. See for example http://people.musc.edu/~elg26/SCT2011/SCT2011.Blume.pdf and http://onlinelibrary.wiley.com/doi/10.1002/sim.1216/abstract .
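The asymmetry of the frequentist convention is easy to see in a small simulation (hypothetical effect size 0.5, z-test with known variance): with $\alpha$ fixed, power grows toward 1 as $n$ grows; and at large $n$ one could instead pick a much stricter cutoff, driving the type I error toward zero while power for a real effect stays high.

```python
import random
import math

random.seed(7)

def reject_rate(n, mu, z_crit=1.959964, reps=2000):
    """Two-sided z-test with sigma = 1 known; mu = 0 gives the type I error
    rate, mu != 0 gives the power."""
    hits = 0
    for _ in range(reps):
        xs = [random.gauss(mu, 1.0) for _ in range(n)]
        z = (sum(xs) / n) * math.sqrt(n)
        hits += abs(z) > z_crit
    return hits / reps

# Fixed alpha = 0.05: type II error shrinks (power grows) as n grows.
power_small = reject_rate(10, 0.5)
power_large = reject_rate(100, 0.5)
# Stricter cutoff (alpha roughly 0.001) at n = 100: type I error drops
# by a factor of 50, yet power for the same effect barely suffers.
power_strict = reject_rate(100, 0.5, z_crit=3.2905)
print(power_small, power_large, power_strict)
```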
27,252
Why is type I error not affected by different sample size - hypothesis testing? [duplicate]
It seems that you're missing the main point that Type I error rate is also your criterion for cutoff. If your criterion for cutoff is not changing then alpha is not changing. The $p$-value is the conditional probability of observing an effect as large or larger than the one you found if the null is true. If you select a cutoff $p$-value of 0.05 for deciding that the null is not true then that 0.05 probability that it was true turns into your Type I error. As an aside, this highlights why you cannot take the same test and set a cutoff for $\beta$. $\beta$ can only exist if the null was not true whereas the test value calculated assumes it is. Frank Harrell's point is excellent that it depends on your philosophy. Nevertheless, even under frequentist statistics you can choose a lower criterion in advance and thereby change the rate of Type I error.
27,253
How to sample when you don't know the distribution
I dispute your claim that "In either case, you can't really tell how common or rare billionaires are". Let $f$ be the unknown fraction of billionaires in the population. With a uniform prior on $f$, the posterior distribution of $f$ after $1000$ draws that turned out to have 0 billionaires is a Beta(1,1001) distribution, which looks like this: While the posterior distribution of $f$ after $1000$ draws that turned out to have 1 billionaire is a Beta(2,1000) distribution, which looks like this: In both cases, you can be quite certain that $f < 0.01$. You might think that isn't precise enough. But actually 0.01 is quite precise for a sample of size 1000. Most other quantities that you might estimate would be less precise than this. For example, the fraction of males could only be estimated within a range of size 0.1.
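These tail probabilities can be checked without any special-function library: for integer parameters the Beta CDF reduces to a binomial tail sum, $I_x(a,b) = P(\mathrm{Bin}(a+b-1,x) \ge a)$. A sketch:

```python
import math

def beta_cdf_int(x, a, b):
    """CDF of Beta(a, b) at x for integer a, b, via the binomial identity
    I_x(a, b) = P(Binomial(a + b - 1, x) >= a)."""
    n = a + b - 1
    return sum(math.comb(n, j) * x**j * (1 - x)**(n - j) for j in range(a, n + 1))

# Posterior after 1000 draws with 0 billionaires: Beta(1, 1001)
p0 = beta_cdf_int(0.01, 1, 1001)
# Posterior after 1000 draws with 1 billionaire: Beta(2, 1000)
p1 = beta_cdf_int(0.01, 2, 1000)
print(p0, p1)  # both exceed 0.999: f < 0.01 with near certainty
```

For Beta(1, 1001) the identity collapses to the closed form $1 - 0.99^{1001}$, which the sum reproduces.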
27,254
How to sample when you don't know the distribution
There are two things you could do (separately or in combination). Model the tail: one is to model the tail of the distribution using a parametric distribution. Power laws are known to fit the distribution of wealth well, so you could try a Pareto distribution. You would either fit that distribution by maximum likelihood, that is, by finding the parameters which best represent your sample; or better, you could put Bayesian priors on the parameters and compute the full posterior. Unfortunately, power laws are very sensitive to their parameters, and without many large data points in your sample there will be a lot of uncertainty about the exponent. The estimated number of billionaires will be sensitive to this parameter, but much less so than the average wealth of billionaires, so the situation isn't too bad. Importance sampling: the other is to change the way you collect your sample. Suppose that you suspect (as you should) that there are more billionaires per capita in Monaco or Zurich than in Mogadishu. If you know the population of each of these cities, you could collect a larger sample in the cities where you expect to see more billionaires, and a smaller one in the others. Say Zurich has 400,000 people and Mogadishu 1,400,000, and we want to poll 9,000 people. We're interested here in the number of millionaires, not billionaires. An unbiased sample would select 2,000 people in Zurich and 7,000 in Mogadishu. However, we'll bias the sample by sampling sevenfold more often from Zurich. So we'll "pretend" that Zurich has 2,800,000 people and adjust later. This means we'll poll 6,000 people in Zurich instead of 2,000, and 3,000 in Mogadishu. Say we count 21 millionaires in our Zurich sample and only 1 in our Mogadishu sample. Since we oversampled Zurich sevenfold, we would only count it as 3 millionaires. This procedure will decrease the variance of your estimator. It can also be used in conjunction with the first method, in which case you will be adjusting for the importance sampling when fitting a parametric distribution.
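The reweighting step can be sketched in a few lines of Python (the cities and counts are the hypothetical figures from this example, with Mogadishu's sample size taken as 3,000 so that the 2:1 weighted sample adds up to 9,000; each respondent simply stands for population/sampled people in their own city):

```python
# Hypothetical figures from the example above.
population = {"Zurich": 400_000, "Mogadishu": 1_400_000}
sampled = {"Zurich": 6_000, "Mogadishu": 3_000}
millionaires_in_sample = {"Zurich": 21, "Mogadishu": 1}

# Weighting each respondent by population/sampled undoes the deliberate
# sevenfold oversampling of Zurich.
estimate = sum(millionaires_in_sample[c] * population[c] / sampled[c]
               for c in population)
print(round(estimate))  # estimated millionaires across the two cities
```

This is the usual design-weighted (Horvitz–Thompson style) estimator: Zurich contributes 21 × 400,000/6,000 = 1,400 and Mogadishu 1 × 1,400,000/3,000 ≈ 467.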
27,255
How to sample when you don't know the distribution
I think a good sampling method is based on previous knowledge of the system. In your field, you have knowledge about potential biases that might affect your sampling. If you don't have that knowledge, you can acquire it from the literature. In your example, you know that there are billionaires and that they might bias your sampling. So you can decide to stratify the sampling by education level, country, type of job, etc. There are multiple options. Let's try another example. Your objective is to determine the abundance of a mouse species in a park. In this park, there are forests and meadows. From the literature, you know that mice are more abundant in forest than in meadows. So you stratify your sampling by this characteristic. Other sampling procedures are possible, but I think your best information will come from the existing literature. And if there is no literature about your field? Improbable, but in that case I would do a pre-study to see what factors need to be taken into account for sampling.
27,256
How to sample when you don't know the distribution
Whether a sample is representative or not has nothing to do with the observed measurements of the sample. A sample is representative if every set of observational units has the same probability of being chosen as any other set of the same size. Of course this is hard to do unless you can get a complete enumeration of your sample space. Assuming you can get that (from census tract data, for instance), a simple random sample will be representative. No matter how you obtain your sample, there will always be at least three separate sources of error to consider: sampling error: by chance you include Bill Gates in your representative sample. Statistical methods, especially the widths of confidence intervals etc., are designed to take care of this, provided you have some rough knowledge of the distribution at hand (e.g. normality, which wealth distribution definitely does not possess). sampling bias: The sample was not representative. Example: Bill Gates has an unlisted number, so your telephone survey could never reach him (unless you use something like "random-digit dialing"). This is an extreme example, but sampling bias is very widespread. A common occurrence is to take on-site or convenience samples: You sample restaurant patrons at the restaurant as to whether they like the place, how often they have been there, and whether they plan to return. Repeat customers are far more likely to be sampled than one-time customers, and samples of this type can be severely biased in their attitudes. response bias: The measurements themselves are inaccurate. This can come about because of anything from malfunctions of the meter to conscious lying to quantum effects (e.g. Heisenberg's uncertainty principle).
27,257
Is it possible in R (or in general) to force regression coefficients to be a certain sign?
There may well be such a way, but I would say that it is not advisable in your circumstances. If you have a result that is impossible, then either: 1) there is a problem with your data, 2) there is a problem with your definition of "impossible", or 3) you are using the wrong method. First, check the data. Second, check the code (or ask others to check it). If both are fine, then perhaps something unexpected is happening. Fortunately for you, you have a simple "impossibility": you say two variables cannot be positively correlated. So make a scatter plot, add a smoother, and see. A single outlier might cause this; or it might be a nonlinear relationship. Or something else. But, if you are lucky, you've found something new. As my favorite professor used to say, "If you're not surprised, you haven't learned anything".
27,258
Is it possible in R (or in general) to force regression coefficients to be a certain sign?
1. Beware the distinction between the marginal correlation and the partial correlation (correlation conditional on other variables). They may legitimately be of different sign. That is, $\text{corr}(Y, X_i)$ may in fact be negative while the regression coefficient in a multiple regression is positive. There is not necessarily any contradiction in those two things. See also Simpson's paradox, which is somewhat related (especially the diagram). In general you cannot infer that a regression coefficient must be of one sign merely based on an argument about the marginal correlation. 2. Yes, it's certainly possible to constrain regression coefficients to be $\geq 0$ or $\leq 0$*. There are several ways to do so; some of these can be done readily enough in R, such as via nnls. See also the answers to this question, which mention a number of R packages and other possible approaches. However, I caution you against hastily ignoring the points in 1. just because many of those approaches are easily implemented. * (You can use programs that do non-negative to do non-positive by negating the corresponding variable.)
27,259
Is it possible in R (or in general) to force regression coefficients to be a certain sign?
To answer your specific question, you can try the nnls package which does least squares regression with non-negative constraints on the coefficients. You can use it to get the signs you want by changing the signs of the appropriate predictors. By the way, here is a very simple way to create a dataset to demonstrate how it is possible to have positive correlations and negative regression coefficients.

> n <- rnorm(200)
> x <- rnorm(200)
> d <- data.frame(x1 = x+n, x2 = 2*x+n, y = x)
> cor(d)
          x1        x2         y
x1 1.0000000 0.9474537 0.7260542
x2 0.9474537 1.0000000 0.9078732
y  0.7260542 0.9078732 1.0000000
> plot(d)
> lm(y ~ x1 + x2 - 1, d)

Call:
lm(formula = y ~ x1 + x2 - 1, data = d)

Coefficients:
x1  x2  
-1   1
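The same demonstration can be reproduced without R. Here is a stdlib-only Python sketch (hypothetical simulated data, with the no-intercept fit solved by hand via the 2×2 normal equations): since y = x2 − x1 exactly by construction, the coefficients come out (−1, 1) even though y is positively correlated with x1.

```python
import random

random.seed(0)
noise = [random.gauss(0, 1) for _ in range(200)]
x = [random.gauss(0, 1) for _ in range(200)]
x1 = [xi + ni for xi, ni in zip(x, noise)]
x2 = [2 * xi + ni for xi, ni in zip(x, noise)]
y = x[:]  # by construction y = x2 - x1 exactly

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

# Normal equations for the no-intercept regression y ~ x1 + x2.
a11, a12, a22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
b1, b2 = dot(x1, y), dot(x2, y)
det = a11 * a22 - a12 * a12
beta1 = (a22 * b1 - a12 * b2) / det
beta2 = (a11 * b2 - a12 * b1) / det

# dot(x1, y) > 0, i.e. corr(y, x1) is positive, yet beta1 is negative.
print(beta1, beta2)  # -1 and 1, up to floating point
```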
27,260
How can I show that a random walk is not covariance stationary?
I think you're making life hard for yourself there. You just need to use a few elementary properties of variances and covariances. Here's one approach:
1. Start with the algebraic definition of your random walk process.
2. Derive $\text{Var}(y_t)$ in terms of $\text{Var}(y_{t-1})$ and the variance of the error term.
3. Show that $\text{Cov}(y_t,y_{t-1}) = \text{Var}(y_{t-1})$.
4. Argue that $\text{Cov}(y_s,y_{s-1})\neq \text{Cov}(y_t,y_{t-1})$ if $s\neq t$.
... though, frankly, I think even just going to the second step (writing $\text{Var}(y_t)$ in terms of $\text{Var}(y_{t-1})$ and the variance of the error term) is sufficient to establish it's not covariance stationary.
27,261
How can I show that a random walk is not covariance stationary?
For each integer $t$, $Y_t = \sum_{i=1}^t X_i$ where the $X_i$ are iid random variables. From the independence of the $X_i$, it follows that $\operatorname{var}(Y_t) = \sum_{i=1}^t \operatorname{var}(X_i) = t\sigma^2$. For integer $h$, let $W = \sum_{i=t+1}^{t+h} X_i$ and note that $W$ and $Y_t$ are independent random variables because they are functions (sums) of disjoint collections of independent random variables. Then, $$\begin{align} \operatorname{cov}(Y_{t+h},Y_t) &= \operatorname{cov}(Y_t+W,Y_t)\\ &= \operatorname{cov}(Y_t,Y_t) + \operatorname{cov}(W,Y_t) &{\scriptstyle\text{this step follows because the covariance operator is bilinear;}}\\ &= \operatorname{var}(Y_t) + 0 &\scriptstyle{\text{0 because independent RVs}~W~\text{and}~Y_t~\text{have zero covariance;}}\\ &= \operatorname{var}(Y_t)\\ &= t\sigma^2 \end{align}$$ and thus the covariance increases as a function of $t$ but is not a function of $h$ at all as is needed for covariance stationarity. More generally, you can show that $\operatorname{cov}(Y_t, Y_s) = \operatorname{var}\left(Y_{\min\{t,s\}}\right)$.
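As a numerical check (not part of the original derivation), a quick Python simulation confirms that $\operatorname{cov}(Y_{t+h}, Y_t) \approx t\sigma^2$ and does not shrink with the lag $h$, which is exactly what rules out covariance stationarity:

```python
import random

random.seed(1)
reps, t, h = 20_000, 100, 50

pairs = []
for _ in range(reps):
    y = 0.0
    y_t = 0.0
    for i in range(1, t + h + 1):
        y += random.gauss(0, 1)  # iid N(0, 1) steps, so sigma^2 = 1
        if i == t:
            y_t = y
    pairs.append((y_t, y))  # (Y_t, Y_{t+h})

m1 = sum(a for a, _ in pairs) / reps
m2 = sum(b for _, b in pairs) / reps
cov = sum((a - m1) * (b - m2) for a, b in pairs) / (reps - 1)
print(cov)  # close to t * sigma^2 = 100, regardless of h
```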
27,262
How can I show that a random walk is not covariance stationary?
A usual way that we show this is by writing the random walk as $$y_t = \sum_{i=1}^tu_i$$ and so $$\operatorname{Var}(y_t) = \operatorname{Var}\left(\sum_{i=1}^tu_i\right) = t\sigma^2$$ and $$\operatorname{Cov}(y_t, y_{t+k})= E\left(\sum_{i=1}^tu_i\right)\left(\sum_{i=1}^{t+k}u_i\right)-E\left(\sum_{i=1}^tu_i\right)E\left(\sum_{i=1}^{t+k}u_i\right)$$ $$=E\left(\sum_{i=1}^tu_i\right)\left(\sum_{i=1}^{t}u_i+ \sum_{i=t+1}^{t+k}u_i\right) -E\left(\sum_{i=1}^tu_i\right)E\left(\sum_{i=1}^{t}u_i+ \sum_{i=t+1}^{t+k}u_i\right)$$ $$=E\left(\sum_{i=1}^tu_i\right)^2 - \left[E\left(\sum_{i=1}^tu_i\right) \right]^2 +E\left(\sum_{i=1}^tu_i\right)\left(\sum_{i=t+1}^{t+k}u_i\right) -E\left(\sum_{i=1}^tu_i\right)E\left(\sum_{i=t+1}^{t+k}u_i\right)$$ $$=\operatorname{Var}\left(\sum_{i=1}^tu_i\right) + \operatorname{Cov}\left(\sum_{i=1}^tu_i, \sum_{i=t+1}^{t+k}u_i\right)$$ The two sums in the covariance term are independent since the white noises in the first do not appear in the second (different time-indices), so this covariance is zero, and we are left with $$\operatorname{Cov}(y_t, y_{t+k}) = \operatorname{Var}\left(\sum_{i=1}^tu_i\right)=t\sigma^2$$
27,263
Find probability density intervals
As pointed out above, there are many different ways to define an interval that includes 90% of the density. One that hasn't been pointed out yet is the highest [posterior] density interval (Wikipedia), which is defined as "the shortest interval for which the difference in the empirical cumulative density function values of the endpoints is the nominal probability".

library(coda)
HPDinterval(as.mcmc(x), prob=0.9)
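For a plain numeric vector, the same "shortest interval" idea can be sketched without coda. Here is a small Python version (using a hypothetical standard-normal sample, for which the true 90% HPD interval is about ±1.645): sort the sample and slide a fixed-size window over it, keeping the narrowest one.

```python
import random

random.seed(0)
x = sorted(random.gauss(0, 1) for _ in range(5000))

# Shortest interval covering 90% of the sample: slide a window of
# k points over the sorted values and keep the narrowest one.
n = len(x)
k = int(0.9 * n)
width, lo, hi = min((x[i + k - 1] - x[i], x[i], x[i + k - 1])
                    for i in range(n - k + 1))
print(lo, hi)  # roughly -1.6 and 1.6 for a standard normal sample
```

For skewed distributions this interval differs from the equal-tail (5%, 95%) quantile interval, which is the point of using HPD in the first place.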
27,264
Find probability density intervals
Your way seems sensible, especially with the discrete data in the example,

quantile(x, probs=c(0.05, 0.95), type=5)
 5% 95% 
2.8 9.0 

but another way would be to use a kernel density estimate:

dx <- density(x)
dn <- cumsum(dx$y)/sum(dx$y)
li <- which(dn >= 0.05)[1]
ui <- which(dn >= 0.95)[1]
dx$x[c(li, ui)]
[1] 2.787912 9.163246
27,265
Find probability density intervals
It certainly seems like the most straightforward approach. The function is quite fast. I use it all the time on samples that are hundreds of times larger than the one you are using, and the stability of the estimates should be good at your sample size. There are functions in other packages that provide more complete sets of descriptive statistics. The one I use is Hmisc::describe, but there are several other packages with describe functions.
27,266
Are multiple comparisons corrections necessary for informal/visual "multiple comparisons"?
Technically, when you do a visual preselection of where to do the test, you should already correct for that: your eyes and brain already bypass some uncertainties in the data, that you don't account for if you simply do the test at that point.

Imagine that your 'peak' is really a plateau, and you hand pick the 'peak' difference, then run a test on that and it turns out barely significant. If you were to run the test slightly more to the left or to the right, the result could change. In this way, you have to account for the process of preselection: you don't have quite the certainty that you state! You are using the data to do the selection, so you are effectively using the same information twice. Of course, in practice, it is very hard to account for something like a handpicking process, but that doesn't mean you shouldn't (or at least take/state the resulting confidence intervals / test results with a grain of salt).

Conclusion: you should always correct for multiple comparisons if you do multiple comparisons, regardless of how you selected those comparisons. If they weren't picked before seeing the data, you should correct for that in addition.

Note: an alternative to correcting for manual preselection (e.g. when it is practically impossible) is probably to state your results so that they obviously contain reference to the manual selection. But that is not 'reproducible research', I guess.
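The selection effect described here is easy to demonstrate by simulation. The sketch below (Python, purely illustrative; the 20-location setup and sample sizes are made up) compares the false-positive rate of a test at a location fixed in advance against a test at the location hand-picked for showing the largest difference, when no true difference exists anywhere.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_sims, n_points, n_per_group = 2000, 20, 15
false_pos_picked = 0
false_pos_fixed = 0

for _ in range(n_sims):
    # Two groups measured at 20 locations; the null is true everywhere
    a = rng.normal(size=(n_points, n_per_group))
    b = rng.normal(size=(n_points, n_per_group))
    # "Eyeball" selection: test only where the observed gap is largest
    peak = np.argmax(np.abs(a.mean(axis=1) - b.mean(axis=1)))
    p_picked = stats.ttest_ind(a[peak], b[peak]).pvalue
    # Honest comparison: location chosen before seeing the data
    p_fixed = stats.ttest_ind(a[0], b[0]).pvalue
    false_pos_picked += p_picked < 0.05
    false_pos_fixed += p_fixed < 0.05

rate_picked = false_pos_picked / n_sims   # far above the nominal 5%
rate_fixed = false_pos_fixed / n_sims     # close to the nominal 5%
```

The fixed-location test holds its nominal level; the hand-picked one rejects far more often, which is exactly the uncorrected preselection the answer warns about.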
27,267
Are multiple comparisons corrections necessary for informal/visual "multiple comparisons"?
Long ago, in one of my first statistics classes, I was reading about this in a text (I think it was an old edition of Cohen's book on regression) where it said "this is a question about which reasonable people can differ". It is not clear to me that anyone ever needs to correct for multiple comparisons, nor, if they do, over what period or set of comparisons they should correct. Each article? Each regression or ANOVA? Everything they publish on a subject? What about what OTHER people publish? As you write in your first line, it's philosophical.
27,268
Are multiple comparisons corrections necessary for informal/visual "multiple comparisons"?
If you are trying to make one-off decisions about reality and want to control the rate at which you falsely reject the null hypothesis, then you will be using null hypothesis significance testing (NHST) and will want to use correction for multiple comparisons. However, as Peter Flom notes in his answer, it's unclear how to define the set of comparisons over which to apply the correction. The easiest choice is the set of comparisons applied to a given data set, and this is the most common approach. However, science is arguably best conceived as a cumulative system where one-off decisions are not necessary and in fact serve only to reduce the efficiency of evidence accumulation (reducing obtained evidence to a single bit of information). Thus, if one follows a properly scientific approach to statistical analysis, eschewing NHST for tools like likelihood ratios (possibly Bayesian approaches too), then the "problem" of multiple comparisons disappears.
27,269
Are multiple comparisons corrections necessary for informal/visual "multiple comparisons"?
One very important thing to remember is that multiple testing correction assumes independent tests. If the data you're analyzing aren't independent, things get a little more complicated than simply correcting for the number of tests performed: you have to account for the correlation between the data being analyzed, or your correction will probably be way too conservative and you will have a high type II error rate. I've found cross-validation, permutation tests, or bootstrapping can be effective ways to deal with multiple comparisons if used properly. Others have mentioned using FDR, but this can give incorrect results if there's a lot of non-independence in your data, as it assumes p-values are uniform across all tests under the null. The distribution of p-values across tests under the null can be very skewed if a lot of non-independence exists.
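For concreteness, here is a small Python sketch of the two standard corrections this thread keeps referring to: Bonferroni (valid under any dependence, but conservative) and the Benjamini–Hochberg FDR step-up procedure (derived assuming independent or positively dependent tests, which is exactly the caveat above). The p-values are made up for illustration.

```python
import numpy as np

def bonferroni(pvals, alpha=0.05):
    """Reject H0 where p < alpha / m; controls FWER under any dependence."""
    p = np.asarray(pvals)
    return p < alpha / len(p)

def benjamini_hochberg(pvals, alpha=0.05):
    """BH step-up: reject the k smallest p-values, where k is the largest i
    with p_(i) <= i * alpha / m. Controls FDR under independence or
    positive dependence -- not under arbitrary correlation."""
    p = np.asarray(pvals)
    m = len(p)
    order = np.argsort(p)
    thresh = alpha * np.arange(1, m + 1) / m
    below = p[order] <= thresh
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])
        reject[order[:k + 1]] = True
    return reject

pvals = [0.001, 0.008, 0.019, 0.032, 0.20, 0.74]
n_bonf = bonferroni(pvals).sum()          # stricter: fewer rejections
n_bh = benjamini_hochberg(pvals).sum()    # less strict: more rejections
```

On these numbers Bonferroni rejects 2 hypotheses and BH rejects 4, showing why FDR procedures are attractive when many true effects are expected and why their dependence assumptions matter.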
27,270
Are multiple comparisons corrections necessary for informal/visual "multiple comparisons"?
A possible alternative to correction, depending on your question, is to test for the significance of the sum of p-values. You can then even penalize yourself for tests that are not done by adding high p-values. Extensions of Fisher's method (which requires independence of the tests) that do not require independence could be used, e.g. Kost's method.
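Fisher's method itself is a one-liner: combine k p-values into $-2\sum_i \ln p_i$, which is chi-square with 2k degrees of freedom under the null when the tests are independent. A minimal Python sketch (the example p-values are made up; Kost-style extensions adjust the chi-square reference distribution for correlated tests):

```python
import math
from scipy import stats

def fisher_combine(pvals):
    """Fisher's method: X2 = -2 * sum(log p_i) ~ chi2(2k) under the null,
    assuming the k tests are independent."""
    stat = -2.0 * sum(math.log(p) for p in pvals)
    df = 2 * len(pvals)
    return stat, stats.chi2.sf(stat, df)

pvals = [0.08, 0.10, 0.12]           # individually non-significant
stat, p_comb = fisher_combine(pvals)  # combined evidence is stronger
```

Three individually marginal p-values combine to roughly p ≈ 0.03, which illustrates the "sum of (log) p-values" idea in the answer.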
27,271
Visualizing 2-letter combinations
Here is a start: visualize these on a grid of first and second letters:

combi <- c("Ad", "am", "ar", "as", "bc", "bd", "bp", "br", "BR", "bs", "by", "c", "C",
           "cc", "cd", "ch", "ci", "CJ", "ck", "Cl", "cm", "cn", "cq", "cs", "Cs", "cv",
           "d", "D", "dc", "dd", "de", "df", "dg", "dn", "do", "ds", "dt", "e", "E",
           "el", "ES", "F", "FF", "fn", "gc", "gl", "go", "H", "Hi", "hm", "I", "ic",
           "id", "ID", "if", "IJ", "Im", "In", "ip", "is", "J", "lh", "ll", "lm", "lo",
           "Lo", "ls", "lu", "m", "MH", "mn", "ms", "N", "nc", "nd", "nn", "ns", "on",
           "Op", "P", "pa", "pf", "pi", "Pi", "pm", "pp", "ps", "pt", "q", "qf", "qq",
           "qr", "qt", "r", "Re", "rf", "rk", "rl", "rm", "rt", "s", "sc", "sd", "SJ",
           "sn", "sp", "ss", "t", "T", "te", "tr", "ts", "tt", "tz", "ug", "UG", "UN",
           "V", "VA", "Vd", "vi", "Vo", "w", "W", "y")

df <- data.frame(first  = factor(gsub("^(.).", "\\1", combi), levels = c(LETTERS, letters)),
                 second = factor(gsub("^.", "", combi), levels = c(LETTERS, letters)),
                 combi  = combi)

library(ggplot2)
ggplot(data = df, aes(x = first, y = second)) +
  geom_text(aes(label = combi), size = 3) +  ## geom_point()
  geom_vline(xintercept = 26.5, col = "grey") +
  geom_hline(yintercept = 26.5, col = "grey")

(was:)
ggplot(data = df, aes(x = second)) + geom_histogram()
ggplot(data = df, aes(x = first)) + geom_histogram()

I gather:
- of the one-letter names, fortunately i, j, k, and l are available (so I can index up to 4d arrays)
- unfortunately t (time), c (concentration) are gone. So are m (mass), V (volume) and F (force). No radius r nor diameter d. I can have pressure (p), amount of substance (n), and length l, though. Maybe I'll have to change to greek names: ε is OK, but then shouldn't π <- pi ?
- I can have whatever lowerUPPER name I want. In general, starting with an upper case letter is a safer bet than lower case.
- don't start with c or d
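The bookkeeping behind the plot (split names by length, tally first and second letters, see which single letters remain free) is independent of the plotting itself. A minimal Python sketch of that step, using only a small illustrative subset of the name list above:

```python
from collections import Counter

# Illustrative subset of the taken names listed above
names = ["Ad", "am", "ar", "as", "bc", "c", "C", "pi", "Pi", "t", "T"]

two_letter = [n for n in names if len(n) == 2]
first_counts = Counter(n[0] for n in two_letter)    # x-axis of the grid
second_counts = Counter(n[1] for n in two_letter)   # y-axis of the grid

taken_singles = {n for n in names if len(n) == 1}
# Which classic index letters are still free?
free_indices = sorted(set("ijkl") - taken_singles)
```

On the full list the same tallies drive both the grid plot and the two marginal histograms.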
27,272
Visualizing 2-letter combinations
Ok, here's my very quick take on a "periodic table"-like visualization, based on the SO question and the comments of the others. The main problem is the big difference in number of variables between packages, which kind of hinders the visualization... I realize this is very rough, so please feel free to change it as you wish.
Here is the current output (from my package list)
And the code:

# Load all the installed packages
lapply(rownames(installed.packages()), require, character.only = TRUE)
# Find variables of length 1 or 2
one_or_two <- unique(apropos("^[a-zA-Z]{1,2}$"))
# Find which package they come from
packages <- lapply(one_or_two, find)
# Some of the variables may belong to multiple packages, so determine the length
# of each entry in packages and duplicate the names accordingly
lengths <- unlist(lapply(packages, length))
var.data <- data.frame(var = rep(one_or_two, lengths), package = unlist(packages))

Now, we have a data frame like this:

> head(var.data, 10)
   var           package
1   ar     package:stats
2   as   package:methods
3   BD    package:fields
4   bs      package:VGAM
5   bs   package:splines
6   by      package:base
7    c      package:base
8    C     package:stats
9   cm package:grDevices
10   D     package:stats

We can now split the data by package:

data.split <- split(var.data, var.data$package)

We can see that most variables come from the base and stats packages:

> unlist(lapply(data.split, nrow))
    package:base package:datasets   package:fields
              16                1                2
 package:ggplot2 package:grDevices package:gWidgets
               2                1                1
 package:lattice     package:MASS   package:Matrix
               1                1                3
 package:methods     package:mgcv     package:plyr
               3                2                1
    package:spam  package:splines    package:stats
               1                2               14
package:survival    package:utils     package:VGAM
               1                2                4

Finally, the drawing routine:

plot(0, 0, "n", xlim = c(0, 100), ylim = c(0, 120),
     xaxt = "n", yaxt = "n", xlab = "", ylab = "")
side.len.x <- 100 / length(data.split)
side.len.y <- 100 / max(unlist(lapply(data.split, nrow)))
colors <- rainbow(length(data.split), start = 0.2, end = 0.6)

for (xcnt in 1:length(data.split)) {
    posx <- side.len.x * (xcnt - 1)
    # Remove "package:" in front of the package name
    pkg <- unlist(strsplit(as.character(data.split[[xcnt]]$package[1]), ":"))
    pkg <- pkg[2]
    # Write the package name
    text(posx + side.len.x/2, 102, pkg, srt = 90, cex = 0.95, adj = c(0, 0))
    for (ycnt in 1:nrow(data.split[[xcnt]])) {
        posy <- side.len.y * (ycnt - 1)
        rect(posx, posy, posx + side.len.x * 0.85, posy + side.len.y * 0.9,
             col = colors[xcnt])
        text(posx + side.len.x/2, posy + side.len.y/2,
             data.split[[xcnt]]$var[ycnt], cex = 0.7)
    }
}
27,273
Visualizing 2-letter combinations
Here's a letter-based histogram. Considered sizing the first letters by number, but decided against since that's already encoded in the vertical component.

# "Load" data
nms <- c("Ad","am","ar","as","bc","bd","bp","br","BR","bs","by","c","C","cc","cd","ch",
         "ci","CJ","ck","Cl","cm","cn","cq","cs","Cs","cv","d","D","dc","dd","de","df",
         "dg","dn","do","ds","dt","e","E","el","ES","F","FF","fn","gc","gl","go","H",
         "Hi","hm","I","ic","id","ID","if","IJ","Im","In","ip","is","J","lh","ll","lm",
         "lo","Lo","ls","lu","m","MH","mn","ms","N","nc","nd","nn","ns","on","Op","P",
         "pa","pf","pi","Pi","pm","pp","ps","pt","q","qf","qq","qr","qt","r","Re","rf",
         "rk","rl","rm","rt","s","sc","sd","SJ","sn","sp","ss","t","T","te","tr","ts",
         "tt","tz","ug","UG","UN","V","VA","Vd","vi","Vo","w","W","y")  # all names
two_in_base <- c("ar", "as", "by", "cm", "de", "df", "dt", "el", "gc", "gl", "if", "Im",
                 "is", "lh", "lm", "ls", "pf", "pi", "pt", "qf", "qr", "qt", "Re", "rf",
                 "rm", "rt", "sd", "ts", "vi")  # 2-letter names in base R
vowels <- c("a","e","i","o","u")
vowels <- c(vowels, toupper(vowels))

# Constants
yoffset.singles <- 3

# Define a function to give us consistent X coordinates
returnX <- function(vec) {
    sapply(vec, function(x) seq(length(all.letters))[x == all.letters])
}

# Make df of 2-letter names
combi <- nms[sapply(nms, function(x) nchar(x) == 2)]
combidf <- data.frame(first = substr(combi, 1, 1), second = substr(combi, 2, 2))
library(plyr)
combidf <- arrange(combidf, first, second)

# Add vowels
combidf$first.vwl <- (combidf$first %in% vowels)
combidf$second.vwl <- (combidf$second %in% vowels)

# Flag items only in base R
combidf$in_base <- paste(combidf$first, combidf$second, sep = "") %in% two_in_base

# Create a data.frame to hold our plotting information for the first letters
combilist <- dlply(combidf, .(first), function(x) x$second)
combi.first <- data.frame(first = names(combilist),
                          n = sapply(combilist, length),
                          stringsAsFactors = FALSE)
combi.first$y <- 0
all.letters <- c(letters, LETTERS)
# arrange(combi.first, desc(n))$first to go in order of prevalence
# (which may break the one-letter name display)
combi.first$x <- returnX(combi.first$first)

# Create a data.frame to hold plotting information for the second letters
combidf$x <- returnX(combidf$first)
combidf$y <- unlist(by(combidf$second, combidf$first, seq_along))

# Make df of 1-letter names
sngldf <- data.frame(sngl = nms[sapply(nms, function(x) nchar(x) == 1)])
singles.y <- max(combidf$y) + yoffset.singles
sngldf$y <- singles.y
sngldf$x <- returnX(sngldf$sngl)

# Plot
library(ggplot2)
ggplot(data = combidf, aes(x = x, y = y)) +
  geom_text(aes(label = second, size = 3, colour = combidf$in_base),
            position = position_jitter(w = 0, h = .25)) +
  geom_text(data = combi.first, aes(label = first, x = x, y = y, size = 4)) +
  geom_text(data = sngldf, aes(label = sngl, x = x, y = y, size = 4)) +
  scale_size(name = "Order (2-letter names)", limits = c(1, 4),
             breaks = c(1, 2), labels = c("Second", "First")) +
  scale_x_continuous("", breaks = c(13, 39), labels = c("lower", "UPPER")) +
  scale_y_continuous("", breaks = c(0, 5, singles.y),
                     labels = c("First letter of two-letter names",
                                "Second letter of two-letter names",
                                "One-letter names")) +
  coord_equal(1.5) +
  labs(colour = "In base R")
27,274
Visualizing 2-letter combinations
Periodic Table for 100, Alex. I don't have code for it, though. :( One might think that a "periodic table" package might already exist in CRAN. The idea of a coloring scheme and layout of such data could be interesting and useful. These could be colored by package and sorted vertically by frequency, e.g. in a sample of code on CRAN or as they appear in one's local codebase.
27,275
Visualizing 2-letter combinations
The first two pages in chapter 2 of MacKay's ITILA has nice diagrams showing the conditional probabilities of all character pairings in the English language. You may find it of use. I'm embarrassed to say that I don't remember what program was used to produce them.
27,276
Why do you subtract the mean when calculating autocorrelation?
Let's start from the basics. Variance tells us about the variability around the mean $$ \operatorname{Var}(X) = E[(X - E[X])^2] $$ You can generalize this concept to two variables, the covariance $$ \operatorname{Cov}(X, Y) = E[(X - E[X]) (Y - E[Y])] $$ where variance is a special case of it $$ \operatorname{Cov}(X, X) = E[(X - E[X])^2] $$ Correlation is just a normalized covariance so that it is bounded between -1 and 1, $$ \operatorname{Corr}(X, Y) = \frac{\operatorname{Cov}(X, Y)}{\sigma_X \sigma_Y} $$ Autocorrelation is just a special case of correlation. Yes, you can calculate the expected value of the ratio of two variables, and in some cases it might be a meaningful statistic, but it no longer measures the "spread" or "co-spread" of the variables. You may be interested in reading the How would you explain covariance to someone who understands only the mean? thread.
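To make the point concrete, here is a small Python sketch (illustrative, not from the thread) comparing the centered autocorrelation estimator with the uncentered ratio of cross-products on white noise that has a large mean: the centered version correctly reports no serial dependence, while the raw ratio is driven toward 1 by the mean alone.

```python
import numpy as np

def autocorr(x, lag):
    """Lag-k autocorrelation with the series mean subtracted: the sum of
    centered cross-products over the centered sum of squares."""
    x = np.asarray(x, dtype=float)
    xm = x - x.mean()
    return np.sum(xm[:-lag] * xm[lag:]) / np.sum(xm * xm) if lag else 1.0

# White noise around a large mean: no serial dependence at all
rng = np.random.default_rng(1)
x = 100 + rng.normal(size=2000)

r1 = autocorr(x, 1)                            # ~0, as it should be
raw = np.sum(x[:-1] * x[1:]) / np.sum(x * x)   # ~1, dominated by the mean
```

Without subtracting the mean, the statistic measures the size of the mean rather than how the series co-varies with its own past.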
27,277
Why do you subtract the mean when calculating autocorrelation?
Similar to simple covariance and correlation, the mean is subtracted when estimating the autocorrelation (and likewise the autocovariance, cross-correlation and cross-covariance). The following is the covariance of $X$ and $Y$, for example, where the means of the random variables are subtracted from the random variables themselves: $$\operatorname{cov}(X,Y)=\mathbb E[(X-\mathbb E[X])(Y-\mathbb E[Y])]$$ Your suggestion corresponds to something like $\mathbb E[X/Y]$, which is fundamentally different from the covariance or the correlation. In the time-series literature, we usually see a generative equation similar to $$x_t=\rho x_{t-1}+\epsilon_t$$ However, not all time series are generated this way, so ignoring the noise term and using a rough estimate like $\rho\approx x_t/x_{t-1}$ may not work well even in this type of series, let alone in the general family of random processes.
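As a sketch of why the naive ratio fails even in the AR(1) case above (Python rather than R; the parameters are made up for the illustration), one can simulate the process and compare the mean-subtracted lag-1 autocorrelation against the raw ratios $x_t/x_{t-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.7, 20_000

# simulate x_t = rho * x_{t-1} + eps_t
x = np.zeros(n)
for t in range(1, n):
    x[t] = rho * x[t - 1] + rng.normal()

# standard estimate: correlate the mean-subtracted series with its lag
xc = x - x.mean()
acf1 = np.sum(xc[1:] * xc[:-1]) / np.sum(xc**2)

# naive estimate: raw ratios x_t / x_{t-1} (skipping the x_0 = 0 start value);
# the denominator passes close to zero, so individual ratios explode
ratios = x[2:] / x[1:-1]

print(acf1)                     # close to rho
print(np.max(np.abs(ratios)))   # some individual ratios are enormous
```

The ratio's heavy tails (the denominator can be arbitrarily close to zero) are exactly why averaging $x_t/x_{t-1}$ is not a usable estimator of $\rho$.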
27,278
Why is the maximum likelihood estimator susceptible to outliers?
Measurements do not always show ideal behavior, and the presumed underlying distribution that is used for the maximum likelihood estimate (MLE) is often not the distribution of the measurements. For instance, in the graph above the distribution of the measurements is a mixture of two Gaussian distributions: 25% with $\sigma = 10$ and 75% with $\sigma = 1$. (So for whatever reason the distribution is not ideal, either because the population is not ideal or because the measurements are not perfect.) The component with large variation will increase the sampling variance (and inaccuracy/error) of the estimator a lot. Instead of using the MLE (which in a simple case, like estimating the mean of the population, boils down to the average/mean value of the sample), one could use a statistic that filters out a few of the extreme values from the sample. This greatly reduces the variation of the statistic. The alternative statistic is then more robust to outliers, i.e. to the small component of the distribution whose extreme values would otherwise inflate the variance if they are not 'taken care of'. Example with the above distribution: let's consider a sample of size 10, computing the MLE as the mean of the sample and the alternative as the mean of only the middle 6 values.
Let's see how these differ in distribution/error:

### function to compute the estimate in two different ways
get_sample = function() {
  ### generate data
  n = 10
  sigma = 10^rbinom(n,1,0.25)   ### mixture distribution: 0.25 part sigma = 10
                                ### and 0.75 part sigma = 1
  x = rnorm(n,0,sigma)
  ### compute estimates
  est1 = mean(x)
  est2 = mean(x[order(x)][3:8]) ### use only values 3 to 8 (deleting outer 20%)
  return(c(est1,est2))
}

### compute the estimates
set.seed(1)
x <- replicate(10^4, get_sample())

### plot the histograms
layout(matrix(1:2,2))
hist(x[1,], breaks = seq(-10,10,0.1), xlim = c(-6,6), freq = 0,
     xlab = "estimator value",
     main = "distribution of estimator based on sample mean")
hist(x[2,], breaks = seq(-10,10,0.1), xlim = c(-6,6), freq = 0,
     xlab = "estimator value",
     main = "distribution of estimator based on mean of 6 middle values")

The MLE is often the estimator with the lowest variance, or one that performs sufficiently well, when the ideal conditions hold. But when the assumed distribution (on which this statement of low variance is based) is only slightly perturbed, with values of large magnitude, this can already result in a large variance for the MLE.

Note 1: It also depends on what sort of MLE you have. For instance, when we estimate the mean of a distribution and the distribution is a Gaussian distribution, then the MLE is the mean of the sample, and as you see in the example above, the mean is not very robust against small perturbations. But when the distribution is a Laplace distribution, then the MLE is the median of the sample, and this will be more robust against small perturbations.

Note 2: In the example above we simply excluded the bottom and top 20% from the sample. But robust estimators are not that simple; it is a complex and large field. For instance, what if we only had positive outliers; wouldn't discarding the bottom part of the sample bias the estimate? And how much should we discard?
There are many considerations that go into constructing a robust estimator (and sometimes it is a bit of an art rather than a science), but the example shows the idea of why it generally works.
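The R simulation above can also be mirrored in Python (a sketch, not a literal translation; the random draws differ, so the exact numbers differ, but the variance comparison comes out the same way):

```python
import numpy as np

rng = np.random.default_rng(1)

def estimates():
    # mixture: each observation has sigma = 10 with prob 0.25, else sigma = 1
    n = 10
    sigma = 10.0 ** rng.binomial(1, 0.25, size=n)
    x = rng.normal(0.0, sigma)
    est_mean = x.mean()                  # MLE of the mean under a Gaussian model
    est_trim = np.sort(x)[2:8].mean()    # mean of the 6 middle values
    return est_mean, est_trim

sims = np.array([estimates() for _ in range(10_000)])
var_mean, var_trim = sims.var(axis=0)
print(var_mean, var_trim)   # the trimmed estimator varies far less
```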
27,279
Why is the maximum likelihood estimator susceptible to outliers?
Let's consider a simple example for computing the MLE of the mean from a small sample dataset: $X=\{1, 2, 5, 10, 4, 8\}$. Let's assume the data is generated with a Poisson distribution with rate $\lambda$ and we aim at finding the MLE of $\lambda$. Now, the likelihood is $L(\lambda|x_1,x_2,\ldots,x_n)=\prod\limits_{i=1}^{n}P(X=x_i)=\prod\limits_{i=1}^{n}\dfrac{e^{-\lambda}\lambda^{x_i}}{x_i!}$ s.t. the log-likelihood is $l(\lambda) = \sum\limits_{i=1}^{n}\left(-\lambda+x_i\ln(\lambda)-\ln(x_i!)\right)=-n\lambda+\ln(\lambda)\sum\limits_{i=1}^{n}x_i-\sum\limits_{i=1}^{n}\ln(x_i!)$ $\implies \dfrac{\partial l}{\partial\lambda}=-n+\dfrac{\sum\limits_{i=1}^{n}x_i}{\lambda}=0$ at the maximum $\implies \hat{\lambda}_{MLE}=\dfrac{\sum\limits_{i=1}^{n}x_i}{n}$ Here, $n=6$, so $\hat{\lambda}_{MLE}=5$, i.e., the sample mean. Now, let's consider an outlier point $1000$ added to the dataset. The MLE changes to $147.1429$, which clearly overfits to the noise in the data.
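The arithmetic is easy to verify; here is a quick check in Python (the answers in this thread otherwise use R):

```python
# toy data from the answer
x = [1, 2, 5, 10, 4, 8]

# the MLE of the Poisson rate is the sample mean
mle = sum(x) / len(x)
print(mle)                  # 5.0

# a single outlier drags the estimate far away from the bulk of the data
x_out = x + [1000]
mle_out = sum(x_out) / len(x_out)
print(round(mle_out, 4))    # 147.1429
```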
27,280
Adding a linear regression predictor decreases R squared
Could it be that you have missing values in Q that are getting auto-dropped? That'd have implications on the sample, making the two regressions not comparable.
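A small sketch of the mechanism (Python, with entirely made-up data, constructed so that Q is missing exactly where the first predictor works well): dropping the rows with missing Q changes the estimation sample, so $R^2$ can fall even though a predictor was added.

```python
import numpy as np

def r_squared(X, y):
    # OLS with an intercept; R^2 = 1 - SS_resid / SS_total
    A = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid @ resid / np.sum((y - y.mean()) ** 2)

# 20 observations; x predicts y perfectly in the first half,
# and q is missing exactly in that half
x = np.tile(np.arange(1.0, 11.0), 2)
y = np.concatenate([np.arange(1.0, 11.0),
                    [10, 1, 9, 2, 8, 3, 7, 4, 6, 5]])
q = np.concatenate([np.full(10, np.nan),
                    [2, 2, 3, 3, 1, 1, 4, 4, 5, 5]])

r2_full = r_squared(x[:, None], y)      # y ~ x on all 20 rows
keep = ~np.isnan(q)                     # rows silently dropped once q enters
r2_sub = r_squared(np.column_stack([x[keep], q[keep]]), y[keep])  # y ~ x + q

print(r2_full, r2_sub)   # R^2 drops despite the extra predictor
```

On a fixed sample, adding a predictor can never lower (unadjusted) $R^2$, which is why a drop is a strong hint that the sample itself changed.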
27,281
Difference between non-informative and improper Priors
Improper priors are $\sigma$-finite non-negative measures $\text{d}\pi$ on the parameter space $\Theta$ such that$$\int_\Theta \text{d}\pi(\theta) = +\infty$$As such they generalise the notion of a prior distribution, which is a probability distribution on the parameter space $\Theta$ such that$$\int_\Theta \text{d}\pi(\theta) =1$$They are useful in several ways to characterise
- the set of limits of proper Bayesian procedures, which are not all proper Bayesian procedures themselves;
- frequentist optimal procedures, as in (admissibility) complete class theorems such as Wald's;
- frequentist best invariant estimators (since they can be expressed as Bayes estimates under the corresponding right Haar measure, usually improper);
- priors derived from the shape of the likelihood function, such as non-informative priors (e.g., Jeffreys').

Because they do not integrate to a finite number, they do not allow for a probabilistic interpretation, but they can nonetheless be used in statistical inference if the marginal likelihood is finite,$$\int_\Theta \ell(\theta|x)\text{d}\pi(\theta) < +\infty$$since the posterior distribution$$\dfrac{\ell(\theta|x)\text{d}\pi(\theta)}{\int_\Theta \ell(\theta|x)\text{d}\pi(\theta)}$$is then well-defined. This means it can be used in exactly the same way as a posterior distribution derived from a proper prior, to derive posterior quantities for estimation like posterior means or posterior credible intervals.

Warning: One branch of Bayesian inference does not cope well with improper priors, namely testing sharp hypotheses. Indeed those hypotheses require the construction of two prior distributions, one under the null and one under the alternative, that are orthogonal. If one of these priors is improper, it cannot be normalised and the resulting Bayes factor is undetermined.
In Bayesian decision theory, when seeking an optimal decision procedure $\delta$ under the loss function $L(d,\theta)$, an improper prior $\text{d}\pi$ is useful in cases when the minimisation problem $$\arg \min_d \int_\Theta L(d,\theta)\ell(\theta|x)\text{d}\pi(\theta)$$ allows for a non-trivial solution (even when the posterior distribution is not defined). The reason for this distinction is that the decision only depends on the product $L(d,\theta)\text{d}\pi(\theta)$, which means that it is invariant under changes of the prior by multiplicative terms $\varpi(\theta)$ provided the loss function is divided by the same multiplicative terms $\varpi(\theta)$,$$L(d,\theta)\text{d}\pi(\theta)=\dfrac{L(d,\theta)}{\varpi(\theta)}\times\varpi(\theta)\text{d}\pi(\theta)$$

Non-informative priors are classes of (proper or improper) prior distributions that are determined in terms of a certain informational criterion that relates to the likelihood function, like
- Laplace's insufficient-reason flat prior;
- Jeffreys (1939) invariant priors;
- maximum entropy (or MaxEnt) priors (Jaynes, 1957);
- minimum description length priors (Rissanen, 1987; Grünwald, 2005);
- reference priors (Bernardo, 1979, 1981; Berger & Bernardo, 1992; Bernardo & Sun, 2012);
- probability matching priors (Welch & Peers, 1963; Scricciolo, 1999; Datta, 2005);
and further classes, some of which are described in Kass & Wasserman (1995). The name non-informative is a misnomer in that no prior is ever completely non-informative. See my discussion on this forum, or Larry Wasserman's diatribe. (Non-informative priors are most often improper.)
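As a small numerical illustration (not from the answer above; the data are made up), a flat improper prior $\pi(\theta)\propto 1$ on a normal mean still yields a proper posterior, because the marginal likelihood is finite; after normalisation the posterior is $N(\bar x, \sigma^2/n)$, here checked on a grid in Python:

```python
import numpy as np

x = np.array([1.2, 0.8, 1.5, 1.0, 0.5])   # made-up observations, known sigma = 1
sigma = 1.0

theta = np.linspace(-5.0, 7.0, 20_001)     # grid over the parameter space
dtheta = theta[1] - theta[0]

# likelihood times flat prior (the improper prior contributes only a constant)
unnorm = np.exp(-0.5 * np.sum((x[:, None] - theta) ** 2, axis=0) / sigma**2)

Z = unnorm.sum() * dtheta                  # finite marginal likelihood
post = unnorm / Z                          # a proper, normalised posterior density

post_mean = (theta * post).sum() * dtheta
total = post.sum() * dtheta

print(post_mean)   # ~ the sample mean, x.mean() = 1.0
print(total)       # the normalised posterior integrates to 1
```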
27,282
Difference between non-informative and improper Priors
A non-informative prior, rigorously speaking, is not a prior distribution. It is a function such that, if we treat it as if it were a distribution and apply Bayes' formula, we get a posterior distribution which aims to reflect, as well as possible, the information contained in the data and only in the data, or to achieve a good frequentist-matching property (i.e. a $95\%$ posterior credible interval is approximately a $95\%$ confidence interval). A non-informative prior is often "improper". A distribution has a well-known property: its integral equals one. A non-informative prior is said to be improper when its integral is infinite (so in that case it is clearly not a distribution).
27,283
What is this type of plot called with side-by-side centered horizontal density bars?
Sorry I don't have enough street cred on CV to post a comment, where this is more appropriate, but here is a link to some code in R to perform something like what you've depicted, using base graphics to rotate histograms in place of the density function inherent in ggplot2: from stack overflow: https://stackoverflow.com/questions/15846873/symmetrical-violin-plot-like-histogram if someone with appropriate powers cares to move this from answer to comment, please do.
27,284
What is this type of plot called with side-by-side centered horizontal density bars?
It's a little hard to tell what the plots are supposed to represent, but they look an awful lot like violin plots. A violin plot is essentially a vertical, doubled kernel density plot, so that the width along the x axis corresponds to greater density at the corresponding value along the y axis. You can generate them in package lattice with panel.violin, or in ggplot2 with geom_violin. Edit: there is also an R package called vioplot that (I think) uses only base R graphics, and a package called beanplot that generates something similar called a "bean plot."
27,285
What can I do beyond Pearson correlation?
A first look at the scatterplot shows that the problem lies in a few very high values of the second variable: A log scale on V2 shows a subtle decreasing relationship, so you can start from there: EDIT: I posted this mainly to show how to get a clearer scatterplot in this case... I didn't go log-log since V1 is quite evenly distributed, so transforming it is not useful for visualization. If you want to fit this with $V_2=\alpha V_1^{\beta}$, nonlinear OLS is a safer idea.
27,286
What can I do beyond Pearson correlation?
I am with @mbq on this, but would suggest plotting on a log scale in both dimensions: The correlation coefficient is now about -0.44.
27,287
What can I do beyond Pearson correlation?
You mentioned in the comments to @shabbychef that you wanted to see the code to create the trend line. Here's a basic R demonstration.

Get and prepare the data:

> # I created a gist on github with the data
> filename <- "https://raw.github.com/gist/1320989/40be602c43b5f29d79af50bf2b63ba6c1a839807/data.txt"
>
> # downloaded the data using wget because default R doesn't handle https
> # if you don't have wget, just download the file manually
> system(paste("wget -nc", filename))
File 'data.txt' already there; not retrieving.
> # read downloaded data into R
> x <- read.table("data.txt")
> names(x) <- c("x", "y")
>
> x$logx <- log(x$x)
> x$logy <- log(x$y)

Create the plot with abline:

> png("logplot.png")
> plot(x$logx, x$logy)
> abline(lm(logy~logx, x))
> dev.off()

Examine correlations: you might want to examine correlations before and after the transformation. Note how the Spearman correlation does not change.

> cor(x$x, x$y, method="pearson")
[1] -0.1821122
> cor(x$x, x$y, method="spearman")
[1] -0.3322378
>
> cor(x$logx, x$logy, method="pearson")
[1] -0.4399946
> cor(x$logx, x$logy, method="spearman")
[1] -0.3322378
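The invariance of Spearman under the log transform holds for any strictly monotone transform, because only the ranks enter the statistic. A quick sketch with made-up positive data (Python here, in place of the R used above):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.7, 1.1, 8.2, 3.0, 20.1, 5.5])   # made-up positive data, no ties

def ranks(a):
    # rank of each value (simple double-argsort; fine because there are no ties)
    return np.argsort(np.argsort(a)).astype(float)

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the ranks
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

p_raw = np.corrcoef(x, y)[0, 1]
p_log = np.corrcoef(np.log(x), np.log(y))[0, 1]
s_raw = spearman(x, y)
s_log = spearman(np.log(x), np.log(y))

print(p_raw, p_log)   # Pearson changes under the transform
print(s_raw, s_log)   # Spearman does not: log is monotone, so ranks are unchanged
```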
27,288
What can I do beyond Pearson correlation?
The advice on taking logs that you have gotten from several people is good. But look at the plots that, e.g., @jeromy produced. It's not QUITE one of the bad ones from Anscombe's quartet, but it doesn't look like the good one, either. So, I would use Spearman's, noting, as @Jeromy points out, that it doesn't change. But I might not want to use any correlation at all ... Does the correlation capture what you want to capture about the relationship? Does it capture the clumpiness of the variables and the zero-inflation?
What can I do beyond Pearson correlation?
The advice on taking logs that you have gotten from several people is good. But look at the plots that, e.g @jeromy produced. It's not QUITE one of the bad ones from Anscombe quartet., but it doesn't
What can I do beyond Pearson correlation? The advice on taking logs that you have gotten from several people is good. But look at the plots that, e.g., @jeromy produced. It's not QUITE one of the bad ones from Anscombe's quartet, but it doesn't look like the good one, either. So, I would use Spearman's, noting, as @Jeromy points out, that it doesn't change. But I might not want to use any correlation at all ... Does the correlation capture what you want to capture about the relationship? Does it capture the clumpiness of the variables and the zero-inflation?
What can I do beyond Pearson correlation? The advice on taking logs that you have gotten from several people is good. But look at the plots that, e.g @jeromy produced. It's not QUITE one of the bad ones from Anscombe quartet., but it doesn't
27,289
What can I do beyond Pearson correlation?
I'm just going to throw out three more approaches, mostly for the reaction, since these methods are newer and I'm curious about them... First, the MIC (maximal information coefficient) introduced by Reshef et al. (2011) to determine nonlinear correlations. Described as 'a correlation for the 21st century' when it made a splash in the media. Second, the distance correlation statistic, dCor, a standardized Brownian covariance introduced by Szekely, Rizzo, and Bakirov (2007) and favoured by Simon & Tibshirani (2012) in their critique of the low power of MIC. Third, the HHG test of Heller, Heller and Gorfine (2012a, b), noted by them as a superior alternative to MIC with better power characteristics. # Using @Jeromy Anglim's gist on github with the data library(RCurl) writeChar(con="data.txt", getURL("https://raw.github.com/gist/1320989/40be602c43b5f29d79af50bf2b63ba6c1a839807/data.txt", ssl.verifypeer = FALSE)) # read downloaded data into R x <- read.table("data.txt") names(x) <- c("x", "y") x$logx <- log(x$x) x$logy <- log(x$y) ## maximal information coefficient on untransformed data library(minerva) with(x, mine(x, y)) $MIC [1] 0.4333141 # maximal information coefficient on log-log data with(x, mine(logx, logy)) $MIC [1] 0.4333141 MIC: No difference between untransformed and log-transformed data. Quite similar to the absolute value of the Pearson correlation on log-log data. ## distance correlation statistic on untransformed data library("energy") with(x, dcor(x, y)) [1] 0.2352139 # distance correlation statistic on log-log data with(x, dcor(logx, logy)) [1] 0.3638021 dCor: Lower values than MIC, and sensitive to the log transform. Similar to the Spearman correlation.
## HHG test download.file("http://www.math.tau.ac.il/~ruheller/Software/HHG2x2_0.1-1.tar.gz", "HHG2x2_0.1-1.tar.gz") install.packages("HHG2x2_0.1-1.tar.gz", repos = NULL, type="source") library(HHG2x2) writeChar(con="myHHG.R", getURL("https://raw.github.com/andrewdyates/HHG_R/master/R/myHHG.R", ssl.verifypeer = FALSE)) source("myHHG.R") xs <- x[sample(nrow(x), 50), ] # crashed with the full dataset... Dx = as.matrix(dist((xs[,1]),diag=TRUE,upper=TRUE)) Dy = as.matrix(dist((xs[,2]),diag=TRUE,upper=TRUE)) myHHG(Dx,Dy); pvHHG(Dx,Dy) $sum_chisquared [1] 8374.196 $sum_lr [1] 3890.457 $max_chisquared [1] 21.28747 $max_lr [1] 10.80188 $pv [1] 9.998e-05 $output_monte [1] 10001 $A_threshold [1] 2.985682 $B_threshold [1] -4.553877 HHG: Actually I'm not sure how to get a comparable distance metric out of the HHG... References: Gorfine, M., Heller, R., & Heller, Y. (2012a). Comment on “Detecting Novel Associations in Large Data Sets”. Preprint, available at the website http://iew3.technion.ac.il/~gorfinm/files/science6.pdf Heller, R., Heller, Y., & Gorfine, M. (2012b). A consistent multivariate test of association based on ranks of distances. Biometrika, arXiv preprint arXiv:1201.3522. Reshef, D. N., Y. A. Reshef, et al. (2011). "Detecting Novel Associations in Large Data Sets." Science 334(6062): 1518-1524. Simon, Noah and Robert Tibshirani (2012). Comment On Detecting Novel Associations In Large Data Sets By Reshef et al, Science Dec 16, 2011. www-stat.stanford.edu/~tibs/reshef/comment.pdf Szekely, G.J., Rizzo, M.L., and Bakirov, N.K. (2007), Measuring and Testing Dependence by Correlation of Distances, Annals of Statistics, Vol. 35 No. 6, pp. 2769-2794. http://dx.doi.org/10.1214/009053607000000505
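Since the energy package's dcor can feel like a black box, here is a rough pure-Python sketch of the sample distance correlation, built from double-centred pairwise distance matrices. It uses O(n²) loops and made-up toy data, so treat it as intuition only and use the package for real work:

```python
import math
import random

def dist_matrix(v):
    # pairwise absolute distances between all observations
    return [[abs(a - b) for b in v] for a in v]

def double_center(d):
    # subtract row and column means, add back the grand mean
    n = len(d)
    row = [sum(r) / n for r in d]
    col = [sum(d[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[d[i][j] - row[i] - col[j] + tot for j in range(n)] for i in range(n)]

def dcov2(x, y):
    # squared sample distance covariance (V-statistic form, always >= 0)
    A = double_center(dist_matrix(x))
    B = double_center(dist_matrix(y))
    n = len(x)
    return sum(A[i][j] * B[i][j] for i in range(n) for j in range(n)) / n ** 2

def dcor(x, y):
    denom = math.sqrt(dcov2(x, x) * dcov2(y, y))
    return math.sqrt(dcov2(x, y) / denom) if denom > 0 else 0.0

random.seed(0)
x = [random.uniform(-1, 1) for _ in range(100)]
y_lin = [2 * a + 0.1 * random.gauss(0, 1) for a in x]
y_quad = [a * a + 0.05 * random.gauss(0, 1) for a in x]  # Pearson ~ 0 here

# dCor flags both the linear and the purely nonlinear relationship
print(round(dcor(x, y_lin), 3), round(dcor(x, y_quad), 3))
```

The quadratic case is the selling point: ordinary Pearson correlation is near zero there, while dCor is clearly positive.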
What can I do beyond Pearson correlation?
I'm just going to throw out three more approaches, mostly for the reaction, since these methods are newer and I'm curious about them... First, the MIC (maximal information coefficient) introduced by R
What can I do beyond Pearson correlation? I'm just going to throw out three more approaches, mostly for the reaction, since these methods are newer and I'm curious about them... First, the MIC (maximal information coefficient) introduced by Reshef et al. (2011) to determine nonlinear correlations. Described as 'a correlation for the 21st century' when it made a splash in the media. Second, the distance correlation statistic, dCor, a standardized Brownian covariance introduced by Szekely, Rizzo, and Bakirov (2007) and favoured by Simon & Tibshirani (2012) in their critique of the low power of MIC. Third, the HHG test of Heller, Heller and Gorfine (2012a, b), noted by them as a superior alternative to MIC with better power characteristics. # Using @Jeromy Anglim's gist on github with the data library(RCurl) writeChar(con="data.txt", getURL("https://raw.github.com/gist/1320989/40be602c43b5f29d79af50bf2b63ba6c1a839807/data.txt", ssl.verifypeer = FALSE)) # read downloaded data into R x <- read.table("data.txt") names(x) <- c("x", "y") x$logx <- log(x$x) x$logy <- log(x$y) ## maximal information coefficient on untransformed data library(minerva) with(x, mine(x, y)) $MIC [1] 0.4333141 # maximal information coefficient on log-log data with(x, mine(logx, logy)) $MIC [1] 0.4333141 MIC: No difference between untransformed and log-transformed data. Quite similar to the absolute value of the Pearson correlation on log-log data. ## distance correlation statistic on untransformed data library("energy") with(x, dcor(x, y)) [1] 0.2352139 # distance correlation statistic on log-log data with(x, dcor(logx, logy)) [1] 0.3638021 dCor: Lower values than MIC, and sensitive to the log transform. Similar to the Spearman correlation.
## HHG test download.file("http://www.math.tau.ac.il/~ruheller/Software/HHG2x2_0.1-1.tar.gz", "HHG2x2_0.1-1.tar.gz") install.packages("HHG2x2_0.1-1.tar.gz", repos = NULL, type="source") library(HHG2x2) writeChar(con="myHHG.R", getURL("https://raw.github.com/andrewdyates/HHG_R/master/R/myHHG.R", ssl.verifypeer = FALSE)) source("myHHG.R") xs <- x[sample(nrow(x), 50), ] # crashed with the full dataset... Dx = as.matrix(dist((xs[,1]),diag=TRUE,upper=TRUE)) Dy = as.matrix(dist((xs[,2]),diag=TRUE,upper=TRUE)) myHHG(Dx,Dy); pvHHG(Dx,Dy) $sum_chisquared [1] 8374.196 $sum_lr [1] 3890.457 $max_chisquared [1] 21.28747 $max_lr [1] 10.80188 $pv [1] 9.998e-05 $output_monte [1] 10001 $A_threshold [1] 2.985682 $B_threshold [1] -4.553877 HHG: Actually I'm not sure how to get a comparable distance metric out of the HHG... References: Gorfine, M., Heller, R., & Heller, Y. (2012a). Comment on “Detecting Novel Associations in Large Data Sets”. Preprint, available at the website http://iew3.technion.ac.il/~gorfinm/files/science6.pdf Heller, R., Heller, Y., & Gorfine, M. (2012b). A consistent multivariate test of association based on ranks of distances. Biometrika, arXiv preprint arXiv:1201.3522. Reshef, D. N., Y. A. Reshef, et al. (2011). "Detecting Novel Associations in Large Data Sets." Science 334(6062): 1518-1524. Simon, Noah and Robert Tibshirani (2012). Comment On Detecting Novel Associations In Large Data Sets By Reshef et al, Science Dec 16, 2011. www-stat.stanford.edu/~tibs/reshef/comment.pdf Szekely, G.J., Rizzo, M.L., and Bakirov, N.K. (2007), Measuring and Testing Dependence by Correlation of Distances, Annals of Statistics, Vol. 35 No. 6, pp. 2769-2794. http://dx.doi.org/10.1214/009053607000000505
What can I do beyond Pearson correlation? I'm just going to throw out three more approaches, mostly for the reaction, since these methods are newer and I'm curious about them... First, the MIC (maximal information coefficient) introduced by R
27,290
Animating the effect of changing kernel width in R
It depends a little bit on what your end goal is. Quick and dirty hack for real-time demonstrations Using Sys.sleep(seconds) in a loop where seconds indicates the number of seconds between frames is a viable option. You'll need to set the xlim and ylim parameters in your call to plot to make things behave as expected. Here's some simple demonstration code. # Just a quick test of Sys.sleep() animation x <- seq(0,2*pi, by=0.01) y <- sin(x) n <- 5 pause <- 0.5 ybnds <- quantile(n*y, probs=c(0,1)) x11() # Draw successively taller sinewaves with a gradually changing color for( i in 1:n ) { plot(x, i*y, type="l", lwd=2, ylim=ybnds, col=topo.colors(2*n)[i]) Sys.sleep(pause) } This works pretty well, especially using X-Windows as the windowing system. I've found that Mac's quartz() does not play nice, unfortunately. Animated GIFs If you need something that can be redistributed, posted on a webpage, etc., look at the write.gif function in the caTools package. Displaying help on write.gif gives several nice examples, including a couple of animations—one with a quite nice example using the Mandelbrot set. See also here and here. More fine-tuned control and fancier animations There is an animation package that looks pretty capable. I haven't used it myself, though, so I can't give any real recommendations either way. I have seen a few good examples of output from this package and they look pretty nice. Perhaps one of the "highlights" is the ability to embed an animation in a PDF.
Animating the effect of changing kernel width in R
It depends a little bit on what your end goal is. Quick and dirty hack for real-time demonstrations Using Sys.sleep(seconds) in a loop where seconds indicates the number of seconds between frames is a
Animating the effect of changing kernel width in R It depends a little bit on what your end goal is. Quick and dirty hack for real-time demonstrations Using Sys.sleep(seconds) in a loop where seconds indicates the number of seconds between frames is a viable option. You'll need to set the xlim and ylim parameters in your call to plot to make things behave as expected. Here's some simple demonstration code. # Just a quick test of Sys.sleep() animation x <- seq(0,2*pi, by=0.01) y <- sin(x) n <- 5 pause <- 0.5 ybnds <- quantile(n*y, probs=c(0,1)) x11() # Draw successively taller sinewaves with a gradually changing color for( i in 1:n ) { plot(x, i*y, type="l", lwd=2, ylim=ybnds, col=topo.colors(2*n)[i]) Sys.sleep(pause) } This works pretty well, especially using X-Windows as the windowing system. I've found that Mac's quartz() does not play nice, unfortunately. Animated GIFs If you need something that can be redistributed, posted on a webpage, etc., look at the write.gif function in the caTools package. Displaying help on write.gif gives several nice examples, including a couple of animations—one with a quite nice example using the Mandelbrot set. See also here and here. More fine-tuned control and fancier animations There is an animation package that looks pretty capable. I haven't used it myself, though, so I can't give any real recommendations either way. I have seen a few good examples of output from this package and they look pretty nice. Perhaps one of the "highlights" is the ability to embed an animation in a PDF.
Animating the effect of changing kernel width in R It depends a little bit on what your end goal is. Quick and dirty hack for real-time demonstrations Using Sys.sleep(seconds) in a loop where seconds indicates the number of seconds between frames is a
27,291
Animating the effect of changing kernel width in R
One way to go is to use the excellent animation package by Yihui Xie. I uploaded a very simple example to my public dropbox account: densityplot (I will remove this example in 3 days). Is this what you are looking for? The animation was created using the following R code: library(animation) density.ani <- function(){ i <- 1 d <- c(1,2,3,4) while (i <= ani.options("nmax")) { plot(density(d, kernel="gaussian", bw = i), ylim = c(0, 0.25)) ani.pause() i <- i + 1 } } saveHTML({ par(mar = c(5, 4, 1, 0.5)) density.ani() }, nmax = 30, title = "Changing kernel width")
Animating the effect of changing kernel width in R
One way to go is to use the excellent animation package by Yihui Xie. I uploaded a very simple example to my public dropbox account: densityplot (I will remove this example in 3 days). Is this what yo
Animating the effect of changing kernel width in R One way to go is to use the excellent animation package by Yihui Xie. I uploaded a very simple example to my public dropbox account: densityplot (I will remove this example in 3 days). Is this what you are looking for? The animation was created using the following R code: library(animation) density.ani <- function(){ i <- 1 d <- c(1,2,3,4) while (i <= ani.options("nmax")) { plot(density(d, kernel="gaussian", bw = i), ylim = c(0, 0.25)) ani.pause() i <- i + 1 } } saveHTML({ par(mar = c(5, 4, 1, 0.5)) density.ani() }, nmax = 30, title = "Changing kernel width")
Animating the effect of changing kernel width in R One way to go is to use the excellent animation package by Yihui Xie. I uploaded a very simple example to my public dropbox account: densityplot (I will remove this example in 3 days). Is this what yo
27,292
Animating the effect of changing kernel width in R
Just for the sake of completeness, if you need this for a class demonstration, I would also mention the manipulate package which comes with RStudio. Note that this package depends on the RStudio interface, so it won't work outside of it. manipulate is quite cool because it lets you quickly create sliders to manipulate any element in the plot, which makes easy, real-time demonstrations in class possible. manipulate( plot(density(1:10, bw)), bw = slider(0, 10, step = 0.1, initial = 1)) Other examples here
Animating the effect of changing kernel width in R
Just for the sake of completeness, if you need this for a class demonstration, I would also mention the manipulate package which comes with RStudio. Note that this package is dependent on RStudio inte
Animating the effect of changing kernel width in R Just for the sake of completeness, if you need this for a class demonstration, I would also mention the manipulate package which comes with RStudio. Note that this package depends on the RStudio interface, so it won't work outside of it. manipulate is quite cool because it lets you quickly create sliders to manipulate any element in the plot, which makes easy, real-time demonstrations in class possible. manipulate( plot(density(1:10, bw)), bw = slider(0, 10, step = 0.1, initial = 1)) Other examples here
Animating the effect of changing kernel width in R Just for the sake of completeness, if you need this for a class demonstration, I would also mention the manipulate package which comes with RStudio. Note that this package is dependent on RStudio inte
27,293
Animating the effect of changing kernel width in R
Here is another approach: library(TeachingDemos) d <- c(1,2,3,4) tmpfun <- function(width=1, kernel='gaussian'){ plot(density(d, width=width, kernel=kernel)) } tmplst <- list( width=list('slider', init=1, from=.5, to=5, resolution=.1), kernel=list('radiobuttons', init='gaussian', values=c('gaussian', "epanechnikov","rectangular","triangular","biweight","cosine", "optcosine"))) tkexamp( tmpfun, tmplst, plotloc='left' )
Animating the effect of changing kernel width in R
Here is another approach: library(TeachingDemos) d <- c(1,2,3,4) tmpfun <- function(width=1, kernel='gaussian'){ plot(density(d, width=width, kernel=kernel)) } tmplst <- list( width=list('slide
Animating the effect of changing kernel width in R Here is another approach: library(TeachingDemos) d <- c(1,2,3,4) tmpfun <- function(width=1, kernel='gaussian'){ plot(density(d, width=width, kernel=kernel)) } tmplst <- list( width=list('slider', init=1, from=.5, to=5, resolution=.1), kernel=list('radiobuttons', init='gaussian', values=c('gaussian', "epanechnikov","rectangular","triangular","biweight","cosine", "optcosine"))) tkexamp( tmpfun, tmplst, plotloc='left' )
Animating the effect of changing kernel width in R Here is another approach: library(TeachingDemos) d <- c(1,2,3,4) tmpfun <- function(width=1, kernel='gaussian'){ plot(density(d, width=width, kernel=kernel)) } tmplst <- list( width=list('slide
27,294
Recommend some books/articles/guides to enter predictive analytics?
There's no need to call it Predictive Analytics :) It already has two names: statistics, and data mining. Beginner Stats Book: Statistics in Plain English Advanced Stats Book: Multivariate Analysis, by Hair Data Mining Book: I still haven't found a great one, but Data Mining by Witten is okay. Don't get too confused by all the details. There are only so many things you can accomplish in general: predict a real number (regression) predict a whole number (classification) modeling (same as the above two, but the model is understandable by humans) group similar observations (clustering) group similar factors (factor analysis) describe a single factor describe the relationship between multiple factors (correlation, association, etc) determine if a population value is different from another, based on a sample design experiments and calculate sample size good luck!
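To make two of the list items concrete, regression (predict a real number) and classification (predict a whole number), here is a deliberately tiny Python sketch on made-up data; the numbers and the above-the-mean classification rule are illustrative only:

```python
import random

# Toy data: y is roughly 3x + 2 plus Gaussian noise
random.seed(0)
xs = [i / 10 for i in range(50)]
ys = [3 * x + 2 + random.gauss(0, 0.3) for x in xs]

# Regression: ordinary least squares in closed form for one predictor
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

# Classification: predict a whole number (here: is y above its mean?)
labels = [1 if y > my else 0 for y in ys]
preds = [1 if slope * x + intercept > my else 0 for x in xs]
accuracy = sum(p == l for p, l in zip(preds, labels)) / n
print(round(slope, 2), round(intercept, 2), accuracy)
```

The fitted slope and intercept land close to the true 3 and 2, and thresholding the same fitted line turns the regression into a (crude) classifier, which is exactly the sense in which the two tasks are siblings.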
Recommend some books/articles/guides to enter predictive analytics?
There's no need to call it Predictive Analytics :) It already has two names: statistics, and data mining. Beginner Stats Book: Statistics in Plain English Advanced Stats Book: Multivariate Analysis,
Recommend some books/articles/guides to enter predictive analytics? There's no need to call it Predictive Analytics :) It already has two names: statistics, and data mining. Beginner Stats Book: Statistics in Plain English Advanced Stats Book: Multivariate Analysis, by Hair Data Mining Book: I still haven't found a great one, but Data Mining by Witten is okay. Don't get too confused by all the details. There are only so many things you can accomplish in general: predict a real number (regression) predict a whole number (classification) modeling (same as the above two, but the model is understandable by humans) group similar observations (clustering) group similar factors (factor analysis) describe a single factor describe the relationship between multiple factors (correlation, association, etc) determine if a population value is different from another, based on a sample design experiments and calculate sample size good luck!
Recommend some books/articles/guides to enter predictive analytics? There's no need to call it Predictive Analytics :) It already has two names: statistics, and data mining. Beginner Stats Book: Statistics in Plain English Advanced Stats Book: Multivariate Analysis,
27,295
Recommend some books/articles/guides to enter predictive analytics?
Go to http://www.vaultanalytics.com/books They have written a book on what predictive models are, when to use what tests/models, and how to create them in Excel. I'm using it every day in my job. I think it's extremely useful.
Recommend some books/articles/guides to enter predictive analytics?
Go to http://www.vaultanalytics.com/books They have written a book on what predictive models are, when to use what tests/models, and how to create them in Excel. I'm using it every day in my job. I
Recommend some books/articles/guides to enter predictive analytics? Go to http://www.vaultanalytics.com/books They have written a book on what predictive models are, when to use what tests/models, and how to create them in Excel. I'm using it every day in my job. I think it's extremely useful.
Recommend some books/articles/guides to enter predictive analytics? Go to http://www.vaultanalytics.com/books They have written a book on what predictive models are, when to use what tests/models, and how to create them in Excel. I'm using it every day in my job. I
27,296
Recommend some books/articles/guides to enter predictive analytics?
Reading this one now: Predictive Analytics: Microsoft Excel By Conrad Carlberg Published Jul 2, 2012 by Que. ISBN-10: 0-7897-4941-6 ISBN-13: 978-0-7897-4941-3 I'm not done reading it yet, but so far it's a good introduction to the topic for a non-stat person. It starts pretty basic with both stat concepts and Excel functionality and builds from there. On the Stats front, it goes into a pretty healthy discussion of using moving averages and smoothing to help determine signal/noise in time series. On the Excel front, it explains how to build models using the above concepts (rather than just plunking a typical Excel trendline on a chart), and using some of Excel's add-on functionality (e.g. Solver and Data Analysis).
Recommend some books/articles/guides to enter predictive analytics?
Reading this one now: Predictive Analytics: Microsoft Excel By Conrad Carlberg Published Jul 2, 2012 by Que. ISBN-10: 0-7897-4941-6 ISBN-13: 978-0-7897-4941-3 I'm not done reading it yet, but so f
Recommend some books/articles/guides to enter predictive analytics? Reading this one now: Predictive Analytics: Microsoft Excel By Conrad Carlberg Published Jul 2, 2012 by Que. ISBN-10: 0-7897-4941-6 ISBN-13: 978-0-7897-4941-3 I'm not done reading it yet, but so far it's a good introduction to the topic for a non-stat person. It starts pretty basic with both stat concepts and Excel functionality and builds from there. On the Stats front, it goes into a pretty healthy discussion of using moving averages and smoothing to help determine signal/noise in time series. On the Excel front, it explains how to build models using the above concepts (rather than just plunking a typical Excel trendline on a chart), and using some of Excel's add-on functionality (e.g. Solver and Data Analysis).
Recommend some books/articles/guides to enter predictive analytics? Reading this one now: Predictive Analytics: Microsoft Excel By Conrad Carlberg Published Jul 2, 2012 by Que. ISBN-10: 0-7897-4941-6 ISBN-13: 978-0-7897-4941-3 I'm not done reading it yet, but so f
27,297
Recommend some books/articles/guides to enter predictive analytics?
I wrote a book on this topic: "Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die", by Eric Siegel, Ph.D. (Wiley, February 2013) More info: http://www.thepredictionbook.com The Fiscal Times ran an excerpt as an article: http://www.thefiscaltimes.com/Articles/2013/01/21/The-Real-Story-Behind-Obamas-Election-Victory.aspx And there are other excerpts available through the book website above. Let me know if you have any questions about the book!
Recommend some books/articles/guides to enter predictive analytics?
I wrote a book on this topic: "Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die", by Eric Siegel, Ph.D. (Wiley, February 2013) More info: http://www.thepredictionbook.com Th
Recommend some books/articles/guides to enter predictive analytics? I wrote a book on this topic: "Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die", by Eric Siegel, Ph.D. (Wiley, February 2013) More info: http://www.thepredictionbook.com The Fiscal Times ran an excerpt as an article: http://www.thefiscaltimes.com/Articles/2013/01/21/The-Real-Story-Behind-Obamas-Election-Victory.aspx And there are other excerpts available through the book website above. Let me know if you have any questions about the book!
Recommend some books/articles/guides to enter predictive analytics? I wrote a book on this topic: "Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die", by Eric Siegel, Ph.D. (Wiley, February 2013) More info: http://www.thepredictionbook.com Th
27,298
Recommend some books/articles/guides to enter predictive analytics?
There are quite a few books around. The above are all pretty good. I've also done a book focusing specifically on predictive analytics in retailing and financial services. Finlay, Steven (2012). Credit Scoring, Response Modeling and Insurance Rating. A Practical Guide to Forecasting Customer Behavior. Basingstoke: Palgrave Macmillan. ISBN 0-230-34776-2. It's certainly not a "hard core" mathematics book, but it does give a basic introduction to the key methods such as logistic regression, neural networks, etc. In particular, it focuses on the entire model development process, starting with project planning and going through to implementation and monitoring of the model after it goes live.
Recommend some books/articles/guides to enter predictive analytics?
There are quite a few books around. The above are all pretty good. I've also done a book focusing specifically on predictive analytics in retailing and financial services. Finlay, Steven (2012). Credi
Recommend some books/articles/guides to enter predictive analytics? There are quite a few books around. The above are all pretty good. I've also done a book focusing specifically on predictive analytics in retailing and financial services. Finlay, Steven (2012). Credit Scoring, Response Modeling and Insurance Rating. A Practical Guide to Forecasting Customer Behavior. Basingstoke: Palgrave Macmillan. ISBN 0-230-34776-2. It's certainly not a "hard core" mathematics book, but it does give a basic introduction to the key methods such as logistic regression, neural networks, etc. In particular, it focuses on the entire model development process, starting with project planning and going through to implementation and monitoring of the model after it goes live.
Recommend some books/articles/guides to enter predictive analytics? There are quite a few books around. The above are all pretty good. I've also done a book focusing specifically on predictive analytics in retailing and financial services. Finlay, Steven (2012). Credi
27,299
Recommend some books/articles/guides to enter predictive analytics?
Further to my previous note - I'd just like to let people know that my new book: Predictive Analytics, Data Mining and Big Data. Myths, Misconceptions and Methods is now out. Available at amazon and all good book shops: http://www.amazon.co.uk/s/ref=nb_sb_noss_1?url=search-alias%3Dstripbooks&field-keywords=predictive+analytics Eric - your book is recommended reading.
Recommend some books/articles/guides to enter predictive analytics?
Further to my previous note - I'd just like to let people know that my new book: Predictive Analytics, Data Mining and Big Data. Myths, Misconceptions and Methods is now out. Available at amazon and a
Recommend some books/articles/guides to enter predictive analytics? Further to my previous note - I'd just like to let people know that my new book: Predictive Analytics, Data Mining and Big Data. Myths, Misconceptions and Methods is now out. Available at amazon and all good book shops: http://www.amazon.co.uk/s/ref=nb_sb_noss_1?url=search-alias%3Dstripbooks&field-keywords=predictive+analytics Eric - your book is recommended reading.
Recommend some books/articles/guides to enter predictive analytics? Further to my previous note - I'd just like to let people know that my new book: Predictive Analytics, Data Mining and Big Data. Myths, Misconceptions and Methods is now out. Available at amazon and a
27,300
Could we explain the disadvantage of imbalanced data mathematically?
Not a formal proof, but an intuition: Unbalanced data is not a problem per se; the problem is that you don't have many samples to represent the minority class. Imagine a trivial model, where you use logistic regression with only an intercept for your data. In such a case, the model would correctly estimate the probability to be $\tfrac{100}{10100}$ (try it yourself on different datasets). Now imagine that you use a more complicated model; the model would start struggling a little bit with the minority class because it doesn't have much data for it. Imagine a different problem, where you take only the minority-class data ($n=100$ here) and try fitting some complicated model to it. It will fail simply because you don't have enough data. For exactly the same reason, your predictions for the minority class will not be as precise as those for the majority class, because you don't have enough data to represent it.
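The intercept-only example and the "you effectively only have 100 samples" point can be made concrete with a few lines of Python. This is back-of-the-envelope arithmetic using the binomial standard error, not a fitted model:

```python
import math

# The question's setting: 100 positives among 10,100 observations
n_pos, n_neg = 100, 10_000
n = n_pos + n_neg
p = n_pos / n                       # = 100/10100, the base rate

# An intercept-only logistic regression just encodes this base rate:
# its fitted intercept is the log-odds of p.
intercept = math.log(p / (1 - p))   # log(100/10000) = log(0.01)

# Sampling noise of the estimated class frequency, relative to each class:
se = math.sqrt(p * (1 - p) / n)
rel_minority = se / p               # roughly 1/sqrt(100): ~10% relative error
rel_majority = se / (1 - p)         # ~0.1% relative error
print(round(intercept, 3), round(rel_minority, 3), round(rel_majority, 5))
```

The same absolute sampling noise is about a hundred times larger relative to the minority class than to the majority class, which is the intuition above in one number.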
Could we explain the disadvantage of imbalanced data mathematically?
Not a formal proof, but an intuition: Unbalanced data is not a problem per se, the problem is that you don't have many samples to represent the minority class. Imagine a trivial model, where you would
Could we explain the disadvantage of imbalanced data mathematically? Not a formal proof, but an intuition: Unbalanced data is not a problem per se; the problem is that you don't have many samples to represent the minority class. Imagine a trivial model, where you use logistic regression with only an intercept for your data. In such a case, the model would correctly estimate the probability to be $\tfrac{100}{10100}$ (try it yourself on different datasets). Now imagine that you use a more complicated model; the model would start struggling a little bit with the minority class because it doesn't have much data for it. Imagine a different problem, where you take only the minority-class data ($n=100$ here) and try fitting some complicated model to it. It will fail simply because you don't have enough data. For exactly the same reason, your predictions for the minority class will not be as precise as those for the majority class, because you don't have enough data to represent it.
Could we explain the disadvantage of imbalanced data mathematically? Not a formal proof, but an intuition: Unbalanced data is not a problem per se, the problem is that you don't have many samples to represent the minority class. Imagine a trivial model, where you would