Simulate constrained normal on lower or upper bound in R
This is called a truncated normal distribution: http://en.wikipedia.org/wiki/Truncated_normal_distribution

Christian Robert wrote about an approach to doing it for a variety of situations (using different methods depending on where the truncation points were) here:

Robert, C.P. (1995) "Simulation of truncated normal variables", Statistics and Computing, Volume 5, Issue 2, June, pp 121-125. Paper available at http://arxiv.org/abs/0907.4010

This discusses a number of different ideas for different truncation points. It's not the only way of approaching the problem by any means, but it typically has pretty good performance. If you want to do a lot of different truncated normals with various truncation points, it would be a reasonable approach. As you noted, msm::tnorm is based on Robert's approach, while truncnorm::truncnorm implements Geweke's (1991) accept-reject sampler, which is related to the approach in Robert's paper. Note that msm::tnorm also includes density, cdf, and quantile (inverse cdf) functions in the usual R fashion.

An older reference with an approach is Luc Devroye's book; since it went out of print he has got the copyright back and made it available as a download.

Your particular example is the same as sampling a standard normal truncated at 1 (if $t$ is the truncation point, $(t-\mu)/\sigma = (5-3)/2 = 1$) and then scaling the result (multiply by $\sigma$ and add $\mu$). In that specific case, Robert suggests that your idea (in its second or third incarnation) is quite reasonable. You get an acceptable value about 84% of the time, and so generate about $1.19n$ normals on average. You can work out bounds so that a vectorized algorithm generates enough values, say, 99.5% of the time, and then once in a while generate the last few less efficiently, even one at a time.

There's also discussion of an implementation in R code here (and in Rcpp in another answer to the same question, but the R code there is actually faster). The plain R code there generates 50000 truncated normals in 6 milliseconds, though that particular truncated normal only cuts off the extreme tail, so a more substantive truncation would make things slower. It implements the idea of generating "too many" by calculating how many should be generated to be almost certain of getting enough.

If I needed just one particular kind of truncated normal many times, I'd probably look at adapting a version of the ziggurat method, or something similar, to the problem. In fact it looks like Nicolas Chopin has done just that already, so I'm not the only person to whom the idea has occurred: http://arxiv.org/abs/1201.6140

He discusses several other algorithms and compares the times for three versions of his algorithm with other algorithms to generate $10^8$ random normals for various truncation points. Perhaps unsurprisingly, his algorithm turns out to be relatively fast. From the graph in the paper, even the slowest of the algorithms he compares with, at (for them) the worst truncation points, generate $10^8$ values in about 3 seconds, which suggests that any of the algorithms discussed there may be acceptable if reasonably well implemented.

Edit: One approach that I am not certain is mentioned here (but perhaps it's in one of the links) is to transform a truncated uniform via the inverse normal cdf; the uniform can be truncated by simply generating a uniform within the truncation bounds. If the inverse normal cdf is fast, this is both fast and easy and works well for a wide range of truncation points.
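The over-generation idea described above can be sketched directly. Below is a minimal Python/NumPy version for the upper-truncation case in the example (the function name and the 1.1 safety factor are my own choices for illustration, not taken from the R code referenced above): draw a batch sized so that, at the ~84% acceptance rate, we are very likely to get enough values in one pass, keep the accepted draws, and repeat for any shortfall.

```python
import numpy as np
from scipy.stats import norm

def rnorm_trunc_upper(n, mu, sigma, upper, rng=None):
    """Sample n draws from N(mu, sigma^2) truncated above at `upper`
    by over-generating plain normals and keeping the acceptable ones."""
    rng = np.random.default_rng() if rng is None else rng
    # acceptance probability: P(X <= upper) for X ~ N(mu, sigma^2)
    p_accept = norm.cdf((upper - mu) / sigma)
    out = np.empty(0)
    while out.size < n:
        # over-generate with a small safety margin so one pass usually suffices
        m = int((n - out.size) / p_accept * 1.1) + 10
        draws = rng.normal(mu, sigma, size=m)
        out = np.concatenate([out, draws[draws <= upper]])
    return out[:n]

# the example from the question: N(3, 2^2) truncated above at 5
samples = rnorm_trunc_upper(50_000, mu=3, sigma=2, upper=5)
```

For this truncation point, roughly $1.19n$ plain normals are generated on average, matching the acceptance rate discussed above.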
Simulate constrained normal on lower or upper bound in R
Following on from @glen_b's references and focussing exclusively on the R implementation: there are a couple of functions designed to sample from a truncated normal distribution:

rtruncnorm(100, a=-Inf, b=5, mean=3, sd=2) in the truncnorm package
rtnorm(100, 3, 2, upper=5) in the msm package
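For comparison, SciPy provides a close Python analogue. One gotcha worth flagging, since it differs from both R functions above, is that scipy.stats.truncnorm takes its truncation bounds on the standardized scale, i.e. (bound - loc) / scale:

```python
import numpy as np
from scipy.stats import truncnorm

mu, sigma, upper = 3, 2, 5
# bounds must be standardized: a = (low - mu)/sigma, b = (high - mu)/sigma
a, b = -np.inf, (upper - mu) / sigma
samples = truncnorm.rvs(a, b, loc=mu, scale=sigma, size=100, random_state=0)
```

Passing the raw bounds 3 and 5 directly as a and b is a common mistake with this API.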
Simulate constrained normal on lower or upper bound in R
An example of using the inverse CDF (quantile function), as suggested by @Glen_b. You can use runif to generate random quantiles and then pass these quantiles to e.g. qnorm (or the quantile function of any other distribution) to find the values these quantiles correspond to for the given distribution. If you only generate quantiles within a specific interval, you truncate the distribution. We can use the CDF (e.g. pnorm) to find the limits of the quantiles that correspond to a given truncation.

rtruncnorm <- function(n, mu, sigma, low, high) {
  # find quantiles that correspond to the given low and high levels
  p_low <- pnorm(low, mu, sigma)
  p_high <- pnorm(high, mu, sigma)

  # draw quantiles uniformly between the limits and pass these
  # to the relevant quantile function
  qnorm(runif(n, p_low, p_high), mu, sigma)
}

samples <- rtruncnorm(1000, 3, 2, low = -Inf, high = 5)
max(samples)
#> [1] 4.996336

hist(samples)
Difference between panel data & mixed model
Both panel data and mixed effects models deal with doubly indexed random variables $y_{ij}$. The first index is for the group, the second for individuals within the group. For panel data the second index is usually time, and it is assumed that we observe individuals over time. When time is the second index in a mixed effects model, the model is called a longitudinal model.

The mixed effects model is best understood in terms of a 2-level regression. (For ease of exposition, assume only one explanatory variable.) The first-level regression is the following:

$$y_{ij}=\alpha_i+x_{ij}\beta_i+\varepsilon_{ij}.$$

This is simply an individual regression for each group. The second-level regression tries to explain the variation in the regression coefficients:

$$\alpha_i=\gamma_0+z_{i1}\gamma_1+u_i$$
$$\beta_i=\delta_0+z_{i2}\delta_1+v_i$$

When you substitute the second equations into the first, you get

$$y_{ij}=\gamma_0+z_{i1}\gamma_1+x_{ij}\delta_0+x_{ij}z_{i2}\delta_1+u_i+x_{ij}v_i+\varepsilon_{ij}$$

The fixed effects are what is fixed, meaning $\gamma_0,\gamma_1,\delta_0,\delta_1$. The random effects are $u_i$ and $v_i$.

Now for panel data the terminology changes, but you can still find common points. The panel data random effects model is the same as the mixed effects model with

$$\alpha_i=\gamma_0+u_i$$
$$\beta_i=\delta_0,$$

the model becoming

$$y_{it}=\gamma_0+x_{it}\delta_0+u_i+\varepsilon_{it},$$

where the $u_i$ are random effects.

The most important difference between mixed effects models and panel data models is the treatment of the regressors $x_{ij}$. For mixed effects models they are non-random variables, whereas for panel data models it is always assumed that they are random. This becomes important when stating what the fixed effects model for panel data is. For the mixed effects model it is assumed that the random effects $u_i$ and $v_i$ are independent of $\varepsilon_{ij}$ and also of $x_{ij}$ and $z_i$, which is always true when $x_{ij}$ and $z_i$ are fixed. If we allow stochastic $x_{ij}$, this becomes important. So the random effects model for panel data assumes that $x_{it}$ is not correlated with $u_i$. But the fixed effects model, which has the same form

$$y_{it}=\gamma_0+x_{it}\delta_0+u_i+\varepsilon_{it},$$

allows correlation between $x_{it}$ and $u_i$. The emphasis then is solely on consistently estimating $\delta_0$. This is done by subtracting the individual means:

$$y_{it}-\bar{y}_{i.}=(x_{it}-\bar{x}_{i.})\delta_0+\varepsilon_{it}-\bar{\varepsilon}_{i.},$$

and using simple OLS on the resulting regression problem. Algebraically this coincides with the least squares dummy variable regression problem, where we assume that the $u_i$ are fixed parameters. Hence the name fixed effects model.

There is a lot of history behind the fixed effects and random effects terminology in panel data econometrics, which I have omitted. In my personal opinion these models are best explained in Wooldridge's "Econometric Analysis of Cross Section and Panel Data". As far as I know there is no such history for mixed effects models, but on the other hand I come from an econometrics background, so I might be mistaken.
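A small simulation makes the point about the within transformation concrete (a Python sketch; the group sizes, $\delta_0 = 1.5$, and the way $x_{it}$ is made to correlate with $u_i$ are all invented for illustration). Because $x_{it}$ is correlated with the group effect, pooled OLS is biased, while OLS after subtracting individual means recovers $\delta_0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_groups, T, delta0 = 200, 10, 1.5

# group effect u_i, deliberately correlated with the regressor x_it
u = rng.normal(size=n_groups)
x = u[:, None] + rng.normal(size=(n_groups, T))
y = 2.0 + delta0 * x + u[:, None] + rng.normal(size=(n_groups, T))

# pooled OLS slope: biased, because cov(x_it, u_i) != 0
xc, yc = x.ravel() - x.mean(), y.ravel() - y.mean()
pooled = xc @ yc / (xc @ xc)

# within (fixed effects) estimator: OLS after subtracting individual means
xw = (x - x.mean(axis=1, keepdims=True)).ravel()
yw = (y - y.mean(axis=1, keepdims=True)).ravel()
within = xw @ yw / (xw @ xw)
# `pooled` is pulled away from delta0; `within` stays close to 1.5
```

With this data-generating process the pooled slope converges to about 2.0 rather than 1.5, so the two estimates differ visibly even in one simulated draw.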
Difference between panel data & mixed model
I understand you're looking for a text that describes mixed modelling theory without reference to a software package. I would recommend Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling by Tom Snijders and Roel Bosker, about 250pp. It has a chapter on software at the end (which is somewhat outdated now), but the remainder is very approachable theory.

I must say, though, that I agree with the recommendation above for Multilevel and Longitudinal Modeling Using Stata by Sophia Rabe-Hesketh and Anders Skrondal. The book is very theoretical and the software component is really just a nice addition to a substantial text. I don't normally use Stata, but I have the text sitting on my desk and find it extremely well written. It is, however, much longer than 200pp.

The following texts are all written by current experts in the field and would be useful for anyone wanting more information about these techniques (although they don't specifically fit your request): [I can't link to these because I'm a new user, sorry]

Hox, Joop (2010). Multilevel Analysis: Techniques and Applications.
Gelman, A., and Hill, J. (2006). Data Analysis Using Regression and Multilevel/Hierarchical Models.
Singer, J. (2003). Applied Longitudinal Data Analysis: Modeling Change and Event Occurrence.
Raudenbush, S. W., and Bryk, A. S. (2002). Hierarchical Linear Models: Applications and Data Analysis Methods.
Luke, Douglas (2004). Multilevel Modeling.

I would also second Wooldridge's text mentioned above, as well as the R text, and the Bristol University Centre for Multilevel Modelling has a bunch of tutorials and information.
Difference between panel data & mixed model
@mpiktas has given a thorough answer. I would also suggest reading the "plm versus nlme and lme4" section of the documentation for the plm package in R. The authors' discussion of the difference between mixed models and panel data is worth a read.
Difference between panel data & mixed model
I too have wondered about the difference between the two, and having recently found a reference on this topic, I understand that "panel data" is a traditional name for datasets that represent a "cross-section or group of people who are surveyed periodically over a given time span". So the "panel" is a group structure within the dataset, and given such a group, the most natural way of analyzing this type of data is via a mixed-modelling approach. A good reference (regardless of whether you "speak" R or not) on mixed-effects modelling is the draft of a forthcoming(?) book by Douglas Bates (lme4: Mixed-effects modeling with R).
Difference between panel data & mixed model
In my experience, the rationale for using 'panel econometrics' is that the panel 'fixed effects' estimators can be used to control for various forms of omitted variable bias. However, it is possible to perform this type of estimation within a multilevel model using a Mundlak-type approach, i.e. including the group means as extra regressors. This approach removes the correlation between the error term and potential group-level omitted factors, revealing the 'within' coefficient. However, for a reason unknown to me, this is not typically done in applied research. These slides and this document provide an elaboration.
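The Mundlak device can be sketched with a short simulation (Python; the numbers and the data-generating process are invented for illustration). Adding the group means of $x$ as an extra regressor makes the coefficient on $x$ itself match the within estimator, even though $x$ is correlated with the group effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n_groups, T = 200, 10

u = rng.normal(size=n_groups)                    # group-level omitted factor
x = u[:, None] + rng.normal(size=(n_groups, T))  # x correlated with u
y = 2.0 + 1.5 * x + u[:, None] + rng.normal(size=(n_groups, T))

# Mundlak regression: y ~ 1 + x + group mean of x
xbar = np.repeat(x.mean(axis=1), T)              # mean of x within each group
X = np.column_stack([np.ones(x.size), x.ravel(), xbar])
beta, *_ = np.linalg.lstsq(X, y.ravel(), rcond=None)
# beta[1] is the 'within' coefficient (~1.5); beta[2] soaks up the
# correlation between x and the omitted group-level factor
```

Dropping the xbar column from X turns this back into pooled OLS, whose slope is biased here, which is the contrast the Mundlak term is buying.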
Difference between panel data & mixed model
If you use Stata, Multilevel and Longitudinal Modeling Using Stata by Sophia Rabe-Hesketh and Anders Skrondal would be a good choice. Depending on what exactly you are interested in, 200 pages might be about right.
Comparison between MaxEnt, ML, Bayes and other kind of statistical inference methods
MaxEnt and Bayesian inference methods correspond to different ways of incorporating information into your modeling procedure. Both can be put on axiomatic ground (John Skilling's "Axioms of Maximum Entropy" and Cox's "Algebra of Probable Inference").

The Bayesian approach is straightforward to apply if your prior knowledge comes in the form of a measurable real-valued function over your hypothesis space, the so-called "prior". MaxEnt is straightforward when the information comes as a set of hard constraints on your hypothesis space. In real life, knowledge comes neither in "prior" form nor in "constraint" form, so the success of your method depends on your ability to represent your knowledge in the corresponding form.

On a toy problem, Bayesian model averaging will give you the lowest average log-loss (averaged over many model draws) when the prior matches the true distribution of hypotheses. The MaxEnt approach will give you the lowest worst-case log-loss when its constraints are satisfied (worst taken over all possible priors).

E. T. Jaynes, considered a father of "MaxEnt" methods, also relied on Bayesian methods. On page 1412 of his book, he gives an example where the Bayesian approach resulted in a good solution, followed by an example where the MaxEnt approach is more natural.

Maximum likelihood essentially takes the model to lie inside some pre-determined model space and tries to fit it "as hard as possible", in the sense that it will have the highest sensitivity to data out of all model-picking methods restricted to such a model space. Whereas MaxEnt and Bayesian are frameworks, ML is a concrete model-fitting method, and for some particular design choices ML can end up being the method that comes out of the Bayesian or MaxEnt approach. For instance, MaxEnt with equality constraints is equivalent to Maximum Likelihood fitting of a certain exponential family. Similarly, an approximation to Bayesian inference can lead to a regularized Maximum Likelihood solution. If you choose your prior to make your conclusions maximally sensitive to data, the result of Bayesian inference will correspond to Maximum Likelihood fitting. For instance, when inferring $p$ over Bernoulli trials, such a prior would be the limiting distribution Beta(0,0).

Real-life Machine Learning successes are often a mix of various philosophies. For instance, "Random Fields" were derived from MaxEnt principles. The most popular implementation of the idea, the regularized CRF, involves adding a "prior" on the parameters. As a result, the method is really neither MaxEnt nor Bayesian, but influenced by both schools of thought.

I've collected some links on the philosophical foundations of the Bayesian and MaxEnt approaches here and here.

Note on terminology: sometimes people call their method Bayesian simply if it uses Bayes' rule at some point. Likewise, "MaxEnt" is sometimes used for any method that favors high-entropy solutions. This is not the same as "MaxEnt inference" or "Bayesian inference" as described above.
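The claim that MaxEnt with equality constraints produces an exponential family can be made concrete on a toy problem (a Python sketch of Jaynes' classic dice setup; the example and the mean-4.5 constraint are my choice of illustration, not from the text above). Maximizing entropy over the six faces subject to a mean constraint forces $p_i \propto e^{\lambda i}$, so the whole problem reduces to solving for the single natural parameter $\lambda$:

```python
import numpy as np
from scipy.optimize import brentq

vals = np.arange(1, 7)   # faces of a die
target = 4.5             # equality constraint: E[X] = 4.5

def mean_under(lam):
    # exponential-family form implied by a MaxEnt mean constraint
    w = np.exp(lam * vals)
    return (vals * w).sum() / w.sum()

# solve the one-dimensional dual problem for the natural parameter
lam = brentq(lambda l: mean_under(l) - target, -10, 10)
p = np.exp(lam * vals)
p /= p.sum()
# p satisfies the constraint exactly and tilts toward the high faces
```

The same structure appears in higher dimensions: each equality constraint contributes one natural parameter, and fitting those parameters is exactly Maximum Likelihood in the resulting exponential family.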
Comparison between MaxEnt, ML, Bayes and other kind of statistical inference methods
MaxEnt and Bayesian inference methods correspond to different ways of incorporating information into your modeling procedure. Both can be put on axiomatic ground (John Skilling's "Axioms of Maximum En
Comparison between MaxEnt, ML, Bayes and other kinds of statistical inference methods MaxEnt and Bayesian inference methods correspond to different ways of incorporating information into your modeling procedure. Both can be put on axiomatic ground (John Skilling's "Axioms of Maximum Entropy" and Cox's "Algebra of Probable Inference"). The Bayesian approach is straightforward to apply if your prior knowledge comes in the form of a measurable real-valued function over your hypothesis space, the so-called "prior". MaxEnt is straightforward when the information comes as a set of hard constraints on your hypothesis space. In real life, knowledge comes neither in "prior" form nor in "constraint" form, so the success of your method depends on your ability to represent your knowledge in the corresponding form. On a toy problem, Bayesian model averaging will give you the lowest average log-loss (averaged over many model draws) when the prior matches the true distribution of hypotheses. The MaxEnt approach will give you the lowest worst-case log-loss when its constraints are satisfied (worst taken over all possible priors). E. T. Jaynes, considered a father of "MaxEnt" methods, also relied on Bayesian methods. On page 1412 of his book, he gives an example where the Bayesian approach resulted in a good solution, followed by an example where the MaxEnt approach is more natural. Maximum likelihood essentially takes the model to lie inside some pre-determined model space and tries to fit it "as hard as possible", in the sense that it will have the highest sensitivity to data out of all model-picking methods restricted to such a model space. Whereas MaxEnt and Bayesian are frameworks, ML is a concrete model-fitting method, and for some particular design choices, ML can end up being the method that comes out of the Bayesian or MaxEnt approach. For instance, MaxEnt with equality constraints is equivalent to Maximum Likelihood fitting of a certain exponential family. Similarly, an approximation to Bayesian inference can lead to a regularized Maximum Likelihood solution. If you choose your prior to make your conclusions maximally sensitive to data, the result of Bayesian inference will correspond to Maximum Likelihood fitting. For instance, when inferring $p$ over Bernoulli trials, such a prior would be the limiting distribution Beta(0,0). Real-life Machine Learning successes are often a mix of various philosophies. For instance, "Random Fields" were derived from MaxEnt principles. The most popular implementation of the idea, the regularized CRF, involves adding a "prior" on the parameters. As a result, the method is not really MaxEnt nor Bayesian, but influenced by both schools of thought. I've collected some links on the philosophical foundations of Bayesian and MaxEnt approaches here and here. Note on terminology: sometimes people call their method Bayesian simply if it uses Bayes rule at some point. Likewise, "MaxEnt" is sometimes used for any method that favors high-entropy solutions. This is not the same as "MaxEnt inference" or "Bayesian inference" as described above.
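The connection between Bayesian inference and Maximum Likelihood mentioned above can be checked numerically: for $k$ successes in $n$ Bernoulli trials under a Beta$(a,a)$ prior, the posterior mean is $(k+a)/(n+2a)$, which approaches the ML estimate $k/n$ as the prior approaches the improper Beta$(0,0)$ limit. A minimal Python sketch, with hypothetical data:

```python
# Posterior mean of p for k successes in n Bernoulli trials
# under a conjugate Beta(a, a) prior: (k + a) / (n + 2a).
def posterior_mean(k, n, a):
    return (k + a) / (n + 2 * a)

k, n = 7, 10          # hypothetical data: 7 successes in 10 trials
ml = k / n            # maximum likelihood estimate

# As the prior approaches the (improper) Beta(0, 0) limit,
# the Bayesian point estimate converges to the ML estimate.
for a in [1.0, 0.1, 0.01, 0.0001]:
    print(a, posterior_mean(k, n, a))
```

With a = 1 (a uniform prior) the estimate is pulled toward 1/2; as a shrinks, it converges to k/n.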
12,511
Comparison between MaxEnt, ML, Bayes and other kinds of statistical inference methods
For an entertaining critique of maximum entropy methods, I'd recommend reading some old newsgroup posts on sci.stat.math and sci.stat.consult, particularly the ones by Radford Neal: How informative is the Maximum Entropy method? (1994) Maximum Entropy Imputation (2002) Explanation of Maximum Entropy (2004) I'm not aware of any comparisons between maxent and other methods: part of the problem seems to be that maxent is not really a framework, but an ambiguous directive ("when faced with an unknown, simply maximise the entropy"), which is interpreted in different ways by different people.
12,512
Comparison between MaxEnt, ML, Bayes and other kinds of statistical inference methods
It is true that in the past, MaxEnt and Bayes have dealt with different types or forms of information. I would say that Bayes uses "hard" constraints as well, though: the likelihood. In any case, it is not an issue anymore, as Bayes' rule (not the product rule) can be obtained from Maximum relative Entropy (MrE), and not in an ambiguous way: Updating Probabilities with Data and Moments From Physics to Economics: An Econometric Example Using Maximum Relative Entropy It's a new world...
12,513
Difference in Difference method: how to test for the assumption of a common trend between treatment and control group?
The typical thing to do is visual inspection of the pre-treatment trends for the control and treatment group. This is particularly easy if you only have those two groups given a single binary treatment. Ideally the pre-treatment trends should look something like this: This graph was taken from a previous answer to the question of why we need the common trends assumption, which also includes an explanation of the blue dashed line, the counterfactual outcome for the treated that can be assumed if we can reasonably verify the parallel trends assumption. A formal test which is also suitable for multivalued treatments or several groups is to interact the treatment variable with time dummies. Suppose you have 3 pre-treatment periods ($t = -2, -1, 0$) and 3 post-treatment periods ($t = 1, 2, 3$); you would then regress $$y_{it} = \lambda_i + \delta_t + \beta_{-2}D_i d_{t,-2} + \beta_{-1}D_i d_{t,-1} + \beta_1 D_i d_{t,1} + \beta_2 D_i d_{t,2} + \beta_3 D_i d_{t,3} + \epsilon_{it}$$ where $y$ is the outcome for individual $i$ at time $t$, $\lambda$ and $\delta$ are individual and time fixed effects, $D_i$ is the treatment indicator, and $d_{t,s}$ is a dummy equal to one in period $s$ (this is a generalized way of writing down the diff-in-diff model which also allows for multiple treatments or treatments at different times). The idea is the following. You include the interactions of the time dummies and the treatment indicator for the first two pre-treatment periods, and you leave out the interaction for the last pre-treatment period ($t=0$) due to the dummy variable trap. All the other interactions are then expressed relative to the omitted period, which serves as the baseline. If the outcome trends between treatment and control group are the same, then $\beta_{-2}$ and $\beta_{-1}$ should be insignificant, i.e. the difference in differences is not significantly different between the two groups in the pre-treatment period. An attractive feature of this test is that the interactions of the post-treatment time dummies with the treatment indicator are also informative. For instance, $\beta_{1}, \beta_2, \beta_3$ show you whether the treatment effect fades out over time, stays constant, or even increases. An application of this approach is Autor (2003). Note that the literature generally refers to $\beta_{-2}, \beta_{-1}$ as "leads" and $\beta_{1}, \beta_2, \beta_3$ as "lags", even though they are merely interactions of the treatment indicator with time dummies and are not actually leads and lags of the treatment indicator in a time-series jargon sense. A more detailed explanation of this parallel trends test is provided in the lecture notes by Steve Pischke (here on page 7, or here on page 9).
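The interaction regressors in the event-study regression above are just products of the treatment indicator with per-period dummies. A small Python sketch of how those columns could be built (the period labels, with period 0 as the omitted baseline, are one hypothetical choice):

```python
# Building the treatment-by-period interaction regressors for the
# parallel-trends ("event study") regression. Periods -2..0 are taken as
# pre-treatment, 1..3 as post-treatment; period 0 is the omitted baseline.

periods = [-2, -1, 0, 1, 2, 3]
included = [s for s in periods if s != 0]   # drop baseline: avoids the dummy trap

def interaction_row(treated, t):
    """One observation's row of D_i * 1[t == s] interaction columns."""
    return [int(treated) * int(t == s) for s in included]

# A treated individual contributes a 1 in the column for its current period;
# a control individual contributes all zeros in every period.
print(interaction_row(True, -2))
print(interaction_row(False, 2))
```

Stacking these rows next to the individual and time fixed effects gives the design matrix whose estimated coefficients are the leads and lags discussed above.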
12,514
Difference in Difference method: how to test for the assumption of a common trend between treatment and control group?
There is a good way to check whether the common pre-trend assumption is reasonable in a difference-in-differences framework with two groups and two periods, but it requires data for more than one pre-treatment period (sometimes the DiD with two periods performs better than the DiD with multiple periods). Considering your example, you can run a placebo DiD using 2002 as the post-treatment period and another pre-treatment period (say, 2001) as the baseline. If this placebo ATT is statistically significant, it is evidence against the common pre-trend assumption; in other words, in the period 2001-2002 the "effect" was already happening. The following papers use this approach: Beatty and Shimshack, 2011 Lima and Silveira-Neto, 2015
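The placebo check described above is simple arithmetic on group means. A Python sketch with hypothetical numbers (under parallel pre-trends, the placebo estimate should be close to zero):

```python
# Placebo difference-in-differences on two pre-treatment periods:
# 2001 as "pre" and 2002 as a fake "post" period. All means are hypothetical.

def did(pre_treat, post_treat, pre_ctrl, post_ctrl):
    """DiD estimate from the four group means."""
    return (post_treat - pre_treat) - (post_ctrl - pre_ctrl)

# Hypothetical group means of the outcome:
y_treat_2001, y_treat_2002 = 10.0, 11.0
y_ctrl_2001,  y_ctrl_2002  = 8.0,  9.0

# Both groups grew by 1.0, so the placebo DiD is zero here,
# consistent with parallel pre-treatment trends.
placebo = did(y_treat_2001, y_treat_2002, y_ctrl_2001, y_ctrl_2002)
print(placebo)
```

In practice this difference would be estimated by regression so that a standard error and significance test come with it; a nonzero point estimate alone is not evidence against the assumption.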
12,515
Why does the L2 norm loss have a unique solution and the L1 norm loss have possibly multiple solutions?
Let's consider a one-dimensional problem for the simplest possible exposition. (Higher-dimensional cases have similar properties.) While both $|x-\mu|$ and $(x-\mu)^2$ each have a unique minimum, $\sum_i |x_i-\mu|$ (a sum of absolute value functions with different x-offsets) often doesn't. Consider $x_1=1$ and $x_2=3$: (NB in spite of the label on the x-axis, this is really a function of $\mu$; I should have modified the label but I'll just leave it as is) In higher dimensions, you can get regions of constant minimum with the $L_1$-norm. There's an example in the case of fitting lines here. Sums of quadratics are still quadratic, so $\sum_i (x_i-\mu)^2 = n(\bar{x}-\mu)^2+k(\mathbf{x})$ will have a unique solution. In higher dimensions (multiple regression, say) the quadratic problem may not automatically have a unique minimum -- you may have multicollinearity leading to a lower-dimensional ridge in the negative of the loss in the parameter space; that's a somewhat different issue than the one presented here. A warning. The page you link to claims that $L_1$-norm regression is robust. I'd have to say I don't completely agree. It's robust against large deviations in the y-direction, as long as they aren't influential points (discrepant in x-space). It can be arbitrarily badly screwed up by even a single influential outlier. There's an example here. Since (outside some specific circumstances) you don't usually have any such guarantee of no highly influential observations, I wouldn't call L1-regression robust. R code for plot:
fi <- function(x, i = 0) abs(x - i)
f  <- function(x) fi(x, 1) + fi(x, 3)
plot(f, -1, 5, ylim = c(0, 6), col = "blue", lwd = 2)
curve(fi(x, 1), -1, 5, lty = 3, col = "dimgrey", add = TRUE)
curve(fi(x, 3), -1, 5, lty = 3, col = "dimgrey", add = TRUE)
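The flat region of the $L_1$ loss in the $x_1=1$, $x_2=3$ example can also be verified numerically; a quick Python check:

```python
# For x1 = 1, x2 = 3: the sum of absolute deviations is flat (equal to 2)
# for every mu in [1, 3], while the sum of squares has its unique minimum
# at the mean, mu = 2.

def l1(mu, xs=(1, 3)):
    return sum(abs(x - mu) for x in xs)

def l2(mu, xs=(1, 3)):
    return sum((x - mu) ** 2 for x in xs)

print([l1(mu) for mu in (1.0, 1.5, 2.0, 2.5, 3.0)])  # all equal: 2.0
print([l2(mu) for mu in (1.0, 2.0, 3.0)])            # 4, 2, 4: unique min at 2
```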
12,516
Why does the L2 norm loss have a unique solution and the L1 norm loss have possibly multiple solutions?
Minimizing the L2 loss corresponds to calculating the arithmetic mean, which is unambiguous, while minimizing the L1 loss corresponds to calculating the median, which is ambiguous if an even number of elements are included in the median calculation (see Central tendency: Solutions to variational problems).
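The ambiguity with an even number of elements is easy to see numerically: any value between the two middle order statistics minimizes the L1 loss, while the mean is the unique L2 minimizer. A Python sketch with a hypothetical four-point dataset:

```python
# With an even number of points, the L1 minimizer is any value between the
# two middle order statistics (here 2 and 8); the L2 minimizer is the mean.
xs = [1, 2, 8, 10]

def l1(mu):
    return sum(abs(x - mu) for x in xs)

def l2(mu):
    return sum((x - mu) ** 2 for x in xs)

# Every mu between 2 and 8 gives the same (minimal) L1 loss:
print({l1(mu) for mu in (2, 4, 5.5, 7, 8)})   # a single value

mean = sum(xs) / len(xs)                       # 5.25: the unique L2 minimizer
print(l2(mean) < l2(mean - 0.25), l2(mean) < l2(mean + 0.25))
```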
12,517
Keras: why does loss decrease while val_loss increases?
(this may be a duplicate) It looks like your model is overfitting, that is, just memorizing the training data. In general, a model that overfits can be improved by adding more dropout, or by training and validating on a larger data set. Explain more about the data/features and the model for further ideas.
12,518
Keras: why does loss decrease while val_loss increases?
Perhaps your training dataset has different properties than your validation dataset. It's like training a network to distinguish between a chicken and an airplane, but then you show it an apple. The more you train it, the better it is at distinguishing chickens from airplanes, but also the worse it is when it is shown an apple. I'm having the same situation and am thinking of using a Generative Adversarial Network to identify if a validation data point is "alien" to the training dataset or not
12,519
What are the differences between the terms "time series analysis" and "longitudinal data analysis"?
I doubt there are strict, formal definitions that a wide range of data analysts agree on. In general however, time series connotes a single study unit observed at regular intervals over a very long period of time. A prototypical example would be the annual GDP growth of a country over decades or even more than a hundred years. For an analyst working for a private company, it might be monthly sales revenues over the life of the company. Because there are so many observations, the data are analyzed in great detail, looking for things like seasonality over different periods (e.g., monthly: more sales at the beginning of a month just after people have been paid; yearly: more sales in November and December, when people are shopping for the Christmas season), and possibly regime shifts. Forecasting is often very important, as @StephanKolassa notes. Longitudinal typically refers to fewer measurements over a larger number of study units. A prototypical example might be a drug trial, where there are hundreds of patients measured at baseline (before treatment), and monthly for the next 3 months. With just 4 observations of each unit in this example, it is not possible to try to detect the kinds of features time series analysts are interested in. On the other hand, with patients presumably randomized into treatment and control arms, causality can be inferred once the non-independence has been addressed. As that suggests, often the non-independence is considered almost a nuisance, rather than the primary feature of interest.
12,520
What are the differences between the terms "time series analysis" and "longitudinal data analysis"?
There are roughly three kinds of datasets: cross section: different subjects at the same time; think of it as one row with many columns corresponding to different subjects; time series: the same subject at different times; think of it as one column with rows corresponding to different time points; panel (longitudinal): many subjects at different times, you have the same subject at different times, and you have many subjects at the same time; think of it as a table where rows are time points, and columns are subjects.
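The three data shapes can be sketched as simple structures; a toy Python illustration with hypothetical values:

```python
# Cross section: many subjects, one time point.
cross_section = {"alice": 3.1, "bob": 2.7, "carol": 3.5}

# Time series: one subject, many time points.
time_series = {2001: 2.4, 2002: 2.9, 2003: 3.1}

# Panel (longitudinal): many subjects at many time points --
# a table whose rows are time points and whose columns are subjects.
panel = {
    2001: {"alice": 2.4, "bob": 2.2},
    2002: {"alice": 2.9, "bob": 2.5},
}

# A panel contains a time series per subject ...
alice_series = {t: row["alice"] for t, row in panel.items()}
# ... and a cross section per time point.
cs_2002 = panel[2002]
```

This makes the relationship concrete: slicing a panel one way recovers a time series, slicing it the other way recovers a cross section.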
12,521
What are the differences between the terms "time series analysis" and "longitudinal data analysis"?
These two terms might not be related in the way the OP assumes--i.e., I don't think they are competing modes of analysis. Instead, time-series analysis describes a set of lower-level techniques which might be useful to analyze data in a longitudinal study. The object of study in time series analysis is some time-dependent signal. Most techniques to analyze and model / predict these time-dependent signals are built upon the premise that these signals are decomposable into various components. The two most important are: cyclic components (e.g., daily, weekly, monthly, seasonal); and trend. In other words, time series analysis is based on exploiting the cyclic nature of a time-dependent signal to extract an underlying signal.
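The trend-plus-cyclic decomposition described above can be sketched in a few lines. This is a deliberately naive additive decomposition in pure Python; the series, the 4-period "season", and the simplified (uncentered) moving average are all illustrative choices, not a real analysis method:

```python
# Naive additive decomposition of a short series into trend + seasonal parts.
season_len = 4
seasonal_true = [2.0, -1.0, 0.5, -1.5]   # hypothetical repeating pattern
# Hypothetical series: linear trend 0.5*t plus the seasonal pattern.
y = [0.5 * t + seasonal_true[t % season_len] for t in range(16)]

# Trend estimate: moving average over one full season (simplified: the
# window is not exactly centered, which shifts the trend by half a step).
def moving_avg(series, w):
    half = w // 2
    return {t: sum(series[t - half:t + half]) / w
            for t in range(half, len(series) - half + 1)}

trend = moving_avg(y, season_len)

# Seasonal estimate: average detrended value at each position in the cycle,
# recentered so the seasonal component sums to zero over one cycle.
detrended = {t: y[t] - trend[t] for t in trend}
seasonal = [
    sum(v for t, v in detrended.items() if t % season_len == s) /
    max(1, sum(1 for t in detrended if t % season_len == s))
    for s in range(season_len)
]
mean_s = sum(seasonal) / season_len
seasonal = [v - mean_s for v in seasonal]
# 'seasonal' now recovers (approximately) the pattern used to build the series.
```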
12,522
What are the differences between the terms "time series analysis" and "longitudinal data analysis"?
What Are Longitudinal Data? Longitudinal data, sometimes referred to as panel data, track the same sample at different points in time. The sample can consist of individuals, households, establishments, and so on. In contrast, repeated cross-sectional data, which also provides long-term data, gives the same survey to different samples over time. Longitudinal data have a number of advantages over repeated cross-sectional data. Longitudinal data allow for the measurement of within-sample change over time, enable the measurement of the duration of events, and record the timing of various events. For example, suppose the unemployment rate remained high for a long period of time. One can use longitudinal data to see if the same group of individuals stays unemployed over the entire period or if different groups of individuals move in and out of unemployment over the time period. Source
12,523
What are differences between the terms "time series analysis" and "longitudinal data analysis"
To make it simple I will assume a study of individuals, but the same applies to any unit of analysis. It isn't complicated: a time series is data collected over time, usually implying the same measurement from an equivalent population at separate time intervals - or collected continuously but analyzed at timed intervals. Longitudinal data are much broader in scope. The equivalent population is replaced by the identical population, so individual data can be paired or joined over time. Longitudinal data can be repeated measurements or not, depending on the goal of the study. Longitudinal data look like a time series when we measure the same thing over time. The big difference is that in a time series we can measure the overall change in the measurement over time (or by group), while in a longitudinal analysis you actually have the measurement of change at the individual level. So you have much more potential for analysis, and the measurement of change does not carry the extra error of comparing two independent samples, so a longitudinal study can be more precise and informative.
12,524
What is meant by the standard error of a maximum likelihood estimate?
The other answer has covered the derivation of the standard error; I just want to help you with notation: Your confusion is due to the fact that in Statistics we use exactly the same symbol to denote the Estimator (which is a function) and a specific estimate (which is the value that the estimator takes when it receives as input a specific realized sample). So $\hat \alpha = h(\mathbf X)$ and $\hat \alpha(\mathbf X = \mathbf x) = 4.6931$ for $\mathbf x = \{14,\,21,\,6,\,32,\,2\}$. So $\hat \alpha(\mathbf X)$ is a function of random variables and so a random variable itself, which certainly has a variance. In ML estimation, in many cases what we can compute is the asymptotic standard error, because the finite-sample distribution of the estimator is not known (cannot be derived). Strictly speaking, $\hat \alpha$ does not have an asymptotic distribution, since it converges to a real number (the true number in almost all cases of ML estimation). But the quantity $\sqrt n (\hat \alpha - \alpha)$ converges to a normal random variable (by application of the Central Limit Theorem). A second point of notational confusion: most, if not all, texts will write $\text {Avar}(\hat \alpha)$ ("Avar" = asymptotic variance) while what they mean is $\text {Avar}(\sqrt n (\hat \alpha - \alpha))$, i.e. they refer to the asymptotic variance of the quantity $\sqrt n (\hat \alpha - \alpha)$, not of $\hat \alpha$... For the case of a basic Pareto distribution we have $$\text {Avar}[\sqrt n (\hat \alpha - \alpha)] = \alpha^2$$ and so $$\text {Avar}(\hat \alpha ) = \alpha^2/n$$ (but what you will find written is $\text {Avar}(\hat \alpha ) = \alpha^2$). Now, in what sense does the Estimator $\hat \alpha$ have an "asymptotic variance", since, as said, asymptotically it converges to a constant? Well, in an approximate sense and for large but finite samples. I.e. somewhere in between a "small" sample, where the Estimator is a random variable with (usually) unknown distribution, and an "infinite" sample, where the estimator is a constant, there is this "large but finite sample territory" where the Estimator has not yet become a constant and where its distribution and variance are derived in a roundabout way: first use the Central Limit Theorem to derive the properly asymptotic distribution of the quantity $Z = \sqrt n (\hat \alpha - \alpha)$ (which is normal due to the CLT), and then turn things around and write $\hat \alpha = \frac 1{\sqrt n} Z + \alpha$ (while taking one step back and treating $n$ as finite), which shows $\hat \alpha$ as an affine function of the normal random variable $Z$, and so normally distributed itself (always approximately).
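This "large but finite sample" picture is easy to check by simulation. A sketch with made-up values ($\alpha = 3$, $y_0 = 1$; nothing here comes from the question's data):

```python
import numpy as np

# Monte Carlo sketch: for large but finite n, Var(alpha_hat) should be close
# to alpha^2 / n, i.e. Var(sqrt(n) * (alpha_hat - alpha)) close to alpha^2.
rng = np.random.default_rng(0)
alpha, y0 = 3.0, 1.0     # assumed true shape and known scale
n, reps = 1_000, 2_000   # sample size and number of replications

# Pareto sampling by inverse CDF: Y = y0 * U**(-1/alpha) for U ~ Uniform(0, 1)
u = rng.uniform(size=(reps, n))
y = y0 * u ** (-1.0 / alpha)

# MLE of the shape with y0 known: alpha_hat = n / sum(log(y_i / y0))
alpha_hat = n / np.log(y / y0).sum(axis=1)

print(alpha_hat.mean())     # close to alpha = 3
print(alpha_hat.var() * n)  # close to alpha^2 = 9
```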
12,525
What is meant by the standard error of a maximum likelihood estimate?
$\hat{\alpha}$ -- a maximum likelihood estimator -- is a function of a random sample, and so is also random (not fixed). An estimate of the standard error of $\hat{\alpha}$ can be obtained from the Fisher information, $$ I(\theta) = -\mathbb{E}\left[ \frac{\partial^2 \mathcal{L}(\theta|Y = y)}{\partial \theta^2}|_\theta \right] $$ where $\theta$ is a parameter and $\mathcal{L}(\theta|Y = y)$ is the log-likelihood function of $\theta$ conditional on random sample $y$. Intuitively, the Fisher information indicates the steepness of the curvature of the log-likelihood surface around the MLE, and so the amount of 'information' that $y$ provides about $\theta$. For a $\mathrm{Pareto}(\alpha,y_0)$ distribution with a single realization $Y = y$, the log-likelihood, with $y_0$ known, is: $$ \begin{aligned} \mathcal{L}(\alpha|y,y_0) &= \log \alpha + \alpha \log y_0 - (\alpha + 1) \log y \\ \mathcal{L}'(\alpha|y,y_0) &= \frac{1}{\alpha} + \log y_0 - \log y \\ \mathcal{L}''(\alpha|y,y_0) &= -\frac{1}{\alpha^2} \end{aligned} $$ Plugging in to the definition of Fisher information, $$ I(\alpha) = \frac{1}{\alpha^2} $$ For a sample $\{y_1, y_2, ..., y_n\}$, the maximum likelihood estimator $\hat{\alpha}$ is asymptotically distributed as: $$ \begin{aligned} \hat{\alpha} \overset{n \rightarrow \infty}{\sim} \mathcal{N}\left(\alpha,\frac{1}{nI(\alpha)}\right) = \mathcal{N}\left(\alpha,\frac{\alpha^2}{n}\right), \end{aligned} $$ where $n$ is the sample size. Because $\alpha$ is unknown, we can plug in $\hat{\alpha}$ to obtain an estimate of the standard error: $$ \mathrm{SE}(\hat{\alpha}) \approx \sqrt{\hat{\alpha}^2/n} \approx \sqrt{4.6931^2/5} \approx 2.1 $$
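The final step is just arithmetic; a one-liner to reproduce it:

```python
import math

# Reproducing the last line: SE(alpha_hat) ~ sqrt(alpha_hat^2 / n)
alpha_hat = 4.6931   # the MLE reported in the question
n = 5                # sample size

se = math.sqrt(alpha_hat**2 / n)  # equivalently alpha_hat / sqrt(n)
print(round(se, 1))  # 2.1
```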
12,526
What is autocorrelation function?
Unlike regular sampling data, time-series data are ordered. Therefore, there is extra information about your sample that you could take advantage of, if there are useful temporal patterns. The autocorrelation function is one of the tools used to find patterns in the data. Specifically, the autocorrelation function tells you the correlation between points separated by various time lags. As an example, here are some possible acf function values for a series with discrete time periods: The notation is ACF(n=number of time periods between points)=correlation between points separated by n time periods. I'll give examples for the first few values of n. ACF(0)=1 (all data are perfectly correlated with themselves), ACF(1)=.9 (the correlation between a point and the next point is 0.9), ACF(2)=.4 (the correlation between a point and a point two time steps ahead is 0.4)...etc. So, the ACF tells you how correlated points are with each other, based on how many time steps they are separated by. That is the gist of autocorrelation: it is how correlated past data points are to future data points, for different values of the time separation. Typically, you'd expect the autocorrelation function to fall towards 0 as points become more separated (i.e. n becomes large in the above notation) because it's generally harder to forecast further into the future from a given set of data. This is not a rule, but is typical. Now, to the second part...why do we care? The ACF and its sister function, the partial autocorrelation function (more on this in a bit), are used in the Box-Jenkins/ARIMA modeling approach to determine how past and future data points are related in a time series. The partial autocorrelation function (PACF) can be thought of as the correlation between two points that are separated by some number of periods n, BUT with the effect of the intervening correlations removed.
This is important because let's say that in reality, each data point is only directly correlated with the NEXT data point, and none other. However, it will APPEAR as if the current point is correlated with points further into the future, but only due to a "chain reaction" type effect, i.e., T1 is directly correlated with T2, which is directly correlated with T3, so it LOOKS like T1 is directly correlated with T3. The PACF will remove the intervening correlation with T2 so you can better discern patterns. A nice intro to this is here. The NIST Engineering Statistics handbook, online, also has a chapter on this and an example time series analysis using autocorrelation and partial autocorrelation. I won't reproduce it here, but go through it and you should have a much better understanding of autocorrelation.
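To see the "chain reaction" numerically, here is a sketch (not from the original answer) using a simulated AR(1), where each point directly depends only on the previous one:

```python
import numpy as np

# Simulate an AR(1) with phi = 0.9: only lag 1 is a *direct* dependence.
rng = np.random.default_rng(1)
phi, n = 0.9, 50_000
e = rng.standard_normal(n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]

def acf(series, k):
    """Sample autocorrelation at lag k."""
    d = series - series.mean()
    return (d[k:] * d[:-k]).sum() / (d * d).sum()

r1, r2 = acf(x, 1), acf(x, 2)
pacf2 = (r2 - r1**2) / (1 - r1**2)  # Durbin-Levinson formula for lag 2

print(r1, r2, pacf2)  # close to 0.9, 0.81 (= 0.9^2), and 0.0
```

Even though only lag 1 is direct, ACF(2) is still large (about $0.9^2$), while the lag-2 partial autocorrelation, which removes the intervening lag-1 correlation, is essentially zero.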
12,527
What is autocorrelation function?
Let me give you another perspective: plot the lagged values of a time series against the current values of the time series. If the graph you see is linear, it means there is a linear dependence between the current values of the time series and the lagged values of the time series. Autocorrelation values are the most obvious way to measure the linearity of that dependence.
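A sketch in numbers (my own illustration of this idea): the correlation between the points on such a lag plot is exactly what ACF(1) measures.

```python
import numpy as np

# An MA(1)-style series: adjacent values share one shock, so the lag plot is linear-ish
rng = np.random.default_rng(2)
e = rng.standard_normal(10_000)
x = e[1:] + e[:-1]

# Pair each point with the previous one, as in a lag plot, and correlate
r = np.corrcoef(x[1:], x[:-1])[0, 1]
print(r)  # close to 0.5, the theoretical lag-1 autocorrelation of this series
```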
12,528
What does "unbiasedness" mean?
You can find everything here. However, here is a brief answer. Let $\mu$ and $\sigma^2$ be the mean and the variance of interest; you wish to estimate $\sigma^2$ based on a sample of size $n$. Now, let us say you use the following estimator: $S^2 = \frac{1}{n} \sum_{i=1}^n (X_{i} - \bar{X})^2$, where $\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i$ is the estimator of $\mu$. It is not too difficult (see footnote) to see that $E[S^2] = \frac{n-1}{n}\sigma^2$. Since $E[S^2] \neq \sigma^2$, the estimator $S^2$ is said to be biased. But, observe that $E[\frac{n}{n-1} S^2] = \sigma^2$. Therefore $\tilde{S}^2 = \frac{n}{n-1} S^2$ is an unbiased estimator of $\sigma^2$. Footnote Start by writing $(X_i - \bar{X})^2 = ((X_i - \mu) + (\mu - \bar{X}))^2$ and then expand the product... Edit to account for your comments The expected value of $S^2$ does not give $\sigma^2$ (and hence $S^2$ is biased) but it turns out you can transform $S^2$ into $\tilde{S}^2$ so that the expectation does give $\sigma^2$. In practice, one often prefers to work with $\tilde{S}^2$ instead of $S^2$. But, if $n$ is large enough, this is not a big issue since $\frac{n}{n-1} \approx 1$. Remark Note that unbiasedness is a property of an estimator, not of an expectation as you wrote.
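A quick simulation sketch of this (assumed setup: standard normal data, so $\sigma^2 = 1$, with $n = 5$):

```python
import numpy as np

# The 1/n estimator should average to (n-1)/n * sigma^2 = 0.8,
# while the n/(n-1)-corrected estimator should average to sigma^2 = 1.
rng = np.random.default_rng(3)
n, reps = 5, 200_000
x = rng.standard_normal((reps, n))

s2_biased = x.var(axis=1, ddof=0)     # divide by n
s2_unbiased = x.var(axis=1, ddof=1)   # divide by n - 1 (Bessel's correction)

print(s2_biased.mean(), s2_unbiased.mean())  # close to 0.8 and 1.0
```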
12,529
What does "unbiasedness" mean?
This response clarifies ocram's answer. The key reason (and common misunderstanding) for $E[S^2] \neq \sigma^2$ is that $S^2$ uses the estimate $\bar{X}$, which is itself estimated from data. If you work through the derivation, you will see that the variance of this estimate, $E[(\bar{X}-\mu)^2]$, is exactly what gives the additional $-\frac{\sigma^2}{n}$ term.
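A quick numerical check of this point (my own sketch, with $\sigma^2 = 1$ and $n = 10$ chosen arbitrarily): the gap between $\sigma^2$ and $E[S^2]$ matches the variance of the estimated mean.

```python
import numpy as np

# sigma^2 = 1 here, so Var(X_bar) = sigma^2 / n = 0.1, and the biased
# variance estimator should fall short of sigma^2 by exactly that amount.
rng = np.random.default_rng(4)
n, reps = 10, 200_000
x = rng.standard_normal((reps, n))

var_xbar = x.mean(axis=1).var()            # close to sigma^2 / n = 0.1
gap = 1.0 - x.var(axis=1, ddof=0).mean()   # sigma^2 - E[S^2]

print(var_xbar, gap)  # both close to 0.1
```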
12,530
What does "unbiasedness" mean?
The explanation that @Ocram gave is great. To explain what he said in words: if we calculate $s^2$ by dividing just by $n$ (which is intuitive), our estimate of $\sigma^2$ will tend to be an underestimate. To compensate, we divide by $n-1$. Here's an exercise: Make up a discrete probability distribution with 2 outcomes, say $P(2) = .25$ and $P(6) = .75$. Find $\mu$ and $\sigma$ for this distribution. Calculate $\mu$ and $\sigma$ for the sample mean when $n = 3$. Enumerate all possible samples of size $n = 3$. Calculate $s^2$ over those samples, and apply appropriate frequencies. Sometimes, you gotta get your hands dirty.
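The suggested exercise can also be worked exactly by enumeration; a sketch:

```python
from itertools import product

# P(2) = .25, P(6) = .75 gives mu = 5 and sigma^2 = 3. Average both versions
# of s^2 over all samples of size n = 3, weighting each by its probability.
p = {2: 0.25, 6: 0.75}
n = 3

def sample_var(xs, ddof):
    m = sum(xs) / len(xs)
    return sum((v - m) ** 2 for v in xs) / (len(xs) - ddof)

e_biased = 0.0    # E[s^2] with divisor n
e_unbiased = 0.0  # E[s^2] with divisor n - 1
for xs in product(p, repeat=n):
    w = 1.0
    for v in xs:
        w *= p[v]   # probability of this particular sample
    e_biased += w * sample_var(xs, ddof=0)
    e_unbiased += w * sample_var(xs, ddof=1)

print(round(e_biased, 12), round(e_unbiased, 12))  # 2.0 (= (n-1)/n * 3) and 3.0
```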
12,531
What does "unbiasedness" mean?
Generally, using "n" in the denominator gives smaller values than the population variance, which is what we want to estimate. This especially happens if small samples are taken. In the language of statistics, we say that the sample variance provides a “biased” estimate of the population variance and needs to be made "unbiased". This video will answer each part of your question adequately. https://www.youtube.com/watch?v=xslIhnquFoE
12,532
Transforming proportion data: when arcsin square root is not enough
Sure. John Tukey describes a family of (increasing, one-to-one) transformations in EDA. It is based on these ideas: To be able to extend the tails (towards 0 and 1) as controlled by a parameter. Nevertheless, to match the original (untransformed) values near the middle ($1/2$), which makes the transformation easier to interpret. To make the re-expression symmetric about $1/2.$ That is, if $p$ is re-expressed as $f(p)$, then $1-p$ will be re-expressed as $-f(p)$. If you begin with any increasing monotonic function $g: (0,1) \to \mathbb{R}$ differentiable at $1/2$ you can adjust it to meet the second and third criteria: just define $$f(p) = \frac{g(p) - g(1-p)}{2g'(1/2)}.$$ The numerator is explicitly symmetric (criterion $(3)$), because swapping $p$ with $1-p$ reverses the subtraction, thereby negating it. To see that $(2)$ is satisfied, note that the denominator is precisely the factor needed to make $f^\prime(1/2)=1.$ Recall that the derivative approximates the local behavior of a function with a linear function; a slope of $1=1:1$ thereby means that $f(p)\approx p$ (plus a constant $-1/2$) when $p$ is sufficiently close to $1/2.$ This is the sense in which the original values are "matched near the middle." Tukey calls this the "folded" version of $g$. His family consists of the power and log transformations $g(p) = p^\lambda$ where, when $\lambda=0$, we consider $g(p) = \log(p)$. Let's look at some examples. When $\lambda = 1/2$ we get the folded root, or "froot," $f(p) = \sqrt{1/2}\left(\sqrt{p} - \sqrt{1-p}\right)$. When $\lambda = 0$ we have the folded logarithm, or "flog," $f(p) = (\log(p) - \log(1-p))/4.$ Evidently this is just a constant multiple of the logit transformation, $\log(\frac{p}{1-p})$. In this graph the blue line corresponds to $\lambda=1$, the intermediate red line to $\lambda=1/2$, and the extreme green line to $\lambda=0$. The dashed gold line is the arcsine transformation, $\arcsin(2p-1)/2 = \arcsin(\sqrt{p}) - \arcsin(\sqrt{1/2})$. 
The "matching" of slopes (criterion $(2)$) causes all the graphs to coincide near $p=1/2.$ The most useful values of the parameter $\lambda$ lie between $1$ and $0$. (You can make the tails even heavier with negative values of $\lambda$, but this use is rare.) $\lambda=1$ doesn't do anything at all except recenter the values ($f(p) = p-1/2$). As $\lambda$ shrinks towards zero, the tails get pulled further towards $\pm \infty$. This satisfies criterion #1. Thus, by choosing an appropriate value of $\lambda$, you can control the "strength" of this re-expression in the tails.
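Here is the folding recipe coded directly from the formula above (a sketch; computing $g'(1/2)$ numerically is my own convenience so any monotone $g$ can be plugged in):

```python
import math

def fold(g, p, h=1e-6):
    """Tukey's folded re-expression f(p) = (g(p) - g(1-p)) / (2 g'(1/2))."""
    dg = (g(0.5 + h) - g(0.5 - h)) / (2 * h)  # numerical g'(1/2)
    return (g(p) - g(1 - p)) / (2 * dg)

def froot(p):               # lambda = 1/2: the folded root
    return fold(math.sqrt, p)

def flog(p):                # lambda = 0: the folded log, logit(p)/4
    return fold(math.log, p)

print(round(froot(0.9), 4))   # 0.4472 = sqrt(1/2) * (sqrt(.9) - sqrt(.1))
print(round(flog(0.9), 4))    # 0.5493 = log(9)/4
print(abs(froot(0.3) + froot(0.7)) < 1e-9)  # True: f(1-p) = -f(p)
```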
Transforming proportion data: when arcsin square root is not enough
Sure. John Tukey describes a family of (increasing, one-to-one) transformations in EDA. It is based on these ideas: To be able to extend the tails (towards 0 and 1) as controlled by a parameter. Ne
Transforming proportion data: when arcsin square root is not enough
Sure. John Tukey describes a family of (increasing, one-to-one) transformations in EDA. It is based on these ideas: to be able to extend the tails (towards 0 and 1) as controlled by a parameter; nevertheless, to match the original (untransformed) values near the middle ($1/2$), which makes the transformation easier to interpret; and to make the re-expression symmetric about $1/2.$ That is, if $p$ is re-expressed as $f(p)$, then $1-p$ will be re-expressed as $-f(p)$.

If you begin with any increasing monotonic function $g: (0,1) \to \mathbb{R}$ differentiable at $1/2$ you can adjust it to meet the second and third criteria: just define $$f(p) = \frac{g(p) - g(1-p)}{2g'(1/2)}.$$ The numerator is explicitly symmetric (criterion $(3)$), because swapping $p$ with $1-p$ reverses the subtraction, thereby negating it. To see that $(2)$ is satisfied, note that the denominator is precisely the factor needed to make $f^\prime(1/2)=1.$ Recall that the derivative approximates the local behavior of a function with a linear function; a slope of $1=1:1$ thereby means that $f(p)\approx p$ (plus a constant $-1/2$) when $p$ is sufficiently close to $1/2.$ This is the sense in which the original values are "matched near the middle."

Tukey calls this the "folded" version of $g$. His family consists of the power and log transformations $g(p) = p^\lambda$ where, when $\lambda=0$, we consider $g(p) = \log(p)$.

Let's look at some examples. When $\lambda = 1/2$ we get the folded root, or "froot," $f(p) = \sqrt{1/2}\left(\sqrt{p} - \sqrt{1-p}\right)$. When $\lambda = 0$ we have the folded logarithm, or "flog," $f(p) = (\log(p) - \log(1-p))/4.$ Evidently this is just a constant multiple of the logit transformation, $\log(\frac{p}{1-p})$.

In the accompanying graph (not reproduced here) the blue line corresponds to $\lambda=1$, the intermediate red line to $\lambda=1/2$, and the extreme green line to $\lambda=0$. The dashed gold line is the arcsine transformation, $\arcsin(2p-1)/2 = \arcsin(\sqrt{p}) - \arcsin(\sqrt{1/2})$. The "matching" of slopes (criterion $(2)$) causes all the graphs to coincide near $p=1/2.$

The most useful values of the parameter $\lambda$ lie between $1$ and $0$. (You can make the tails even heavier with negative values of $\lambda$, but this use is rare.) $\lambda=1$ doesn't do anything at all except recenter the values ($f(p) = p-1/2$). As $\lambda$ shrinks towards zero, the tails get pulled further towards $\pm \infty$. This satisfies criterion #1. Thus, by choosing an appropriate value of $\lambda$, you can control the "strength" of this re-expression in the tails.
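Tukey's recipe is easy to check numerically. A minimal sketch (Python used for illustration; the function name `folded` and the parameter values are my own, not from EDA):

```python
import math

def folded(p, lam):
    """Tukey's folded re-expression f(p) = (g(p) - g(1-p)) / (2 g'(1/2)),
    with g(p) = p**lam, and g = log when lam == 0 (the "flog" case)."""
    if lam == 0:
        g, dg_half = math.log, 2.0            # g'(1/2) = 1/(1/2) = 2
    else:
        g = lambda x: x ** lam
        dg_half = lam * 0.5 ** (lam - 1)      # g'(1/2) for the power family
    return (g(p) - g(1 - p)) / (2 * dg_half)

# lam = 1 merely recenters: f(p) = p - 1/2.
# lam = 1/2 gives the "froot"; lam = 0 gives the "flog", a quarter of the logit.
```

For instance, `folded(0.9, 0)` equals $\log(9)/4$, a quarter of the logit of $0.9$, and `folded(p, lam)` is always the negative of `folded(1-p, lam)`, which is the symmetry criterion above.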
12,533
Transforming proportion data: when arcsin square root is not enough
One way is to use an indexed transformation. One general way is to use any symmetric (inverse) cumulative distribution function, so that $F(0)=0.5$ and $F(x)=1-F(-x)$. One example is the standard Student t distribution with $\nu$ degrees of freedom. The parameter $\nu$ controls how quickly the transformed variable wanders off to infinity. If you set $\nu=1$ then you have the arctan transform: $$x=\tan\left(\frac{\pi[2p-1]}{2}\right)$$ This is much more extreme than arcsine, and more extreme than the logit transform. Note that the logit transform can be roughly approximated by using the t-distribution with $\nu\approx 8$. So in some way it provides an approximate link between the logit and probit ($\nu=\infty$) transforms, and an extension of them to more extreme transformations. The problem with these transforms is that they give $\pm\infty$ when the observed proportion is equal to $1$ or $0$. So you need to shrink these somehow - the simplest way being to add $+1$ "successes" and $+1$ "failures".
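The $\nu=1$ (Cauchy) member of this family and the add-one shrinkage can be sketched as follows (Python; the function names are mine, not from any package):

```python
import math

def arctan_transform(p):
    """Quantile function of the standard Cauchy (t with 1 df): sends p in
    (0, 1) to the whole real line, -inf as p -> 0 and +inf as p -> 1."""
    return math.tan(math.pi * (2 * p - 1) / 2)

def shrunk_proportion(successes, failures):
    """Keep observed 0s and 1s off the boundary by adding one success
    and one failure before transforming."""
    return (successes + 1) / (successes + failures + 2)
```

The transform is odd about $p=1/2$, so `arctan_transform(1 - p)` equals `-arctan_transform(p)`, and `shrunk_proportion(0, 10)` gives $1/12$ rather than an untransformable $0$.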
12,534
Interpretation of log transformed predictors in logistic regression
If you exponentiate the estimated coefficient, you'll get an odds ratio associated with a $b$-fold increase in the predictor, where $b$ is the base of the logarithm you used when log-transforming the predictor. I usually choose to take logarithms to base 2 in this situation, so I can interpret the exponentiated coefficient as an odds ratio associated with a doubling of the predictor.
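As a quick numeric sketch of this base choice (Python; the coefficient value 0.30 is made up for illustration):

```python
import math

beta_log2 = 0.30                   # hypothetical coefficient on log2(x)
doubling_or = math.exp(beta_log2)  # odds ratio for a doubling of the predictor

# The same model fit on ln(x) would give beta_ln = beta_log2 / ln(2); its
# exponential is the OR for an e-fold increase rather than for a doubling.
beta_ln = beta_log2 / math.log(2)
```

Either parameterization describes the same model; only the "step size" of the reported odds ratio changes with the base.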
12,535
Interpretation of log transformed predictors in logistic regression
@gung is completely correct, but, in case you do decide to keep it, you can interpret the coefficient as having an effect on each multiple of the IV, rather than each addition to the IV. One IV that often should be transformed is income. If you included it untransformed, then each (say) \$1,000 increase in income would multiply the odds by the estimated odds ratio. On the other hand, if you took the log base 10 of income, then each 10-fold increase in income would multiply the odds by the estimated odds ratio. It makes sense to do this for income because, in many ways, an increase of \$1,000 in income is much bigger for someone who makes \$10,000 per year than for someone who makes \$100,000. One final note - although logistic regression makes no normality assumptions, even OLS regression doesn't make assumptions about the variables themselves; it makes assumptions about the error, as estimated by the residuals.
12,536
Interpretation of log transformed predictors in logistic regression
This answer is adapted from The Statistical Sleuth by Fred L. Ramsey and Daniel W. Schafer. If your model equation is: $\log(p/(1-p)) = \beta_{0} + \beta \log(X)$ then each $k$-fold increase in $X$ is associated with a change in the odds by a multiplicative factor of $k^{\beta}$. For example, I have the following model for the presence of bed sores regressed on the log of length of stay at a hospital: $\log(\text{odds of bedsore}) = -0.44 + 0.45\log(\text{length of stay})$ So my $\beta = 0.45$. You can choose any $k$, based on what works best for your model's interpretability. I decide that $k=2$ and get the following: $k^{\beta} = 2^{0.45} = 1.37$ Each doubling ($k=2$) of the length of stay is associated with a change in the odds of getting a bedsore by a factor of 1.37. Or, if you double my length of stay, my odds of getting a bedsore will be 137% of what they would have been otherwise. Or if you decide $k=0.5$: $k^{\beta} = 0.5^{0.45} = 0.73$ Each halving ($k=0.5$) of the length of stay is associated with a change in the odds of getting a bedsore by a factor of 0.73. Or, if you cut my length of stay in half, my odds of getting a bedsore will be only 73% of what they would have been otherwise.
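The arithmetic in this example is a one-liner to verify (Python):

```python
beta = 0.45

or_doubling = 2 ** beta    # odds ratio for doubling length of stay, ~1.37
or_halving = 0.5 ** beta   # odds ratio for halving length of stay, ~0.73

# Doubling and then halving must cancel exactly: 2**beta * 0.5**beta == 1
```

Note that the two factors are reciprocals, which is why 1.37 and 0.73 multiply to one.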
12,537
Interpretation of log transformed predictors in logistic regression
The general model is $\ln(p/(1-p)) = \beta_{0} + \beta \log_k(x)$ for some $k$, which could be $e$. I start by explaining the case of $k=e$, then consider general $k$.

Case 1: $k=e$, i.e. a natural-log-transformed independent variable. Then if $\beta$ is close to zero we can say "a 1% increase in $x$ leads to a $\beta$ percent increase in the odds of the outcome." Details follow. The model is $\ln(p/(1-p)) = \beta_{0} + \beta \ln(x)$ where $\ln()$ is the natural log. @whuber's comment was that they always use natural logs for the independent variable, since in this case only, if $\beta$ is small, then $\beta$ is approximately the percentage change in the odds from a percentage increase in $x$. To see this, it helps to define $odds(x) = p(x)/(1-p(x))$ as the odds of the dependent variable being 1 given the value $x$. Then the model is $\ln(odds(x)) = \beta_{0} + \beta \ln(x)$. Using the usual arguments for log-transformed regressions (e.g. https://stats.idre.ucla.edu/other/mult-pkg/faq/general/faqhow-do-i-interpret-a-regression-model-when-some-variables-are-log-transformed/), we can write for values $x_1$ and, say, $x_2 = 1.01 \times x_1$, $$odds(x_2)/odds(x_1) = (x_2/x_1)^\beta = (1.01)^\beta \approx 1 + \beta \times 0.01,$$ where the last approximation requires $|\beta|$ to be small. Thus we can say in this case, "a 1% increase in $x$ leads to a $\beta$ percent increase in the odds of the outcome." For example, if $\beta = 0.05$, then $\beta \times 0.01 = 0.0005$, and so a 1% increase in $x$ leads to a 0.05% increase in the odds of the outcome being 1 (i.e. these odds are multiplied by 1.0005). This argument rests on the base of the logarithm used for the independent variable being the same as the base used for the log odds in the logit transformation. Since practically always the base used for the logit transformation is the natural log, this argument rests on using the natural log to transform the independent variable. (If one were to make a modified logit regression that uses a different base for the logit transformation, it appears that the same argument would hold, but I do not think this is conventional.)

Case 2: a base-$k$-transformed independent variable. Then the exponentiated coefficient, $e^\beta$, can be interpreted as the proportionate increase in the odds from a $k$-fold increase in the independent variable. Note that $k$ could be $e$, but $e$ would be a very strange choice given this interpretation. The model is $\ln(p/(1-p)) = \beta_{0} + \beta \log_k(x)$ where $\ln()$ is the natural log and $\log_k()$ is log base $k$. Notice that the logit transformation of the dependent variable still uses the natural log. Again it helps to define $odds(x) = p(x)/(1-p(x))$ (see above). General derivations using the model equation yield $$odds(\log_k(x) + 1) / odds(\log_k(x)) = e^\beta.$$ This is the usual interpretation of exponentiated coefficients, called "odds ratios" (e.g. in Stata, the relevant commands are -logit, or- where the "or" means "odds ratio", or -esttab, eform- where the "eform" means "exponentiate using e"). In words, the coefficient $e^\beta$ represents the proportional increase in the odds of the dependent variable being 1 from a unit increase in the independent variable. E.g. if $e^\beta = 1.10$ then the odds increase by 10% from a unit increase in the independent variable. Since the independent variable is log transformed, we can use $1 = \log_k(k)$ to find $$odds(\log_k(x) + \log_k(k)) / odds(\log_k(x)) = e^\beta,$$ thus $$odds(\log_k(kx)) / odds(\log_k(x)) = e^\beta.$$ Hence the exponentiated coefficient represents the proportional increase in the odds from a $k$-fold increase in the $x$ (non-log-transformed) variable.
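Both cases can be checked numerically. A small sketch (Python; the intercept and slope values are arbitrary illustrations, not estimates from any data):

```python
import math

B0, B = -1.0, 0.05   # arbitrary intercept and slope on ln(x)

def odds(x):
    """Odds implied by ln(odds(x)) = B0 + B * ln(x)."""
    return math.exp(B0 + B * math.log(x))

# Case 1: a 1% increase in x multiplies the odds by 1.01**B ~ 1 + 0.01*B
ratio_1pct = odds(1.01 * 50) / odds(50)

# Case 2: a k-fold increase multiplies the odds by k**B (here k = 2)
ratio_double = odds(100) / odds(50)
```

With $B=0.05$, `ratio_1pct` is about 1.0005, matching the "0.05% increase in the odds" reading in the text, and the ratios do not depend on the starting value 50.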
12,538
Interpretation of log transformed predictors in logistic regression
Model

Assume the following model $$ y_i \sim \text{Binomial}(n_i, p_i) \\ \log\left(\frac{p_i}{1-p_i}\right) = \eta = \beta_0 + \beta_1 \log(x_i). $$ How can we interpret the coefficient $\beta_1$?

Odds Ratio

We calculate the odds ratio between responses $i$ and $j$. First note that the log odds ratio is given by \begin{align*} \log(\text{OR}_{ij}) = \log\left(\frac{\frac{p_i}{1-p_i}}{\frac{p_j}{1-p_j}}\right) &= \log\left(\frac{p_i}{1-p_i}\right) - \log\left(\frac{p_j}{1-p_j}\right) \\ &= \beta_0 + \beta_1 \log(x_i) - \left(\beta_0 + \beta_1 \log(x_j)\right) \\ &= \beta_1 \log\left(\frac{x_i}{x_j}\right). \end{align*} Therefore, \begin{align*} \text{OR}_{ij} &= \exp\left(\beta_1 \log\left(\frac{x_i}{x_j}\right)\right) \\ &= \left(\exp\left(\log\left(\frac{x_i}{x_j}\right)\right)\right)^{\beta_1} \\ &= \left(\frac{x_i}{x_j}\right)^{\beta_1} \end{align*}

Interpretation

The interpretation of $\beta_1$ is simple if we fix a specific ratio between $x_i$ and $x_j$. For example, we could calculate the odds ratio associated with a doubling of predictor $x$ (i.e. $x_i/x_j = 2$) as $$ 2^{\beta_1}. $$

Alternative

A widely used alternative is to transform $x$ with a logarithm to a specific base $b$, which later helps the interpretation of the coefficient. Say I want to determine the odds ratio associated with a doubling of predictor $x$. In this case we would transform $x$ with the logarithm to base 2, changing the linear predictor to $\eta = \tilde\beta_0 + \tilde\beta_1 \log_2(x_i)$. This changes the formula above to $$ \text{OR}_{ij} = \exp\left(\tilde\beta_1 \log_2\left(\frac{x_i}{x_j}\right)\right) = \exp\left(\tilde\beta_1 \log_2(2)\right) = \exp(\tilde\beta_1), $$ i.e. the odds ratio associated with a doubling of predictor $x$ can be determined by calculating $e$ to the power of the coefficient.

Advantage: the same operation $\exp(\cdot)$ is applied to all coefficients to determine their effects on the odds ratio (including coefficients of predictors that were not log transformed).

Disadvantage: it is no longer straightforward to calculate the OR associated with some other factor increase in $x$.
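The identity $\text{OR}_{ij} = (x_i/x_j)^{\beta_1}$ and the base-2 shortcut can be verified directly (Python; the coefficient values are made up for illustration):

```python
import math

beta0, beta1 = 0.3, 0.8              # hypothetical coefficients

def eta(x):
    """Linear predictor with a natural-log-transformed x."""
    return beta0 + beta1 * math.log(x)

xi, xj = 6.0, 3.0
or_ij = math.exp(eta(xi) - eta(xj))  # equals (xi / xj) ** beta1

# Refitting with log2(x) rescales the slope to beta1 * ln(2), and its
# exponential is exactly the doubling OR 2 ** beta1.
tilde_beta1 = beta1 * math.log(2)
```

Since $x_i/x_j = 2$ here, both routes give the same doubling odds ratio.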
12,539
What is the sum of squared t variates?
Answering the first question. We could start from the fact noted by mpiktas that $t^2 \sim F(1, n)$, and then try a simpler step first - search for the distribution of a sum of two random variables distributed as $F(1,n)$. This could be done either by calculating the convolution of the two random variables, or by calculating the product of their characteristic functions. The article by P.C.B. Phillips shows that my first guess about "[confluent] hypergeometric functions involved" was indeed true. It means that the solution will not be trivial, and that the brute-force approach is complicated but necessary to answer your question. So since $n$ is fixed and you sum up t-distributions, we can't say for sure what the final result will be - unless someone has good skill playing with products of confluent hypergeometric functions.
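Short of wrestling with the hypergeometric functions, the convolution is easy to explore by simulation. A stdlib-only sketch (my own illustration, building each $t$ variate as $Z/\sqrt{\chi^2_n/n}$; the sample sizes are arbitrary):

```python
import random

random.seed(1)

def t_squared(n):
    """One draw of t(n)**2, built from normals as Z**2 / (chi2_n / n)."""
    z = random.gauss(0, 1)
    chi2 = sum(random.gauss(0, 1) ** 2 for _ in range(n))
    return z * z / (chi2 / n)

n, trials = 10, 20000
sums = [t_squared(n) + t_squared(n) for _ in range(trials)]
mean = sum(sums) / trials   # theory: each term has mean n/(n-2), so 2.5 here
```

A histogram of `sums` is noticeably heavier-tailed than a $\chi^2_2$, consistent with the other answers' discussion of skewness.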
12,540
What is the sum of squared t variates?
It's not even a close approximation. For small $n$, the expectation of $T$ equals $\frac{k n}{n-2}$ whereas the expectation of $\chi^2(k)$ equals $k$. When $k$ is small (less than 10, say), histograms of $\log(T)$ and of $\log(\chi^2(k))$ don't even have the same shape, indicating that shifting and rescaling $T$ still won't work. Intuitively, for small degrees of freedom Student's $t$ is heavy tailed. Squaring it emphasizes that heaviness. The sums therefore will be more skewed--usually much more skewed--than sums of squared normals (the $\chi^2$ distribution). Calculations and simulations bear this out.

Illustration (as requested): each histogram depicts an independent simulation of 100,000 trials with the specified degrees of freedom ($n$) and summands ($k$), standardized as described by @mpiktas. The value of $n=9999$ on the bottom row approximates the $\chi^2$ case. Thus you can compare $T$ to $\chi^2$ by scanning down each column. Note that standardization is not possible for $n \lt 5$ because the appropriate moments do not even exist. The lack of stability of shape (as you scan from left to right across any row or from top to bottom down any column) is even more marked for $n \le 4$.

Finally, let's address the question about a central limit theorem. Since the square of a random variable is a random variable, the usual Central Limit Theorem automatically applies to sequences of independent squared random variables like those in the question. For its conclusion (convergence of the standardized sum to Normality) to hold, we need the squared random variable to have finite variance. Which Student t variables, when squared, have finite variance? When $X$ is any random variable, by one standard definition the variance of its square $Y=X^2$ is $$\operatorname{Var}(Y) = E[Y^2] - E[Y]^2 = E[X^4] - E[X^2]^2.$$ Finiteness of $E[X^4]$ will assure finiteness of $E[X^2].$ Because the Student $t$ density with $\nu$ degrees of freedom is (up to a rescaling of $X$) proportional to $f_{\nu}(x)=(1+x^2)^{-(\nu+1)/2},$ the question comes down to the finiteness of the integral of $x^4$ times this. Because the product is bounded, we are concerned with the behavior as $|x|\to\infty,$ where the integrand is asymptotically $$x^4 f_{\nu}(x) \sim x^4 (x^2)^{-(\nu+1)/2} = x^{3 - \nu}.$$ Its integral diverges when the exponent exceeds $-1$ and otherwise converges; that is, the standardized version of $T$ converges to a standard Normal distribution if and only if $\nu \gt 4.$
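The $\nu \gt 4$ condition can be made concrete with the standard Student t moment formulas $E[X^2]=\nu/(\nu-2)$ and $E[X^4]=3\nu^2/((\nu-2)(\nu-4))$. A sketch (Python; not part of the original answer):

```python
def var_t_squared(nu):
    """Var(X**2) for X ~ t(nu), via E[X^4] - E[X^2]**2; infinite for nu <= 4."""
    if nu <= 4:
        raise ValueError("E[X^4] of t(nu) is infinite for nu <= 4")
    ex2 = nu / (nu - 2)                         # E[X^2]
    ex4 = 3 * nu ** 2 / ((nu - 2) * (nu - 4))   # E[X^4]
    return ex4 - ex2 ** 2
```

A little algebra reduces this to $2\nu^2(\nu-1)/((\nu-2)^2(\nu-4))$, the $F(1,\nu)$ variance that appears in the standardization in the other answer; for $\nu=10$ both give $4.6875$.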
12,541
What is the sum of squared t variates?
I'll answer the second question. The central limit theorem holds for any iid sequence, squared or not. So in your case, if $k$ is sufficiently large, we have $\dfrac{T-kE(t_1^2)}{\sqrt{k\operatorname{Var}(t_1^2)}}\sim N(0,1)$ where $E(t_1^2)$ and $\operatorname{Var}(t_1^2)$ are respectively the mean and variance of the squared Student t distribution with $n$ degrees of freedom. Note that $t_1^2$ is distributed as an F distribution with $1$ and $n$ degrees of freedom, so we can grab the formulas for the mean and variance from the Wikipedia page. The final result then is: $\dfrac{T-k\frac{n}{n-2}}{\sqrt{k\frac{2n^2(n-1)}{(n-2)^2(n-4)}}}\sim N(0,1)$
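This standardization can be wrapped in a small helper (Python; illustrative only, and valid only for $n > 4$, where the variance exists):

```python
import math

def standardize_sum(T, k, n):
    """Map T = sum of k squared t(n) variates to an approximately N(0,1)
    scale, using the F(1, n) mean and variance from the answer above."""
    if n <= 4:
        raise ValueError("variance of t(n)**2 is infinite for n <= 4")
    mean = k * n / (n - 2)
    var = k * 2 * n ** 2 * (n - 1) / ((n - 2) ** 2 * (n - 4))
    return (T - mean) / math.sqrt(var)
```

For example, with $k=30$ and $n=10$ the centering constant is $30 \cdot 10/8 = 37.5$, so `standardize_sum(37.5, 30, 10)` is exactly zero.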
12,542
Is it possible to automate time series forecasting?
First you need to note that the approach outlined by IrishStat is specific to ARIMA models, not to any generic set of models. To answer your main question "Is it possible to automate time series forecasting?": Yes, it is. In my field of demand forecasting, most commercial forecasting packages do so. Several open source packages do so as well, most notably Rob Hyndman's auto.arima() (automated ARIMA forecasting) and ets() (automated exponential smoothing forecasting) functions from the open source forecast package in R; see here for details on these two functions. There's also a Python implementation of auto.arima called Pyramid, although in my experience it is not as mature as the R packages. Both the commercial products and the open source packages I mentioned work based on the idea of using information criteria to choose the best forecast: you fit a bunch of models, and then select the model with the lowest AIC, BIC, AICc, etc. (typically this is done in lieu of out-of-sample validation). There is however a major caveat: all of these methods work within a single family of models. They choose the best possible model amongst a set of ARIMA models, or the best possible model amongst a set of exponential smoothing models. It is much more challenging to do so if you want to choose from different families of models, for example if you want to choose the best model from among ARIMA, exponential smoothing and the Theta method. In theory, you can do so in the same way that you do within a single family of models, i.e. by using information criteria. However, in practice you need to calculate the AIC or BIC in exactly the same way for all models considered, and that is a significant challenge. It might be better to use time series cross-validation or out-of-sample validation instead of information criteria, but that will be much more computationally intensive (and tedious to code). 
Facebook's Prophet package also automates forecast generation, based on Generalized Additive Models; see here for details. However, Prophet fits only one single model, albeit a very flexible model with many parameters. Prophet's implicit assumption is that a GAM is "the one model to rule them all", which might not be theoretically justified but is very pragmatic and useful for real-world scenarios. Another caveat that applies to all of the above-mentioned methods: presumably you want to do automated time series forecasting because you want to forecast multiple time series, too many to analyze manually. Otherwise you could just do your own experiments and find the best model on your own. You need to keep in mind that an automated forecasting approach is never going to find the best model for each and every time series - it is going to give a reasonably good model on average over all the time series, but it is still possible that some of those time series will have better models than the ones selected by the automated method. See this post for an example of this. To put it simply: if you are going to go with automated forecasting, you will have to tolerate "good enough" forecasts instead of the best possible forecasts for each time series.
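To make the select-by-information-criterion idea concrete, here is a deliberately tiny sketch in plain Python (a toy of my own, nothing like the model space auto.arima actually searches): it fits a constant-mean model and an AR(1) model to a synthetic AR(1) series, computes a Gaussian AIC for each, and keeps the one with the lower AIC:

```python
import math
import random

random.seed(0)

# Synthetic AR(1) series: y_t = 0.8 * y_{t-1} + e_t
y = [0.0]
for _ in range(499):
    y.append(0.8 * y[-1] + random.gauss(0.0, 1.0))

def gaussian_aic(residuals, n_params):
    """AIC = 2k - 2*loglik, with the Gaussian log-likelihood at the MLE variance."""
    n = len(residuals)
    sigma2 = sum(r * r for r in residuals) / n
    loglik = -0.5 * n * (math.log(2 * math.pi * sigma2) + 1)
    return 2 * n_params - 2 * loglik

# Candidate 1: constant mean (parameters: mean + variance)
mu = sum(y) / len(y)
aic_mean = gaussian_aic([v - mu for v in y], n_params=2)

# Candidate 2: AR(1) fitted by least squares on the lag (parameters: phi + variance).
# The one-observation difference in effective sample size is ignored for simplicity.
x, z = y[:-1], y[1:]
phi = sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
aic_ar1 = gaussian_aic([b - phi * a for a, b in zip(x, z)], n_params=2)

best = "AR(1)" if aic_ar1 < aic_mean else "mean"
print(aic_mean, aic_ar1, best)
```

Real packages do the same comparison across dozens of ARIMA or exponential-smoothing specifications, with small-sample corrections (AICc) and constraints on admissible parameters.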
12,543
Is it possible to automate time series forecasting?
My suggested approach encompasses models that are much more general than ARIMA, as they include the potential for seasonal dummies that may change over time, multiple levels, multiple trends, parameters that may change over time, and even error variances that may change over time. This family is more precisely called ARMAX models but, for complete transparency, does exclude a (rare) variant that has multiplicative structure. You asked for tips and I believe that this might be a good one to get you started. I would suggest that you write code to follow/emulate this flowchart/workflow. The "best model" could be found by evaluating the criterion that you specify ... it could be the MSE/AIC of the fitted data or it could be the MAPE/SMAPE of withheld data or any criterion of your choice. Be aware that the detailing of each of these steps can be quite simple if you are unaware of some of the specific requirements/objectives/constraints of time series analysis BUT it can be (should be!) more complex if you have a deeper understanding/learning/appreciation of the complexities/opportunities present in thorough time series analysis. I have been asked to provide further direction as to how one should go about automating time series modelling (or modelling in general). https://stats.stackexchange.com/search?q=peeling+an+onion contains some of my guidance on "peeling onions" and related tasks. AUTOBOX actually details and shows the interim steps as it forms a useful model and could be a useful teacher in this regard. The whole scientific idea is to "add what appears to be needed" and "delete what appears to be less than useful". This is the iterative process suggested by Box and Bacon in earlier times. Models need to be complex enough (fancy enough) but not too complex (fancy). Assuming that simple methods work with complex problems is not consistent with the scientific method following Roger Bacon and tons of followers of Bacon. 
As Roger Bacon once said, and I have often paraphrased: To do science is to search for repeated patterns. To detect anomalies is to identify values that do not follow repeated patterns. For whoever knows the ways of Nature will more easily notice her deviations and, on the other hand, whoever knows her deviations will more accurately describe her ways. One learns the rules by observing when the current rules fail. In the spirit of Bacon, by identifying when the currently identified "best model/theory" is inadequate one can then iterate to "a better representation". In my words: "Tukey proposed Exploratory Data Analysis (EDA), which suggested schemes of model refinement based upon evident model deficiency suggested by the data". This is the heart of AUTOBOX and of science. EDA is for seeing what the data can tell us beyond the formal modeling or hypothesis testing task. The litmus test of an automatic modelling program is quite simple: does it separate signal and noise without over-fitting? Empirical evidence suggests that this can and has been done. Forecasting accuracies are often misleading because the future is not accountable for the past, and depending on which origin you pick, results can and do vary.
12,544
Is it possible to automate time series forecasting?
Short Answer While it could be possible to do something like this, in many cases you are probably better off forecasting time series using a more manual approach. Long Answer The approach you describe is similar to what is seen in the machine learning community where a tremendous amount of focus is put on model selection and parameter estimation. For example, there are countless papers on how to optimize a neural net to attain strong results on the ImageNet dataset. Part of the reason there is such an emphasis on this is that in the research process it is important to compare your model to other models on benchmark datasets. To make sure your results are comparable to others' reported results, you cannot manually bring in outside information to improve your model's performance. While this automatic-forecasting approach is not wrong per se, in the time series setting often the most important step in an analysis is bringing in exogenous variables to help explain a time series. For example, if one is attempting to forecast a time series of average house prices in New York, what would matter most in forecasting is determining what factors influence house prices (e.g. population growth, interest rates, unemployment) and then coming up with a reasonable forecast of these variables to inform your forecast of house prices. The time dependence of the residuals (which is what most traditional statistical methods attempt to model), while still important to consider in the model, would likely be far less important in improving a forecast than the selection and forecasting of the aforementioned exogenous variables.
12,545
Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique?
Yes. Unlike what other answers state, 'typical' machine-learning methods such as nonparametrics and (deep) neural networks can help create better MCMC samplers. The goal of MCMC is to draw samples from an (unnormalized) target distribution $f(x)$. The obtained samples are used to approximate $f$ and mainly allow one to compute expectations of functions under $f$ (i.e., high-dimensional integrals) and, in particular, properties of $f$ (such as moments). Sampling usually requires a large number of evaluations of $f$, and possibly of its gradient, for methods such as Hamiltonian Monte Carlo (HMC). If $f$ is costly to evaluate, or the gradient is unavailable, it is sometimes possible to build a less expensive surrogate function that can help guide the sampling and is evaluated in place of $f$ (in a way that still preserves the properties of MCMC). For example, a seminal paper (Rasmussen 2003) proposes to use Gaussian Processes (a nonparametric function approximation) to build an approximation to $\log f$ and perform HMC on the surrogate function, with only the acceptance/rejection step of HMC based on $f$. This reduces the number of evaluations of the original $f$, and makes it possible to run MCMC on pdfs that would otherwise be too expensive to evaluate. The idea of using surrogates to speed up MCMC has been explored a lot in the past few years, essentially by trying different ways to build the surrogate function and combine it efficiently/adaptively with different MCMC methods (and in a way that preserves the 'correctness' of MCMC sampling). Related to your question, these two very recent papers use advanced machine learning techniques -- random networks (Zhang et al. 2015) or adaptively learnt exponential kernel functions (Strathmann et al. 2015) -- to build the surrogate function. HMC is not the only form of MCMC that can benefit from surrogates. For example, Nishihara et al. 
(2014) build an approximation of the target density by fitting a multivariate Student's $t$ distribution to the multi-chain state of an ensemble sampler, and use this to perform a generalized form of elliptical slice sampling. These are only examples. In general, a number of distinct ML techniques (mostly in the area of function approximation and density estimation) can be used to extract information that might improve the efficiency of MCMC samplers. Their actual usefulness -- e.g. measured in number of "effective independent samples per second" -- is conditional on $f$ being expensive or somewhat hard to compute; also, many of these methods may require tuning of their own or additional knowledge, restricting their applicability. References: Rasmussen, Carl Edward. "Gaussian processes to speed up hybrid Monte Carlo for expensive Bayesian integrals." Bayesian Statistics 7. 2003. Zhang, Cheng, Babak Shahbaba, and Hongkai Zhao. "Hamiltonian Monte Carlo Acceleration using Surrogate Functions with Random Bases." arXiv preprint arXiv:1506.05555 (2015). Strathmann, Heiko, et al. "Gradient-free Hamiltonian Monte Carlo with efficient kernel exponential families." Advances in Neural Information Processing Systems. 2015. Nishihara, Robert, Iain Murray, and Ryan P. Adams. "Parallel MCMC with generalized elliptical slice sampling." Journal of Machine Learning Research 15.1 (2014): 2087-2112.
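A minimal sketch of the surrogate idea (my own toy, not Rasmussen's GP construction): a delayed-acceptance Metropolis sampler that screens each proposal with a cheap surrogate $g$ and only evaluates the "expensive" target $f$ for proposals that survive the first stage; the second-stage correction factor keeps $f$ as the invariant distribution:

```python
import math
import random

random.seed(3)

def log_f(x):
    """'Expensive' target: standard normal log-density (up to a constant)."""
    return -0.5 * x * x

def log_g(x):
    """Cheap surrogate: a slightly-off normal approximation of the target."""
    return -0.5 * (x / 1.2) ** 2

x, samples, expensive_calls = 0.0, [], 0
log_fx, log_gx = log_f(x), log_g(x)
for _ in range(50_000):
    xp = x + random.gauss(0.0, 1.0)  # symmetric random-walk proposal
    log_gxp = log_g(xp)
    # Stage 1: screen the proposal using the cheap surrogate only.
    if math.log(random.random()) < log_gxp - log_gx:
        expensive_calls += 1
        log_fxp = log_f(xp)
        # Stage 2: correct with the true target so f stays the invariant law.
        if math.log(random.random()) < (log_fxp - log_fx) - (log_gxp - log_gx):
            x, log_fx, log_gx = xp, log_fxp, log_gxp
    samples.append(x)

m = sum(samples) / len(samples)
s2 = sum((v - m) ** 2 for v in samples) / len(samples)
print(m, s2, expensive_calls)
```

The payoff appears only when $f$ is genuinely expensive: every proposal rejected at stage 1 skips the costly evaluation entirely, while the two-stage acceptance preserves detailed balance with respect to $f$.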
12,546
Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique?
A method that could connect the two concepts is that of a multivariate Metropolis Hastings algorithm. In this case, we have a target distribution (the posterior distribution) and a proposal distribution (typically a multivariate normal or t-distribution). A well known fact is that the further the proposal distribution is from the posterior distribution, the less efficient the sampler is. So one could imagine using some sort of machine learning method to build up a proposal distribution that matches better to the true posterior distribution than a simple multivariate normal/t distribution. However, it's not clear this would be any improvement to efficiency. By suggesting deep learning, I assume that you may be interested in using some sort of neural network approach. In most cases, this would be significantly more computationally expensive than the entire vanilla MCMC method itself. Similarly, I don't know any reason that NN methods (or even most machine learning methods) do a good job of providing adequate density outside the observed space, crucial for MCMC. So even ignoring the computational costs associated with building the machine learning model, I cannot see a good reason why this would improve the sampling efficiency.
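As a toy illustration of how much proposal/posterior mismatch costs (my own sketch, assuming a standard normal "posterior"): an independence Metropolis-Hastings sampler accepts essentially every proposal when the proposal distribution matches the target, and very few when it is centred far away:

```python
import math
import random

random.seed(4)

def log_f(x):
    """Target ('posterior'): standard normal, up to a constant."""
    return -0.5 * x * x

def run_independence_mh(q_mean, n=20_000):
    """Independence MH with proposal N(q_mean, 1); returns the acceptance rate."""
    def log_q(x):
        return -0.5 * (x - q_mean) ** 2
    x, accepted = 0.0, 0
    for _ in range(n):
        xp = random.gauss(q_mean, 1.0)
        # alpha = f(x')q(x) / (f(x)q(x')), computed in log space
        log_alpha = (log_f(xp) + log_q(x)) - (log_f(x) + log_q(xp))
        if math.log(random.random()) < log_alpha:
            x, accepted = xp, accepted + 1
    return accepted / n

rate_matched = run_independence_mh(0.0)     # proposal == target
rate_mismatched = run_independence_mh(3.0)  # proposal centred far from the target
print(rate_matched, rate_mismatched)
```

The hypothetical "ML-built" proposal would sit between these extremes: the closer the learned proposal is to the posterior, the closer the acceptance rate gets to one, which is exactly the trade-off against the cost of building it.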
12,547
Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique?
Machine Learning is concerned with prediction, classification, or clustering in a supervised or unsupervised setting. On the other hand, MCMC is simply concerned with evaluating a complex integral (usually with no closed form) using probabilistic numerical methods. Metropolis sampling is definitely not the most commonly used approach. In fact, this is the only MCMC method not to have any probabilistic component. So ML would not inform anything with MCMC in this case. Importance-based sampling does require a probabilistic component. It is more efficient than Metropolis under some basic assumptions. ML methods can be used to estimate this probabilistic component if it dovetails with some assumptions. Examples might be multivariate clustering to estimate a complex high-dimensional Gaussian density. I am not familiar with non-parametric approaches to this problem, but that could be an interesting area of development. Nonetheless, ML stands out to me as a distinct step in the process of estimating a high-dimensional complex probability model which is subsequently used in a numerical method. I don't see how ML really improves MCMC in this case.
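For concreteness, a minimal self-normalized importance sampling sketch (my own example, not from the answer): it estimates $E[X^2]$ under a standard normal target using draws from a wider normal proposal, weighting each draw by target/proposal. The proposal here is hand-chosen; the point in the answer is that an ML-estimated density could play the same role:

```python
import math
import random

random.seed(5)

def log_target(x):
    """Unnormalized target: standard normal."""
    return -0.5 * x * x

def log_proposal(x, scale):
    """Wider normal proposal N(0, scale^2), up to the same dropped constant."""
    return -0.5 * (x / scale) ** 2 - math.log(scale)

n, scale = 100_000, 2.0
xs = [random.gauss(0.0, scale) for _ in range(n)]
logw = [log_target(x) - log_proposal(x, scale) for x in xs]
wmax = max(logw)
w = [math.exp(lw - wmax) for lw in logw]  # shift by max for numerical stability
wsum = sum(w)

# Self-normalized estimate of E[X^2] under the target (true value: 1)
est = sum(wi * x * x for wi, x in zip(w, xs)) / wsum
print(est)
```

Self-normalization means the target only needs to be known up to a constant, which is the usual situation with posteriors.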
12,548
Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique?
There has been some recent work in computational physics where the authors used Restricted Boltzmann Machines to model a probability distribution and then propose (hopefully) efficient Monte Carlo updates (arXiv:1610.02746). The idea here turns out to be quite similar to the references given by @lacerbi above. In another attempt (arXiv:1702.08586), the author explicitly constructed Boltzmann Machines which can perform (and even discover) the celebrated cluster Monte Carlo updates.
12,549
In a random forest, is larger %IncMSE better or worse?
%IncMSE is the most robust and informative measure. It is the increase in MSE of predictions (estimated with out-of-bag CV) as a result of variable j being permuted (values randomly shuffled):

1. Grow the regression forest. Compute the OOB-MSE; name this mse0.
2. For each variable j: permute the values of column j, then predict and compute OOB-MSE(j).
3. The %IncMSE of the j'th variable is (mse(j) - mse0) / mse0 * 100%.

The higher the number, the more important the variable.

IncNodePurity relates to the loss function by which best splits are chosen. The loss function is MSE for regression and Gini impurity for classification. More useful variables achieve higher increases in node purity, that is, they find splits which have a high inter-node 'variance' and a small intra-node 'variance'. IncNodePurity is biased and should only be used if the extra computation time of calculating %IncMSE is unacceptable. Since it only takes ~5-25% extra time to calculate %IncMSE, this would almost never happen. A similar question and answer
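The permutation scheme is easy to sketch outside randomForest. Below is a minimal Python/NumPy illustration (my own, not the package's implementation): a stand-in "fitted model" that uses only column 0 takes the place of the forest's OOB predictions, and the toy data are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: y depends on column 0 only; column 1 is pure noise.
X = rng.normal(size=(500, 2))
y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=500)

def predict(X_):
    # Stand-in for a fitted model's (OOB) predictions.
    return 2.0 * X_[:, 0]

mse0 = np.mean((y - predict(X)) ** 2)          # baseline MSE

pct_inc_mse = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])       # shuffle column j only
    mse_j = np.mean((y - predict(Xp)) ** 2)
    pct_inc_mse.append((mse_j - mse0) / mse0 * 100)
```

Permuting the informative column inflates the MSE enormously, while permuting the noise column leaves it untouched, which is exactly the contrast %IncMSE reports.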
12,550
Proof of closeness of kernel functions under pointwise product
By point-wise product, I assume you mean that if $k_1(x,y), k_2(x,y)$ are both valid kernel functions, then their product \begin{align} k_{p}( x, y) = k_1( x, y) k_2(x,y) \end{align} is also a valid kernel function. Proving this property is rather straightforward when we invoke Mercer's theorem. Since $k_1, k_2$ are valid kernels, we know (via Mercer) that they must admit an inner product representation. Let $a$ denote the feature vector of $k_1$ and $b$ denote the same for $k_2$. \begin{align} k_1(x,y) = a(x)^T a(y), \qquad a( z ) = [a_1(z), a_2(z), \ldots a_M(z)] \\ k_2(x,y) = b(x)^T b(y), \qquad b( z ) = [b_1(z), b_2(z), \ldots b_N(z)] \end{align} So $a$ is a function that produces an $M$-dim vector, and $b$ produces an $N$-dim vector. Next, we just write the product in terms of $a$ and $b$, and perform some regrouping. \begin{align} k_{p}(x,y) &= k_1(x,y) k_2(x,y) \\&= \Big( \sum_{m=1}^M a_m(x) a_m(y) \Big) \Big( \sum_{n=1}^N b_n(x) b_n(y) \Big) \\&= \sum_{m=1}^M \sum_{n=1}^N [ a_m(x) b_n(x) ] [a_m(y) b_n(y)] \\&= \sum_{m=1}^M \sum_{n=1}^N c_{mn}( x ) c_{mn}( y ) \\&= c(x)^T c(y) \end{align} where $c(z)$ is an $M \cdot N$ -dimensional vector, s.t. $c_{mn}(z) = a_m(z) b_n(z)$. Now, because we can write $k_p(x,y)$ as an inner product using the feature map $c$, we know $k_p$ is a valid kernel (via Mercer's theorem). That's all there is to it.
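The conclusion is easy to check numerically. Here is a small sketch (Python/NumPy; my own illustration, not part of the original answer) that builds Gram matrices for two valid kernels on random points, takes their point-wise product, and confirms that the result has no negative eigenvalues:

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(30, 3))                      # 30 random points in R^3

# Two valid kernels: an RBF kernel and a homogeneous polynomial kernel.
sq = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
K1 = np.exp(-0.5 * sq)                              # RBF Gram matrix
K2 = (pts @ pts.T) ** 2                              # (x . y)^2 Gram matrix

Kp = K1 * K2                                         # point-wise (Hadamard) product

# If k_p is a valid kernel, its Gram matrix is PSD: the smallest
# eigenvalue should be non-negative up to floating-point error.
min_eig = np.linalg.eigvalsh(Kp).min()
```

This is of course only a spot check on one Gram matrix, not a proof; the feature-map construction above is the proof.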
12,551
Proof of closeness of kernel functions under pointwise product
How about the following proof: Source: UChicago kernel methods lecture, page 5
12,552
Proof of closeness of kernel functions under pointwise product
Assume $K_1$ and $K_2$ are the kernel matrices of the two kernels $k_1(x,y)$ and $k_2(x,y)$, respectively, and they are PSD. We define $k(x,y) = k_1(x,y)k_2(x,y)$ and want to prove it is also a kernel. This is equivalent to proving that its corresponding kernel matrix $K = K_1 \circ K_2$ is PSD. $K_3 = K_1 \otimes K_2$ is PSD (the Kronecker product of two PSD matrices is PSD). $K$ is a principal submatrix of $K_3$, and is therefore PSD (any principal submatrix of a PSD matrix is PSD).
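The "principal submatrix" step can be seen concretely: for $n \times n$ matrices, $(K_1 \circ K_2)_{ij} = (K_1 \otimes K_2)_{in+i,\,jn+j}$. A short Python/NumPy sketch (my own illustration) verifies this index identity and the resulting PSD-ness:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)); K1 = A @ A.T           # arbitrary PSD matrices
B = rng.normal(size=(n, n)); K2 = B @ B.T

K  = K1 * K2                                        # Hadamard product
K3 = np.kron(K1, K2)                                # Kronecker product (PSD)

# K is the principal submatrix of K3 picked out by rows/columns i*n + i.
idx = [i * n + i for i in range(n)]
sub = K3[np.ix_(idx, idx)]

same = np.allclose(K, sub)                          # True: K sits inside K3
min_eig = np.linalg.eigvalsh(K).min()               # non-negative up to rounding
```

This is the Schur product theorem in miniature: Hadamard products of PSD matrices are PSD because they embed in the Kronecker product.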
12,553
Reason to normalize in euclidean distance measures in hierarchical clustering
It depends on your data. And actually it has nothing to do with hierarchical clustering, but with the distance functions themselves. The problem arises when you have mixed attributes. Say you have data on persons: weight in grams and shoe size. Shoe sizes differ very little, while the differences in body mass (in grams) are much, much larger. You can come up with dozens of examples. You just cannot compare a difference of 1 g and a difference of 1 shoe size. In fact, in this example you compute something that would have the physical unit of $\sqrt{g\cdot\text{shoe-size}}$! Usually in these cases, Euclidean distance just does not make sense. But it may still work in many situations if you normalize your data. Even if it actually doesn't make sense, it is a good heuristic for situations where you do not have a "proven correct" distance function, such as Euclidean distance in the human-scale physical world.
12,554
Reason to normalize in euclidean distance measures in hierarchical clustering
If you do not standardise your data then the variables measured in large-valued units will dominate the computed dissimilarity and variables that are measured in small-valued units will contribute very little. We can visualise this in R via:

set.seed(42)
dat <- data.frame(var1 = rnorm(100, mean = 100000),
                  var2 = runif(100),
                  var3 = runif(100))
dist1 <- dist(dat)
dist2 <- dist(dat[, 1, drop = FALSE])

dist1 contains the Euclidean distances for the 100 observations based on all three variables whilst dist2 contains the Euclidean distance based on var1 alone.

> summary(dist1)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.07351 0.77840 1.15200 1.36200 1.77000 5.30200
> summary(dist2)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
0.000072 0.470000 0.963600 1.169000 1.663000 5.280000

Note how similar the distributions of distances are, indicating little contribution from var2 and var3, and the actual distances are very similar:

> head(dist1)
[1] 1.9707186 1.0936524 0.8745579 1.2724471 1.6054603 0.1870085
> head(dist2)
[1] 1.9356566 1.0078300 0.7380958 0.9666901 1.4770830 0.1405636

If we standardise the data

dist3 <- dist(scale(dat))
dist4 <- dist(scale(dat[, 1, drop = FALSE]))

then there is a big change in the distances based only on var1 and those based on all three variables:

> summary(dist3)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
0.09761 1.62400 2.25000 2.28200 2.93600 5.33100
> summary(dist4)
    Min.  1st Qu.   Median     Mean  3rd Qu.     Max.
0.000069 0.451400 0.925400 1.123000 1.597000 5.070000
> head(dist3)
[1] 2.2636288 1.7272588 1.7791074 3.0129750 2.5821981 0.4434073
> head(dist4)
[1] 1.8587830 0.9678046 0.7087827 0.9282985 1.4184214 0.1349811

As hierarchical clustering uses these distances, whether it is desirable to standardise or not will depend on the type of data/variables you have and whether you want the big things to dominate the distances and hence dominate the formation of the clustering. The answer to this is domain-specific and data-set-specific.
12,555
Reason to normalize in euclidean distance measures in hierarchical clustering
Anony-Mousse gave an excellent answer. I would just add that the distance metric that makes sense would depend on the shape of the multivariate distributions. For multivariate Gaussian, the Mahalanobis distance is the appropriate measure.
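For completeness, the Mahalanobis distance $\sqrt{(x-y)^T \Sigma^{-1} (x-y)}$ is easy to compute directly. A minimal Python/NumPy sketch (my own illustration, with made-up vectors and covariance) showing how it rescales coordinates by their variance:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between x and y under covariance matrix cov."""
    d = x - y
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

x = np.array([2.0, 0.0])
y = np.array([0.0, 0.0])

# With an identity covariance it reduces to the Euclidean distance ...
d_eucl = mahalanobis(x, y, np.eye(2))              # 2.0

# ... while a large variance in the first coordinate shrinks the distance
# in that direction -- exactly the scale adjustment discussed above.
cov = np.array([[4.0, 0.0], [0.0, 1.0]])
d_maha = mahalanobis(x, y, cov)                    # 1.0
```

With a diagonal covariance this is identical to Euclidean distance on standardised variables; the full covariance additionally accounts for correlations.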
12,556
GLM: verifying a choice of distribution and link function
This is a variant of the frequently asked question regarding whether you can assert the null hypothesis. In your case, the null would be that the residuals are Gaussian, and visual inspection of your plots (qq-plots, histograms, etc.) constitutes the 'test'. (For a general overview of the issue of asserting the null, it may help to read my answer here: Why do statisticians say a non-significant result means “you can't reject the null” as opposed to accepting the null hypothesis?) In your specific case, you can say that the plots show your residuals are consistent with your assumption of normality, but they don't "validate" the assumption. You can fit your model using different link functions and compare them, but there isn't a test of a single link function in isolation (this is evidently incorrect, see @Glen_b's answer). In my answer to Difference between logit and probit models (which may be worth reading, although it isn't quite the same), I argue that link functions should be chosen based on: Knowledge of the response distribution, Theoretical considerations, and Empirical fit to the data. Within that framework, the canonical link for a Gaussian model would be the identity link. In this case you rejected that possibility, presumably for theoretical reasons. I suspect your thinking was that $Y$ cannot take negative values (note that 'does not happen to' is not the same thing). If so, the log is a reasonable choice a-priori, but it doesn't just prevent $Y$ from becoming negative, it also induces a specific shape to the curvilinear relationship. A standard plot of residuals vs. fitted values (perhaps with a loess fit overlaid) will help you identify if the intrinsic curvature in your data is a reasonable match for the specific curvature imposed by the log link. As I mentioned, you can also try whatever other transformation meets your theoretical criteria that you want and compare the two fits directly.
12,557
GLM: verifying a choice of distribution and link function
Would it be going too far to state that it validates my choice of distribution?

It kind of depends on what you mean by 'validate' exactly, but I'd say 'yes, that goes too far', in the same way that you can't really say "the null is shown to be true" (especially with point nulls, but in at least some sense more generally). You can only really say "well, we don't have strong evidence that it's wrong". But in any case we don't expect our models to be perfect; they're models. What matters, as Box & Draper said, is "how wrong do they have to be to not be useful?"

Either of these two prior sentences:

This seems to suggest (to me) that the choice of a Gaussian distribution was quite reasonable. Or, at least, that the residuals are consistent with the distribution I used in my model.

much more accurately describe what your diagnostics indicate -- not that a Gaussian model with log link was right -- but that it was reasonable, or consistent with the data.

I chose a log link function because my response variable is always positive, but I'd like some sort of confirmation that it was a good choice.

If you know it must be positive then its mean must be positive. It's sensible to choose a model that's at least consistent with that. I don't know if it's a good choice (there could well be much better choices), but it's a reasonable thing to do; it could well be my starting point. [However, if the variable itself is necessarily positive, my first thought would tend to be Gamma with log link, rather than Gaussian. "Necessarily positive" does suggest both skewness and variance that changes with the mean.]

Q2: Are there any tests, like checking the residuals for the choice of distribution, that can support my choice of link function?

It sounds like you don't mean 'test' as in "formal hypothesis test" but rather as 'diagnostic check'. In either case, the answer is: yes, there are.

One formal hypothesis test is Pregibon's goodness-of-link test [1]. This is based on embedding the link function in a Box-Cox family in order to do a hypothesis test of the Box-Cox parameter. See also the brief discussion of Pregibon's test in Breslow (1996) [2] (see p. 14).

However, I'd strongly advise sticking with the diagnostic route. If you want to check a link function, you're basically asserting that on the link scale, $\eta=g(\mu)$ is linear in the $x$'s that are in the model, so one basic assessment might look at a plot of residuals against the predictors. For example, working residuals $r^W_i=(y_i-\hat{\mu}_i)\left(\frac{\partial \eta}{\partial\mu}\right)$ (which I'd lean toward for this assessment), or perhaps by looking at deviations from linearity in partial residuals, with one plot for each predictor (see, for example, Hardin and Hilbe, Generalized Linear Models and Extensions, 2nd ed., sec. 4.5.4, p. 54, for the definition),

$\quad r^T_{ki}=(y_i-\hat{\mu}_i)\left(\frac{\partial \eta}{\partial\mu}\right)+x_{ik}\hat{\beta}_k$
$\qquad\:=r^W_i+x_{ik}\hat{\beta}_k$

In cases where the data admit transformation by the link function, you could look for linearity in the same fashion as with linear regression (though you may have left skewness and possibly heteroskedasticity). In the case of categorical predictors the choice of link function is more a matter of convenience or interpretability; the fit should be the same (so no need to assess for them). You could also base a diagnostic off Pregibon's approach. These don't form an exhaustive list; you can find other diagnostics discussed.

[That said, I agree with gung's assessment that the choice of link function should initially be based on things like theoretical considerations, where possible.]

See also some of the discussion in this post, which is at least partly relevant.

[1]: Pregibon, D. (1980), "Goodness of Link Tests for Generalized Linear Models," Journal of the Royal Statistical Society, Series C (Applied Statistics), Vol. 29, No. 1, pp. 15-23.
[2]: Breslow, N. E. (1996), "Generalized linear models: Checking assumptions and strengthening conclusions," Statistica Applicata 8, 23-41. pdf
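As a footnote to the working-residual diagnostic above: for a log link, $\partial\eta/\partial\mu = 1/\mu$, so $r^W_i = (y_i-\hat{\mu}_i)/\hat{\mu}_i$. A minimal Python/NumPy sketch (my own illustration; the "fitted" means are simulated rather than coming from any particular GLM software):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated fit under a log link: eta = 0.5 + 0.3*x, mu = exp(eta).
x = np.linspace(0.0, 2.0, 100)
mu_hat = np.exp(0.5 + 0.3 * x)                    # fitted means from some GLM
y = mu_hat * (1 + rng.normal(scale=0.05, size=x.size))  # simulated responses

# Working residuals for the log link: (y - mu) * d(eta)/d(mu) = (y - mu) / mu.
r_w = (y - mu_hat) / mu_hat
```

The diagnostic itself is then a plot of `r_w` against `x` (or against the fitted linear predictor), looking for systematic curvature; here, with the link correctly specified, there should be none.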
12,558
Linear vs. nonlinear regression
"Better" is a function of your model.

Part of the reason for your confusion is that you only wrote half of your model. When you say $y=ax^b$, that's not actually true. Your observed $y$ values aren't equal to $ax^b$; they have an error component. For example, the two models you mention (not the only possible models by any means) make entirely different assumptions about the error.

You probably mean something closer to $E(Y|X=x) = ax^b\,$. But then what do we say about the variation of $Y$ away from that expectation at a given $x$? It matters!

When you fit the nonlinear least squares model, you're saying that the errors are additive and the standard deviation of the errors is constant across the data: $\: y_i \sim N(ax_i^b,\sigma^2)$, or equivalently $\: y_i = ax_i^b + e_i$ with $\text{var}(e_i) = \sigma^2$.

By contrast, when you take logs and fit a linear model, you're saying the error is additive on the log scale and (on the log scale) constant across the data. This means that on the scale of the observations the error term is multiplicative, so the errors are larger when the expected values are larger: $\: y_i \sim \text{logN}(\log a+b\log x_i,\sigma^2)$, or equivalently $\: y_i = ax_i^b \cdot \eta_i$ with $\eta_i \sim \text{logN}(0,\sigma^2)$.

(Note that $\text{E}(\eta_i)$ is not 1. If $\sigma^2$ is not very small, you will need to allow for this effect if you want a reasonable approximation to the conditional mean of $Y$.)

(You can do least squares without assuming normal/lognormal distributions, but the central issue being discussed still applies ... and if you're nowhere near normality, you should probably be considering a different error model anyway.)

So which is best depends on which kind of error model describes your circumstances. [If you're doing exploratory analysis with a kind of data that's not been seen before, you'd consider questions like "What do the data look like?" ($y$ plotted against $x$), "What do the residuals look like against $x$?" On the other hand, if variables like these are not uncommon, you should already have information about their general behaviour.]
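The point about $\text{E}(\eta)$ can be made concrete with a quick simulation sketch (numpy; all parameter values are arbitrary choices for illustration): the log-scale fit recovers $a$ and $b$, but the naive back-transform of the fitted values estimates the conditional median of $Y$, and the conditional mean needs the extra lognormal factor $\exp(\sigma^2/2)$.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, sigma = 2.0, 1.5, 0.5                 # illustrative "true" parameters
x = rng.uniform(1, 10, 100_000)
eta = rng.lognormal(mean=0.0, sigma=sigma, size=x.size)   # multiplicative error
y = a * x**b * eta

# Fit the linear model on the log scale: log y = log a + b log x + e
slope, intercept = np.polyfit(np.log(x), np.log(y), deg=1)

# exp(intercept + slope*log x) estimates the conditional *median* of Y;
# the conditional mean needs the lognormal correction exp(sigma^2 / 2).
resid = np.log(y) - (intercept + slope * np.log(x))
correction = np.exp(resid.var() / 2)        # roughly exp(0.5**2 / 2)

print(slope, np.exp(intercept), correction)
```

With this much data, `slope` comes out close to $b=1.5$, `exp(intercept)` close to $a=2$, and the mean correction is about 13% even for a modest $\sigma = 0.5$.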
12,559
Linear vs. nonlinear regression
When you fit either model, you are assuming that the set of residuals (the discrepancies between the observed and predicted values of Y) follows a Gaussian distribution. If that assumption is true for your raw data (nonlinear regression), then it won't be true for the log-transformed values (linear regression), and vice versa. Which model is "better"? The one whose assumptions most closely match the data.
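One quick check of which scale matches the Gaussian-residual assumption is to compare residual skewness on the raw and log scales. A small sketch with simulated multiplicative-error data (all numbers are illustrative, and the residuals are taken against the known true curve to keep the example short):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = 2.0, 1.5
x = rng.uniform(1, 10, 50_000)
y = a * x**b * rng.lognormal(0.0, 0.4, x.size)   # multiplicative (lognormal) error

def skewness(r):
    # Standardized third central moment.
    r = r - r.mean()
    return (r**3).mean() / (r**2).mean() ** 1.5

# With multiplicative error, raw-scale residuals are strongly right-skewed
# while log-scale residuals are symmetric, so the log-linear fit is the one
# whose Gaussian assumption matches these data.
raw_resid = y - a * x**b
log_resid = np.log(y) - np.log(a * x**b)
print(skewness(raw_resid), skewness(log_resid))
```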
12,560
Is it important for statisticians to learn machine learning?
Machine learning is a specialized field of high-dimensional applied statistics. It also requires a considerable programming background, which isn't necessary for a good quantitative program, especially at the undergraduate level but also to some extent at the graduate level. It applies only to the prediction aspect of statistics, whereas mathematical statistics as well as inferential and descriptive applied statistics also require attention.

Many programs offer students the chance to get a great deal of exposure to machine learning (CMU, for instance), but industrial statisticians on the whole rarely get the chance to apply these tools, barring certain high-profile tech jobs. While I have recently seen many data scientist and machine learning positions in the job market, I think the general job description of "statistician" does not require a machine learning background, but does require an impeccable understanding of basic statistics, inference, and communication: these should really be the core of a graduate statistics program. Machine learning and data science are also relatively new as job titles and as disciplines. It would be a disservice to those seeking employment as statisticians to sway their problem-solving strategies toward machine learning if it's mostly abandoned in business/pharma/bioscience enterprise for underwhelming efficacy in 10 or 20 years.

Lastly, I don't feel that machine learning tremendously enhances a solid understanding of statistics. Statistics is fundamentally a cross-disciplinary field, and it's important to communicate with and convince non-technical experts in your field (such as doctors, CFOs, or administrators) of exactly why you chose the methodology you did. Machine learning is such a niche, highly technical field that, in many applied practices, it only promises incrementally better performance than standard tools and techniques. Many of the methods in supervised and unsupervised learning are perceived by non-experts (and even some less trained experts) as "black boxes". When asked to defend the choice of a specific learning method, explanations often fall flat and draw on none of the circumstances motivating the applied problem. This is a great risk to advising any decision-making process.
12,561
Is it important for statisticians to learn machine learning?
OK, let's talk about the elephant of statistics with our sight blindfolded by what we've learnt from one or two people we've closely worked with in our grad programs...

Stat programs require what they see fit, that is, the most important stuff they want their students to learn given the limited amount of time the students will have on the program. Requiring one narrow area means kissing goodbye to some other areas that can be argued to be equally important. Some programs require measure-theoretic probability; some don't. Some require a foreign language, but most programs don't. Some programs take the Bayesian paradigm as the only thing worth studying, but most don't. Some programs know that the greatest demand for statisticians is in survey statistics (at least that's the case in the US), but most don't. Biostat programs follow the money and teach SAS plus the methods that will sell easily to medical and pharma sciences.

For a person designing agricultural experiments, or collecting survey data via phone surveys, or validating psychometric scales, or producing disease incidence maps in a GIS, machine learning is an abstract art of computer science, very distant from the statistics they work with on a daily basis. None of these people will see any immediate benefit from learning support vector machines or random forests.

All in all, machine learning is a nice complement to other areas of statistics, but I would argue that mainstream material like the multivariate normal distribution and generalized linear models needs to come first.
12,562
Is it important for statisticians to learn machine learning?
Machine learning is about gaining knowledge from, and learning from, data. For example, I work with machine learning algorithms that can select, from DNA microarray data, a few genes that may be involved in a particular type of disease (e.g. cancers or diabetes). Scientists can then use these genes (learned models) for early diagnosis in the future (classification of unseen samples).

There is a lot of statistics involved in machine learning, but there are branches of machine learning that do not require statistics (e.g. genetic programming). The only time you would need statistics in these instances would be to see whether a model that you have built using machine learning is statistically significantly different from some other model.

In my opinion, an introduction to machine learning for statisticians would be advantageous: it will help statisticians see real-world scenarios where statistics is applied. However, it shouldn't be compulsory. You may become a successful statistician and spend your whole life without ever having to go near machine learning!
12,563
What is a good AUC for a precision-recall curve?
There is no magic cut-off for either AUC-ROC or AUC-PR. Obviously, higher is better, and the closer you are to 1.0, the closer you are to solving the problem. However, the meaning of "close" is entirely application dependent. For example, if you could reliably identify profitable investments with an AUC of 0.7, or for that matter anything distinguishable from chance, I would be very impressed and you would be very rich. On the other hand, classifying handwritten digits with an AUC of 0.95 is still substantially below the current state of the art.

Furthermore, while the best possible AUC-ROC is guaranteed to be in [0,1], this is not true for precision-recall curves, because there can be "unreachable" areas of PR space, depending on how skewed the class distributions are. This may render a "large" AUC-PR value less impressive than it might otherwise seem. See the paper by Boyd et al. (2012) for details.
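The size of that unreachable region can be computed numerically for a given class skew. The lower bound on precision used below follows from a simple counting argument (at recall $r$, TP $= r \cdot P$ while FP is at most $N$); the function name and grid are my own, and the closed form in the test is just a consistency check, not something claimed by the answer above.

```python
import numpy as np

def min_precision(recall, pi):
    # Lowest achievable precision at a given recall when a fraction `pi` of
    # all cases is positive: precision >= recall*pi / (recall*pi + (1 - pi)).
    return pi * recall / (pi * recall + (1.0 - pi))

pi = 0.10                              # e.g. 10% positives
r = np.linspace(0.0, 1.0, 100_001)
p = min_precision(r, pi)

# Trapezoidal area of the "unachievable region" below the minimum PR curve;
# no PR curve for this skew can have an AUC smaller than this.
area = np.sum((p[1:] + p[:-1]) / 2 * np.diff(r))
print(round(area, 4))                  # about 0.0518 for pi = 0.10
```

So for a 10%-positive problem, achievable AUC-PR values span roughly [0.05, 1] rather than [0, 1], and the range shrinks further as the skew decreases.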
12,564
What is a good AUC for a precision-recall curve?
A random estimator would have a PR-AUC of 0.09 in your case (9% positive outcomes), so your 0.49 is definitely a substantial increase. Whether this is a good result can only be assessed in comparison to other algorithms, but you didn't give details on the method/data you used.

Additionally, you might want to assess the shape of your PR curve. An ideal PR curve goes from the top-left corner horizontally to the top-right corner and then straight down to the bottom-right corner, resulting in a PR-AUC of 1. In some applications, the PR curve instead shows a strong spike at the beginning and then quickly drops back close to the "random estimator line" (the horizontal line at 0.09 precision in your case). This would indicate good detection of "strong" positive outcomes, but poor performance on the less clear candidates.

If you want to find a good threshold for your algorithm's cutoff parameter, you might consider the point on the PR curve that's closest to the top-right corner. Or even better, consider cross-validation if possible. The precision and recall values at a specific cutoff parameter may be more interesting for your application than the value of the PR-AUC; the AUCs are most interesting when comparing different algorithms.
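The "closest to the top-right corner" rule can be sketched in a few lines. This is a generic illustration, not any particular package's API; the function name and the toy scores/labels are made up, and ties in scores are ignored for simplicity.

```python
import numpy as np

def pr_best_cutoff(scores, labels):
    # Rank cases by decreasing score; cutting after the k-th case gives one
    # (recall, precision) point per k. Pick the point closest to (1, 1).
    order = np.argsort(scores)[::-1]            # assumes distinct scores
    s = np.asarray(scores)[order]
    l = np.asarray(labels)[order]
    tp = np.cumsum(l)                           # true positives at each cutoff
    fp = np.cumsum(1 - l)                       # false positives at each cutoff
    precision = tp / (tp + fp)
    recall = tp / l.sum()
    k = np.argmin(np.hypot(1 - precision, 1 - recall))
    return s[k], precision[k], recall[k]

scores = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1])
labels = np.array([1,   1,   0,   1,   0,   0,   1,   0])
cutoff, prec, rec = pr_best_cutoff(scores, labels)
print(cutoff, prec, rec)    # -> 0.6 0.75 0.75
```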
12,565
What is a good AUC for a precision-recall curve?
.49 is not great, but its interpretation is different from that of the ROC AUC. For ROC AUC, if you obtained a .49 using a logistic regression model, I would say you are doing no better than random. For a .49 PR AUC, however, it might not be that bad. I would consider looking at individual precision and recall; perhaps one or the other is what is driving down your PR AUC. Recall will tell you how much of that 9% positive class you are actually guessing correctly. Precision will tell you what fraction of the cases you guessed positive really were positive (the rest are false positives). 50% recall would be bad, meaning you're not catching much of your imbalanced class, but perhaps 50% precision wouldn't be bad. It depends on your situation.
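To see what those percentages mean in raw counts for a 9%-positive sample, here is a hypothetical tally (all numbers invented for illustration):

```python
# Suppose 1000 cases with 9% positives:
positives, negatives = 90, 910

recall = 0.50                        # we recover half of the true positives
tp = round(positives * recall)       # 45 found
fn = positives - tp                  # 45 missed

precision = 0.50                     # half of our positive calls are correct
fp = round(tp / precision) - tp      # 45 false alarms out of 90 calls
print(tp, fn, fp)                    # -> 45 45 45
```

So 50%/50% here means 90 flagged cases, half of them wrong, while another 45 true positives go undetected; whether that trade-off is acceptable is entirely application dependent.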
12,566
Understanding Kolmogorov-Smirnov test in R
The KS test is premised on testing the "sameness" of two independent samples from a continuous distribution (as the help page states). If that is the case, then the probability of ties should be astonishingly small (also stated). The test statistic is the maximum distance between the ECDFs of the two samples. The p-value is the probability of seeing a test statistic as high or higher than the one observed if the two samples were drawn from the same distribution. (It is not the "probability that var1 = var2", and furthermore 1 - p-value is NOT that probability either.) A high p-value says you cannot claim statistical support for a difference, but it is not evidence of sameness: high p-values can occur with low sample sizes (as your example provides) or in the presence of interesting but small differences, e.g. superimposed oscillatory disturbances.

If you are working with situations with large numbers of ties, that suggests you may need a test that more closely fits your data situation. My explanation of why ties are a violation of the assumptions was not a claim that ties invalidate the results; the statistical properties of the KS test are, in practice, relatively resistant or robust to failure of that assumption. The main problem with the KS test, as I see it, is that it is excessively general and as a consequence under-powered to identify meaningful differences of an interesting nature: it is a very general test and has rather low power against more specific hypotheses.

On the other hand, I also see the KS test (or the "even more powerful" Anderson-Darling or Lilliefors tests) used to test "normality" in situations where such a test is completely unwarranted, such as testing the normality of variables being used as predictors in a regression model before the fit. One might legitimately want to test the normality of the residuals, since that is what is assumed in the modeling theory. Even then, modest departures from normality of the residuals do not generally challenge the validity of the results, and people would be better off using robust methods to check for important impacts of "non-normality" on conclusions about statistical significance.

Perhaps you should consult with a local statistician? It might help you define the statistical question a bit more precisely, and therefore give you a better chance of identifying a difference if one actually exists. That would avoid a "type II error": failing to support a conclusion of difference when such a difference is present.
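For readers who want to experiment, here is a rough Python sketch of the two-sample statistic together with a permutation p-value. It illustrates the mechanics (maximum ECDF distance, and why small samples can give high p-values even when the distributions really differ), but it is not the exact algorithm used by R's ks.test.

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_stat(x, y):
    # Two-sample KS statistic: maximum distance between the two ECDFs,
    # evaluated over the pooled sample points.
    grid = np.sort(np.concatenate([x, y]))
    Fx = np.searchsorted(np.sort(x), grid, side="right") / len(x)
    Fy = np.searchsorted(np.sort(y), grid, side="right") / len(y)
    return np.max(np.abs(Fx - Fy))

# Two genuinely different distributions, but only 10 points each:
x = rng.normal(0.0, 1.0, 10)
y = rng.normal(0.5, 1.0, 10)
d_obs = ks_stat(x, y)

# Permutation p-value: how often does shuffling the pooled labels
# produce a D at least as large as the observed one?
pooled = np.concatenate([x, y])
count = 0
for _ in range(2000):
    rng.shuffle(pooled)
    if ks_stat(pooled[:10], pooled[10:]) >= d_obs:
        count += 1
p_value = count / 2000
print(d_obs, p_value)
```

With samples this small, the p-value is typically well above conventional significance levels despite the real 0.5-sigma shift, which is exactly the "high p-value is not evidence of sameness" point above.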
12,567
Understanding Kolmogorov-Smirnov test in R
To compute the D (from the ks.test code):

ks.test(x, y)

        Two-sample Kolmogorov-Smirnov test

data:  x and y
D = 0.5, p-value = 0.1641
alternative hypothesis: two-sided

alternative <- "two.sided"
x <- x[!is.na(x)]
n <- length(x)
y <- y[!is.na(y)]
n.x <- as.double(n)
n.y <- length(y)
w <- c(x, y)
z <- cumsum(ifelse(order(w) <= n.x, 1/n.x, -1/n.y))
z <- z[c(which(diff(sort(w)) != 0), n.x + n.y)]  # exclude ties
STATISTIC <- switch(alternative,
                    two.sided = max(abs(z)),
                    greater = max(z),
                    less = -min(z))
STATISTIC
[1] 0.5
12,568
What should be taught first: Probability or Statistics?
It doesn't seem to be a question of opinion any more: the world appears to have moved well beyond the traditional "teach probability and then teach statistics as an application of it." To get a sense of where the teaching of statistics is going, look at the list of paper titles in last year's special edition of The American Statistician (reproduced below): not a single one of them refers to probability. They do discuss the teaching of probability and its role in the curriculum. A good example is George Cobb's paper and its responses. Here are some relevant quotations: Modern statistical practice is much broader than is recognized by our traditional curricular emphasis on probability-­based inference. What we teach lags decades behind what we practice. Our curricular paradigm emphasizes formal inference from a frequentist orientation, based either on the central limit theorem at the entry level or, in the course for mathematics majors, on a small set of parametric probability models that lend themselves to closed-­form solutions derived using calculus. The gap between our half-­century‐old curriculum and our contemporary statistical practice continues to widen. My thesis ... is that as a profession we have only begun to explore the possibilities. The history of our subject also supports this thesis: Unlike probability, a scion of mathematics, statistics sprouted de novo from the soil of science. Probability is a notoriously slippery concept. The gap between intuition and formal treatment may be wider than in any other branch of applied mathematics. If we insist that statistical thinking must necessarily be based on a probability model, how do we reconcile that requirement with goals of making central ideas “simple and approachable” and minimizing “prerequisites to research”? As a thought experiment, run through the basic concepts and theory of estimation. 
Note how almost all of them can be explained and illustrated using only first-semester calculus, with probability introduced along the way. Of course we want students to learn calculus and probability, but it would be nice if we could join all the other sciences in teaching the fundamental concepts of our subject to first-year students. There's far more like this. You can read it yourself; the material is freely available.

References

The special issue of The American Statistician on "Statistics and the Undergraduate Curriculum" (November, 2015) is available at http://amstat.tandfonline.com/toc/utas20/69/4.

- Teaching the Next Generation of Statistics Students to "Think With Data": Special Issue on Statistics and the Undergraduate Curriculum. Nicholas J. Horton & Johanna S. Hardin. DOI:10.1080/00031305.2015.1094283
- Mere Renovation is Too Little Too Late: We Need to Rethink our Undergraduate Curriculum from the Ground Up. George Cobb. DOI:10.1080/00031305.2015.1093029
- Teaching Statistics at Google-Scale. Nicholas Chamandy, Omkar Muralidharan & Stefan Wager. pages 283-291. DOI:10.1080/00031305.2015.1089790
- Explorations in Statistics Research: An Approach to Expose Undergraduates to Authentic Data Analysis. Deborah Nolan & Duncan Temple Lang. DOI:10.1080/00031305.2015.1073624
- Beyond Normal: Preparing Undergraduates for the Work Force in a Statistical Consulting Capstone. Byran J. Smucker & A. John Bailer. DOI:10.1080/00031305.2015.1077731
- A Framework for Infusing Authentic Data Experiences Within Statistics Courses. Scott D. Grimshaw. DOI:10.1080/00031305.2015.1081106
- Fostering Conceptual Understanding in Mathematical Statistics. Jennifer L. Green & Erin E. Blankenship. DOI:10.1080/00031305.2015.1069759
- The Second Course in Statistics: Design and Analysis of Experiments? Natalie J. Blades, G. Bruce Schaalje & William F. Christensen. DOI:10.1080/00031305.2015.1086437
- A Data Science Course for Undergraduates: Thinking With Data. Ben Baumer. DOI:10.1080/00031305.2015.1081105
- Data Science in Statistics Curricula: Preparing Students to "Think with Data". J. Hardin, R. Hoerl, Nicholas J. Horton, D. Nolan, B. Baumer, O. Hall-Holt, P. Murrell, R. Peng, P. Roback, D. Temple Lang & M. D. Ward. DOI:10.1080/00031305.2015.1077729
- Using Online Game-Based Simulations to Strengthen Students' Understanding of Practical Statistical Issues in Real-World Data Analysis. Shonda Kuiper & Rodney X. Sturdivant. DOI:10.1080/00031305.2015.1075421
- Combating Anti-Statistical Thinking Using Simulation-Based Methods Throughout the Undergraduate Curriculum. Nathan Tintle, Beth Chance, George Cobb, Soma Roy, Todd Swanson & Jill VanderStoep. DOI:10.1080/00031305.2015.1081619
- What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum. Tim C. Hesterberg. DOI:10.1080/00031305.2015.1089789
- Incorporating Statistical Consulting Case Studies in Introductory Time Series Courses. Davit Khachatryan. DOI:10.1080/00031305.2015.1026611
- Developing a New Interdisciplinary Computational Analytics Undergraduate Program: A Qualitative-Quantitative-Qualitative Approach. Scotland Leman, Leanna House & Andrew Hoegh. DOI:10.1080/00031305.2015.1090337
- From Curriculum Guidelines to Learning Outcomes: Assessment at the Program Level. Beth Chance & Roxy Peck. DOI:10.1080/00031305.2015.1077730
- Program Assessment for an Undergraduate Statistics Major. Allison Amanda Moore & Jennifer J. Kaplan. DOI:10.1080/00031305.2015.1087331
12,569
What should be taught first: Probability or Statistics?
The plural of anecdote isn't data, but in almost any course I've seen, at least the basics of probability come before statistics. On the other hand, historically, ordinary least squares was developed before its probabilistic justification via the normal distribution was established! The statistical method came first; the more rigorous, probability-based justification of why it works came second. Stephen Stigler's The History of Statistics: The Measurement of Uncertainty before 1900 takes the reader through the historical development: Mathematicians and astronomers understood basic mechanics and the law of gravity. They could describe the motion of heavenly bodies as a function of several parameters. They also had hundreds of observations of the celestial bodies, but how should the observations be combined to recover the parameters? A hundred observations give you one hundred equations, but if there are only three unknowns to solve for, this is an overdetermined system... Legendre was the first to develop the method of minimizing the sum of squared errors. Only later was this connected with the work in probability of Gauss and Laplace, showing that ordinary least squares is in some sense optimal given normally distributed errors. Why do I bring this up? There's a certain logical elegance to first building up the mathematical machinery required to derive and understand some method, laying the foundation before you build the house. In the reality of science, though, the house often comes first and the foundation second :P. I'd love to see results from the education literature. What's more effective for teaching? "What" then "why"? Or "why" then "what"? (I might be a weirdo, but I found the story of how least squares was developed to be an exciting page turner! Stories can make otherwise boring, abstract stuff come alive...)
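The overdetermined-system idea can be made concrete with a tiny sketch (hypothetical made-up data, standard library only): many observations, two unknowns (intercept and slope), solved by minimizing the sum of squared errors via the closed-form normal equations.

```python
# Ordinary least squares for a simple overdetermined system:
# five observations, two unknowns (intercept a, slope b).

def ols_fit(xs, ys):
    """Return (a, b) minimizing sum((y - (a + b*x))**2)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    b = sxy / sxx            # slope from the normal equations
    a = mean_y - b * mean_x  # intercept passes through the means
    return a, b

# Five noisy observations of a line with intercept ~1 and slope ~2.
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 6.8, 9.1]
a, b = ols_fit(xs, ys)
print(round(a, 3), round(b, 3))
```

Five equations, two unknowns: no exact solution exists, but the least-squares fit combines all observations into one estimate, which is exactly the problem the astronomers faced.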
12,570
What should be taught first: Probability or Statistics?
I think it should be an iterative process for most people: you learn a little probability, then a little statistics, then a little more probability, then a little more statistics, and so on. For instance, take a look at the PhD Stat requirements at GWU. The PhD-level Probability course 8257 has the following brief description: STAT 8257. Probability. 3 Credits. Probabilistic foundations of statistics, probability distributions, random variables, moments, characteristic functions, modes of convergence, limit theorems, probability bounds. Prerequisite: STAT 6201–STAT 6202, knowledge of calculus through functions of several variables and series. Note how it has the Master's-level statistics courses 6201 and 6202 in the prerequisites. If you drill down to the lowest-level stat or probability course at GWU, you'll get to Introduction to Business and Economic Statistics 1051 or Introduction to Statistics in Social Science 1053. Here's the description of one of them: STAT 1051. Introduction to Business and Economic Statistics. 3 Credits. Lecture (3 hours), laboratory (1 hour). Frequency distributions, descriptive measures, probability, probability distributions, sampling, estimation, tests of hypotheses, regression and correlation, with applications to business. Notice how the course has "Statistics" in its title but teaches probability within it. For many, it's the first encounter with probability theory after the high school "Stats" course. This is somewhat similar to how it was taught in my days: the courses and textbooks were usually titled "Probability theory and mathematical statistics", e.g. Gmurman's text. I can't imagine studying probability theory without any stats whatsoever. The PhD-level course 8257 above assumes you already know statistics. So even if you teach probability first, there will be some statistics learning involved. It's just that for the first course it probably makes sense to weight a tad more toward statistics, and use it to introduce probability theory too.
In the end it's an iterative process, as I described in the beginning. And as in any good iterative process, the first step is not important: whether the very first concept was from stats or probability won't matter after several iterations; you'll get to the same place regardless. A final note: the teaching approach depends on your field. If you're studying physics, you'll get things like statistical mechanics and Fermi-Dirac statistics, which you're not going to deal with in the social sciences. Also, in physics the frequentist approaches are natural, and in fact they're at the basis of some fundamental theories. Hence, it makes sense to have stand-alone probability theory taught early on, unlike in the social sciences, where it may not make much sense to spend time on it and it's better to weight more toward statistics.
12,571
How to interpret the coefficients from a beta regression?
So you need to figure out what scale you are modeling the response on. In the case of the betareg function in R, the mean $\mu_i$ of the response is modeled as $$\text{logit}(\mu_i)=\beta_0+\sum_{j=1}^p\beta_j x_{ij}$$ where $\text{logit}(\mu_i)$ is the usual log-odds we are used to when using the logit link in the glm function (i.e., family binomial) in R. Thus the beta coefficients that betareg returns are the additional increase (or decrease, if the coefficient is negative) in the log-odds of your response per unit change in the corresponding covariate. I am assuming you want to be able to interpret the betas on the probability scale (i.e., on the interval (0,1)); once you have your beta coefficients, all you need to do is invert the link, i.e., $$\text{logit}(\mu_i)=\beta_0+\sum_{j=1}^p\beta_j x_{ij}\Rightarrow \mu_i=\frac{e^{\beta_0+\sum_{j=1}^p\beta_j x_{ij}}}{1+e^{\beta_0+\sum_{j=1}^p\beta_j x_{ij}}}$$ Thus you should realize that we are basically using the same results and interpretations as in standard generalized linear modeling (under the logit link). One of the main differences between logistic regression and beta regression is that you are allowing the variance of your response to be much larger than it could be in logistic regression, in order to deal with the typical problem of over-dispersion.
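A rough sketch of that back-transformation (hypothetical coefficient values, plain Python rather than R's plogis): the linear predictor lives on the log-odds scale, and the inverse logit maps it back to (0, 1).

```python
import math

def inv_logit(eta):
    """Map a linear predictor (log-odds) back to the (0, 1) scale."""
    return 1.0 / (1.0 + math.exp(-eta))

# Hypothetical coefficients from a fitted model: intercept and one slope.
b0, b1 = -1.0, 0.5

# Fitted mean of the response at covariate value x = 2:
eta = b0 + b1 * 2.0   # linear predictor, on the log-odds scale
mu = inv_logit(eta)   # back on the probability scale; inv_logit(0.0) = 0.5
print(mu)
```

So a coefficient of 0.5 means each unit increase in x adds 0.5 to the log-odds; the effect on the (0, 1) scale depends on where on the curve you are, which is why the inversion is done for a specific covariate value.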
12,572
How does linear discriminant analysis reduce the dimensions?
Discriminants are the axes and latent variables which differentiate the classes most strongly. The number of possible discriminants is $\min(k-1,\,p)$. For example, with k=3 classes in p=2-dimensional space there can exist at most 2 discriminants. (Note that discriminants are not necessarily orthogonal as axes drawn in the original space, although they, as variables, are uncorrelated.) The centroids of the classes are located within the discriminant subspace according to the coordinates of their perpendicular projections onto the discriminants. The algebra of LDA at the extraction phase, and the plotting of a discriminant, are covered in separate answers.
12,573
How does linear discriminant analysis reduce the dimensions?
While "The Elements of Statistical Learning" is a brilliant book, it requires a relatively high level of knowledge to get the most from it. There are many other resources on the web to help you understand the topics in the book. Let's take a very simple example of linear discriminant analysis where you want to group a set of two-dimensional data points into K = 2 groups. The drop in dimensions will only be to K-1 = 2-1 = 1. As @deinst explained, the drop in dimensions can be explained with elementary geometry. Two points in any dimension can be joined by a line, and a line is one-dimensional. This is an example of a K-1 = 2-1 = 1 dimensional subspace. Now, in this simple example, the set of data points will be scattered in two-dimensional space. The points will be represented by (x,y), so for example you could have data points such as (1,2), (2,1), (9,10), (13,13). Now, using linear discriminant analysis to create two groups A and B will result in the data points being classified as belonging to group A or to group B such that certain properties are satisfied. Linear discriminant analysis attempts to maximize the variance between the groups relative to the variance within the groups. In other words, groups A and B will be far apart and each will contain data points that are close together. In this simple example, it is clear that the points will be grouped as follows: Group A = {(1,2), (2,1)} and Group B = {(9,10), (13,13)}. Now, the centroids are calculated as the centroids of the groups of data points, so Centroid of group A = ((1+2)/2, (2+1)/2) = (1.5, 1.5) and Centroid of group B = ((9+13)/2, (10+13)/2) = (11, 11.5). The centroids are simply 2 points, and they span the one-dimensional line which joins them together.
You can think of linear discriminant analysis as a projection of the data points onto a line such that the two groups of data points are "as separated as possible." If you had three groups (and, say, three-dimensional data points) then you would get three centroids, simply three points, and three points in 3D space define a two-dimensional plane. Again the rule gives K-1 = 3-1 = 2 dimensions. I suggest you search the web for resources that will help explain and expand on the simple introduction I have given; for example http://www.music.mcgill.ca/~ich/classes/mumt611_07/classifiers/lda_theory.pdf
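The centroid arithmetic above can be checked with a short sketch (pure Python, illustration only): compute both centroids, take the unit vector of the line joining them as the 1-D direction, and project every point onto it.

```python
# Centroids for the two groups in the worked example, and the scalar
# coordinate of each point along the line joining the centroids.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

group_a = [(1, 2), (2, 1)]
group_b = [(9, 10), (13, 13)]

ca = centroid(group_a)   # (1.5, 1.5)
cb = centroid(group_b)   # (11.0, 11.5)

# Direction of the line joining the centroids (the 1-D subspace).
dx, dy = cb[0] - ca[0], cb[1] - ca[1]
norm = (dx * dx + dy * dy) ** 0.5
ux, uy = dx / norm, dy / norm

# Scalar coordinate of each point along that direction: the two groups
# end up well separated on this single axis.
for p in group_a + group_b:
    t = p[0] * ux + p[1] * uy
    print(p, round(t, 2))
```

The printed one-dimensional coordinates cluster tightly within each group and far apart between groups, which is the whole point of the projection.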
12,574
Reasons for data to be normally distributed
Many limiting distributions of discrete RVs (Poisson, binomial, etc.) are approximately normal. Think of plinko. In almost all instances where approximate normality holds, normality kicks in only for large samples. Most real-world data are NOT normally distributed. A paper by Micceri (1989) called "The unicorn, the normal curve, and other improbable creatures" examined 440 large-scale achievement and psychometric measures. He found a lot of variability in distributions w.r.t. their moments and not much evidence for (even approximate) normality. In a 1977 paper called "Do Robust Estimators Work with Real Data," Stephen Stigler used 24 data sets collected from famous 18th-century attempts to measure the distance from the earth to the sun and 19th-century attempts to measure the speed of light. He reported sample skewness and kurtosis in Table 3. The data are heavy-tailed. In statistics, we often assume normality because it makes maximum likelihood (or some other method) convenient. What the two papers cited above show, however, is that the assumption is often tenuous. This is why robustness studies are useful.
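A minimal illustration of that limiting behaviour (standard library only; a plinko board with 100 rows is Binomial(100, 1/2)): compare an exact binomial tail probability with its normal approximation.

```python
import math

n, p = 100, 0.5  # a 100-row plinko board

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

mu = n * p
sigma = math.sqrt(n * p * (1 - p))
exact = binom_cdf(55, n, p)
approx = norm_cdf((55.5 - mu) / sigma)  # with continuity correction
print(round(exact, 4), round(approx, 4))
```

At n = 100 the two probabilities agree to about three decimal places; for small n the agreement is much worse, which is the "only for large samples" caveat in action.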
12,575
Reasons for data to be normally distributed
There is also an information theoretic justification for use of the normal distribution. Given mean and variance, the normal distribution has maximum entropy among all real-valued probability distributions. There are plenty of sources discussing this property. A brief one can be found here. A more general discussion of the motivation for using Gaussian distribution involving most of the arguments mentioned so far can be found in this article from Signal Processing magazine.
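To make the claim concrete, here is a standard one-line derivation via the nonnegativity of KL divergence (sketched under the usual regularity assumptions): among all densities $f$ with mean $\mu$ and variance $\sigma^2$, the normal density $\varphi$ maximizes differential entropy $h$.

```latex
0 \;\le\; D_{\mathrm{KL}}(f \,\|\, \varphi)
  \;=\; -h(f) - \int f(x)\,\ln\varphi(x)\,dx
  \;=\; -h(f) + \tfrac{1}{2}\ln(2\pi\sigma^2)
        + \frac{\mathbb{E}_f\!\left[(X-\mu)^2\right]}{2\sigma^2}
  \;=\; -h(f) + \tfrac{1}{2}\ln(2\pi e \sigma^2)
```

so $h(f) \le \tfrac{1}{2}\ln(2\pi e\sigma^2) = h(\varphi)$, with equality iff $f = \varphi$.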
12,576
Reasons for data to be normally distributed
In physics it is the CLT which is usually cited as the reason for having normally distributed errors in many measurements. The two most common error distributions in experimental physics are the normal and the Poisson. The latter is usually encountered in count measurements, such as radioactive decay. Another interesting feature of these two distributions is that they are closed under addition: a sum of independent Gaussian random variables is Gaussian, and a sum of independent Poisson random variables is Poisson. There are several books on statistics in experimental sciences, such as this one: Gerhard Bohm, Günter Zech, Introduction to Statistics and Data Analysis for Physicists, ISBN 978-3-935702-41-6.
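The closure property is easy to check by simulation. A toy sketch in Python (standard library only; the Poisson sampler is Knuth's classic algorithm): the sum of two independent Poisson(1.5) draws should behave like Poisson(3), so its sample mean and variance should both be near 3, and the sum of two standard normals should have standard deviation near sqrt(2).

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's algorithm: multiply uniforms until the product drops below e^{-lam}
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod <= limit:
            return k
        k += 1

n = 20000
# sum of two independent Poisson(1.5) draws behaves like Poisson(3)
sums = [poisson(1.5) + poisson(1.5) for _ in range(n)]
mean = sum(sums) / n
var = sum((s - mean) ** 2 for s in sums) / n  # Poisson: mean == variance

# sum of two independent N(0,1) draws is N(0, 2)
gsums = [random.gauss(0, 1) + random.gauss(0, 1) for _ in range(n)]
gsd = (sum(g * g for g in gsums) / n) ** 0.5
```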
12,577
Reasons for data to be normally distributed
The CLT is extremely useful when making inferences about things like the population mean, because we get there by computing some sort of linear combination of a bunch of individual measurements. However, when we try to make inferences about individual observations, especially future ones (e.g., prediction intervals), deviations from normality are much more important if we are interested in the tails of the distribution. For example, if we have 50 observations, we're making a very big extrapolation (and leap of faith) when we say something about the probability of a future observation being at least 3 standard deviations from the mean.
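As a quick illustration (Python standard library; the Laplace comparison is my own choice of heavy-tailed example, not from the answer): for a standard normal, the two-sided 3-sigma tail is about 0.27%, while a Laplace distribution standardized to unit variance puts roughly five times as much mass out there; with only 50 observations you cannot tell these regimes apart.

```python
import math

# two-sided tail P(|Z| > 3) for a standard normal, via the complementary error function
p_normal = math.erfc(3 / math.sqrt(2))

# Laplace with unit variance has scale b = 1/sqrt(2), so P(|X| > 3) = exp(-3/b)
p_laplace = math.exp(-3 * math.sqrt(2))

# with 50 observations you expect only ~0.13 normal points beyond 3 sd,
# i.e. you have almost certainly never observed that region
expected_extreme = 50 * p_normal
```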
12,578
Why do we need autoencoders?
Autoencoders have an input layer, a hidden layer, and an output layer. The network is trained to make the output as close to the input as possible, so it's the hidden layer we are interested in. The hidden layer forms a kind of encoding of the input. "The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for a set of data." If the input is a 100-dimensional vector and you have 60 neurons in the hidden layer, then the autoencoder will reproduce the input as a 100-dimensional vector in the output layer, in the process giving you a 60-dimensional vector that encodes your input. So one purpose of autoencoders, among many others, is dimensionality reduction.
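A minimal sketch of this idea in plain Python (a toy linear autoencoder with a one-unit bottleneck on made-up 2-D data; real autoencoders use nonlinearities and a proper framework, but the compression principle is the same): 2-D inputs lying on a line can be squeezed through a single hidden number and still be reconstructed.

```python
# toy data: 2-D points that really live on a 1-D line (x2 = 2 * x1)
data = [(t / 10.0, 2.0 * t / 10.0) for t in range(-10, 11)]

# one hidden unit: encoder weights w, decoder weights v (no biases)
w = [0.3, 0.1]
v = [0.2, 0.4]
lr = 0.02

for epoch in range(3000):
    for x1, x2 in data:
        h = w[0] * x1 + w[1] * x2        # encode: 2 numbers -> 1
        r1, r2 = v[0] * h, v[1] * h      # decode: 1 number -> 2
        e1, e2 = r1 - x1, r2 - x2        # reconstruction error
        gh = e1 * v[0] + e2 * v[1]       # backprop the error through the decoder
        v[0] -= lr * e1 * h
        v[1] -= lr * e2 * h
        w[0] -= lr * gh * x1
        w[1] -= lr * gh * x2

# mean squared reconstruction error after training: near zero,
# because the data really are 1-dimensional
mse = sum((v[0] * (w[0] * a + w[1] * b) - a) ** 2 +
          (v[1] * (w[0] * a + w[1] * b) - b) ** 2 for a, b in data) / len(data)
```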
12,579
Why do we need autoencoders?
It can also model your population: when you input a new vector, you can check how different the output is from the input. If they're "quite" the same, you can assume the input matches the population. If they're "quite" different, then the input probably doesn't belong to the population you modeled. I see it as a kind of "regression by neural networks", where you try to find a function describing your data: its output is the same as the input.
12,580
Why do we need autoencoders?
Maybe these pictures give you some intuition. As the commenter above said, autoencoders try to extract some high-level features from the training examples. In the second picture you can see how a pre-training algorithm is used to train each hidden layer of the deep NN separately. Pictures are taken from the Russian Wikipedia.
12,581
Why do we need autoencoders?
In terms of ML, features are gold. Learnt features that use as little data as possible but contain as much information as possible enable us to complete many tasks. Autoencoding is useful in the sense that it allows us to compress the data in a near-optimal way (one that can actually be used to represent the input data, as observed at the decoding layer). Now that we have these features, we are able to complete many different tasks; for example, we can use them as a very good starting point for supervised learning tasks.
12,582
Using R for GLM with Gamma distribution
The usual gamma GLM contains the assumption that the shape parameter is constant, in the same way that the normal linear model assumes constant variance. In GLM parlance the dispersion parameter, $\phi$ in $\text{Var}(Y_i)=\phi\text{V}(\mu_i)$, is normally constant. More generally, you have $a(\phi)$, but that doesn't help.

It might perhaps be possible to use a weighted gamma GLM to incorporate this effect of a specified shape parameter, but I haven't investigated this possibility yet (if it works it is probably the easiest way to do it, but I am not at all sure that it will).

If you had a double GLM you could estimate that parameter as a function of covariates... and if the double GLM software lets you specify an offset in the variance term you could do this. It looks like the function dglm in the package dglm lets you specify an offset. I don't know if it will let you specify a variance model like (say) ~ offset(<something>) + 0 though.

Another alternative would be to maximize the likelihood directly.

> y <- rgamma(100, 10, .1)
> summary(glm(y ~ 1, family = Gamma))

Call:
glm(formula = y ~ 1, family = Gamma)

Deviance Residuals:
     Min        1Q    Median        3Q       Max
-0.93768  -0.25371  -0.05188   0.16078   0.81347

Coefficients:
             Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.0103660  0.0003486   29.74   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

(Dispersion parameter for Gamma family taken to be 0.1130783)

    Null deviance: 11.223  on 99  degrees of freedom
Residual deviance: 11.223  on 99  degrees of freedom
AIC: 973.56

Number of Fisher Scoring iterations: 5

The line where it says

(Dispersion parameter for Gamma family taken to be 0.1130783)

is the one you want. That $\hat\phi$ is related to the shape parameter of the Gamma: the dispersion is the reciprocal of the shape, so $\hat\phi \approx 0.113$ corresponds to an estimated shape of about 8.8 (the true shape here is 10).
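To see numerically that the dispersion is tied to the shape, here is a small sketch in Python rather than R (same Gamma(shape = 10, rate = 0.1) setup as the R example above): for a gamma, Var(Y) = phi * mu^2 with phi = 1/shape, so the squared coefficient of variation estimates the dispersion.

```python
import random

random.seed(3)
# Gamma(shape = 10, rate = 0.1); gammavariate takes (alpha, beta) where beta is the *scale* = 1/rate
ys = [random.gammavariate(10, 10) for _ in range(20000)]

n = len(ys)
m = sum(ys) / n
s2 = sum((y - m) ** 2 for y in ys) / (n - 1)

# Gamma GLM variance function: Var(Y) = phi * mu^2,
# so phi_hat = s2 / m^2, which should land near 1/shape = 0.1
phi_hat = s2 / m ** 2
```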
12,583
Using R for GLM with Gamma distribution
I used the gamma.shape function of the MASS package, as described by Balajari (2013), in order to estimate the shape parameter afterwards and then adjust the coefficient estimates and predictions in the GLM. I advise you to read the lecture as it is, in my opinion, very clear and interesting concerning the use of the gamma distribution in GLMs.

glmGamma <- glm(response ~ x1, family = Gamma(link = "identity"))
library(MASS)
myshape <- gamma.shape(glmGamma)
gampred <- predict(glmGamma, type = "response", se = TRUE,
                   dispersion = 1/myshape$alpha)
summary(glmGamma, dispersion = 1/myshape$alpha)
12,584
I'm getting "jumpy" loadings in rollapply PCA in R. Can I fix it?
Whenever the plot jumps too much, reverse the orientation. One effective criterion is this: compute the total amount of jumps on all the components. Compute the total amount of jumps if the next eigenvector is negated. If the latter is less, negate the next eigenvector.

Here's an implementation. (I am not familiar with zoo, which might allow a more elegant solution.)

require(zoo)
amend <- function(result) {
  result.m <- as.matrix(result)
  n <- dim(result.m)[1]
  delta <- apply(abs(result.m[-1,] - result.m[-n,]), 1, sum)
  delta.1 <- apply(abs(result.m[-1,] + result.m[-n,]), 1, sum)
  signs <- c(1, cumprod(rep(-1, n-1) ^ (delta.1 <= delta)))
  zoo(result * signs)
}

As an example, let's run a random walk in an orthogonal group and jitter it a little for interest:

random.rotation <- function(eps) {
  theta <- rnorm(3, sd = eps)
  matrix(c(1, theta[1:2], -theta[1], 1, theta[3], -theta[2:3], 1), 3)
}
set.seed(17)
n.times <- 1000
x <- matrix(1., nrow = n.times, ncol = 3)
for (i in 2:n.times) {
  x[i,] <- random.rotation(.05) %*% x[i-1,]
}

Here's the rolling PCA:

window <- 31
data <- zoo(x)
result <- rollapply(data, window,
                    function(x) summary(princomp(x))$loadings[, 1],
                    by.column = FALSE, align = "right")
plot(result)

Now the fixed version:

plot(amend(result))
12,585
I'm getting "jumpy" loadings in rollapply PCA in R. Can I fix it?
@whuber is right that there isn't an orientation that's intrinsic to the data, but you could still enforce that your eigenvectors have positive correlation with some reference vector. For instance, you could make the loadings for USD positive on all your eigenvectors (i.e., if USD's loading is negative, flip the signs of the entire vector). The overall direction of your vector is still arbitrary (since you could have used EUR or ZAR as your reference instead), but the first few axes of your PCA probably won't jump around nearly as much--especially because your rolling windows are so long.
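A sketch of that convention in Python (the loadings matrix and the choice of column 0 as the USD reference are made up for illustration): flip any eigenvector whose loading on the reference coordinate is negative.

```python
# hypothetical loadings: rows = rolling windows, columns = currencies,
# with column 0 playing the role of USD (the reference)
loadings = [
    [ 0.6,  0.5, -0.6],
    [-0.6, -0.5,  0.6],   # same axis as the row above, but with flipped sign
    [ 0.7,  0.1, -0.7],
]

REF = 0  # index of the reference coordinate
# negate any row whose reference loading is negative
aligned = [row if row[REF] >= 0 else [-x for x in row] for row in loadings]
```

After alignment the second window's vector agrees with its neighbors, so the plotted loadings no longer jump.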
12,586
I'm getting "jumpy" loadings in rollapply PCA in R. Can I fix it?
What I did was to compute the L1 distance between successive eigenvectors. After normalizing this matrix I choose a z score threshold e.g. 1, so that if in any new rolling the change is above this threshold I flip the eigenvector, factors and loadings in order to have consistency in the rolling window. Personally I don't like to force given signs in some correlations since they can be very volatile depending of the macro drivers.
12,587
What is F1 Optimal Threshold? How to calculate it?
I actually wrote my first paper in machine learning on this topic. In it, we identified that when your classifier outputs calibrated probabilities (as they should for logistic regression) the optimal threshold is approximately 1/2 the F1 score that it achieves. This gives you some intuition. The optimal threshold will never be more than .5. If your F1 is .5 and the threshold is .5, then you should expect to improve F1 by lowering the threshold. On the other hand, if the F1 were .5 and the threshold were .1, you should probably increase the threshold to improve F1. The paper with all details and a discussion of why F1 may or may not be a good measure to optimize (in both single and multilabel case) can be found here: https://arxiv.org/abs/1402.1892 Sorry that it took 9 months for this post to come to my attention. Hope that you still find the information useful!
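A quick simulation (Python, synthetic data; not the paper's code) shows the rule of thumb in action: with calibrated scores, the F1-maximizing threshold lands near half the F1 it achieves, and below 0.5.

```python
import random

random.seed(7)
n = 5000
# calibrated scores: each score IS the true probability of class 1
probs = [random.random() for _ in range(n)]
labels = [1 if random.random() < p else 0 for p in probs]

def f1_at(thr):
    tp = sum(1 for p, y in zip(probs, labels) if p >= thr and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= thr and y == 0)
    fn = sum(1 for p, y in zip(probs, labels) if p < thr and y == 1)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

# sweep a grid of thresholds and pick the F1-optimal one
grid = [i / 100 for i in range(1, 100)]
best_thr = max(grid, key=f1_at)
best_f1 = f1_at(best_thr)
# expect best_thr to sit below 0.5, roughly at best_f1 / 2
```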
12,588
When should I not use an ensemble classifier?
The model that is closest to the true data generating process will always be best and will beat most ensemble methods. So if the data come from a linear process, lm() will be much superior to random forests, e.g.:

set.seed(1234)
p = 10
N = 1000

# covariates
x = matrix(rnorm(N*p), ncol = p)

# coefficients
b = round(rnorm(p), 2)
y = x %*% b + rnorm(N)
train = sample(N, N/2)

data = cbind.data.frame(y, x)
colnames(data) = c("y", paste0("x", 1:p))

# linear model
fit1 = lm(y ~ ., data = data[train,])
summary(fit1)
yPred1 = predict(fit1, data[-train,])
round(mean(abs(yPred1 - data[-train,"y"])), 2)  # 0.79

# random forest
library(randomForest)
fit2 = randomForest(y ~ ., data = data[train,], ntree = 1000)
yPred2 = predict(fit2, data[-train,])
round(mean(abs(yPred2 - data[-train,"y"])), 2)  # 1.33
12,589
When should I not use an ensemble classifier?
I do not recommend using an ensemble classifier when your model needs to be interpretable and explainable. Sometimes you need predictions and explanations of the predictions. When you need to convince people that the predictions are worth believing, a highly accurate model can be very persuasive, but I have struggled to convince people to act on predictions when the methods are too complex for their comfort level. In my experience, most people are comfortable with linear additive models, models they could score by hand, and if you try to explain adaptive boosting, hyper-planes and 5th level interaction effects they will respond as if you are pitching them black magic. On the other hand, people can be comfortable with the complexity of the model, but still want to internalize some insight. Scientists, for example, might not consider a black-box model to be an advance in human knowledge, even if the model is highly accurate. Variable importance analysis can help with insights, but if the ensemble is more accurate than a linear additive model, the ensemble is probably exploiting some non-linear and interaction effects that the variable importance analysis can't completely account for.
12,590
When should I not use an ensemble classifier?
I would like to add to branco's answer. Ensembles can be highly competitive and provide very good results. In academia, for example, this is what counts. In industry, ensembles may be too difficult to implement/maintain/modify/port.

Geoff Hinton's work on "Dark Knowledge" is exactly about this: how to transfer the "knowledge" of a large ensemble into a single model that is easy to move around. He states that ensembles are bad at test time: they are highly redundant and the computation time can be of concern.

His team got some interesting results; I suggest checking out his publications, or at least the slides. If my memory is correct, this was one of the hot topics of 2013 or 2014. The slides about Dark Knowledge can be found here: http://www.ttic.edu/dl/dark14.pdf
12,591
Splines vs Gaussian Process Regression
I agree with @j__'s answer. However, I would like to highlight the fact that splines are just a special case of Gaussian Process regression/kriging. If you take a certain type of kernel in Gaussian process regression, you exactly obtain the spline fitting model. This fact is proven in this paper by Kimeldorf and Wahba (1970). It is rather technical, as it uses the link between the kernels used in kriging and Reproducing Kernel Hilbert Spaces (RKHS).
12,592
Splines vs Gaussian Process Regression
It is a very interesting question. The equivalence between Gaussian processes and smoothing splines was shown by Kimeldorf and Wahba (1970). The generalization of this correspondence to the case of constrained interpolation was developed in Bay et al. (2016): Bay et al. 2016. Generalization of the Kimeldorf-Wahba Correspondence for constrained interpolation. Electronic Journal of Statistics. In that paper, the advantages of the Bayesian approach are discussed.
12,593
Splines vs Gaussian Process Regression
I agree with @xeon's comment. Additionally, GPR puts a probability distribution over an infinite number of possible functions; the mean function (which is spline-like) is only the MAP estimate, and you also have a variance about it. This opens up great opportunities such as experimental design (choosing the input data that is maximally informative). Also, if you want to perform integration (quadrature) of the model, a GP will give a Gaussian result, which lets you attach a confidence to your answer. At least with standard spline models this is not possible. In practice GPR gives a more informative result (in my experience), but spline models seem to be quicker.
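To make the mean-plus-variance point concrete, here is a minimal numpy sketch of GP regression with an RBF kernel (the length-scale, signal sd and noise level are arbitrary illustrative choices, not tuned):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # Squared-exponential (RBF) kernel between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x, y, xs, noise=1e-2):
    # Standard GP regression equations: posterior mean and pointwise sd at xs.
    K = rbf(x, x) + noise**2 * np.eye(len(x))
    Ks = rbf(x, xs)
    Kss = rbf(xs, xs)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    cov = Kss - v.T @ v
    return mean, np.sqrt(np.clip(np.diag(cov), 0.0, None))
```

The predictive sd is small near the training inputs and reverts to the prior sd far from them, which is exactly the extra information a spline fit alone does not give you.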
12,594
Confusion with false discovery rate and multiple testing (on Colquhoun 2014)
It so happens that by coincidence I read this same paper just a couple of weeks ago. Colquhoun mentions multiple comparisons (including Benjamini-Hochberg) in section 4 when posing the problem, but I find that he does not make the issue clear enough -- so I am not surprised to see your confusion. The important point to realize is that Colquhoun is talking about the situation without any multiple comparison adjustments. One can understand Colquhoun's paper as adopting a reader's perspective: he essentially asks what false discovery rate (FDR) he can expect when he reads scientific literature, and this means what the expected FDR is when no multiple comparison adjustments were done. Multiple comparisons can be taken into account when running multiple statistical tests in one study, e.g. in one paper. But nobody ever adjusts for multiple comparisons across papers. If you actually control FDR, e.g. by following the Benjamini-Hochberg (BH) procedure, then it will be controlled. The problem is that running the BH procedure separately in each study does not guarantee overall FDR control. Can I safely assume that in the long run, if I do such analysis on a regular basis, the FDR is not $30\%$, but below $5\%$, because I used Benjamini-Hochberg? No. If you use the BH procedure in every paper, but independently in each of your papers, then you can essentially interpret your BH-adjusted $p$-values as normal $p$-values, and what Colquhoun says still applies. General remarks The answer to Colquhoun's question about the expected FDR is difficult to give because it depends on various assumptions. If e.g. all the null hypotheses are true, then FDR will be $100\%$ (i.e. all "significant" findings would be statistical flukes). And if all nulls are in reality false, then FDR will be zero. So the FDR depends on the proportion of true nulls, and this is something that has to be externally estimated or guessed in order to estimate the FDR.
Colquhoun gives some arguments in favor of the $30\%$ number, but this estimate is highly sensitive to the assumptions. I think the paper is mostly reasonable, but I dislike that it makes some claims sound way too bold. E.g. the first sentence of the abstract is: If you use $p=0.05$ to suggest that you have made a discovery, you will be wrong at least $30\%$ of the time. This is formulated too strongly and can actually be misleading.
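The dependence on the proportion of true nulls can be made explicit with the standard textbook calculation for tests reaching $p \le \alpha$ (note this is milder than Colquhoun's "p close to 0.047" argument; the point here is only how strongly the answer depends on $\pi_0$, and the $\alpha$ and power values below are illustrative defaults):

```python
def expected_fdr(pi0, alpha=0.05, power=0.8):
    """Expected fraction of 'significant' results that are false positives,
    given the proportion pi0 of tested hypotheses that are truly null."""
    false_pos = pi0 * alpha           # true nulls crossing the threshold
    true_pos = (1 - pi0) * power      # real effects that are detected
    return false_pos / (false_pos + true_pos)
```

With a 50:50 prior the expected FDR is about 6%, but if 90% of tested hypotheses are null it climbs to 36%, which is why the estimate is so sensitive to the assumptions.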
12,595
Confusion with false discovery rate and multiple testing (on Colquhoun 2014)
Benjamini & Hochberg define the false discovery rate in the same way that I do, as the fraction of positive tests that are false positives. So if you use their procedure for multiple comparisons you control the FDR properly. It's worth noting, though, that there are quite a lot of variants on the B-H method. Benjamini's seminars at Berkeley are on YouTube, and well worth watching: Part I: https://www.youtube.com/watch?v=oONHlua2gBY Part II: https://www.youtube.com/watch?v=inUr5I5WKAM

I'm not sure why @amoeba says "This is formulated too strongly and can actually be misleading". I'd be interested to know why he/she thinks that. The most persuasive argument comes from the simulated t-tests (section 6). That mimics what almost everyone does in practice, and it shows that if you observe P close to 0.047 and claim to have made a discovery, you'll be wrong at least 26% of the time. What can go wrong? Of course, I should explain why I describe this as a minimum. It's what you get if you assume that there's a 50% chance of there being a real effect. Of course, if you assume that most of your hypotheses are correct in advance, then you can get a lower FDR than 26%, but can you imagine the hilarity that would greet a claim that you'd made a discovery on the basis of the assumption that you were 90% sure in advance that your conclusion would be true? 26% is the minimum FDR given that it isn't a reasonable basis for inference to assume any prior probability greater than 0.5. Given that hunches frequently don't stand up when tested, it could well be that there is only a 10% chance of any particular hypothesis being true, and in that case the FDR would be a disastrous 76%.

It's true that all this is contingent on the null hypothesis being that there is zero difference (the so-called point null). Other choices can give different results. But the point null is what almost everyone uses in real life (though they may not be aware of it). Furthermore, the point null seems to me to be an entirely appropriate thing to use. It's sometimes objected that true differences are never exactly zero. I disagree. We want to tell whether or not our results are distinguishable from the case where both groups are given identical treatments, so the true difference is exactly zero. If we decide that our data are not compatible with that view, we go on to estimate the effect size, and at that point we make the separate judgment about whether the effect, though real, is big enough to be important in practice. There is some vigorous discussion of these topics on Deborah Mayo's blog.

@amoeba Thanks for your response. What the discussion on Mayo's blog shows is mostly that Mayo doesn't agree with me, though she hasn't made clear why (to me at least). Stephen Senn points out correctly that you can get a different answer if you postulate a different prior distribution. That seems to me to be interesting only to subjective Bayesians. It's certainly irrelevant to everyday practice, which always assumes a point null. And as I explained, that seems to me to be a perfectly sensible thing to do. Many professional statisticians have come to conclusions much the same as mine. Try Sellke & Berger, and Valen Johnson (refs in my paper). There is nothing very controversial (or very original) about my claims. Your other point, about assuming a 0.5 prior, doesn't seem to me to be an assumption at all. As I explained above, anything above 0.5 would be unacceptable in practice. And anything below 0.5 makes the false discovery rate even higher (e.g. 76% if the prior is 0.1). Therefore it's perfectly reasonable to say that 26% is the minimum false discovery rate that you can expect if you observe P = 0.047 in a single experiment.

I've been thinking more about this question. My definition of FDR is the same as Benjamini's: the fraction of positive tests that are false. But it is applied to a quite different problem, the interpretation of a single test. With hindsight it might have been better if I'd picked a different term. In the case of a single test, B&H leaves the P value unchanged, so it says nothing about the false discovery rate in the sense that I use the term. Yes, of course you are right. Benjamini & Hochberg, and other people who work on multiple comparisons, aim only to correct the type 1 error rate. So they end up with a "correct" P value. It's subject to the same problems as any other P value. In my latest paper, I changed the name from FDR to false positive risk (FPR) in an attempt to avoid this misunderstanding. We've also written a web app to do some of the calculations (after noticing that few people download the R scripts that we provide). It's at https://davidcolquhoun.shinyapps.io/3-calcs-final/ All opinions about it are welcome (please read the Notes tab first). PS The web calculator now has a new (permanent, I hope) home at http://fpr-calc.ucl.ac.uk/ Shiny.io is easy to use, but very expensive if anyone actually uses the app :-(

I've returned to this discussion, now that my second paper on the topic is about to appear in Royal Society Open Science. It is at https://www.biorxiv.org/content/early/2017/08/07/144337 I realise that the biggest mistake I made in the first paper was to use the term "false discovery rate (FDR)". In the new paper I make it more explicit that I am saying nothing about the multiple comparisons problem. I deal only with the question of how to interpret the P value that's observed in a single unbiased test. In the latest version, I refer to the probability that the result is a false positive as the false positive risk (FPR), rather than FDR, in the hope of reducing confusion. I also advocate the reverse Bayesian approach: specify the prior probability that would be needed to ensure an FPR of, say, 5%. If you observe P = 0.05, that comes to 0.87. In other words, you'd have to be almost (87%) sure that there was a real effect before doing the experiment to achieve an FPR of 5% (which is what most people still believe, mistakenly, that p = 0.05 means).
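A hedged simulation in the spirit of the section-6 t-tests (my own sample size and effect size, chosen to give power near 0.78; not necessarily the paper's exact settings): simulate a 50:50 mix of null and real-effect experiments, and among those with p just below 0.05 count how many came from true nulls.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sim, n = 100_000, 16          # experiments per condition, per-group sample size

def pvals(delta):
    # Two-sample t-tests, one row per simulated experiment.
    a = rng.normal(0.0, 1.0, (n_sim, n))
    b = rng.normal(delta, 1.0, (n_sim, n))
    return stats.ttest_ind(a, b, axis=1).pvalue

p_null, p_eff = pvals(0.0), pvals(1.0)   # half true nulls, half real effects

def in_window(p):
    # P values "close to 0.047", i.e. just under the 0.05 threshold.
    return ((p > 0.045) & (p < 0.05)).sum()

fdr = in_window(p_null) / (in_window(p_null) + in_window(p_eff))
```

Under these assumptions the fraction of such "discoveries" that are false comes out in the region of a quarter, consistent with the at-least-26% claim.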
12,596
Confusion with false discovery rate and multiple testing (on Colquhoun 2014)
A big part of the confusion is that, despite his comments here to the contrary, Colquhoun does NOT define FDR the same way that Benjamini-Hochberg do. It is unfortunate that Colquhoun has attempted to coin a term without first checking to make sure that the term did not already have a well-established, different definition. To make matters worse, Colquhoun defined FDR in precisely the way that the conventional FDR has often been misinterpreted. In his answer here, Colquhoun defines FDR as "the fraction of positive tests that are false." That is similar to what Benjamini-Hochberg define as the FDP (false discovery proportion, not to be confused with the false discovery rate). Benjamini-Hochberg define FDR as the EXPECTED VALUE of the FDP, with a special stipulation that the FDP is considered as 0 when there are no positive tests (a stipulation that happens to make the FDR equal to the FWER when all nulls are true, and avoids undefinable values due to division by zero). To avoid confusion, I suggest not worrying about the details in the Colquhoun paper, and instead just taking to heart the big-picture point (which countless others have also made) that the alpha level does not directly correspond to the proportion of significant tests that are Type I errors (whether we're talking about the significant tests in a single study or in several studies combined). That proportion depends not only on alpha, but also on power and on the proportion of tested null hypotheses that are true.
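For concreteness, here is a minimal sketch of the Benjamini-Hochberg step-up procedure being discussed (returning a rejection mask rather than adjusted p-values):

```python
import numpy as np

def bh_reject(pvalues, q=0.05):
    """Benjamini-Hochberg step-up: with m sorted p-values, reject the k
    smallest, where k is the largest i such that p_(i) <= i * q / m."""
    p = np.asarray(pvalues, dtype=float)
    m = len(p)
    order = np.argsort(p)
    below = p[order] <= (np.arange(1, m + 1) / m) * q
    k = np.max(np.nonzero(below)[0]) + 1 if below.any() else 0
    reject = np.zeros(m, dtype=bool)
    reject[order[:k]] = True
    return reject
```

Note the step-up character: a p-value above its own threshold can still be rejected if a larger one crosses its threshold, which is part of why the procedure controls the *expected* FDP rather than any per-test error rate.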
12,597
When are Bayesian methods preferable to Frequentist?
Here are some links which may interest you comparing frequentist and Bayesian methods: http://www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf Archived here: https://web.archive.org/web/20140308021414/https://stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf http://www.bayesian-inference.com/advantagesbayesian http://www.researchgate.net/post/Bayesian_vs_frequentist_statistics2 In a nutshell, the way I have understood it, given a specific set of data, the frequentist believes that there is a true, underlying distribution from which said data was generated. The inability to get the exact parameters is a function of finite sample size. The Bayesian, on the other hand, thinks that we start with some assumption about the parameters (even if unknowingly) and use the data to refine our opinion about those parameters. Both are trying to develop a model which can explain the observations and make predictions; the difference is in the assumptions (both actual and philosophical). As a pithy, non-rigorous statement, one can say the frequentist believes that the parameters are fixed and the data is random; the Bayesian believes the data is fixed and the parameters are random. Which is better or preferable? To answer that you have to dig in and realize just what assumptions each entails (e.g. are parameters asymptotically normal?).
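The "use the data to refine our opinion" step can be made concrete with the simplest conjugate example (the prior and data values below are made up for illustration): a Beta prior on a coin's bias, updated by binomial data.

```python
from math import isclose

def beta_binomial_update(a, b, heads, tails):
    """Beta(a, b) prior + binomial data -> Beta(a + heads, b + tails) posterior."""
    return a + heads, b + tails

# Flat Beta(1, 1) prior, then observe 7 heads in 10 tosses.
a, b = beta_binomial_update(1, 1, heads=7, tails=3)
posterior_mean = a / (a + b)   # pulled from the prior mean 0.5 toward 7/10
```

The frequentist reports a point estimate (7/10) with a sampling-based interval; the Bayesian reports the whole Beta(8, 4) distribution over the parameter, which is the "parameters are random" half of the pithy statement above.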
12,598
When are Bayesian methods preferable to Frequentist?
One of many interesting aspects of the contrasts between the two approaches is that it is very difficult to give a formal interpretation to many of the quantities we obtain in the frequentist domain. One example is the ever-increasing importance of penalization methods (shrinkage). When one obtains penalized maximum likelihood estimates, the biased point estimates and "confidence intervals" are very difficult to interpret. On the other hand, the Bayesian posterior distribution for parameters that are penalized towards zero, using a prior distribution concentrated around zero, has a completely standard interpretation.
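The shrinkage point can be made concrete: the ridge (penalized ML) estimate coincides with the posterior mode under an independent zero-mean Gaussian prior on the coefficients, with penalty λ = σ²/τ². A small numpy sketch (the simulated data, true coefficients and λ below are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.5, size=50)

def ridge(X, y, lam):
    # Penalized-ML estimate; equivalently the MAP estimate under a
    # N(0, tau^2) prior on each coefficient with lam = sigma^2 / tau^2.
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

beta_ols = ridge(X, y, 0.0)    # unpenalized maximum likelihood
beta_map = ridge(X, y, 10.0)   # shrunk toward the prior mean of zero
```

The frequentist struggles to attach a formal interval to the biased `beta_map`; the Bayesian simply reports its posterior distribution, whose mode this is.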
12,599
When are Bayesian methods preferable to Frequentist?
I'm stealing this wholesale from the Stan users group. Michael Betancourt provided this really good discussion of identifiability in Bayesian inference, which I believe bears on your request for a contrast of the two statistical schools.
The first difference with a Bayesian analysis will be the presence of priors which, even when weak, will constrain the posterior mass for those 4 parameters into a finite neighborhood (otherwise you wouldn't have had a valid prior in the first place). Despite this, you can still have non-identifiability in the sense that the posterior will not converge to a point mass in the limit of infinite data. In a very real sense, however, that doesn't matter because (a) the infinite data limit isn't real anyway and (b) Bayesian inference doesn't report point estimates but rather distributions.
In practice such non-identifiability will result in large correlations between the parameters (perhaps even non-convexity), but a proper Bayesian analysis will identify those correlations. Even if you report single-parameter marginals, you'll get distributions that span the marginal variance rather than the conditional variance at any point (which is what a standard frequentist result would quote, and why identifiability is really important there), and it's really the marginal variance that best encodes the uncertainty regarding a parameter.
Simple example: consider a model with parameters $\mu_1$ and $\mu_2$ with likelihood $\mathcal{N}(x | \mu_1 + \mu_2, \sigma)$. No matter how much data you collect, the likelihood will not converge to a point but rather to the line $\mu_1 + \mu_2 = 0$. The conditional variance of $\mu_1$ and $\mu_2$ at any point on that line will be really small, despite the fact that the parameters can't really be identified. Bayesian priors constrain the posterior distribution from that line to a long, cigar-shaped distribution. Not easy to sample from, but at least compact.
A good Bayesian analysis will explore the entirety of that cigar, either identifying the correlation between $\mu_1$ and $\mu_2$ or returning the marginal variances that correspond to the projection of the long cigar onto the $\mu_1$ or $\mu_2$ axes, which give a much more faithful summary of the uncertainty in the parameters than the conditional variances.
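Because everything in this example is Gaussian, the cigar can be computed in closed form. The sketch below assumes independent $\mathcal{N}(0, \tau^2)$ priors on $\mu_1$ and $\mu_2$ (the values of $n$, $\sigma$, and $\tau$ are illustrative choices); note the posterior covariance depends only on those quantities, not on the observed values themselves.

```python
import numpy as np

# y_i ~ N(mu1 + mu2, sigma^2), priors mu1, mu2 ~ N(0, tau^2) independently.
n, sigma, tau = 1000, 1.0, 10.0

# Posterior precision = prior precision + likelihood precision.
# The likelihood only constrains the sum mu1 + mu2, hence the rank-1 term.
lik_prec = (n / sigma**2) * np.array([[1.0, 1.0], [1.0, 1.0]])
prior_prec = (1.0 / tau**2) * np.eye(2)
post_prec = prior_prec + lik_prec
post_cov = np.linalg.inv(post_prec)

marginal_var = post_cov[0, 0]            # spans the whole "cigar"
conditional_var = 1.0 / post_prec[0, 0]  # variance of mu1 at a fixed mu2
corr = post_cov[0, 1] / np.sqrt(post_cov[0, 0] * post_cov[1, 1])

print(f"marginal var of mu1:    {marginal_var:.4f}")     # large, ~tau^2 / 2
print(f"conditional var of mu1: {conditional_var:.6f}")  # tiny, ~sigma^2 / n
print(f"posterior correlation:  {corr:.5f}")             # close to -1
```

The huge gap between the marginal and conditional variances is exactly the gap between the honest cigar-wide uncertainty and the misleadingly small point-conditional one.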
12,600
When are Bayesian methods preferable to Frequentist?
The key difference between Bayesian and frequentist approaches lies in the definition of probability: if it is necessary to treat probabilities strictly as long-run frequencies, then frequentist approaches are reasonable; if it isn't, then you should use a Bayesian approach. If either interpretation is acceptable, then Bayesian and frequentist approaches are both likely to be reasonable. Another way of putting it: if you want to know what inferences you can draw from a particular experiment, you probably want to be Bayesian; if you want to draw conclusions about some population of experiments (e.g. quality control), then frequentist methods are well suited. Essentially, the important thing is to know what question you want answered, and to choose the form of analysis that answers that question most directly.