**10,301: From a statistical perspective, can one infer causality using propensity scores with an observational study?**

Only a prospective randomized trial can determine causality. In observational studies, there will always be the chance of an unmeasured or unknown covariate which makes ascribing causality impossible.

However, observational trials can provide evidence of a strong association between x and y, and are therefore useful fo...
**10,302: From a statistical perspective, can one infer causality using propensity scores with an observational study?**

The question seems to involve two things that really ought to be considered separately. First is whether one can infer causality from an observational study, and on that you might contrast the views of, say, Pearl (2009), who argues yes so long as you can model the process properly, versus the view of @propofol, who will ...
**10,303: From a statistical perspective, can one infer causality using propensity scores with an observational study?**

Conventional wisdom states that only randomized controlled trials ("real" experiments) can identify causality.

However, it is not as simple as that.

One reason that randomization may not be enough is that in "small" samples the law of large numbers is not "strong enough" to ensure that each and all differences are balan...
**10,304: Markov Process that depends on present state and past state**

Technically, both the processes you describe are Markov chains. The difference is that the first one is a first-order Markov chain whereas the second one is a second-order Markov chain. And yes, you can transform a second-order Markov chain to a first-order Markov chain by a suitable change in state space definition. L...
**10,305: Markov Process that depends on present state and past state**

The definition of a Markov process says the next step depends on the current state only and no past states.

That is the Markov property, and it defines a first-order MC, which is very tractable mathematically and quite easy to present/explain. Of course you could have an $n^{th}$-order MC (where the next state depends on t...
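The order-reduction idea in the answers above can be sketched in Python. The two-state chain and its transition probabilities here are hypothetical, but the pair-state construction is the standard one: redefine the state space as pairs (previous state, current state), and the second-order chain becomes an ordinary first-order chain.

```python
# Convert a second-order Markov chain into a first-order chain by
# redefining the state space as pairs (previous state, current state).

# Hypothetical second-order transition probabilities:
# second_order[(prev, cur)][next] = P(next | prev, cur)
second_order = {
    ("A", "A"): {"A": 0.1, "B": 0.9},
    ("A", "B"): {"A": 0.5, "B": 0.5},
    ("B", "A"): {"A": 0.7, "B": 0.3},
    ("B", "B"): {"A": 0.2, "B": 0.8},
}

# First-order chain on pair-states: (a, b) -> (b, c) with probability
# second_order[(a, b)][c]. The next pair depends only on the current pair.
first_order = {
    (a, b): {(b, c): p for c, p in nxt.items()}
    for (a, b), nxt in second_order.items()
}

# Each row of the new transition structure still sums to 1, so the
# pair process is a legitimate (first-order) Markov chain.
for state, row in first_order.items():
    assert abs(sum(row.values()) - 1.0) < 1e-12
```

The same trick generalizes: an $n^{th}$-order chain on $k$ states becomes a first-order chain on $k^n$ tuple-states.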
**10,306: Exact two sample proportions binomial test in R (and some strange p-values)**

If you are looking for an 'exact' test for two binomial proportions, I believe you are looking for Fisher's Exact Test. In R it is applied like so:

```r
> fisher.test(matrix(c(17, 25-17, 8, 20-8), ncol=2))

        Fisher's Exact Test for Count Data

data:  matrix(c(17, 25 - 17, 8, 20 - 8), ncol = 2)
p-value = 0.07671
alternativ...
```
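For readers outside R, the same 2×2 table can be tested with SciPy; this is a sketch using `scipy.stats.fisher_exact`, which returns the sample odds ratio and (by default) a two-sided p-value.

```python
from scipy.stats import fisher_exact

# Same 2x2 table as the R call above:
# first column (17, 8) = wins, second column (8, 12) = losses
table = [[17, 8],
         [8, 12]]

odds_ratio, p_value = fisher_exact(table)  # two-sided by default
# R reports p-value = 0.07671 for this table; SciPy's two-sided
# convention (summing tables at least as extreme) matches it.
print(odds_ratio, p_value)
```
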
**10,307: Exact two sample proportions binomial test in R (and some strange p-values)**

There is a difference between two samples and a sample compared to a known hypothesis. Compare someone flipping a coin 100 times and getting heads 55 times, where the hypothesis is a fair coin, versus two people flipping a coin of unknown fairness, one getting heads 55 times and the other 45 times. In the former case you are...
**10,308: Exact two sample proportions binomial test in R (and some strange p-values)**

The syntax of `binom.test` is your successes within a number of trials compared to a population point estimate. Although you entered it as p=8/20, the calculation is as if that were a God-given, absolute-truth 0.4 with zero variance around it. Or it is as if you were comparing player A's 17 wins out of 25 to player B's...
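The one-sample-versus-two-sample distinction can be illustrated in Python (a sketch assuming SciPy ≥ 1.7 for `binomtest`): the one-sample test treats the reference proportion 8/20 as exact truth, while Fisher's test treats both proportions as estimates with sampling error.

```python
from scipy.stats import binomtest, fisher_exact

# One-sample test: 17 wins in 25 games against a *fixed* p = 8/20 = 0.4.
# The 0.4 is treated as known exactly, ignoring that it came from only 20 games.
one_sample = binomtest(17, n=25, p=8/20)

# Two-sample test: both proportions carry sampling uncertainty.
two_sample_p = fisher_exact([[17, 8], [8, 12]])[1]

# The one-sample p-value is smaller precisely because it ignores the
# uncertainty in the 8/20 reference proportion.
assert one_sample.pvalue < two_sample_p
```
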
**10,309: Exact two sample proportions binomial test in R (and some strange p-values)**

First I would suggest that you want to do a continuity correction, since you are estimating a discrete distribution with a continuous (chi-square) distribution.

Second, it is important to be clear on how the "experiment", if you will, was conducted. Were the number of games that each person played determined in advance (...
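As a sketch of the continuity-correction point, SciPy's `chi2_contingency` applies Yates' correction to 2×2 tables by default; the table below reuses the win/loss counts from the earlier answers.

```python
from scipy.stats import chi2_contingency

table = [[17, 8],
         [8, 12]]  # same 2x2 win/loss table as above

# Yates' continuity correction on (the default for 2x2 tables):
chi2_corr, p_corr, _, _ = chi2_contingency(table, correction=True)

# Without the correction, the continuous chi-square approximation to the
# discrete data is more aggressive and the p-value is smaller:
chi2_raw, p_raw, _, _ = chi2_contingency(table, correction=False)

assert p_corr > p_raw
```
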
**10,310: Is the W statistic output by wilcox.test() in R the same as the U statistic?**

Wilcoxon is generally credited with being the original inventor of the test*, though Mann and Whitney's approach was a great stride forward, and they extended the cases for which the statistic was tabulated. My preference is to refer to the test as the Wilcoxon-Mann-Whitney, to recognize both contributions (Mann-Whitne...
**10,311: Is the W statistic output by wilcox.test() in R the same as the U statistic?**

Both the Wilcoxon rank sum test and the Mann-Whitney test are non-parametric equivalents of the independent t-test. In some cases the version of W that R gives is also the value of U, but not in all cases.

When you use `wilcox.test(df$var1 ~ df$var2, paired=FALSE)`, the given W is the same as U. So you may report it...
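The W/U relationship can be checked numerically. This is a sketch with hypothetical data, assuming SciPy ≥ 1.7 (where `mannwhitneyu` returns U for the first sample); the identity U = R₁ − n₁(n₁+1)/2, with R₁ the pooled rank sum of sample 1, is the standard one and is the quantity R's `wilcox.test` reports as W.

```python
from scipy.stats import mannwhitneyu, rankdata

x = [1.2, 3.4, 2.2, 5.1]   # hypothetical sample 1
y = [0.8, 1.9, 4.0]        # hypothetical sample 2

U, p = mannwhitneyu(x, y, alternative="two-sided")

# U equals the rank sum of sample 1 in the pooled data
# minus n1*(n1+1)/2 -- the same quantity R calls W.
ranks = rankdata(x + y)
n1 = len(x)
R1 = ranks[:n1].sum()
assert U == R1 - n1 * (n1 + 1) / 2
```
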
**10,312: Is the W statistic output by wilcox.test() in R the same as the U statistic?**

Note, however, that the code:

```r
wilcox.test(df$var1 ~ df$var2, paired=FALSE)  # using '~'
```

will produce a different W statistic than:

```r
wilcox.test(df$var1, df$var2, paired=FALSE)   # using ','
```
**10,313: Do negative probabilities/probability amplitudes have applications outside quantum mechanics?**

Yes. I like the article Søren shared very much, and together with the references in that article I would recommend Muckenheim, W. et al. (1986). A Review of Extended Probabilities. Phys. Rep. 133 (6), 337-401. It's a physics paper for sure, but the applications there are not all related to quantum physics.

My personal...
**10,314: Do negative probabilities/probability amplitudes have applications outside quantum mechanics?**

QM does not use negative or imaginary probabilities: if it did, they would no longer be probabilities!

What can be (and usually is) a complex value is the quantum mechanical wave function $\psi$. From it the probability amplitude (which is a bona fide probability density) can be constructed; it is variously written $\...
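The construction alluded to above is the Born rule. The answer's formula is truncated, so this is the textbook statement rather than its exact continuation: the density is the squared modulus of the wave function, which is real and non-negative even though $\psi$ itself is complex.

```latex
p(x) \;=\; |\psi(x)|^{2} \;=\; \psi^{*}(x)\,\psi(x) \;\ge\; 0,
\qquad \int p(x)\,dx \;=\; 1 .
```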
**10,315: Do negative probabilities/probability amplitudes have applications outside quantum mechanics?**

I'm of the opinion that "What's the application of this theory?" is a question that students of a theory should have to answer. Professor McGonagall spends all her time teaching and researching; it's up to her students to go find a use for the stuff in the world. (At least that's a kind-of defensible position, and ...
**10,316: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

Some very good books:

- "Statistics for Experimenters: Design, Innovation, and Discovery, 2nd Edition" by Box, Hunter & Hunter. This is formally an introductory text (more for chemistry & engineering people) but extremely good on the applied side.
- "Data Analysis Using Regression and Multilevel/Hierarchical Models" ...
**10,317: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

Harrell (2001), Regression Modelling Strategies, is distinguished by:

- covering modelling from start to finish, so data reduction, imputation of missing values, & model validation are among the topics included
- an emphasis on explaining how to employ different methods at different stages
- thoroughly worked-out examples (& S...
**10,318: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

In addition to those, Introductory Econometrics: A Modern Approach by Wooldridge has pretty much everything you could ever want to know about regression, at an advanced undergraduate level.

Edit: if you're dealing with categorical outcomes, Hastie et al. is indispensable. Also, Categorical Data Analysis by Agresti is a g...
**10,319: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

Bayesian Data Analysis, third edition (2013), by Gelman et al. The level is mixed, but I find the treatment so good that something valuable can be gotten from most chapters. If you're interested in principled application of methods, I'd recommend this book.
**10,320: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

I've gotten a lot of use out of Sheskin's Handbook of Parametric and Nonparametric Statistical Procedures. It's a broad survey of hypothesis-testing methods, with good introductions to the theory and tons of notes about the subtleties of each. You can see the TOC at the publisher's site (linked above).
**10,321: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

Regression Modeling Strategies by Frank Harrell is a great book if you already know some basics. It is heavily focused on applications (lots of examples with code), specifying models, diagnostics of models, dealing with common pitfalls, and avoiding problematic methods.
**10,322: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

The UW Stat PhD program's top-level regression methods sequence uses Wakefield's "Bayesian and Frequentist Regression Methods", which is a particularly good choice for folks like you who've seen lots of mathematical statistics. It gives a lot more perspective than most books on even the simplest applied methods, since it...
**10,323: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

I used "Engineering Statistics" by Montgomery and Runger. It's pretty good (especially if you have a strong math background). I'd also highly recommend checking out Caltech's online Machine Learning course. It's great for an introduction to ML concepts (if that's part of your data analysis). https://work.caltech.edu/te...
**10,324: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

I wrote the book Nonlinear Regression Modeling for Engineering Applications: Modeling, Model Validation, and Enabling Design of Experiments (Wiley, New York, NY, September 2016, ISBN 9781118597965; Rhinehart, R. R.) because I sensed such a need. The book is 361 pages and has a companion web site with Excel/VBA open-c...
**10,325: Do you have recommendations for books to self-teach Applied Statistics at the graduate level?**

I used College Statistics Made Easy by Sean Connolly. It is aimed at a first/second course in statistics. The material is very, very easy to follow. I tried a few books and none compare to this.
10,326 | How can I generate data with a prespecified correlation matrix? | It appears that you're asking how to generate data with a particular correlation matrix.
A useful fact is that if you have a random vector ${\bf x}$ with covariance matrix $\Sigma$, then the random vector ${\bf Ax}$ has mean ${\bf A} E({\bf x})$ and covariance matrix $\Omega = {\bf A} \Sigma {\bf A}^{T}$. So, if you...
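The useful fact above suggests the standard construction: take a Cholesky factor ${\bf A}$ of the target correlation matrix and transform iid standard normals. A minimal NumPy sketch (matrix values and names are my own illustration, not from the answer):

```python
import numpy as np

# Generate data whose population correlation matrix is R by transforming
# iid standard normals with a Cholesky factor A of R:
# Cov(Ax) = A Cov(x) A^T = A A^T = R.
rng = np.random.default_rng(0)
R = np.array([[1.0, 0.6, 0.3],
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])
A = np.linalg.cholesky(R)               # R = A @ A.T
x = rng.standard_normal((3, 100_000))   # uncorrelated standard-normal rows
y = A @ x                               # rows of y now have correlation ~ R
print(np.round(np.corrcoef(y), 2))
```

With 100,000 draws the sample correlations typically match the target to about two decimal places.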
10,327 | How can I generate data with a prespecified correlation matrix? | If you're using R, you can also use the mvrnorm function from the MASS package, assuming you want normally distributed variables. The implementation is similar to Macro's description above, but uses the eigenvectors of the correlation matrix instead of the Cholesky decomposition and scaling with a singular value decomp...
10,328 | How can I generate data with a prespecified correlation matrix? | An alternative solution without Cholesky factorization is the following.
Let $\Sigma_y$ be the desired covariance matrix and suppose you have data $x$ with $\Sigma_x = I$. Suppose $\Sigma_y$ is positive definite with $\Lambda$ the diagonal matrix of the eigenvalues and $V$ the matrix of column eigenvectors.
You can write...
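The eigendecomposition route the answer starts to describe can be sketched as follows (my own NumPy illustration, assuming the transform $y = V\Lambda^{1/2}x$, which gives $\operatorname{Cov}(y) = V\Lambda^{1/2} I \Lambda^{1/2} V^{T} = \Sigma_y$):

```python
import numpy as np

# Build a matrix square root of Sigma_y from its eigendecomposition
# Sigma_y = V diag(lam) V^T, then transform uncorrelated data.
rng = np.random.default_rng(1)
Sigma_y = np.array([[2.0, 0.8],
                    [0.8, 1.0]])
lam, V = np.linalg.eigh(Sigma_y)        # eigenvalues and column eigenvectors
B = V @ np.diag(np.sqrt(lam))           # B @ B.T == Sigma_y
x = rng.standard_normal((2, 200_000))   # Sigma_x = I
y = B @ x                               # Cov(y) ~ Sigma_y
print(np.round(np.cov(y), 2))
```

This works for any positive semi-definite target, whereas Cholesky requires strict positive definiteness.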
10,329 | How does negative sampling work in word2vec? | The issue
There are some issues with learning the word vectors using a "standard" neural network. In this way, the word vectors are learned while the network learns to predict the next word given a window of words (the input of the network).
Predicting the next word is like predicting a class. That is, such a netwo...
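The answer above is truncated before it reaches the sampling details, so here is a hedged Python sketch of the standard negative-sampling objective (Mikolov et al., 2013): the true context word is scored against $k$ "negative" words drawn from the unigram distribution raised to the 3/4 power, instead of computing a full softmax over the vocabulary. All names here (`vocab`, `V_in`, `V_out`, `ns_loss`) are illustrative, not from the answer:

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

vocab, dim, k = 1000, 50, 5
counts = rng.integers(1, 100, size=vocab)   # toy word frequencies
p_neg = counts ** 0.75
p_neg = p_neg / p_neg.sum()                 # unigram^{3/4} noise distribution

V_in = rng.normal(scale=0.1, size=(vocab, dim))   # center-word vectors
V_out = rng.normal(scale=0.1, size=(vocab, dim))  # context-word vectors

def ns_loss(center, context):
    neg = rng.choice(vocab, size=k, p=p_neg)      # k negative samples
    pos_term = -np.log(sigmoid(V_out[context] @ V_in[center]))
    neg_term = -np.log(sigmoid(-V_out[neg] @ V_in[center])).sum()
    return pos_term + neg_term                    # minimized during training

print(ns_loss(center=3, context=7))
```

Each update then touches only $k+1$ output vectors rather than all `vocab` of them, which is the efficiency win over the softmax.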
10,330 | Intuition for cumulative hazard function (survival analysis) | Combining proportions dying as you do is not giving you cumulative hazard. The hazard rate in continuous time is the conditional probability, per unit of time, that an event will happen during a very short interval:
$$h(t) = \lim_{\Delta t \rightarrow 0} \frac {P(t<T \le t + \Delta t \mid T >t)} {\Delta t}$$
Cumulative hazard is integrating (ins...
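To make the integral concrete, here is a simulation of my own (not from the answer): for an exponential lifetime with rate $\lambda$, $h(t)=\lambda$ and the cumulative hazard is $H(t)=\lambda t$; the Nelson-Aalen estimator $\hat H(t)=\sum_{t_i \le t} d_i/n_i$ recovers it from data:

```python
import numpy as np

rng = np.random.default_rng(3)
lam, n = 0.5, 100_000
times = np.sort(rng.exponential(1 / lam, size=n))  # no censoring here
at_risk = n - np.arange(n)                         # n_i just before each event
H_hat = np.cumsum(1.0 / at_risk)                   # Nelson-Aalen step function

t0 = 1.0
H_at_t0 = H_hat[np.searchsorted(times, t0) - 1]
print(H_at_t0, lam * t0)   # estimate vs true cumulative hazard lambda * t0
```

The estimate sits very close to the true value $\lambda t_0 = 0.5$, illustrating "total accumulated risk up to $t$".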
10,331 | Intuition for cumulative hazard function (survival analysis) | The book "An Introduction to Survival Analysis Using Stata" (2nd Edition) by Mario Cleves has a good chapter on that topic.
You can find the chapter on Google Books, pp. 13-15, but I would advise reading the whole of chapter 2.
Here is the short form:
"it measures the total amount of risk that has been accumulated up t...
10,332 | Intuition for cumulative hazard function (survival analysis) | I'd HAZARD a guess that it's noteworthy owing to its use in diagnostic plots:
(1) In the Cox proportional hazards model $h(x)=\mathrm{e}^{\beta^\mathrm{T} z}h_0(x)$, where $\beta$ and $z$ are the coefficient and covariate vectors respectively, & $h_0(x)$ is the baseline hazard function; & so (by integrating both sides ...
10,333 | Intuition for cumulative hazard function (survival analysis) | In paraphrasing what @Scortchi is saying, I would emphasize that the cumulative hazard function does not have a nice interpretation, and as such I would not try to use it as a way to interpret results; telling a non-statistical researcher that the cumulative hazards are different will most likely result in an "mm-hm" a...
10,334 | What is the difference between the Wilcoxon Rank Sum Test and the Wilcoxon Signed Rank Test? | You should use the signed rank test when the data are paired.
You'll find many definitions of pairing, but at heart the criterion is something that makes pairs of values at least somewhat positively dependent, while unpaired values are not dependent. Often the dependence-pairing occurs because they're observations on t...
10,335 | What is the difference between the Wilcoxon Rank Sum Test and the Wilcoxon Signed Rank Test? | I'm not a researcher, I'm a statistics major though. I'll first lay out the requirements for the Wilcoxon Signed Rank Sum Test (WSRST).
The WSRST requires that the populations be paired, for example, the same group of people are tested on two different occasions or things and MEASURED on the effects of each and we then...
10,336 | Converting (normalizing) very small likelihood values to probability | Subtract the maximum logarithm from all logs. Throw away all results that are so negative they will underflow the exponential. (Their likelihoods are, for all practical purposes, zero.)
Indeed, if you want a relative precision of $\epsilon$ (such as $\epsilon = 10^{-d}$ for $d$ digits of precision) and you have $n$ l...
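The max-subtraction trick described above can be sketched in a few lines (toy log-likelihood values of my own; naively exponentiating them would underflow to all zeros in double precision):

```python
import numpy as np

log_lik = np.array([-1000.2, -1000.9, -1050.0, -1500.0])  # toy values
shifted = log_lik - log_lik.max()   # the largest becomes 0
w = np.exp(shifted)                 # hopeless terms underflow harmlessly to 0
p = w / w.sum()                     # normalized probabilities
print(p)
```

Because normalization divides out any common factor, subtracting the maximum log changes nothing mathematically while keeping the arithmetic in a representable range.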
10,337 | Probability of not drawing a word from a bag of letters in Scrabble | This is a (long!) comment on the nice work @vqv has posted in this thread. It aims to obtain a definitive answer. He has done the hard work of simplifying the dictionary. All that remains is to exploit it to the fullest. His results suggest that a brute-force solution is feasible. After all, including a wildcard, ...
10,338 | Probability of not drawing a word from a bag of letters in Scrabble | It is very hard to draw a rack that does not contain any valid word in Scrabble and its variants. Below is an R program I wrote to estimate the probability that the initial 7-tile rack does not contain a valid word. It uses a Monte Carlo approach and the Words With Friends lexicon (I couldn’t find the official Scrabble...
10,339 | Probability of not drawing a word from a bag of letters in Scrabble | Srikant is right: a Monte Carlo study is the way to go. There are two reasons. First, the answer depends strongly on the structure of the dictionary. Two extremes are (1) the dictionary contains every possible single-letter word. In this case, the chance of not making a word in a draw of $1$ or more letters is zero...
10,340 | Probability of not drawing a word from a bag of letters in Scrabble | Monte Carlo Approach
The quick and dirty approach is to do a Monte Carlo study. Draw $k$ tiles $m$ times and for each draw of $k$ tiles see if you can form a word. Denote the number of times you could form a word by $m_w$. The desired probability would be:
$$1 - \frac{m_w}{m}$$
Direct Approach
Let the number of words i...
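The Monte Carlo recipe above is easy to validate on a toy problem where the exact answer is known. In this sketch of my own (not the Scrabble lexicon), "forming a word" is replaced by "drawing at least one vowel", so the target $1 - m_w/m$ has the exact hypergeometric value $\binom{60}{7}/\binom{100}{7}$:

```python
import numpy as np
from math import comb

rng = np.random.default_rng(4)
bag = np.array([1] * 40 + [0] * 60)   # 1 = vowel, 0 = consonant; 100 tiles
m = 40_000
no_vowel = np.array([rng.choice(bag, size=7, replace=False).sum() == 0
                     for _ in range(m)])
est = no_vowel.mean()                  # plays the role of 1 - m_w / m
exact = comb(60, 7) / comb(100, 7)     # about 0.0241
print(est, exact)
```

For the real problem, the "success" test inside the loop becomes a dictionary lookup over the drawn rack, which is exactly where the hard work lies.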
10,341 | Representing interaction effects in directed acyclic graphs | Pearl's theory of causality is completely non-parametric. Interactions are not made explicit because of that, neither in the graph nor in the structural equations it represents. However, causal effects can vary (wildly) by assumption.
If an effect is identified and you estimate it from data non-parametrically, you obt...
10,342 | Representing interaction effects in directed acyclic graphs | The simple answer is that you already do. Conventional DAGs do not only represent main effects but rather the combination of main effects and interactions. Once you have drawn your DAG, you already assume that any variables pointing to the same outcome can modify the effect of the others pointing to the same outcome. I...
10,343 | Representing interaction effects in directed acyclic graphs | A new method of representing interactions by creating dedicated nodes was proposed and termed "IDAG" since this question was asked. In my understanding the example sentence from question "asbestos exposure causes a change in the direct causal effect of tobacco smoke exposure on risk of mesothelioma" would be represente...
10,344 | Representing interaction effects in directed acyclic graphs | If you want to estimate the non-separable non-linear structural equations directly, there is a growing econometrics literature on this. You do, of course, need to make some assumptions in order to ensure statistical identification (even if you have built a defensible case for causal identification using graphical crite...
10,345 | Square of normal distribution with specific variance | To close this one:
$$ X\sim N(0,\sigma^2/4) \Rightarrow \frac {X^2}{\sigma^2/4}\sim \chi^2_1 \Rightarrow X^2 = \frac {\sigma^2}{4}\chi^2_1 = Q\sim \text{Gamma}(1/2, \sigma^2/2)$$
with
$$E(Q) = \frac {\sigma^2}{4},\;\; \text{Var}(Q) = \frac {\sigma^4}{8}$$
RESPONSE TO QUESTION IN THE COMMENT
If
$$X\sim...
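A quick simulation check of the claimed moments (my own illustration, with $\sigma = 2$ so that $E(Q) = \sigma^2/4 = 1$ and $\operatorname{Var}(Q) = \sigma^4/8 = 2$):

```python
import numpy as np

# If X ~ N(0, sigma^2/4) then Q = X^2 should have
# E(Q) = sigma^2/4 and Var(Q) = sigma^4/8.
rng = np.random.default_rng(5)
sigma = 2.0
X = rng.normal(0.0, sigma / 2, size=1_000_000)  # sd = sigma/2
Q = X ** 2
print(Q.mean(), sigma**2 / 4)   # both close to 1.0
print(Q.var(), sigma**4 / 8)    # both close to 2.0
```

The sample moments land on the theoretical values to two or three decimal places with a million draws.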
10,346 | Finding the PDF given the CDF | As user28 said in comments above, the pdf is the first derivative of the cdf for a continuous random variable, and the difference for a discrete random variable.
More generally, wherever the cdf has a jump discontinuity the distribution has an atom there. Dirac delta "functions" can be used to represent these atoms in the pdf.
10,347 | Finding the PDF given the CDF | Let $F(x)$ denote the cdf; then you can always approximate the pdf of a continuous random variable by calculating $$ \frac{F(x_2) - F(x_1)}{x_2 - x_1},$$ where $x_1$ and $x_2$ are on either side of the point where you want to know the pdf and the distance $|x_2 - x_1|$ is small.
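The finite-difference approximation above, checked against the standard normal (a sketch of my own; `Phi` is the standard normal cdf written via the error function):

```python
import math

def Phi(x):                        # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pdf_approx(F, x, h=1e-5):
    # central difference (F(x+h) - F(x-h)) / (2h) ~ f(x) for small h
    return (F(x + h) - F(x - h)) / (2 * h)

x = 0.7
exact = math.exp(-x**2 / 2) / math.sqrt(2 * math.pi)
print(pdf_approx(Phi, x), exact)
```

The symmetric (central) difference has $O(h^2)$ error, so it converges faster than the one-sided quotient as $h$ shrinks.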
10,348 | Finding the PDF given the CDF | Differentiating the CDF does not always help. Consider the CDF
F(x) = (1/4) + ((4x - x*x) / 8), for 0 <= x < 2.
Differentiating it you'll get
((2 - x) / 4),
and substituting 0 in it gives the value (1/2), which is clearly wrong as a value for P(x = 0): the derivative is the density just above 0, while the point mass is the jump in F there, P(x = 0) = F(0) - F(0-) = (1 / 4).
Instead what you should do is calculate the difference...
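The distinction the answer draws can be verified numerically (my own sketch, assuming F is 0 below x = 0 as the stated P(x = 0) = 1/4 implies): the derivative at 0+ gives the continuous density 1/2, while the jump F(0) - F(0-) recovers the point mass 1/4.

```python
def F(x):
    if x < 0:
        return 0.0
    if x < 2:
        return 0.25 + (4 * x - x * x) / 8
    return 1.0

h = 1e-8
density_right_of_0 = (F(h) - F(0.0)) / h   # ~ 1/2, NOT P(x = 0)
atom_at_0 = F(0.0) - F(-h)                 # ~ 1/4, the point mass
print(density_right_of_0, atom_at_0)
```

So for mixed distributions, differencing the cdf across the point of interest, rather than differentiating, picks up the atoms.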
10,349 | Interpreting plot of residuals vs. fitted values from Poisson regression | This is the appearance you expect of such a plot when the dependent variable is discrete.
Each curvilinear trace of points on the plot corresponds to a fixed value $k$ of the dependent variable $y$. Every case where $y=k$ has a prediction $\hat{y}$; its residual--by definition--equals $k-\hat{y}$. The plot of $k-\hat...
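The mechanism is easy to demonstrate with toy data (my own example, not from the answer): for every case with $y = k$, the raw residual is exactly $k - \hat{y}$, so all points sharing a value of $y$ fall on one decreasing band against the fitted values.

```python
import numpy as np

rng = np.random.default_rng(6)
fitted = rng.uniform(0.5, 5.0, size=1000)   # stand-in model predictions
y = rng.poisson(fitted)                     # discrete observed counts
resid = y - fitted

for k in (0, 1, 2):                         # each stripe obeys resid = k - fitted
    mask = y == k
    assert np.allclose(resid[mask], k - fitted[mask])
print("each value of y traces its own band: resid = k - fitted")
```

On a residuals-vs-fitted plot of `resid` against `fitted`, these identities appear as the parallel stripes described above.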
10,350 | Interpreting plot of residuals vs. fitted values from Poisson regression | Sometimes stripes like these in residual plots represent points with (almost) identical observed values that get different predictions. Look at your target values: how many unique values are there? If my suggestion is correct there should be 9 unique values in your training data set.
10,351 | Interpreting plot of residuals vs. fitted values from Poisson regression | This pattern is characteristic of an incorrect match of the family and/or link. If you have overdispersed data then perhaps you should consider the negative binomial (count) or gamma (continuous) distributions. Also you should be plotting your residuals against the transformed linear predictor, not the predictors when ...
10,352 | Evaluating logistic regression and interpretation of Hosmer-Lemeshow Goodness of Fit
There are several issues to address.
- $R^2$ measures by themselves never measure goodness of fit; they measure mainly predictive discrimination. Goodness of fit only comes from comparing $R^2$ with the $R^2$ from a richer model.
- The Hosmer-Lemeshow test is for overall calibration error, not for any particular lack of f...
10,353 | Evaluating logistic regression and interpretation of Hosmer-Lemeshow Goodness of Fit
From Wikipedia:
The test assesses whether or not the observed event rates match expected event rates in subgroups of the model population. The Hosmer–Lemeshow test specifically identifies subgroups as the deciles of fitted risk values. Models for which expected and observed event rates in subgroups are similar...
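That description translates directly into code. A minimal sketch of the statistic (quantile groups of the fitted probabilities, observed vs expected event counts, chi-square reference with groups − 2 degrees of freedom); the perfectly calibrated simulated data are my own illustration:

```python
import numpy as np
from scipy.stats import chi2

def hosmer_lemeshow(y, p, groups=10):
    """Hosmer-Lemeshow statistic: compare observed vs expected event
    counts within quantile groups of the fitted probabilities."""
    order = np.argsort(p)
    y, p = np.asarray(y)[order], np.asarray(p)[order]
    stat = 0.0
    for chunk_y, chunk_p in zip(np.array_split(y, groups),
                                np.array_split(p, groups)):
        obs1, exp1 = chunk_y.sum(), chunk_p.sum()
        obs0, exp0 = len(chunk_y) - obs1, len(chunk_p) - exp1
        stat += (obs1 - exp1) ** 2 / exp1 + (obs0 - exp0) ** 2 / exp0
    df = groups - 2
    return stat, chi2.sf(stat, df)

rng = np.random.default_rng(1)
p = rng.uniform(0.05, 0.95, 2000)
y = rng.binomial(1, p)          # perfectly calibrated by construction
stat, pval = hosmer_lemeshow(y, p)
print(stat, pval)
```

Because the simulated probabilities are calibrated by construction, the test should usually not reject here.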
10,354 | Evaluating logistic regression and interpretation of Hosmer-Lemeshow Goodness of Fit
This is rather moot following @FrankHarrell's answer, but a fan of the H–L test would infer from that result that despite your inclusion of quadratic terms & some† 2nd-order interactions, the model still showed significant lack of fit, & that perhaps an even more complex model would be appropriate. You're testing the fi...
10,355 | Why is the expectation maximization algorithm used?
The question is legit and I had the same confusion when I first learnt the EM algorithm.
In general terms, the EM algorithm defines an iterative process that allows one to maximize the likelihood function of a parametric model in the case in which some variables of the model are (or are treated as) "latent" or unknown.
In...
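As a concrete instance of such an iterative process, here is a minimal EM sketch for a two-component Gaussian mixture with unit variances (my own toy example; the component labels play the role of the latent variables):

```python
import numpy as np

rng = np.random.default_rng(2)

# Data from a two-component Gaussian mixture; the component labels are
# the latent variables that the EM iterations average over.
x = np.concatenate([rng.normal(-2, 1, 300), rng.normal(3, 1, 700)])

w, mu = np.array([0.5, 0.5]), np.array([-1.0, 1.0])  # initial guesses
for _ in range(50):
    # E-step: posterior responsibility of each component for each point
    # (unit variances assumed to keep the sketch short).
    dens = np.exp(-0.5 * (x[:, None] - mu) ** 2) * w
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate weights and means from the responsibilities.
    w = r.mean(axis=0)
    mu = (r * x[:, None]).sum(axis=0) / r.sum(axis=0)

print(mu)  # should end up near the true means (-2, 3)
```

Each pass provably does not decrease the likelihood, which is why the alternation converges to a (local) maximum.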
10,356 | Why is the expectation maximization algorithm used?
EM is not needed instead of using some numerical technique because EM is a numerical method as well. So it's not a substitute for Newton-Raphson. EM is for the specific case when you have missing values in your data matrix. Consider a sample $X = (X_{1},...,X_{n})$ which has conditional density $f_{X|\Theta}(x|\theta)$...
10,357 | Why is the expectation maximization algorithm used?
EM is used because it's often infeasible or impossible to directly calculate the parameters of a model that maximize the probability of a dataset given that model.
10,358 | Why is the Fisher Information matrix positive semidefinite?
Check this out: http://en.wikipedia.org/wiki/Fisher_information#Matrix_form
From the definition, we have
$$
I_{ij} = \mathrm{E}_\theta \left[ \left(\partial_i \log f_{X\mid\Theta}(X\mid\theta)\right) \left(\partial_j \log f_{X\mid\Theta}(X\mid\theta)\right)\right] \, ,
$$
for $i,j=1,\dots,k$, in which $\partial_i=\pa...
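The outer-product form above is exactly why the matrix is positive semidefinite: for any vector $v$, $v^\top I v = \mathrm{E}\big[(v^\top \nabla_\theta \log f)^2\big] \ge 0$. A quick Monte Carlo check for a normal model (my own illustration):

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte Carlo check that the Fisher information of N(mu, sigma^2) is
# positive semi-definite: I = E[s s^T] with s the score vector, so
# v^T I v = E[(v^T s)^2] >= 0 for every v.
mu, sigma = 1.0, 2.0
x = rng.normal(mu, sigma, 100_000)

# Score of the normal log-density with respect to (mu, sigma).
s = np.stack([(x - mu) / sigma**2,
              (x - mu) ** 2 / sigma**3 - 1 / sigma])
I_hat = s @ s.T / x.size           # empirical E[s s^T], a Gram matrix

eigs = np.linalg.eigvalsh(I_hat)
print(eigs)                        # all non-negative (up to roundoff)
```

The empirical matrix is a Gram matrix, so its eigenvalues are non-negative by construction; they should also be close to the theoretical values $1/\sigma^2 = 0.25$ and $2/\sigma^2 = 0.5$.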
10,359 | Why is the Fisher Information matrix positive semidefinite?
WARNING: not a general answer!
If $f(X|\theta)$ corresponds to a full-rank exponential family, then the negative Hessian of the log-likelihood is the covariance matrix of the sufficient statistic. Covariance matrices are always positive semi-definite. Since the Fisher information is a convex combination of positive s...
10,360 | Why is there an asymmetry between the training step and evaluation step?
It's funny that the most upvoted answer doesn't really answer the question :) so I thought it would be nice to back this up with a bit more theory - mostly taken from "Data Mining: Practical Machine Learning Tools and Techniques" and Tom Mitchell's "Machine Learning".
Introduction.
So we have a classifier and a limite...
10,361 | Why is there an asymmetry between the training step and evaluation step?
Consider a finite set of m records. If you use all the records as a training set you could perfectly fit all the points with the following polynomial:
y = a0 + a1*X + a2*X^2 + ... + am*X^m
Now if you have some new record, not used in the training set, and values of an input vector X are different from any vector X used in tr...
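The argument above can be simulated directly (a sketch with made-up data): interpolate the training points with a maximal-degree polynomial, then score the same fit on fresh records from the same process.

```python
import numpy as np

rng = np.random.default_rng(4)

# m training points can be fitted exactly by a high-degree polynomial,
# but the interpolant generalizes badly to fresh records.
def sample(n):
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(0, 0.1, n)

x_tr, y_tr = sample(8)
x_te, y_te = sample(200)

coefs = np.polyfit(x_tr, y_tr, deg=len(x_tr) - 1)   # exact interpolation
train_mse = np.mean((np.polyval(coefs, x_tr) - y_tr) ** 2)
test_mse = np.mean((np.polyval(coefs, x_te) - y_te) ** 2)
print(train_mse, test_mse)   # tiny train error, much larger test error
```

Zero training error here tells you nothing about the test error, which is exactly why a held-out evaluation set is needed.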
10,362 | Why is there an asymmetry between the training step and evaluation step?
This is the problem of generalization—that is, how well our hypothesis will correctly classify future examples that are not part of the training set. Please see this fantastic example of what happens when your model fits only the data you have and not new data: Titius-Bode law
10,363 | Why is there an asymmetry between the training step and evaluation step?
So far @andreiser gave a brilliant answer to the second part of OP's question regarding training/testing data split, and @niko explained how to avoid overfitting, but nobody has gotten to the merit of the question: why using different data for training and evaluation helps us avoid overfitting.
Our data is split into:...
10,364 | Why do we model noise in linear regression but not logistic regression?
Short answer: we do, just implicitly.
A possibly more enlightening way of looking at things is the following.
In Ordinary Least Squares, we can consider that we do not model the errors or noise as $N(0,\sigma^2)$ distributed, but we model the observations as $N(x\beta,\sigma^2)$ distributed.
(Of course, this is precis...
10,365 | Why do we model noise in linear regression but not logistic regression?
To supplement Stephan's answer, similar to how in linear regression the target $y$ is generated by a "systematic" component involving $x$ and an independent "noise" component, in logistic regression (and softmax regression more generally) you can actually also think of the target $y$ as computed by the following o...
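One concrete version of that computation (a simulation sketch; the numbers are made up): draw a latent standard-logistic noise term, threshold the linear predictor plus noise at zero, and check that the event frequency matches the sigmoid of the linear predictor — the noise is there, just hidden inside the threshold.

```python
import numpy as np

rng = np.random.default_rng(5)

# Latent-variable view of logistic regression: y = 1 if x*beta + eps > 0
# with eps standard logistic, which is the same model as
# P(y = 1 | x) = sigmoid(x*beta).
beta, x = 1.5, 0.8
n = 200_000
eps = rng.logistic(0, 1, n)               # the "hidden" noise term
y = (x * beta + eps > 0).astype(float)

sigmoid = 1 / (1 + np.exp(-x * beta))
print(y.mean(), sigmoid)                  # the two should agree closely
```

The equivalence follows from the symmetry of the logistic CDF: $\Pr(\varepsilon > -x\beta) = F(x\beta)$.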
10,366 | Hidden Markov Model vs Recurrent Neural Network
Summary
Hidden Markov Models (HMMs) are much simpler than Recurrent Neural Networks (RNNs), and rely on strong assumptions which may not always be true. If the assumptions are true then you may see better performance from an HMM since it is less finicky to get working.
An RNN may perform better if you have a very large...
10,367 | Hidden Markov Model vs Recurrent Neural Network
Let's first see the differences between the HMM and RNN.
From this paper, A tutorial on hidden Markov models and selected applications in speech recognition, we can learn that an HMM is characterized by the following three fundamental problems:
Problem 1 (Likelihood): Given an HMM λ = (A,B) and an observation seq...
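For Problem 1 (Likelihood), the standard answer is the forward algorithm, which avoids summing over all state paths. A minimal sketch with a made-up 2-state, 2-symbol HMM (the matrices are purely illustrative):

```python
import numpy as np

# Forward algorithm for Problem 1: P(observations | lambda) for a toy
# HMM with transition matrix A, emission matrix B and initial
# distribution pi.
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],      # state 0 mostly emits symbol 0
              [0.2, 0.8]])     # state 1 mostly emits symbol 1
pi = np.array([0.5, 0.5])

def likelihood(obs):
    alpha = pi * B[:, obs[0]]             # alpha_1(i) = pi_i * b_i(o_1)
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]     # recursion over time steps
    return alpha.sum()

p = likelihood([0, 0, 1, 1])
print(p)
```

Each step propagates the forward probabilities through the transition matrix and reweights by the emission likelihood, so the cost is linear in the sequence length.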
10,368 | Hidden Markov Model vs Recurrent Neural Network
I found this question because I was wondering about their similarities and differences too. I think it's very important to state that Hidden Markov Models (HMMs) do not have inputs and outputs in the strictest sense.
HMMs are so-called generative models: if you have an HMM, you can generate some observations from it ...
10,369 | Why do these statements not follow logically from a 95% CI for the mean?
The very meaning of question (5) depends on some undisclosed interpretation of "confidence." I searched the paper carefully and found no attempt to define "confidence" or what it might mean in this context. The paper's explanation of its answer to question (5) is
"... [it] mentions the boundaries of the CI whereas ...
10,370 | Why do these statements not follow logically from a 95% CI for the mean? | Questions 1-2, 4: in frequentist analysis, the true mean is not a random variable, thus thes probabilities are not defined, whereas in Bayesian analysis the probabilities would depend on the prior.
Question 3: For example, consider a case where we know for sure It would still be possible to get these results, but rath... | Why do these statements not follow logically from a 95% CI for the mean? | Questions 1-2, 4: in frequentist analysis, the true mean is not a random variable, thus thes probabilities are not defined, whereas in Bayesian analysis the probabilities would depend on the prior.
Qu | Why do these statements not follow logically from a 95% CI for the mean?
Questions 1-2, 4: in frequentist analysis, the true mean is not a random variable, thus thes probabilities are not defined, whereas in Bayesian analysis the probabilities would depend on the prior.
Question 3: For example, consider a case where we... | Why do these statements not follow logically from a 95% CI for the mean?
Questions 1-2, 4: in frequentist analysis, the true mean is not a random variable, thus thes probabilities are not defined, whereas in Bayesian analysis the probabilities would depend on the prior.
Qu |
10,371 | Why do these statements not follow logically from a 95% CI for the mean?
Without any formal definition of what it means to be "95% confident", what justification is there for labelling #5 true or false? A layman would doubtless misinterpret it as synonymous with a 95% probability of the mean's being in that interval: but some people do use it in the sense of having used an interval-generati...
10,372 | Why do these statements not follow logically from a 95% CI for the mean?
Here is the definition of a confidence interval, from B. S. Everitt's Dictionary of Statistics:
"A range of values, calculated from the sample observations, that are believed, with a certain probability, to contain the true parameter value. A 95% CI, for example, implies that were the estimation process repeated...
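That "were the estimation process repeated" clause is easy to check by simulation (a sketch, assuming normal data with a known true mean): about 95% of the computed t-intervals should contain it.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Repeat the estimation process many times and count how often the
# 95% t-interval covers the true (here known) mean.
mu, n, reps, hits = 10.0, 25, 4000, 0
tcrit = stats.t.ppf(0.975, df=n - 1)
for _ in range(reps):
    x = rng.normal(mu, 3.0, n)
    half = tcrit * x.std(ddof=1) / np.sqrt(n)
    hits += (x.mean() - half <= mu <= x.mean() + half)

coverage = hits / reps
print(coverage)   # close to 0.95
```

The probability statement attaches to the procedure over repetitions, not to any single computed interval — which is the crux of the question.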
10,373 | Why do these statements not follow logically from a 95% CI for the mean?
Regarding the intuition for the falsehood of Question 5, I found the following discussion on this topic here:
It is correct to say that there is a 95% chance that the confidence interval you calculated contains the true population mean. It is not quite correct to say that there is a 95% chance that the population...
10,374 | Confidence Interval for variance given one observation
Viewed through the lens of probability inequalities and connections to the multiple-observation case, this result might not seem so impossible, or, at least, it might seem more plausible.
Let $\renewcommand{\Pr}{\mathbb P}\newcommand{\Ind}[1]{\mathbf 1_{(#1)}}X \sim \mathcal N(\mu,\sigma^2)$ with $\mu$ and $\sigma^2$ u...
10,375 | Confidence Interval for variance given one observation
Time to follow up! Here's the solution I was given:
We will construct a confidence interval of the form $[0,T(X))$, where $T(\cdot)$ is some statistic. By definition this will be a confidence interval with confidence level at least 99% if
$$(\forall \mu \in \mathbb R )(\forall \sigma > 0)\; \mathbb P_{\mu,\sigma^2}(...
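One statistic of this form — my own illustration, not necessarily the $T(\cdot)$ from the assigned solution — is $T(X) = cX^2$ with $c = 1/\chi^2_{1,\,0.01}$. The coverage $\Pr(\sigma^2 < cX^2)$ is minimized at $\mu = 0$, because $(X/\sigma)^2$ is noncentral chi-square and stochastically increasing in $(\mu/\sigma)^2$, and at $\mu = 0$ it equals exactly 99%. A simulation check:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# T(X) = c * X^2 with c = 1 / (0.01 quantile of chi-square, 1 df):
# the worst-case coverage over mu occurs at mu = 0 and equals 99%.
c = 1 / stats.chi2.ppf(0.01, df=1)

def coverage(mu, sigma, reps=20_000):
    x = rng.normal(mu, sigma, reps)
    return np.mean(sigma**2 < c * x**2)

cov0 = coverage(0.0, 1.0)   # worst case: about 99%
cov5 = coverage(5.0, 1.0)   # easier case: essentially 100%
print(cov0, cov5)
```

The constant $c$ is enormous (several thousand), which is why a one-observation interval for the variance is valid yet nearly useless in practice.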
10,376 | Confidence Interval for variance given one observation
The CI's $(0,\infty)$ presumably.
10,377 | Comparing two classifier accuracy results for statistical significance with t-test
I would probably opt for McNemar's test if you only train the classifiers once. David Barber also suggests a rather neat Bayesian test that seems rather elegant to me, but isn't widely used (it is also mentioned in his book).
Just to add, as Peter Flom says, the answer is almost certainly "yes" just by looking at the ...
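A sketch of McNemar's test from the discordant-pair counts (the counts below are made up for illustration): only the cases where exactly one of the two classifiers is correct enter the statistic.

```python
from math import erfc, sqrt

# McNemar's test compares two classifiers evaluated on the SAME test
# set, using only the discordant pairs.
b = 40   # classifier 1 right, classifier 2 wrong
c = 15   # classifier 1 wrong, classifier 2 right

stat = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected, 1 df
p_value = erfc(sqrt(stat / 2))           # P(chi2_1 > stat)
print(stat, p_value)
```

The chi-square survival with one degree of freedom is computed via `erfc`, since $\Pr(\chi^2_1 > x) = \operatorname{erfc}(\sqrt{x/2})$, avoiding any dependency beyond the standard library.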
10,378 | Comparing two classifier accuracy results for statistical significance with t-test | Since accuracy, in this case, is the proportion of samples correctly classified, we can apply the test of hypothesis concerning a system of two proportions.
Let $\hat p_1$ and $\hat p_2$ be the accuracies obtained from classifiers 1 and 2 respectively, and $n$ be the number of samples. The number of samples correctly c...
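The two-proportion test described in this answer can be sketched in a few lines (the accuracies and $n$ below are made up; the pooled form assumes both classifiers are evaluated on $n$ samples each, and, as the earlier answer notes, paired predictions on the same test set are better handled by McNemar's test):

```python
from math import erf, sqrt

def two_proportion_z(p1, p2, n):
    """z-test for H0: p1 == p2, each proportion measured on n samples."""
    p_pool = (p1 + p2) / 2                  # pooled proportion (equal n)
    se = sqrt(2 * p_pool * (1 - p_pool) / n)
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p_two_sided = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_two_sided

z, p = two_proportion_z(0.85, 0.80, 1000)  # hypothetical accuracies
```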
10,379 | Comparing two classifier accuracy results for statistical significance with t-test | I can tell you, without even running anything, that the difference will be highly statistically significant. It passes the IOTT (interocular trauma test - it hits you between the eyes).
If you do want to do a test, though, you could do it as a test of two proportions - this can be done with a two sample t-test.
You m...
10,380 | Comparing two classifier accuracy results for statistical significance with t-test | Sorry, due to my reputation I can't comment on the answer of @Ébe Isaac.
If you perform a z-test, which I think is a quite good option for comparing two classifiers, you have to be careful about how you use the accuracy metrics.
I suggest three possible experiments applying the z-test to accuracy values.
Do the experiments with a...
10,381 | Comparing two classifier accuracy results for statistical significance with t-test | @Chris looks like you can apply this: https://abtestguide.com/calc/
Calculate the Z-score
And from the Z-score look up the p-value
10,382 | A layman understanding of the difference between back-door and front-door adjustment | Let's say you are interested in the causal effect of $D$ on $Y$. The following statements are not quite precise but I think convey the intuition behind the two approaches:
Back-door adjustment: Determine which other variables $X$ (age, gender) drive both $D$ (a drug) and $Y$ (health). Then, find units with the same valu...
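The back-door recipe sketched in this answer (compare treated and untreated units within strata of $X$, then average over the distribution of $X$) can be written out directly; the tiny dataset below is invented for illustration:

```python
from collections import defaultdict

# Back-door adjustment by stratification: within each stratum of the
# confounder X, compare treated (d=1) and untreated (d=0) outcomes,
# then average the differences weighted by P(X = x).
data = [  # hypothetical records of (x, d, y)
    ("old", 1, 1), ("old", 1, 1), ("old", 0, 1), ("old", 0, 0),
    ("young", 1, 1), ("young", 1, 0), ("young", 0, 0), ("young", 0, 0),
]

def adjusted_effect(rows):
    by_x = defaultdict(list)
    for x, d, y in rows:
        by_x[x].append((d, y))
    effect = 0.0
    for pairs in by_x.values():
        treated = [y for d, y in pairs if d == 1]
        control = [y for d, y in pairs if d == 0]
        diff = sum(treated) / len(treated) - sum(control) / len(control)
        effect += diff * len(pairs) / len(rows)  # weight by P(X = x)
    return effect

ate = adjusted_effect(data)
```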
10,383 | do(x) operator meaning? | That is $do$-calculus. They explain it here:
Interventions and counterfactuals are defined through a mathematical operator called $do(x)$, which simulates physical interventions by deleting certain functions from the model, replacing them with a constant $X = x$, while keeping the rest of the model unchanged. The resu...
10,384 | do(x) operator meaning? | A probabilistic Structural Causal Model (SCM) is defined as a tuple $M = \langle U, V, F, P(U) \rangle$ where $U$ is a set of exogenous variables, $V$ a set of endogenous variables, $F$ is a set of structural equations that determines the values of each endogenous variable and $P(U)$ a probability distribution over ...
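The "delete the structural equation, substitute a constant" semantics can be simulated on a toy SCM. The model below (and its equations) is invented for illustration, not taken from the quoted text:

```python
import random

# Toy SCM: U ~ Bernoulli(0.5), X := U, and Y ~ Bernoulli(0.9) if U
# else Bernoulli(0.1) -- note Y ignores X entirely, U confounds them.
# do(X = x) deletes the equation X := U and replaces it with the
# constant x, keeping the equations for U and Y unchanged.
random.seed(0)

def sample(do_x=None):
    u = random.random() < 0.5
    x = u if do_x is None else do_x      # intervention overrides X := U
    y = random.random() < (0.9 if u else 0.1)
    return x, y

N = 20000
obs = [sample() for _ in range(N)]
p_cond = sum(y for x, y in obs if x) / sum(1 for x, _ in obs if x)
p_do = sum(sample(do_x=1)[1] for _ in range(N)) / N
```

Conditioning on $X=1$ selects the $U=1$ units (so `p_cond` is near 0.9), while intervening leaves $U$ at its natural distribution (so `p_do` is near 0.5); the gap is exactly the confounding that the operator removes.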
10,385 | Incidental parameter problem | In FE models of the type
$$y_{it} = \alpha_i + \beta X_{it} + u_{it}$$
$\alpha$ is the incidental parameter, because theoretically speaking, it is of a secondary importance. Usually, $\beta$ is the important parameter, statistically speaking. But in essence, $\alpha$ is important because it provides useful information ...
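Because the $\alpha_i$ proliferate with the number of units while $\beta$ is the parameter of interest, FE estimation typically sweeps the $\alpha_i$ out by demeaning within units. A toy sketch with invented, noiseless data ($\alpha_a = 5$, $\alpha_b = -3$, true $\beta = 2$):

```python
# Within (demeaning) transformation: subtracting unit means from
# y_it = alpha_i + beta * x_it + u_it removes every alpha_i, so beta
# can be estimated without estimating the incidental parameters.
data = {  # unit -> list of (x, y) observations
    "a": [(1.0, 7.0), (2.0, 9.0)],    # alpha_a = 5
    "b": [(1.0, -1.0), (3.0, 3.0)],   # alpha_b = -3
}

num = den = 0.0
for obs in data.values():
    x_bar = sum(x for x, _ in obs) / len(obs)
    y_bar = sum(y for _, y in obs) / len(obs)
    for x, y in obs:
        num += (x - x_bar) * (y - y_bar)
        den += (x - x_bar) ** 2
beta_hat = num / den  # recovers beta = 2 exactly on noiseless data
```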
10,386 | What can we learn about the human brain from artificial neural networks? | As you mentioned, most neural networks are based on general simple abstractions of the brain. Not only are they lacking in mimicking characteristics like plasticity, but they do not take into account signals and timing as real neurons do.
There's a fairly recent interview that I felt was appropriate for your specific...
10,387 | What can we learn about the human brain from artificial neural networks? | Not much --- arguably nothing --- has so far been learnt about brain functioning from artificial neural networks. [Clarification: I wrote this answer thinking about neural networks used in machine learning; @MattKrause (+1) is right that neural network models of some biological neural phenomena might have been helpful ...
10,388 | What can we learn about the human brain from artificial neural networks? | It is certainly not true that the human brain only uses "a few" convolutional layers. About 1/3 of the primate brain is somehow involved in processing visual information. This diagram, from Felleman and Van Essen, is a rough outline of how visual information flows through the monkey brain, beginning in the eyes (RGC at ...
10,389 | What can we learn about the human brain from artificial neural networks? | One thing we really learned is the use of sparse activation and of rectified linear activation functions. The latter is basically one reason why we saw an explosion in activity around so-called neural networks, since using this kind of activation function resulted in a dramatic decrease in training effort f...
10,390 | Daily Time Series Analysis | Your ACF and PACF indicate that you at least have weekly seasonality, which is shown by the peaks at lags 7, 14, 21 and so forth.
You may also have yearly seasonality, although it's not obvious from your time series.
Your best bet, given potentially multiple seasonalities, may be a tbats model, which explicitly models ...
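The weekly peaks at lags 7, 14, 21 that this answer reads off the ACF can be reproduced on synthetic daily data (the series below is invented; the estimator is the standard biased sample ACF):

```python
import math

# Daily series with a pure period-7 cycle: the ACF spikes at
# multiples of 7 and goes negative between them.
n = 140  # 20 full weeks
series = [10 + 3 * math.sin(2 * math.pi * t / 7) for t in range(n)]

def acf(x, lag):
    # standard (biased) sample autocorrelation estimator
    m = sum(x) / len(x)
    c0 = sum((v - m) ** 2 for v in x)
    ck = sum((x[t] - m) * (x[t + lag] - m) for t in range(len(x) - lag))
    return ck / c0

r7, r14, r3 = acf(series, 7), acf(series, 14), acf(series, 3)
```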
10,391 | Daily Time Series Analysis | The best way to decompose seasonal data using existing R packages is ceemdan() in Rlibeemd. This technique extracts seasonality of multiple periods. The defaults work well. It uses the Hilbert-Huang transform instead of the Fourier transform. The Fourier transform has a severe drawback in that it can only handle st...
10,392 | Daily Time Series Analysis | The questions you raise have been dealt with in "R Time Series Forecasting: Questions regarding my output". Please look carefully at my detailed answer and all the comments in the discussion, including those to the original question, as I believe they are relevant to your problem. You might actually take the data that was...
10,393 | How many lags to use in the Ljung-Box test of a time series? | Assume that we specify a simple AR(1) model, with all the usual properties,
$$y_t = \beta y_{t-1} + u_t$$
Denote the theoretical covariance of the error term as
$$\gamma_j \equiv E(u_tu_{t-j})$$
If we could observe the error term, then the sample autocorrelation of the error term is defined as
$$\tilde \rho_j \equiv \f...
10,394 | How many lags to use in the Ljung-Box test of a time series? | The answer definitely depends on:
What are you actually trying to use the $Q$ test for?
The common reason is: to be more or less confident about joint statistical significance of the null hypothesis of no autocorrelation up to lag $h$ (alternatively assuming that you have something close to a weak white noise) and to bui...
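For reference, the $Q$ statistic these answers discuss is $Q = n(n+2)\sum_{k=1}^{h}\hat\rho_k^2/(n-k)$, compared with a $\chi^2_h$ quantile. A sketch on simulated white noise, so the null holds (the critical value 18.307 is the 95% $\chi^2$ quantile for $h=10$):

```python
import random

# Ljung-Box Q statistic for the first h lags on Gaussian white noise.
random.seed(1)
n, h = 200, 10
x = [random.gauss(0, 1) for _ in range(n)]

mean = sum(x) / n
c0 = sum((v - mean) ** 2 for v in x)

def rho(k):
    # sample autocorrelation at lag k
    return sum((x[t] - mean) * (x[t + k] - mean) for t in range(n - k)) / c0

q = n * (n + 2) * sum(rho(k) ** 2 / (n - k) for k in range(1, h + 1))
chi2_crit = 18.307           # 95% quantile of chi-square with 10 df
reject = q > chi2_crit       # should usually be False under the null
```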
10,395 | How many lags to use in the Ljung-Box test of a time series? | Before you zero-in on the "right" h (which appears to be more of an opinion than a hard rule), make sure the "lag" is correctly defined.
http://www.stat.pitt.edu/stoffer/tsa2/Rissues.htm
Quoting the section below Issue 4 in the above link:
"....The p-values shown for the Ljung-Box statistic plot are incorrect because t...
10,396 | How many lags to use in the Ljung-Box test of a time series? | The thread "Testing for autocorrelation: Ljung-Box versus Breusch-Godfrey" shows that the Ljung-Box test is essentially inapplicable in the case of an autoregressive model. It also shows that the Breusch-Godfrey test should be used instead. That limits the relevance of your question and the answers (although the answers ma...
10,397 | How many lags to use in the Ljung-Box test of a time series? | The two most common settings are $\min(20,T-1)$ and $\ln T$ where $T$ is the length of the series, as you correctly noted.
The first one is supposed to be from the authoritative book by Box, Jenkins, and Reinsel, Time Series Analysis: Forecasting and Control. 3rd ed. Englewood Cliffs, NJ: Prentice Hall, 1994. However, h...
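The two rules of thumb compared in this answer are easy to tabulate (a sketch; rounding $\ln T$ to the nearest integer is my assumption, since conventions vary):

```python
import math

def h_box_jenkins(T):
    # min(20, T - 1), attributed above to Box, Jenkins, and Reinsel
    return min(20, T - 1)

def h_log(T):
    # ln T, rounded to the nearest integer (assumption)
    return round(math.log(T))

lags = {T: (h_box_jenkins(T), h_log(T)) for T in (50, 100, 500)}
```

Even at modest series lengths the two rules disagree by a factor of three or more, which is part of why the choice of $h$ matters.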
10,398 | How many lags to use in the Ljung-Box test of a time series? | Escanciano and Lobato constructed a portmanteau test with automatic, data-driven lag selection based on the Pierce-Box test and its refinements (which include the Ljung-Box test).
The gist of their approach is to combine the AIC and BIC criteria --- common in the identification and estimation of ARMA models --- to se...
10,399 | How many lags to use in the Ljung-Box test of a time series? | ... h should be as small as possible to preserve whatever power the LB test may have under the circumstances. As h increases the power drops. The LB test is a dreadfully weak test; you must have a lot of samples; n must be ~> 100 to be meaningful. Unfortunately I have never seen a better test. But perhaps one exists....
10,400 | How many lags to use in the Ljung-Box test of a time series? | There's no correct answer to this that works in all situations, for the reasons others have said: it will depend on your data.
That said, after trying to figure out how to reproduce a Stata result in R, I can tell you that by default the Stata implementation uses: $\mathrm{min}(\frac{n}{2}-2, 40)$. Either half the number of da...
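The Stata default quoted in this answer can be written down directly (a sketch; using integer division for $n/2$ is my assumption):

```python
def stata_default_lag(n):
    """min(n/2 - 2, 40), the default lag reported above for Stata's
    portmanteau test (integer division for n/2 is an assumption)."""
    return min(n // 2 - 2, 40)

small, large = stata_default_lag(30), stata_default_lag(100)
```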