idx | question | answer |
|---|---|---|
12,501 | Simulate constrained normal on lower or upper bound in R | This is called a truncated normal distribution:
http://en.wikipedia.org/wiki/Truncated_normal_distribution
Christian Robert wrote about an approach to doing it for a variety of situations (using different methods depending on where the truncation points were) here:
Robert, C.P. (1995) "Simulation of truncated normal variables... |
12,502 | Simulate constrained normal on lower or upper bound in R | Following on from @glen_b's references and focussing exclusively on the R implementation.
There are a couple of functions designed to sample from a truncated normal distribution:
rtruncnorm(100, a=-Inf, b=5, mean=3, sd=2) in the truncnorm package
rtnorm(100, 3, 2, upper=5) in the msm package |
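A plain-Python sketch of what a call like rtruncnorm(100, a=-Inf, b=5, mean=3, sd=2) returns (my illustration, not part of the original answer): draw from N(3, 2) and keep only draws inside the bounds. Rejection sampling works fine when the bounds cut off little mass; the R packages above use more efficient algorithms near extreme truncation points.

```python
# Hypothetical helper mirroring rtruncnorm's argument names; simple
# rejection sampling: redraw until the value falls inside [a, b].
import random

def rtruncnorm(n, a=float("-inf"), b=float("inf"), mean=0.0, sd=1.0, rng=random):
    out = []
    while len(out) < n:
        x = rng.gauss(mean, sd)
        if a <= x <= b:          # keep only draws inside the bounds
            out.append(x)
    return out

random.seed(1)
samples = rtruncnorm(100, b=5, mean=3, sd=2)  # upper-bounded at 5
```

Every returned draw respects the truncation by construction, at the cost of redrawing rejected values.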
12,503 | Simulate constrained normal on lower or upper bound in R | An example of using the inverse CDF (quantile function) as suggested by @Glen_b.
You can use runif to generate random quantiles and then pass these quantiles to e.g. qnorm (or the quantile function of any other distribution) to find the values these quantiles correspond to for the given distribution.
If you only generate quantiles within a spec... |
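The inverse-CDF trick described above can be sketched in Python using only the standard library's statistics.NormalDist (an equivalent of runif/qnorm, not code from the answer): draw uniform quantiles restricted to [F(a), F(b)] and map them back through the quantile function, so every draw lands in [a, b] by construction.

```python
import random
from statistics import NormalDist

def rtruncnorm_invcdf(n, a, b, mean, sd, rng=random):
    d = NormalDist(mean, sd)
    lo, hi = d.cdf(a), d.cdf(b)          # quantile range allowed by [a, b]
    # uniform quantile in [lo, hi] -> inverse CDF -> value guaranteed in [a, b]
    return [d.inv_cdf(rng.uniform(lo, hi)) for _ in range(n)]

random.seed(7)
draws = rtruncnorm_invcdf(100, a=float("-inf"), b=5, mean=3, sd=2)
```

No rejection step is needed here; the restriction of the quantile range does all the work.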
12,504 | Difference between panel data & mixed model | Both panel data and mixed effect model data deal with double indexed random variables $y_{ij}$. The first index is for the group, the second is for individuals within the group. For panel data the second index is usually time, and it is assumed that we observe individuals over ti... |
12,505 | Difference between panel data & mixed model | I understand you're looking for a text that describes mixed modelling theory without reference to a software package.
I would recommend Multilevel Analysis: An Introduction to Basic and Advanced Multilevel Modeling by Tom Snijders and Roel Bosker, about 250 pp.
It has a chapter on software at the end (which is somewha... |
12,506 | Difference between panel data & mixed model | @mpiktas has given a thorough answer. I would also suggest reading the "plm versus nlme and lme4" section of the documentation for the plm package in R. The authors' discussion of the difference between mixed models and panel data is worth a read. |
12,507 | Difference between panel data & mixed model | I too have wondered about the difference between the two, and having recently found a reference on this topic I understand that "panel data" is a traditional name for datasets that represent a "cross-section or group of people who are surveyed periodically over a given ti... |
12,508 | Difference between panel data & mixed model | In my experience, the rationale for using 'panel econometrics' is that the panel 'fixed effects' estimators can be used to control for various forms of omitted variable bias.
However, it is possible to perform this type of estimation within a multilevel model using a Mundlak-type approach, i.e. including the group mea... |
12,509 | Difference between panel data & mixed model | If you use Stata, Multilevel and Longitudinal Models Using Stata by Sophia Rabe-Hesketh and Anders Skrondal would be a good choice. Depending on what exactly you are interested in, 200 pages might be about right. |
12,510 | Comparison between MaxEnt, ML, Bayes and other kind of statistical inference methods | MaxEnt and Bayesian inference methods correspond to different ways of incorporating information into your modeling procedure. Both can be put on axiomatic ground (John Skilling's "Axioms of Maximum Entropy" and Cox's "Algebra of Probable Inference").
The Bayesian approach is straightforward to apply if your prior knowledge... |
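A small numeric illustration of MaxEnt in action (my example, not from the answer above): the maximum-entropy distribution over die faces 1..6 subject to a mean constraint E[X] = 4.5 (Jaynes' Brandeis dice problem). MaxEnt gives p_k proportional to exp(lambda * k); the Lagrange multiplier lambda is found here by bisection so the constraint holds.

```python
import math

faces = range(1, 7)

def mean_for(lam):
    # mean of the tilted distribution p_k ∝ exp(lam * k); increasing in lam
    w = [math.exp(lam * k) for k in faces]
    z = sum(w)
    return sum(k * wk for k, wk in zip(faces, w)) / z

lo, hi = -10.0, 10.0            # bisection bracket for lambda
target = 4.5
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_for(mid) < target:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
weights = [math.exp(lam * k) for k in faces]
z = sum(weights)
probs = [w / z for w in weights]  # MaxEnt distribution with mean 4.5
```

Because the target mean exceeds the uniform mean 3.5, lambda comes out positive and the probabilities increase with the face value.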
12,511 | Comparison between MaxEnt, ML, Bayes and other kind of statistical inference methods | For an entertaining critique of maximum entropy methods, I'd recommend reading some old newsgroup posts on sci.stat.math and sci.stat.consult, particularly the ones by Radford Neal:
How informative is the Maximum Entropy method? (1994)
Maximum Entropy Imputation (2002)
Explanation of Maximum Entropy (2004)
I'm not ... |
12,512 | Comparison between MaxEnt, ML, Bayes and other kind of statistical inference methods | It is true that in the past, MaxEnt and Bayes have dealt with different types or forms of information. I would say that Bayes uses "hard" constraints as well, though: the likelihood.
In any case, it is not an issue anymore, as Bayes' rule (not the product rule) can be obtained from Maximum relative Entropy (MrE), and not ... |
12,513 | Difference in Difference method: how to test for assumption of common trend between treatment and control group? | The typical thing to do is visual inspection of the pre-treatment trends for the control and treatment groups. This is particularly easy if you only have those two groups given a single binary treatment. Ideally the pre-treatment trends should look something like this:
This graph was taken from a previous answer to th... |
12,514 | Difference in Difference method: how to test for assumption of common trend between treatment and control group? | There is a good way to verify whether the common pre-trend assumption is reasonable in a difference-in-differences framework with two groups and two periods. But it is necessary to have data for more than one pre-treatment period (sometimes the DiD with two periods performs better than the DiD with multiple periods).
Con... |
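The placebo check described above can be sketched numerically (illustrative numbers of my own, not data from the answer): run the difference-in-differences calculation on two pre-treatment periods only; a placebo "effect" near zero supports the common-trend assumption.

```python
# Group means at t = -2 and t = -1, both before treatment (hypothetical).
pre = {
    "treated": {-2: 10.0, -1: 11.0},
    "control": {-2: 7.0, -1: 8.0},
}

# Placebo DiD: change in treated minus change in control, pre-treatment only.
placebo_did = (pre["treated"][-1] - pre["treated"][-2]) - (
    pre["control"][-1] - pre["control"][-2]
)
# placebo_did == 0.0 here: both groups trend upward by the same amount,
# consistent with common pre-trends.
```

In practice one would estimate this with a regression including leads of the treatment indicator and test that their coefficients are jointly zero.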
12,515 | Why does the L2 norm loss have a unique solution and the L1 norm loss have possibly multiple solutions? | Let's consider a one-dimensional problem for the simplest possible exposition. (Higher-dimensional cases have similar properties.)
While both $|x-\mu|$ and $(x-\mu)^2$ each have a unique minimum, $\sum_i |x_i-\mu|$ (a sum of absolute value functions with different x-offsets) often doesn't. Consider $x_1=1$ and $x_2=3$... |
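The claim above can be checked numerically for $x_1=1$, $x_2=3$ (a quick sketch): the L1 objective $|x_1-\mu| + |x_2-\mu|$ is flat on $[1, 3]$, so every $\mu$ in that interval minimizes it, while the L2 objective has the single minimizer $\mu = 2$ (the mean).

```python
l1 = lambda mu: abs(1 - mu) + abs(3 - mu)       # sum of absolute deviations
l2 = lambda mu: (1 - mu) ** 2 + (3 - mu) ** 2   # sum of squared deviations

# L1 is constant (= 2) across the whole interval [1, 3]: many minimizers.
assert l1(1.0) == l1(2.0) == l1(3.0) == 2.0
# L2 is strictly smaller at the mean than at nearby points: unique minimizer.
assert l2(2.0) < l2(1.9) and l2(2.0) < l2(2.1)
```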
12,516 | Why does the L2 norm loss have a unique solution and the L1 norm loss have possibly multiple solutions? | Minimizing the L2 loss corresponds to calculating the arithmetic mean, which is unambiguous, while minimizing the L1 loss corresponds to calculating the median, which is ambiguous if an even number of elements are included in the median calculation (see Central tendency: Solutions to variational problems). |
12,517 | Keras: why does loss decrease while val_loss increase? | (This may be a duplicate.) It looks like your model is overfitting, that is, just memorizing the training data. In general a model that overfits can be improved by adding more dropout, or by training and validating on a larger data set. Explain more about the data/features and the model for further ideas. |
12,518 | Keras: why does loss decrease while val_loss increase? | Perhaps your training dataset has different properties than your validation dataset. It's like training a network to distinguish between a chicken and an airplane, but then you show it an apple. The more you train it, the better it is at distinguishing chickens from airplanes, but also the worse it is when it is shown ... |
12,519 | What are differences between the terms "time series analysis" and "longitudinal data analysis" | I doubt there are strict, formal definitions that a wide range of data analysts agree on.
In general, however, "time series" connotes a single study unit observed at regular intervals over a very long period of time. A prototypical example would be the annual GDP growth of a country over decades or even more than a hun... |
12,520 | What are differences between the terms "time series analysis" and "longitudinal data analysis" | There are roughly three kinds of datasets:
cross section: different subjects at the same time; think of it as one row with many columns corresponding to different subjects;
time series: the same subject at different times; think of it as one column with rows corresponding to different time points;
panel (longitudinal)... |
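The three shapes listed above can be made concrete with a toy illustration (hypothetical numbers of my own): the same kind of measurement indexed by subject, by time, or by both.

```python
# cross section: many subjects, one time point (one "row")
cross_section = {"alice": 5.0, "bob": 7.0}

# time series: one subject, many time points (one "column")
time_series = {2001: 5.0, 2002: 5.5, 2003: 6.1}

# panel (longitudinal): many subjects observed at many time points
panel = {
    ("alice", 2001): 5.0, ("alice", 2002): 5.5,
    ("bob", 2001): 7.0, ("bob", 2002): 7.2,
}
```

A panel is thus indexed by a (subject, time) pair, which is exactly the double index $y_{ij}$ used in the panel/mixed-model discussion earlier.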
12,521 | What are differences between the terms "time series analysis" and "longitudinal data analysis" | These two terms might not be related in the way the OP assumes--i.e., I don't think they are competing modes of analysis.
Instead, time-series analysis describes a set of lower-level techniques which might be useful for analyzing data in a longitudinal study.
The object of study in time series analysis is some time-depen... |
12,522 | What are differences between the terms "time series analysis" and "longitudinal data analysis" | What Are Longitudinal Data?
Longitudinal data, sometimes referred to as panel data, track the same sample at different points in time. The sample can consist of individuals, households, establishments, and so on. In contrast, repeated cross-sectional data, which also provide long-term data, give the same survey to di... |
12,523 | What are differences between the terms "time series analysis" and "longitudinal data analysis" | To make it simple I will assume a study of individuals, but the same applies to any unit of analysis. It isn't complicated: time series is data collected over time, usually implying the same measurement from an equivalent population at separate time intervals - or collected continuously but analyzed at timed intervals.... |
12,524 | What is meant by the standard error of a maximum likelihood estimate? | The other answer has covered the derivation of the standard error; I just want to help you with notation:
Your confusion is due to the fact that in statistics we use exactly the same symbol to denote the estimator (which is a function) and a specific estimate (which is the value that the estimator takes when it receives ... |
12,525 | What is meant by the standard error of a maximum likelihood estimate? | $\hat{\alpha}$ -- a maximum likelihood estimator -- is a function of a random sample, and so is also random (not fixed). An estimate of the standard error of $\hat{\alpha}$ could be obtained from the Fisher information,
$$
I(\theta) = -\mathbb{E}\left[ \left. \frac{\partial^2 \mathcal{L}(\theta|Y = y)}{\partial \theta^2} \right|_{\theta} \right]
$$
... |
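A worked sketch of this recipe for a Bernoulli(p) sample (my example, not from the answer): the per-observation Fisher information is $I(p) = 1/(p(1-p))$, so the estimated standard error of the MLE is $\sqrt{1/(n\,I(\hat{p}))} = \sqrt{\hat{p}(1-\hat{p})/n}$, evaluated at the MLE.

```python
import math

data = [1, 0, 1, 1, 0, 1, 1, 1, 0, 1]   # n = 10 Bernoulli draws (hypothetical)
n = len(data)

p_hat = sum(data) / n                    # MLE: the sample proportion (0.7)
fisher = 1.0 / (p_hat * (1.0 - p_hat))   # per-observation Fisher information
se = math.sqrt(1.0 / (n * fisher))       # estimated standard error of p_hat
```

The same pattern works for other models: differentiate the log-likelihood twice, take the negative expectation, plug in the MLE, and invert.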
12,526 | What is autocorrelation function? | Unlike regular sampling data, time-series data are ordered. Therefore, there is extra information about your sample that you could take advantage of, if there are useful temporal patterns. The autocorrelation function is one of the tools used to find patterns in the data. Specifically, the autocorrelation function tell...
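As a concrete illustration (my own sketch, not part of the answer; the AR(1) coefficient 0.7 and the seed are arbitrary), the sample autocorrelation at a lag can be computed directly, and a series where each value depends on the previous one shows strong correlation at short lags:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    # AR(1) recursion: each point depends on the previous one.
    x[t] = 0.7 * x[t - 1] + rng.normal()

def acf(series, lag):
    # Sample autocorrelation: lag-k autocovariance over the variance.
    s = series - series.mean()
    return (s[:-lag] * s[lag:]).sum() / (s * s).sum()

r1, r5 = acf(x, 1), acf(x, 5)
print(round(r1, 3), round(r5, 3))
```

For this series the lag-1 value is near 0.7 and the lag-5 value near 0.7^5, i.e. the temporal pattern decays but remains visible.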
12,527 | What is autocorrelation function? | Let me give you another perspective.
Plot the lagged values of a time series against the current values of the time series.
If the graph you see is linear, it means there is a linear dependence between the current values of the time series and the lagged values of the time series.
Autocorrelation values are the most obvi...
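The lag-plot idea can be checked numerically (an illustrative sketch of mine, not from the answer; the MA(1) coefficient 0.8 is arbitrary): the linearity of the lag plot is summarised by the correlation between the series and its lagged copy, so an MA(1) series is correlated at lag 1 but not at lag 2:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3000
noise = rng.normal(size=n)
# MA(1) series: correlated with its first lag only.
x = noise[1:] + 0.8 * noise[:-1]

# A lag plot is the scatter of (x[t-lag], x[t]); its "linearity" is
# summarised by the correlation between the series and its lagged copy.
lag1 = np.corrcoef(x[:-1], x[1:])[0, 1]
lag2 = np.corrcoef(x[:-2], x[2:])[0, 1]
print(round(lag1, 3), round(lag2, 3))
```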
12,528 | What does "unbiasedness" mean? | You can find everything here. However, here is a brief answer.
Let $\mu$ and $\sigma^2$ be the mean and the variance of interest; you wish to estimate $\sigma^2$ based on a sample of size $n$.
Now, let us say you use the following estimator:
$S^2 = \frac{1}{n} \sum_{i=1}^n (X_{i} - \bar{X})^2$,
where $\bar{X} = \frac{1}{n} \sum_{i=1}^n X_i$.
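A quick simulation (my addition, not part of the answer; the values $\sigma^2 = 4$ and $n = 5$ are arbitrary) shows the bias: dividing by $n$ underestimates $\sigma^2$, while dividing by $n-1$ does not, and the mean of $S^2$ lands near $(n-1)/n \cdot \sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma2 = 4.0            # true variance
n, reps = 5, 200_000    # small samples, many repetitions

samples = rng.normal(0.0, np.sqrt(sigma2), size=(reps, n))
xbar = samples.mean(axis=1, keepdims=True)
ss = ((samples - xbar) ** 2).sum(axis=1)

mean_biased = (ss / n).mean()          # divide by n:   E = (n-1)/n * sigma^2
mean_unbiased = (ss / (n - 1)).mean()  # divide by n-1: E = sigma^2

print(round(mean_biased, 3), round(mean_unbiased, 3))
```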
12,529 | What does "unbiasedness" mean? | This response clarifies ocram's answer. The key reason (and common misunderstanding) for $E[S^2] \neq \sigma^2$ is that $S^2$ uses the estimate $\bar{X}$ which is itself estimated from data.
If you work through the derivation, you will see that the variance of this estimate $E[(\bar{X}-\mu)^2]$ is exactly what gives t...
12,530 | What does "unbiasedness" mean? | The explanation that @Ocram gave is great. To explain what he said in words: if we calculate $s^2$ by dividing just by $n$ (which is intuitive), our estimate of $\sigma^2$ will be an underestimate. To compensate, we divide by $n-1$.
Here's an exercise: Make up a discrete probability with 2 outcomes, say $P(2) = .25$ and...
12,531 | What does "unbiasedness" mean? | Generally using "n" in the denominator gives smaller values than the population variance, which is what we want to estimate. This especially happens if small samples are taken. In the language of statistics, we say that the sample variance provides a “biased” estimate of the population variance and needs to be made ...
12,532 | Transforming proportion data: when arcsin square root is not enough | Sure. John Tukey describes a family of (increasing, one-to-one) transformations in EDA. It is based on these ideas:
To be able to extend the tails (towards 0 and 1) as controlled by a parameter.
Nevertheless, to match the original (untransformed) values near the middle ($1/2$), which makes the transformation easier ...
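One common way to write Tukey's folded transformations is $f_\lambda(p) = p^\lambda - (1-p)^\lambda$, rescaled so every member matches $p - 1/2$ near the middle; the sketch below is my illustration of that family (the grid of $\lambda$ values and test points is arbitrary), showing how smaller $\lambda$ stretches the tails:

```python
import numpy as np

def folded(p, lam):
    # Tukey-style folded transformation of a proportion p in (0, 1),
    # rescaled to behave like p - 1/2 near the middle.  lam = 1 is the
    # (rescaled) identity, lam = 0.5 the "froot", lam = 0 the "flog" (logit).
    p = np.asarray(p, dtype=float)
    if lam == 0:
        raw = np.log(p) - np.log(1.0 - p)
        scale = 4.0                              # derivative of the logit at 1/2
    else:
        raw = p**lam - (1.0 - p) ** lam
        scale = 2.0 * lam * 0.5 ** (lam - 1.0)   # derivative at 1/2
    return raw / scale

p = np.array([0.01, 0.25, 0.5, 0.75, 0.99])
for lam in (1.0, 0.5, 0.0):
    print(lam, folded(p, lam).round(3))  # smaller lam stretches the tails more
```

Every member is antisymmetric about $1/2$ and agrees with the others near $1/2$, which is exactly the matching property the answer describes.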
12,533 | Transforming proportion data: when arcsin square root is not enough | One way is to use an indexed transformation. One general way is to use any symmetric (inverse) cumulative distribution function, so that $F(0)=0.5$ and $F(x)=1-F(-x)$. One example is the standard Student t distribution with $\nu$ degrees of freedom. The parameter $\nu$ controls how quickly the transfor...
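A sketch of the idea (mine, not the answer's code): transform a proportion $p$ to $F^{-1}(p)$ for a symmetric $F$. The answer proposes a Student t quantile with $\nu$ as the tail-control parameter (scipy.stats.t.ppf would give that version); the stdlib normal quantile is used here as a stand-in:

```python
from statistics import NormalDist

# Map a proportion p to F^-1(p) for a symmetric F with F(0) = 0.5.
# The standard normal quantile (probit) is the simplest such F;
# a Student t quantile would stretch the tails more for small nu.
probit = NormalDist().inv_cdf

for p in (0.01, 0.25, 0.5, 0.75, 0.99):
    print(p, round(probit(p), 3))
```

Note the defining symmetry in action: $F^{-1}(1/2) = 0$ and $F^{-1}(p) = -F^{-1}(1-p)$.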
12,534 | Interpretation of log transformed predictors in logistic regression | If you exponentiate the estimated coefficient, you'll get an odds ratio associated with a $b$-fold increase in the predictor, where $b$ is the base of the logarithm you used when log-transforming the predictor.
I usually choose to take logarithms to base 2 in this situation, so I can interpret the exponentiated coeffic...
12,535 | Interpretation of log transformed predictors in logistic regression | @gung is completely correct, but, in case you do decide to keep it, you can interpret the coefficient as having an effect on each multiple of the IV, rather than each addition of the IV.
One IV that often should be transformed is income. If you included it untransformed, then each (say) \$1,000 increase in income wou...
12,536 | Interpretation of log transformed predictors in logistic regression | This answer is adapted from The Statistical Sleuth by Fred L. Ramsey and Daniel W. Schafer.
If your model equation is:
$\log(p/(1-p)) = \beta_0 + \beta \log(X)$
Then, each $k$-fold increase in $X$ is associated with a change in the odds by a multiplicative factor of $k^{\beta }$.
For example, I have the following mod...
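The $k^{\beta}$ claim is easy to verify numerically (my sketch; the coefficients below are illustrative, not from any fitted model): with log-odds $b_0 + b\ln(X)$, multiplying $X$ by $k$ multiplies the odds by $k^b$, regardless of the baseline value of $X$:

```python
import math

b0, b = -1.0, 0.5  # illustrative coefficients, not from any fitted model

def odds(x):
    # Odds implied by the model: log-odds = b0 + b * ln(x).
    return math.exp(b0 + b * math.log(x))

k = 10.0
for baseline in (2.0, 3.0, 50.0):
    ratio = odds(k * baseline) / odds(baseline)
    print(baseline, ratio, k**b)  # the ratio equals k**b for every baseline
```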
12,537 | Interpretation of log transformed predictors in logistic regression | The general model is
$\ln(p/(1-p)) = \beta_0 + \beta \log_k(x)$
for some $k$, which could be $e$. I start by explaining the case of $k=e$, then consider general $k$.
Case 1: $k=e$, i.e. natural log transformed independent variable. Then if $\beta$ is close to zero we can say "a 1% increase in $x$ leads to a $\beta$ p...
12,538 | Interpretation of log transformed predictors in logistic regression | Model
Assume the following model
$$
y_i \sim \text{Binomial}(n_i, p_i) \\
\log\left(\frac{p_i}{1-p_i}\right) = \eta = \beta_0 + \beta_1 \log(x_i).
$$
How can we interpret the coefficient $\beta_1$?
Odds Ratio
We calculate the odds ratio between response $i$ and $j$. First note that the log odds ratio is given by
\begi...
12,539 | What is the sum of squared t variates? | Answering the first question.
We could start from the fact noted by mpiktas, that $t^2 \sim F(1, n)$. And then try a simpler step at first - search for the distribution of a sum of two random variables distributed by $F(1,n)$. This could be done either by calculating the convolution of two random variables, or calc...
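The starting fact $t^2 \sim F(1, n)$ can be checked by simulation (my sketch, not part of the answer; $n = 8$ and the seed are arbitrary): empirical quantiles of squared $t_n$ draws should match those of $F(1, n)$ draws:

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 8, 100_000

t_sq = rng.standard_t(df=n, size=reps) ** 2   # squared t_n draws
f_var = rng.f(dfnum=1, dfden=n, size=reps)    # F(1, n) draws

# If t^2 ~ F(1, n), empirical quantiles of the two samples should agree.
qs = [0.25, 0.5, 0.75, 0.9]
print(np.quantile(t_sq, qs).round(3))
print(np.quantile(f_var, qs).round(3))
```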
12,540 | What is the sum of squared t variates? | It's not even a close approximation. For small $n$, the expectation of $T$ equals $\frac{k n}{n-2}$ whereas the expectation of $\chi^2(k)$ equals $k$. When $k$ is small (less than 10, say) histograms of $\log(T)$ and of $\log(\chi^2(k))$ don't even have the same shape, indicating that shifting and rescaling $T$ still...
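A simulation (added for illustration; $n = 5$, $k = 4$, and the seed are arbitrary choices with $n > 4$ so the mean of $T$ has finite variance) confirms the expectation $kn/(n-2)$ rather than $k$:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, reps = 5, 4, 200_000

# T = sum of k independent squared t_n variates.
T = (rng.standard_t(df=n, size=(reps, k)) ** 2).sum(axis=1)

print(round(T.mean(), 3), k * n / (n - 2), k)  # mean ~ kn/(n-2), not k
```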
12,541 | What is the sum of squared t variates? | I'll answer the second question. The central limit theorem is for any iid sequence, squared or not squared. So in your case if $k$ is sufficiently large we have
$\dfrac{T-kE(t_1^2)}{\sqrt{kVar(t_1^2)}}\sim N(0,1)$
where $Et_1^2$ and $Var(t_1^2)$ are respectively the mean and variance of the squared Student t distribution with $...
12,542 | Is it possible to automate time series forecasting? | First you need to note that the approach outlined by IrishStat is specific to ARIMA models, not to any generic set of models.
To answer your main question "Is it possible to automate time series forecasting?":
Yes it is. In my field of demand forecasting, most commercial forecasting packages do so. Several open sourc...
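A minimal sketch of what such automation does (my own toy example; the candidate forecasters, synthetic trending series, and evaluation window are invented for illustration, and real packages are far more sophisticated): fit several candidate forecasters, score them out-of-sample with a rolling origin, and keep the winner:

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(120)
y = 10 + 0.5 * t + rng.normal(scale=2.0, size=t.size)  # toy trending series

# Candidate one-step-ahead forecasters.
candidates = {
    "naive": lambda h: h[-1],
    "mean": lambda h: h.mean(),
    "drift": lambda h: h[-1] + (h[-1] - h[0]) / (len(h) - 1),
}

# Rolling-origin evaluation: forecast each held-out point from its own history.
errors = {name: [] for name in candidates}
for origin in range(80, len(y)):
    history, actual = y[:origin], y[origin]
    for name, forecast in candidates.items():
        errors[name].append((forecast(history) - actual) ** 2)

mse = {name: float(np.mean(e)) for name, e in errors.items()}
best = min(mse, key=mse.get)
print(mse, best)
```

On a trending series the unconditional-mean forecaster loses badly, which is the kind of decision an automated selector makes without human input.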
12,543 | Is it possible to automate time series forecasting? | My suggested approach encompasses models that are much more general than ARIMA as they include the potential for seasonal dummies that may change over time, multiple levels, multiple trends, parameters that may change over time and even error variances that may change over time. This family is more precisely called A...
12,544 | Is it possible to automate time series forecasting? | Short Answer
While it could be possible to do something like this, in many cases you are probably better off forecasting time series using a more manual approach.
Long Answer
The approach you describe is similar to what is seen in the machine learning community where a tremendous amount of focus is put on model selecti...
12,545 | Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique? | Yes. Unlike what other answers state, 'typical' machine-learning methods such as nonparametrics and (deep) neural networks can help create better MCMC samplers.
The goal of MCMC is to draw samples from an (unnormalized) target distribution $f(x)$. The obtained samples are used to approximate $f$ and mostly allow to com...
12,546 | Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique? | A method that could connect the two concepts is that of a multivariate Metropolis Hastings algorithm. In this case, we have a target distribution (the posterior distribution) and a proposal distribution (typically a multivariate normal or t-distribution).
A well known fact is that the further the proposal distribution...
12,547 | Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique? | Machine Learning is concerned with prediction, classification, or clustering in a supervised or unsupervised setting. On the other hand, MCMC is simply concerned with evaluating a complex integral (usually with no closed form) using probabilistic numerical methods. Metropolis sampling is definitely not the most common...
12,548 | Can Machine Learning or Deep Learning algorithms be utilised to "improve" the sampling process of a MCMC technique? | There were some recent works in computational physics where the authors used Restricted Boltzmann Machines to model a probability distribution and then propose (hopefully) efficient Monte Carlo updates arXiv:1610.02746. The idea here turns out to be quite similar to the references given by @lacerbi above.
In anot...
12,549 | In a random forest, is larger %IncMSE better or worse? | %IncMSE is the most robust and informative measure.
It is the increase in MSE of predictions (estimated with out-of-bag CV) as a result of variable j being permuted (values randomly shuffled).
grow regression forest. Compute OOB-mse, name this mse0.
for 1 to j var: permute values of column j, then predict and compute OO...
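The recipe above can be sketched outside of R (my illustration: ordinary least squares on a synthetic train/test split stands in for the random forest with out-of-bag predictions, and the data are invented; the permutation logic is the same):

```python
import numpy as np

rng = np.random.default_rng(8)
n = 2000
X = rng.normal(size=(n, 3))
# y depends strongly on column 0, weakly on column 1, not at all on column 2.
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Stand-in model: OLS fitted on the first half, evaluated on the second.
X_tr, X_te, y_tr, y_te = X[:1000], X[1000:], y[:1000], y[1000:]
beta, *_ = np.linalg.lstsq(np.c_[np.ones(1000), X_tr], y_tr, rcond=None)

def mse(X_eval):
    return float((((np.c_[np.ones(len(X_eval)), X_eval] @ beta) - y_te) ** 2).mean())

mse0 = mse(X_te)  # baseline error ("mse0" in the recipe above)
inc_mse = []
for j in range(3):
    Xp = X_te.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the column/target link
    inc_mse.append(100.0 * (mse(Xp) - mse0) / mse0)  # percent increase

print([round(v, 1) for v in inc_mse])  # importance: col 0 >> col 1 >> col 2
```

Permuting an informative column inflates the error a lot, permuting a useless one barely moves it, which is why a larger %IncMSE means a more important variable.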
12,550 | Proof of closeness of kernel functions under pointwise product | By point-wise product, I assume you mean that if $k_1(x,y), k_2(x,y)$ are both valid kernel functions, then their product
\begin{align}
k_{p}(x, y) = k_1(x, y) k_2(x, y)
\end{align}
is also a valid kernel function.
Proving this property is rather straightforward when we invoke Mercer's theorem. Since $k_1, k_2$ are va...
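The fact behind the proof can be checked numerically (my sketch; the RBF and polynomial kernels and the 30 random points are arbitrary examples): the element-wise product of two Gram matrices built from valid kernels has no eigenvalue below zero, up to floating-point noise:

```python
import numpy as np

rng = np.random.default_rng(9)
x = rng.uniform(-2.0, 2.0, size=30)

# Gram matrices of two valid kernels on the same 30 points.
K1 = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2)  # RBF kernel
K2 = (1.0 + x[:, None] * x[None, :]) ** 2           # polynomial kernel

Kp = K1 * K2  # element-wise (Schur/Hadamard) product

# All three should be PSD: smallest eigenvalue >= 0 up to round-off.
min_eigs = [float(np.linalg.eigvalsh(K).min()) for K in (K1, K2, Kp)]
print(min_eigs)
```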
12,551 | Proof of closeness of kernel functions under pointwise product | How about the following proof:
Source: UChicago kernel methods lecture, page 5
12,552 | Proof of closeness of kernel functions under pointwise product | Assume $K1$ and $K2$ are the kernel matrices of the two kernels $k_1(x,y)$ and $k_2(x,y)$, respectively, and they are PSD. We define $k(x,y) = k_1(x,y)k_2(x,y)$ and want to prove it is also a kernel. This is equivalent to proving that its corresponding kernel matrix $K = K1 \circ K2$ is PSD.
$K_3 = K1 \otimes K2$ is a PSD (Th... | Proof of closeness of kernel functions under pointwise product | Assume $K1$ and $K2$ are the kernel matrix of these two kernel $k_1(x,y)$ and $k_2(x,y)$, respectively, and they are PSD. We define $k(x,y) = k_1(x,y)k_2(x,y)$ and want to prove it is also a kernel. T | Proof of closeness of kernel functions under pointwise product
Assume $K1$ and $K2$ are the kernel matrix of these two kernel $k_1(x,y)$ and $k_2(x,y)$, respectively, and they are PSD. We define $k(x,y) = k_1(x,y)k_2(x,y)$ and want to prove it is also a kernel. This is equivalent to prove its corresponding kernel matri... | Proof of closeness of kernel functions under pointwise product
Assume $K1$ and $K2$ are the kernel matrix of these two kernel $k_1(x,y)$ and $k_2(x,y)$, respectively, and they are PSD. We define $k(x,y) = k_1(x,y)k_2(x,y)$ and want to prove it is also a kernel. T |
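The Kronecker-product step above can be made concrete: $K1 \circ K2$ sits inside $K1 \otimes K2$ as a principal submatrix (rows and columns indexed by $(i, i)$), and any principal submatrix of a PSD matrix is PSD. A NumPy illustration of that identity:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 4))
K1, K2 = A @ A.T, B @ B.T                   # two arbitrary PSD matrices

n = K1.shape[0]
K3 = np.kron(K1, K2)                        # Kronecker product, PSD

# Rows/columns (i*n + i) of K3 pick out the entries K1[i, k] * K2[i, k] ...
idx = [i * n + i for i in range(n)]
submatrix = K3[np.ix_(idx, idx)]

# ... which is exactly the Hadamard (element-wise) product K1 * K2,
# hence PSD as a principal submatrix of a PSD matrix.
same = np.allclose(submatrix, K1 * K2)
```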
12,553 | Reason to normalize in euclidean distance measures in hierarchical clustering | It depends on your data. And actually it has nothing to do with hierarchical clustering, but with the distance functions themselves.
The problem is when you have mixed attributes.
Say you have data on persons. Weight in grams and shoe size. Shoe sizes differ very little, while the differences in body mass (in grams) ar... | Reason to normalize in euclidean distance measures in hierarchical clustering | It depends on your data. And actually it has nothing to do with hierarchical clustering, but with the distance functions themselves.
The problem is when you have mixed attributes.
Say you have data on | Reason to normalize in euclidean distance measures in hierarchical clustering
It depends on your data. And actually it has nothing to do with hierarchical clustering, but with the distance functions themselves.
The problem is when you have mixed attributes.
Say you have data on persons. Weight in grams and shoe size. S... | Reason to normalize in euclidean distance measures in hierarchical clustering
It depends on your data. And actually it has nothing to do with hierarchical clustering, but with the distance functions themselves.
The problem is when you have mixed attributes.
Say you have data on |
12,554 | Reason to normalize in euclidean distance measures in hierarchical clustering | If you do not standardise your data then the variables measured in large valued units will dominate the computed dissimilarity and variables that are measured in small valued units will contribute very little.
We can visualise this in R via:
set.seed(42)
dat <- data.frame(var1 = rnorm(100, mean = 100000),
... | Reason to normalize in euclidean distance measures in hierarchical clustering | If you do not standardise your data then the variables measured in large valued units will dominate the computed dissimilarity and variables that are measured in small valued units will contribute ver | Reason to normalize in euclidean distance measures in hierarchical clustering
If you do not standardise your data then the variables measured in large valued units will dominate the computed dissimilarity and variables that are measured in small valued units will contribute very little.
We can visualise this in R via:
... | Reason to normalize in euclidean distance measures in hierarchical clustering
If you do not standardise your data then the variables measured in large valued units will dominate the computed dissimilarity and variables that are measured in small valued units will contribute ver |
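The R snippet above is cut off; the same demonstration can be sketched in Python/NumPy (the standard deviations below are assumed for illustration, since the truncation hides the ones actually used):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
# Assumed scales: var1 in large-valued units with a large spread,
# var2 in small-valued units with a small spread.
var1 = rng.normal(loc=100_000, scale=1_000, size=n)
var2 = rng.normal(loc=50, scale=1.0, size=n)
X = np.column_stack([var1, var2])

def pairwise_dist(M):
    diff = M[:, None, :] - M[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=-1))

# Raw Euclidean distances are driven almost entirely by var1 ...
corr_raw = np.corrcoef(pairwise_dist(X).ravel(),
                       pairwise_dist(X[:, [0]]).ravel())[0, 1]

# ... but after z-scoring each column, both variables contribute.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
corr_std = np.corrcoef(pairwise_dist(Z).ravel(),
                       pairwise_dist(Z[:, [0]]).ravel())[0, 1]
```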
12,555 | Reason to normalize in euclidean distance measures in hierarchical clustering | Anony-Mousse gave an excellent answer. I would just add that the distance metric that makes sense would depend on the shape of the multivariate distributions. For multivariate Gaussian, the Mahalanobis distance is the appropriate measure. | Reason to normalize in euclidean distance measures in hierarchical clustering | Anony-Mousse gave an excellent answer. I would just add that the distance metric that makes sense would depend on the shape of the multivariate distributions. For multivariate Gaussian, the Mahalano | Reason to normalize in euclidean distance measures in hierarchical clustering
Anony-Mousse gave an excellent answer. I would just add that the distance metric that makes sense would depend on the shape of the multivariate distributions. For multivariate Gaussian, the Mahalanobis distance is the appropriate measure. | Reason to normalize in euclidean distance measures in hierarchical clustering
Anony-Mousse gave an excellent answer. I would just add that the distance metric that makes sense would depend on the shape of the multivariate distributions. For multivariate Gaussian, the Mahalano |
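A small sketch of that point (hypothetical data, not from the thread): the Mahalanobis distance rescales by the covariance, and with an identity covariance it collapses to plain Euclidean distance:

```python
import numpy as np

def mahalanobis(x, y, cov):
    """Mahalanobis distance between points x and y under covariance cov."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return float(np.sqrt(diff @ np.linalg.solve(cov, diff)))

x, y = [1.0, 2.0], [4.0, 6.0]

# With identity covariance it is just the Euclidean distance (here 5.0).
d_eucl = mahalanobis(x, y, np.eye(2))

# With unequal variances, the larger-variance axis counts for less.
d_scaled = mahalanobis(x, y, np.diag([9.0, 1.0]))
```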
12,556 | GLM: verifying a choice of distribution and link function | This is a variant of the frequently asked question regarding whether you can assert the null hypothesis. In your case, the null would be that the residuals are Gaussian, and visual inspection of your plots (qq-plots, histograms, etc.) constitutes the 'test'. (For a general overview of the issue of asserting the null,... | GLM: verifying a choice of distribution and link function | This is a variant of the frequently asked question regarding whether you can assert the null hypothesis. In your case, the null would be that the residuals are Gaussian, and visual inspection of your | GLM: verifying a choice of distribution and link function
This is a variant of the frequently asked question regarding whether you can assert the null hypothesis. In your case, the null would be that the residuals are Gaussian, and visual inspection of your plots (qq-plots, histograms, etc.) constitutes the 'test'. (... | GLM: verifying a choice of distribution and link function
This is a variant of the frequently asked question regarding whether you can assert the null hypothesis. In your case, the null would be that the residuals are Gaussian, and visual inspection of your |
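One way to make the "visual inspection of qq-plots" concrete is the correlation between the sorted residuals and theoretical normal quantiles: the closer to 1, the straighter the Q-Q plot. A stdlib-only Python sketch, using simulated stand-in residuals:

```python
import random
from statistics import NormalDist, mean

random.seed(0)
resid = [random.gauss(0.0, 1.0) for _ in range(200)]   # stand-in residuals

# Theoretical N(0, 1) quantiles at plotting positions (i - 0.5) / n.
n = len(resid)
theoretical = [NormalDist().inv_cdf((i - 0.5) / n) for i in range(1, n + 1)]
observed = sorted(resid)

# Pearson correlation between the two quantile sequences: values near 1
# correspond to a Q-Q plot that hugs a straight line.
mo, mt = mean(observed), mean(theoretical)
num = sum((o - mo) * (t - mt) for o, t in zip(observed, theoretical))
den = (sum((o - mo) ** 2 for o in observed)
       * sum((t - mt) ** 2 for t in theoretical)) ** 0.5
r = num / den
```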
12,557 | GLM: verifying a choice of distribution and link function | Would it be going too far to state that it validates my choice of distribution?
It kind of depends on what you mean by 'validate' exactly, but I'd say 'yes, that goes too far' in the same way that you can't really say "the null is shown to be true", (especially with point nulls, but in at least some sense more general... | GLM: verifying a choice of distribution and link function | Would it be going too far to state that it validates my choice of distribution?
It kind of depends on what you mean by 'validate' exactly, but I'd say 'yes, that goes too far' in the same way that yo | GLM: verifying a choice of distribution and link function
Would it be going too far to state that it validates my choice of distribution?
It kind of depends on what you mean by 'validate' exactly, but I'd say 'yes, that goes too far' in the same way that you can't really say "the null is shown to be true", (especially... | GLM: verifying a choice of distribution and link function
Would it be going too far to state that it validates my choice of distribution?
It kind of depends on what you mean by 'validate' exactly, but I'd say 'yes, that goes too far' in the same way that yo |
12,558 | Linear vs. nonlinear regression | "Better" is a function of your model.
Part of the reason for your confusion is you only wrote half of your model.
When you say $y=ax^b$, that's not actually true. Your observed $y$ values aren't equal to $ax^b$; they have an error component.
For example, the two models you mention (not the only possible models by any... | Linear vs. nonlinear regression | "Better" is a function of your model.
Part of the reason for your confusion is you only wrote half of your model.
When you say $y=ax^b$, that's not actually true. Your observed $y$ values aren't equa | Linear vs. nonlinear regression
"Better" is a function of your model.
Part of the reason for your confusion is you only wrote half of your model.
When you say $y=ax^b$, that's not actually true. Your observed $y$ values aren't equal to $ax^b$; they have an error component.
For example, the two models you mention (not... | Linear vs. nonlinear regression
"Better" is a function of your model.
Part of the reason for your confusion is you only wrote half of your model.
When you say $y=ax^b$, that's not actually true. Your observed $y$ values aren't equa |
12,559 | Linear vs. nonlinear regression | When you fit either model, you are assuming that the set of residuals (discrepancies between the observed and predicted values of Y) follow a Gaussian distribution. If that assumption is true with your raw data (nonlinear regression), then it won't be true for the log-transformed values (linear regression), and vice ve... | Linear vs. nonlinear regression | When you fit either model, you are assuming that the set of residuals (discrepancies between the observed and predicted values of Y) follow a Gaussian distribution. If that assumption is true with you | Linear vs. nonlinear regression
When you fit either model, you are assuming that the set of residuals (discrepancies between the observed and predicted values of Y) follow a Gaussian distribution. If that assumption is true with your raw data (nonlinear regression), then it won't be true for the log-transformed values ... | Linear vs. nonlinear regression
When you fit either model, you are assuming that the set of residuals (discrepancies between the observed and predicted values of Y) follow a Gaussian distribution. If that assumption is true with you |
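For the power-law model in question, the log transform turns $y = ax^b$ into the straight line $\log y = \log a + b \log x$; with noise-free data either fit recovers the parameters exactly, and the choice between them only matters once you decide where the error enters. A minimal sketch of the log-log fit (hypothetical parameter values):

```python
import numpy as np

a_true, b_true = 2.5, 1.7                # hypothetical parameters
x = np.linspace(1.0, 10.0, 50)
y = a_true * x ** b_true                 # noise-free power law

# OLS on the log scale: log y = log a + b * log x.
b_hat, log_a_hat = np.polyfit(np.log(x), np.log(y), 1)
a_hat = np.exp(log_a_hat)
```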
12,560 | Is it important for statisticians to learn machine learning? | Machine Learning is a specialized field of high dimensional applied statistics. It also requires considerable programming background which isn't necessary for a good quantitative program, especially at the undergraduate level but also to some extent at the graduate level. It has application only to the prediction aspec... | Is it important for statisticians to learn machine learning? | Machine Learning is a specialized field of high dimensional applied statistics. It also requires considerable programming background which isn't necessary for a good quantitative program, especially a | Is it important for statisticians to learn machine learning?
Machine Learning is a specialized field of high dimensional applied statistics. It also requires considerable programming background which isn't necessary for a good quantitative program, especially at the undergraduate level but also to some extent at the gr... | Is it important for statisticians to learn machine learning?
Machine Learning is a specialized field of high dimensional applied statistics. It also requires considerable programming background which isn't necessary for a good quantitative program, especially a |
12,561 | Is it important for statisticians to learn machine learning? | OK, let's talk about the elephant of statistics with our sight blindfolded by what we've learnt from one or two people we've closely worked with in our grad programs...
Stat programs require what they see fit, that is, what is the most important stuff they want their students to learn given a limited amount of time the... | Is it important for statisticians to learn machine learning? | OK, let's talk about the elephant of statistics with our sight blindfolded by what we've learnt from one or two people we've closely worked with in our grad programs...
Stat programs require what they | Is it important for statisticians to learn machine learning?
OK, let's talk about the elephant of statistics with our sight blindfolded by what we've learnt from one or two people we've closely worked with in our grad programs...
Stat programs require what they see fit, that is, what is the most important stuff they wa... | Is it important for statisticians to learn machine learning?
OK, let's talk about the elephant of statistics with our sight blindfolded by what we've learnt from one or two people we've closely worked with in our grad programs...
Stat programs require what they |
12,562 | Is it important for statisticians to learn machine learning? | Machine learning is about gaining knowledge/learning from data. For example, I work with machine learning algorithms that can select a few genes that may be involved in a particular type of disease from DNA Microarray data (e.g. cancers or diabetes). Scientists can then use these genes (learned models) for early diag... | Is it important for statisticians to learn machine learning? | Machine learning is about gaining knowledge/learning from data. For example, I work with machine learning algorithms that can select a few genes that may be involved in a particular type of disease f | Is it important for statisticians to learn machine learning?
Machine learning is about gaining knowledge/learning from data. For example, I work with machine learning algorithms that can select a few genes that may be involved in a particular type of disease from DNA Microarray data (e.g. cancers or diabetes). Scient... | Is it important for statisticians to learn machine learning?
Machine learning is about gaining knowledge/learning from data. For example, I work with machine learning algorithms that can select a few genes that may be involved in a particular type of disease f |
12,563 | What is a good AUC for a precision-recall curve? | There is no magic cut-off for either AUC-ROC or AUC-PR. Obviously, higher is better, and the closer you are to 1.0, the closer you are to solving the problem.
However, the meaning of "close" is entirely application dependent.
For example, if you could reliably identify profitable investments with an AUC of 0.7 or, for ... | What is a good AUC for a precision-recall curve? | There is no magic cut-off for either AUC-ROC or AUC-PR. Obviously, higher is better, and the closer you are to 1.0, the closer you are to solving the problem.
However, the meaning of "close" is entire | What is a good AUC for a precision-recall curve?
There is no magic cut-off for either AUC-ROC or AUC-PR. Obviously, higher is better, and the closer you are to 1.0, the closer you are to solving the problem.
However, the meaning of "close" is entirely application dependent.
For example, if you could reliably identify p... | What is a good AUC for a precision-recall curve?
There is no magic cut-off for either AUC-ROC or AUC-PR. Obviously, higher is better, and the closer you are to 1.0, the closer you are to solving the problem.
However, the meaning of "close" is entire |
12,564 | What is a good AUC for a precision-recall curve? | A random estimator would have a PR-AUC of 0.09 in your case (9% positive outcomes), so your 0.49 is definitely a substantial increase.
Whether this is a good result can only be assessed in comparison to other algorithms, but you didn't give details on the method/data you used.
Additionally, you might want to assess the shap... | What is a good AUC for a precision-recall curve? | A random estimator would have a PR-AUC of 0.09 in your case (9% positive outcomes), so your 0.49 is definitely a substantial increase.
Whether this is a good result can only be assessed in comparison to o | What is a good AUC for a precision-recall curve?
A random estimator would have a PR-AUC of 0.09 in your case (9% positive outcomes), so your 0.49 is definitely a substantial increase.
Whether this is a good result can only be assessed in comparison to other algorithms, but you didn't give details on the method/data you used...
A random estimator would have a PR-AUC of 0.09 in your case (9% positive outcomes), so your 0.49 is definitely a substantial increase.
If this is a good result could only be assessed in compariso to o |
12,565 | What is a good AUC for a precision-recall curve? | .49 is not great, but its interpretation is different than the ROC AUC. For ROC AUC, if you obtained a .49 using a logistic regression model, I would say you are doing no better than random. For .49 PR AUC, however it might not be that bad. I would consider looking at individual precision and recall, perhaps one or ... | What is a good AUC for a precision-recall curve? | .49 is not great, but its interpretation is different than the ROC AUC. For ROC AUC, if you obtained a .49 using a logistic regression model, I would say you are doing no better than random. For .49 | What is a good AUC for a precision-recall curve?
.49 is not great, but its interpretation is different than the ROC AUC. For ROC AUC, if you obtained a .49 using a logistic regression model, I would say you are doing no better than random. For .49 PR AUC, however it might not be that bad. I would consider looking at... | What is a good AUC for a precision-recall curve?
.49 is not great, but its interpretation is different than the ROC AUC. For ROC AUC, if you obtained a .49 using a logistic regression model, I would say you are doing no better than random. For .49 |
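The baseline quoted earlier (the PR-AUC of a random ranker is roughly the positive prevalence, here 0.09) is easiest to see next to a concrete summary of the PR curve. A sketch of average precision, one common way to approximate the area, computed by hand from a ranking:

```python
def average_precision(labels_ranked):
    """Average precision for labels sorted by decreasing score (1 = positive)."""
    n_pos = sum(labels_ranked)
    ap, tp = 0.0, 0
    for rank, label in enumerate(labels_ranked, start=1):
        if label == 1:
            tp += 1
            ap += tp / rank          # precision at each positive's rank
    return ap / n_pos

# Perfect ranking: all positives first.
perfect = average_precision([1, 1, 0, 0])      # 1.0

# Mixed ranking: precision 1/1 at rank 1 and 2/3 at rank 3, averaged.
mixed = average_precision([1, 0, 1, 0])        # (1 + 2/3) / 2 = 5/6
```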
12,566 | Understanding Kolmogorov-Smirnov test in R | The KS test is premised on testing the "sameness" of two independent samples from a continuous distribution (as the help page states). If that is the case then the probability of ties should be astonishingly small (also stated). The test statistic is the maximum distance between the ECDF's of the two samples. The p-val... | Understanding Kolmogorov-Smirnov test in R | The KS test is premised on testing the "sameness" of two independent samples from a continuous distribution (as the help page states). If that is the case then the probability of ties should be astoni | Understanding Kolmogorov-Smirnov test in R
The KS test is premised on testing the "sameness" of two independent samples from a continuous distribution (as the help page states). If that is the case then the probability of ties should be astonishingly small (also stated). The test statistic is the maximum distance betwe... | Understanding Kolmogorov-Smirnov test in R
The KS test is premised on testing the "sameness" of two independent samples from a continuous distribution (as the help page states). If that is the case then the probability of ties should be astoni |
12,567 | Understanding Kolmogorov-Smirnov test in R | To compute the D (from ks.test code):
ks.test(x,y)
Two-sample Kolmogorov-Smirnov test
data: x and y
D = 0.5, p-value = 0.1641
alternative hypothesis: two-sided
alternative <- "two.sided"
x <- x[!is.na(x)]
n <- length(x)
y <- y[!is.na(y)]
n.x <- as.double(n)
n.y <- length(y)
w <- c(x, y)
z <- cumsum(if... | Understanding Kolmogorov-Smirnov test in R | To compute the D (from ks.test code):
ks.test(x,y)
Two-sample Kolmogorov-Smirnov test
data: x and y
D = 0.5, p-value = 0.1641
alternative hypothesis: two-sided
alternative <- "two.sided"
x <- | Understanding Kolmogorov-Smirnov test in R
To compute the D (from ks.test code):
ks.test(x,y)
Two-sample Kolmogorov-Smirnov test
data: x and y
D = 0.5, p-value = 0.1641
alternative hypothesis: two-sided
alternative <- "two.sided"
x <- x[!is.na(x)]
n <- length(x)
y <- y[!is.na(y)]
n.x <- as.double(n)
n.y <... | Understanding Kolmogorov-Smirnov test in R
To compute the D (from ks.test code):
ks.test(x,y)
Two-sample Kolmogorov-Smirnov test
data: x and y
D = 0.5, p-value = 0.1641
alternative hypothesis: two-sided
alternative <- "two.sided"
x <- |
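The truncated R code above is computing D as the largest vertical gap between the two empirical CDFs; the same statistic can be sketched directly in Python:

```python
def ks_statistic(x, y):
    """Two-sample KS statistic: the largest gap between the two ECDFs."""
    xs, ys = sorted(x), sorted(y)
    n_x, n_y = len(xs), len(ys)
    d = 0.0
    # The supremum over t is attained at an observed value, so it is enough
    # to evaluate both ECDFs at every pooled data point (this mirrors the
    # cumsum over the ordered pooled sample in the R source).
    for t in xs + ys:
        ecdf_x = sum(v <= t for v in xs) / n_x
        ecdf_y = sum(v <= t for v in ys) / n_y
        d = max(d, abs(ecdf_x - ecdf_y))
    return d

d = ks_statistic([1, 2, 3, 4], [3, 4, 5, 6])   # largest ECDF gap: 0.5
```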
12,568 | What should be taught first: Probability or Statistics? | It doesn't seem to be a question of opinion any more: the world appears to have moved well beyond the traditional "teach probability and then teach statistics as an application of it." To get a sense of where the teaching of statistics is going, look at the list of paper titles in last year's special edition of The Am... | What should be taught first: Probability or Statistics? | It doesn't seem to be a question of opinion any more: the world appears to have moved well beyond the traditional "teach probability and then teach statistics as an application of it." To get a sense | What should be taught first: Probability or Statistics?
It doesn't seem to be a question of opinion any more: the world appears to have moved well beyond the traditional "teach probability and then teach statistics as an application of it." To get a sense of where the teaching of statistics is going, look at the list ... | What should be taught first: Probability or Statistics?
It doesn't seem to be a question of opinion any more: the world appears to have moved well beyond the traditional "teach probability and then teach statistics as an application of it." To get a sense |
12,569 | What should be taught first: Probability or Statistics? | The plural of anecdote isn't data, but in almost any course I've seen, at least the basics of probability comes before statistics.
On the other hand, historically, ordinary least squares was developed before the normal distribution was discovered! The statistical method came first, the more rigorous, probability based ... | What should be taught first: Probability or Statistics? | The plural of anecdote isn't data, but in almost any course I've seen, at least the basics of probability comes before statistics.
On the other hand, historically, ordinary least squares was developed | What should be taught first: Probability or Statistics?
The plural of anecdote isn't data, but in almost any course I've seen, at least the basics of probability comes before statistics.
On the other hand, historically, ordinary least squares was developed before the normal distribution was discovered! The statistical ... | What should be taught first: Probability or Statistics?
The plural of anecdote isn't data, but in almost any course I've seen, at least the basics of probability comes before statistics.
On the other hand, historically, ordinary least squares was developed |
12,570 | What should be taught first: Probability or Statistics? | I think it should be an iterative process for most people: you learn a little probability, then a little statistics, then a little more probability, and a little more statistics, etc.
For instance, take a look at the PhD Stat requirements at GWU. The PhD level Probability course 8257 has the following brief description:
S... | What should be taught first: Probability or Statistics? | I think it should be an iterative process for most people: you learn a little probability, then a little statistics, then a little more probability, and a little more statistics, etc.
For instance, take | What should be taught first: Probability or Statistics?
I think it should be an iterative process for most people: you learn a little probability, then a little statistics, then a little more probability, and a little more statistics, etc.
For instance, take a look at the PhD Stat requirements at GWU. The PhD level Probab... | What should be taught first: Probability or Statistics?
I think it should be an iterative process for most people: you learn a little probability, then a little statistics, then a little more probability, and a little more statistics, etc.
For instance, take |
12,571 | How to interpret the coefficients from a beta regression? | So you need to figure out what scale you are modeling the response on. In the case of the betareg function in R we have the following model
$$\text{logit}(y_i)=\beta_0+\sum_{j=1}^p\beta_j x_{ij}$$
where the $\text{logit}(y_i)$ is the usual log-odds we are used to when using the logit link in the glm function (i.e., family bi... | How to interpret the coefficients from a beta regression? | So you need to figure out what scale you are modeling the response on. In the case of the betareg function in R we have the following model
$$\text{logit}(y_i)=\beta_0+\sum_{j=1}^p\beta_j x_{ij}$$
where the | How to interpret the coefficients from a beta regression?
So you need to figure out what scale you are modeling the response on. In the case of the betareg function in R we have the following model
$$\text{logit}(y_i)=\beta_0+\sum_{j=1}^p\beta_j x_{ij}$$
where the $\text{logit}(y_i)$ is the usual log-odds we are used to when... | How to interpret the coefficients from a beta regression?
So you need to figure out what scale you are modeling the response on. In the case of the betareg function in R we have the following model
$$\text{logit}(y_i)=\beta_0+\sum_{j=1}^p\beta_j x_{ij}$$
where the |
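Back-transforming through the logit link is what makes the coefficients interpretable on the (0, 1) scale of the mean; a sketch with made-up coefficients (not from any fitted betareg model):

```python
import math

def inv_logit(eta):
    """Inverse logit: map a linear predictor to the (0, 1) mean scale."""
    return 1.0 / (1.0 + math.exp(-eta))

b0, b1 = -1.0, 0.5            # hypothetical coefficients for illustration

mu_at_0 = inv_logit(b0)            # mean response at x = 0
mu_at_1 = inv_logit(b0 + b1)       # mean response at x = 1

# b1 is the change in the log-odds of the mean per unit increase in x.
log_odds_change = (math.log(mu_at_1 / (1 - mu_at_1))
                   - math.log(mu_at_0 / (1 - mu_at_0)))
```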
12,572 | How does linear discriminant analysis reduce the dimensions? | Discriminants are the axes and the latent variables which differentiate the classes most strongly. Number of possible discriminants is $min(k-1,p)$. For example, with k=3 classes in p=2 dimensional space there can exist at most 2 discriminants such as on the graph below. (Note that discriminants are not necessarily ort... | How does linear discriminant analysis reduce the dimensions? | Discriminants are the axes and the latent variables which differentiate the classes most strongly. Number of possible discriminants is $min(k-1,p)$. For example, with k=3 classes in p=2 dimensional sp | How does linear discriminant analysis reduce the dimensions?
Discriminants are the axes and the latent variables which differentiate the classes most strongly. Number of possible discriminants is $min(k-1,p)$. For example, with k=3 classes in p=2 dimensional space there can exist at most 2 discriminants such as on the ... | How does linear discriminant analysis reduce the dimensions?
Discriminants are the axes and the latent variables which differentiate the classes most strongly. Number of possible discriminants is $min(k-1,p)$. For example, with k=3 classes in p=2 dimensional sp |
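For the simplest non-trivial case, k = 2 classes, min(k-1, p) = 1: Fisher's discriminant is the single direction $w \propto S_W^{-1}(m_1 - m_2)$, and projecting onto it reduces the data to one dimension. A NumPy sketch on simulated data (my example, not from the book):

```python
import numpy as np

rng = np.random.default_rng(0)
X1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(50, 2))   # class 1
X2 = rng.normal(loc=[4.0, 4.0], scale=1.0, size=(50, 2))   # class 2

m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
# Pooled within-class scatter matrix.
S_w = np.cov(X1.T) * (len(X1) - 1) + np.cov(X2.T) * (len(X2) - 1)

# The single discriminant direction: w is proportional to S_w^{-1} (m1 - m2).
w = np.linalg.solve(S_w, m1 - m2)

# Projecting onto w maps each 2-D point to one discriminant score.
z1, z2 = X1 @ w, X2 @ w
```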
12,573 | How does linear discriminant analysis reduce the dimensions? | While "The Elements of Statistical Learning" is a brilliant book, it requires a relatively high level of knowledge to get the most from it. There are many other resources on the web to help you to understand the topics in the book.
Let's take a very simple example of linear discriminant analysis where you want to group... | How does linear discriminant analysis reduce the dimensions? | While "The Elements of Statistical Learning" is a brilliant book, it requires a relatively high level of knowledge to get the most from it. There are many other resources on the web to help you to und
While "The Elements of Statistical Learning" is a brilliant book, it requires a relatively high level of knowledge to get the most from it. There are many other resources on the web to help you to understand the topics in the book.
Let's take a very simple ex... | How does linear discriminant analysis reduce the dimensions?
While "The Elements of Statistical Learning" is a brilliant book, it requires a relatively high level of knowledge to get the most from it. There are many other resources on the web to help you to und |
12,574 | Reasons for data to be normally distributed | Many limiting distributions of discrete RVs (poisson, binomial, etc) are approximately normal. Think of plinko. In almost all instances when approximate normality holds, normality kicks in only for large samples.
Most real-world data are NOT normally distributed. A paper by Micceri (1989) called "The unicorn, the norma... | Reasons for data to be normally distributed | Many limiting distributions of discrete RVs (poisson, binomial, etc) are approximately normal. Think of plinko. In almost all instances when approximate normality holds, normality kicks in only for la | Reasons for data to be normally distributed
Many limiting distributions of discrete RVs (poisson, binomial, etc) are approximately normal. Think of plinko. In almost all instances when approximate normality holds, normality kicks in only for large samples.
Most real-world data are NOT normally distributed. A paper by M... | Reasons for data to be normally distributed
Many limiting distributions of discrete RVs (poisson, binomial, etc) are approximately normal. Think of plinko. In almost all instances when approximate normality holds, normality kicks in only for la |
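The first point, that limiting distributions of common discrete RVs are approximately normal, can be sketched with the binomial: for large n, the exact pmf is close to the normal density with matching mean and variance (stdlib only):

```python
import math

n, p = 1000, 0.5
k = n // 2                       # evaluate at the mean, n * p

# Exact Binomial(n, p) probability of k successes.
binom_pmf = math.comb(n, k) * p ** k * (1 - p) ** (n - k)

# Normal density with matching mean n*p and variance n*p*(1-p).
var = n * p * (1 - p)
normal_pdf = (math.exp(-((k - n * p) ** 2) / (2 * var))
              / math.sqrt(2 * math.pi * var))

rel_err = abs(binom_pmf - normal_pdf) / binom_pmf   # shrinks as n grows
```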
12,575 | Reasons for data to be normally distributed | There is also an information theoretic justification for use of the normal distribution. Given mean and variance, the normal distribution has maximum entropy among all real-valued probability distributions. There are plenty of sources discussing this property. A brief one can be found here. A more general discussion of... | Reasons for data to be normally distributed | There is also an information theoretic justification for use of the normal distribution. Given mean and variance, the normal distribution has maximum entropy among all real-valued probability distribu | Reasons for data to be normally distributed
There is also an information theoretic justification for use of the normal distribution. Given mean and variance, the normal distribution has maximum entropy among all real-valued probability distributions. There are plenty of sources discussing this property. A brief one can... | Reasons for data to be normally distributed
There is also an information theoretic justification for use of the normal distribution. Given mean and variance, the normal distribution has maximum entropy among all real-valued probability distribu |
12,576 | Reasons for data to be normally distributed | In physics it is the CLT which is usually cited as a reason for having normally distributed errors in many measurements.
The two most common error distributions in experimental physics are normal and Poisson. The latter is usually encountered in count measurements, such as radioactive decay.
Another interesting feature o... | Reasons for data to be normally distributed | In physics it is the CLT which is usually cited as a reason for having normally distributed errors in many measurements.
The two most common error distributions in experimental physics are normal and Po | Reasons for data to be normally distributed
In physics it is the CLT which is usually cited as a reason for having normally distributed errors in many measurements.
The two most common error distributions in experimental physics are normal and Poisson. The latter is usually encountered in count measurements, such as radi... | Reasons for data to be normally distributed
In physics it is the CLT which is usually cited as a reason for having normally distributed errors in many measurements.
The two most common error distributions in experimental physics are normal and Po
12,577 | Reasons for data to be normally distributed | The CLT is extremely useful when making inferences about things like the population mean because we get there by computing some sort of linear combination of a bunch of individual measurements. However, when we try to make inferences about individual observations, especially future ones (eg, prediction intervals), dev... | Reasons for data to be normally distributed | The CLT is extremely useful when making inferences about things like the population mean because we get there by computing some sort of linear combination of a bunch of individual measurements. Howev | Reasons for data to be normally distributed
The CLT is extremely useful when making inferences about things like the population mean because we get there by computing some sort of linear combination of a bunch of individual measurements. However, when we try to make inferences about individual observations, especially... | Reasons for data to be normally distributed
The CLT is extremely useful when making inferences about things like the population mean because we get there by computing some sort of linear combination of a bunch of individual measurements. Howev |
12,578 | Why do we need autoencoders? | Auto encoders have an input layer, hidden layer, and an output layer. The input is forced to be as identical to the output as possible, so it's the hidden layer we are interested in.
The hidden layer forms a kind of encoding of the input. "The aim of an auto-encoder is to learn a compressed, distributed representation (encoding) for... | Why do we need autoencoders? | Auto encoders have an input layer, hidden layer, and an output layer. The input is forced to be as identical to the output as possible, so it's the hidden layer we are interested in.
The hidden layer forms a kind o | Why do we need autoencoders?
Auto encoders have an input layer, hidden layer, and an output layer. The input is forced to be as identical to the output as possible, so it's the hidden layer we are interested in.
The hidden layer forms a kind of encoding of the input. "The aim of an auto-encoder is to learn a compressed, distributed ... | Why do we need autoencoders?
Auto encoders have an input layer, hidden layer, and an output layer. The input is forced to be as identical to the output as possible, so it's the hidden layer we are interested in.
The hidden layer forms a kind o
12,579 | Why do we need autoencoders? | It can also model your population so that when you input a new vector, you can check how different the output is from the input. If they're "quite" the same, you can assume the input matches the population. If they're "quite" different, then the input probably doesn't belong to the population you modeled.
I see it as a...
12,580 | Why do we need autoencoders? | Maybe these pictures give you some intuition. As the commenter above said, auto encoders try to extract some high-level features from the training examples. You may see how a pretraining algorithm is used to train each hidden level separately for the deep NN in the second picture.
Pictures are taken from Russian wikip...
12,581 | Why do we need autoencoders? | In terms of ML, features are gold. Learnt features that use as little data as possible but contain as much information as possible enable us to complete many tasks.
Auto encoding is useful in the sense that it allows us to compress the data in an optimal way (that can actually be used to represent the input data, as observe...
12,582 | Using R for GLM with Gamma distribution | The usual gamma GLM contains the assumption that the shape parameter is constant, in the same way that the normal linear model assumes constant variance.
In GLM parlance the dispersion parameter, $\phi$ in $\text{Var}(Y_i)=\phi\text{V}(\mu_i)$, is normally constant.
More generally, you have $a(\phi)$, but that doesn't...
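Not part of the answer, but a small numerical sketch of the variance relation it quotes: for the gamma family the variance function is $\text{V}(\mu)=\mu^2$, so holding the shape $k$ constant gives dispersion $\phi = 1/k$ and $\text{Var}(Y) = \mu^2/k$ at every mean.

```python
import random
import statistics

# Gamma sketch: Var(Y) = phi * V(mu) with V(mu) = mu^2 and phi = 1/k
# when the shape k is held constant across observations.
random.seed(1)
k = 5.0  # constant shape, so dispersion phi = 1/k = 0.2

ratios = {}
for mu in (2.0, 4.0, 8.0):
    scale = mu / k  # mean of Gamma(shape=k, scale) is k * scale = mu
    ys = [random.gammavariate(k, scale) for _ in range(100_000)]
    ratios[mu] = statistics.variance(ys) / mu ** 2
    print(mu, round(ratios[mu], 2))  # ratio stays near phi = 0.2 for every mu
```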
12,583 | Using R for GLM with Gamma distribution | I used the gamma.shape function of the MASS package as described by Balajari (2013) in order to estimate the shape parameter afterwards and then adjust coefficient estimates and predictions in the GLM. I advise you to read the lecture as it is, in my opinion, very clear and interesting concerning the use of gamma distr...
12,584 | I'm getting "jumpy" loadings in rollapply PCA in R. Can I fix it? | Whenever the plot jumps too much, reverse the orientation. One effective criterion is this: compute the total amount of jumps on all the components. Compute the total amount of jumps if the next eigenvector is negated. If the latter is less, negate the next eigenvector.
Here's an implementation. (I am not familiar ...
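The answer's implementation is truncated above; here is a hedged Python sketch of the same criterion (my own helper name and toy vectors, not the answer's R code): negate each window's eigenvector whenever that reduces the total jump from the previous window's eigenvector.

```python
def align_signs(eigenvectors):
    """Flip each eigenvector's sign so it jumps least from its predecessor."""
    aligned = [list(eigenvectors[0])]
    for v in eigenvectors[1:]:
        prev = aligned[-1]
        # Total jump (L1) with and without negating the next eigenvector.
        jump = sum(abs(a - b) for a, b in zip(v, prev))
        jump_neg = sum(abs(-a - b) for a, b in zip(v, prev))
        aligned.append([-a for a in v] if jump_neg < jump else list(v))
    return aligned

# A spurious sign flip in the second window gets undone:
vs = [[0.6, 0.8], [-0.59, -0.81], [0.61, 0.79]]
print(align_signs(vs))  # -> [[0.6, 0.8], [0.59, 0.81], [0.61, 0.79]]
```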
12,585 | I'm getting "jumpy" loadings in rollapply PCA in R. Can I fix it? | @whuber is right that there isn't an orientation that's intrinsic to the data, but you could still enforce that your eigenvectors have positive correlation with some reference vector.
For instance, you could make the loadings for USD positive on all your eigenvectors (i.e., if USD's loading is negative, flip the signs ...
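This reference-coordinate convention fits in one line; a sketch under the assumption that the reference series (e.g. USD) is stored at index 0:

```python
def fix_orientation(vec, ref_index=0):
    # Flip the whole eigenvector if its reference loading is negative.
    return [-x for x in vec] if vec[ref_index] < 0 else list(vec)

print(fix_orientation([-0.5, 0.7, -0.2]))  # -> [0.5, -0.7, 0.2]
```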
12,586 | I'm getting "jumpy" loadings in rollapply PCA in R. Can I fix it? | What I did was to compute the L1 distance between successive eigenvectors. After normalizing this matrix I choose a z-score threshold, e.g. 1, so that if in any new rolling window the change is above this threshold I flip the eigenvector, factors and loadings in order to have consistency in the rolling window. Personally I don...
12,587 | What is F1 Optimal Threshold? How to calculate it? | I actually wrote my first paper in machine learning on this topic. In it, we identified that when your classifier outputs calibrated probabilities (as they should for logistic regression) the optimal threshold is approximately 1/2 the F1 score that it achieves. This gives you some intuition. The optimal threshold will ...
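In practice the optimal threshold is usually found by sweeping candidate thresholds and scoring each. An illustrative Python sketch (tiny synthetic scores and labels of my own, not from the paper):

```python
def f1_at(scores, labels, t):
    # F1 when predicting positive for every score >= t.
    tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
    fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
    fn = sum(1 for s, y in zip(scores, labels) if s < t and y == 1)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

scores = [0.1, 0.2, 0.3, 0.4, 0.55, 0.6, 0.7, 0.8, 0.9]
labels = [0,   0,   0,   1,   0,    1,   1,   1,   1]

# Every observed score is a candidate threshold; keep the best.
best_t = max(scores, key=lambda t: f1_at(scores, labels, t))
print(best_t, round(f1_at(scores, labels, best_t), 3))  # -> 0.4 0.909
```

Note that here the best threshold (0.4) is indeed close to half the F1 it achieves (about 0.45), consistent with the heuristic above.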
12,588 | When should I not use an ensemble classifier? | The model that is closest to the true data generating process will always be best and will beat most ensemble methods.
So if the data come from a linear process lm() will be much superior to random forests, e.g.:
set.seed(1234)
p=10
N=1000
#covariates
x = matrix(rnorm(N*p),ncol=p)
#coefficients:
b = round(rnorm(p),...
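The same point can be sketched without R. Below, a hedged Python analogue on 1-D synthetic data of my own (plain least squares versus a flexible nearest-neighbour average standing in for a more complex learner): when the truth is linear, the simpler, correctly specified model wins out of sample.

```python
import random

random.seed(0)

def make_data(n):
    # Truly linear process: y = 2x + 1 + noise.
    xs = [random.uniform(-3, 3) for _ in range(n)]
    ys = [2 * x + 1 + random.gauss(0, 1) for x in xs]
    return xs, ys

xtr, ytr = make_data(200)
xte, yte = make_data(2000)

# Ordinary least squares for y = a + b*x.
mx, my = sum(xtr) / len(xtr), sum(ytr) / len(ytr)
b = sum((x - mx) * (y - my) for x, y in zip(xtr, ytr)) / \
    sum((x - mx) ** 2 for x in xtr)
a = my - b * mx

def knn(x, k=10):
    # Flexible model: average of the k nearest training responses.
    nearest = sorted(range(len(xtr)), key=lambda i: abs(xtr[i] - x))[:k]
    return sum(ytr[i] for i in nearest) / k

n = len(xte)
mse_ols = sum((yte[i] - (a + b * xte[i])) ** 2 for i in range(n)) / n
mse_knn = sum((yte[i] - knn(xte[i])) ** 2 for i in range(n)) / n
print(mse_ols < mse_knn)  # the linear model wins on linear data
```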
12,589 | When should I not use an ensemble classifier? | I do not recommend using an ensemble classifier when your model needs to be interpretable and explainable. Sometimes you need predictions and explanations of the predictions.
When you need to convince people that the predictions are worth believing, a highly accurate model can be very persuasive, but I have struggle...
12,590 | When should I not use an ensemble classifier? | I would like to add to branco's answer. The ensembles can be highly competitive and provide very good results. In academics, for example, this is what counts. In industry, the ensembles may be too difficult to implement/maintain/modify/port. Geoff Hinton's work on "Dark Knowledge" is exactly about this: how to transfer t...
12,591 | Splines vs Gaussian Process Regression | I agree with @j__'s answer.
However, I would like to highlight the fact that splines are just a special case of Gaussian Process regression/kriging.
If you take a certain type of kernel in Gaussian process regression, you exactly obtain the spline fitting model.
This fact is proven in this paper by Kimeldorf and Wahb...
12,592 | Splines vs Gaussian Process Regression | It is a very interesting question: the equivalence between Gaussian processes and smoothing splines was shown in Kimeldorf and Wahba 1970. The generalization of this correspondence to the case of constrained interpolation was developed in Bay et al. 2016.
Bay et al. 2016. Generalization of the Kimeldorf-Wahba ...
12,593 | Splines vs Gaussian Process Regression | I agree with @xeon's comment. Also, GPR puts a probability distribution over an infinite number of possible functions, and the mean function (which is spline-like) is only the MAP estimate, but you also have a variance about that. This allows for great opportunities such as experimental design (choosing input data which is ma...
12,594 | Confusion with false discovery rate and multiple testing (on Colquhoun 2014) | It so happens that by coincidence I read this same paper just a couple of weeks ago. Colquhoun mentions multiple comparisons (including Benjamini-Hochberg) in section 4 when posing the problem, but I find that he does not make the issue clear enough -- so I am not surprised to see your confusion.
The important point to...
12,595 | Confusion with false discovery rate and multiple testing (on Colquhoun 2014) | Benjamini & Hochberg define false discovery rate in the same way that I do, as the fraction of positive tests that are false positives. So if you use their procedure for multiple comparisons you control FDR properly. It's worth noting, though, that there are quite a lot of variants on the B-H method. Benjamini's sem...
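The Benjamini-Hochberg procedure referred to here fits in a few lines. An illustrative Python sketch (synthetic p-values of my own, not from the thread): sort the m p-values, find the largest rank i with p_(i) <= (i/m)q, and reject the hypotheses with the i smallest p-values.

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return the set of indices rejected by the BH step-up rule at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0  # largest rank whose p-value passes the step-up criterion
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    return {order[j] for j in range(k)}

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
print(sorted(benjamini_hochberg(pvals, q=0.05)))  # -> [0, 1]
```

Note the step-up character: a large p-value late in the sorted list can rescue smaller ones before it, which is why the loop keeps the *largest* passing rank rather than stopping at the first failure.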
12,596 | Confusion with false discovery rate and multiple testing (on Colquhoun 2014) | A big part of the confusion is that, despite his comments here to the contrary, Colquhoun does NOT define FDR the same way that Benjamini-Hochberg do. It is unfortunate that Colquhoun has attempted to coin a term without first checking to make sure that the term did not already have a well-established, different defini...
12,597 | When are Bayesian methods preferable to Frequentist? | Here are some links which may interest you comparing frequentist and Bayesian methods:
http://www.stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf
Archived here: https://web.archive.org/web/20140308021414/https://stat.ufl.edu/archived/casella/Talks/BayesRefresher.pdf
http://www.bayesian-inference.com/advantage...
12,598 | When are Bayesian methods preferable to Frequentist? | One of many interesting aspects of the contrasts between the two approaches is that it is very difficult to have formal interpretation for many quantities we obtain in the frequentist domain. One example is the ever-increasing importance of penalization methods (shrinkage). When one obtains penalized maximum likeliho...
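The shrinkage point has a concrete counterpart: the penalized (ridge) least-squares estimate coincides with the Bayesian posterior mode under a Gaussian prior, which is exactly the formal interpretation the frequentist framing lacks. A sketch on toy 1-D data of my own:

```python
xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.9, 4.2, 5.8, 8.1]
lam = 2.0  # ridge penalty, playing the role of prior precision

# Closed-form ridge: argmin_b  sum (y - b*x)^2 + lam * b^2.
b_ridge = sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

# Posterior mode under y | b ~ N(b*x, 1) and prior b ~ N(0, 1/lam),
# found numerically by grid search over the log posterior.
def log_post(b):
    loglik = -0.5 * sum((y - b * x) ** 2 for x, y in zip(xs, ys))
    logprior = -0.5 * lam * b * b
    return loglik + logprior

b_map = max((i / 10000 for i in range(40000)), key=log_post)
print(round(b_ridge, 3), round(b_map, 3))  # the two estimates agree
```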
12,599 | When are Bayesian methods preferable to Frequentist? | I'm stealing this wholesale from the Stan users group. Michael Betancourt provided this really good discussion of identifiability in Bayesian inference, which I believe bears on your request for a contrast of the two statistical schools.
The first difference with a Bayesian analysis will be the presence of priors whic...
12,600 | When are Bayesian methods preferable to Frequentist? | The key difference between Bayesian and frequentist approaches lies in the definition of a probability, so if it is necessary to treat probabilities strictly as a long-run frequency then frequentist approaches are reasonable; if it isn't, then you should use a Bayesian approach. If either interpretation is acceptable, th...