idx | question | answer
|---|---|---|
14,501 | Is there a measure of 'evenness' of spread? | It sounds like you are interested in the pairwise differences of randomly observed values in a particular sequence, as in the case of modeling growth or trend. There are a number of ways to do so in time series analyses. A very basic approach is just a simple linear model regressing the sequence values upon their index...
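The regress-on-index idea above can be sketched in R; the sequence values here are made up:

```r
# Hypothetical sequence; the slope of value ~ index estimates the
# average per-step change, one simple summary of trend.
y   <- c(2.1, 3.9, 6.2, 7.8, 10.1)
idx <- seq_along(y)
fit <- lm(y ~ idx)
coef(fit)["idx"]   # average increment per index step
```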
14,502 | Calculating AIC “by hand” in R | Note that the help on the function logLik in R says that for lm models it includes 'all constants' ... so there will be a log(2*pi) in there somewhere, as well as another constant term for the exponent in the likelihood. Also, you can't forget to count the fact that $\sigma^2$ is a parameter.
$\cal L(\hat\mu,\hat\sig...
14,503 | Calculating AIC “by hand” in R | The AIC function gives $2k - 2 \log L$, where $L$ is the likelihood & $k$ is the number of estimated parameters (including the intercept, & the variance). You're using $n \log \frac{S_{\mathrm{r}}}{n} + 2(k-1)$, where $S_{\mathrm{r}}$ is the residual sum of squares, & $n$ is the sample size. These formulæ differ by an a...
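The two ingredients these answers point to (the log(2*pi) constant and counting $\sigma^2$ as a parameter) can be checked in R; the mtcars fit is just a stand-in:

```r
# Gaussian log-likelihood of an lm fit at the MLE, all constants included
fit <- lm(mpg ~ wt, data = mtcars)
n   <- nrow(mtcars)
rss <- sum(residuals(fit)^2)
ll  <- -n / 2 * (log(2 * pi) + log(rss / n) + 1)
k   <- length(coef(fit)) + 1   # +1 counts sigma^2 as a parameter
c(by_hand = 2 * k - 2 * ll, builtin = AIC(fit))   # the two should agree
```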
14,504 | Interpreting three forms of a "mixed model" | This may become clearer by writing out the model formula for each of these three models. Let $Y_{ij}$ be the observation for person $i$ in site $j$ in each model and define $A_{ij}, T_{ij}$ analogously to refer to the variables in your model.
glmer(counts ~ A + T, data=data, family="Poisson") is the model
$$ \log \big...
14,505 | Interpreting three forms of a "mixed model" | You should note that T is not one of your model's random-effects terms, but a fixed effect. Random effects are only those effects that appear after the | in an lmer formula!
A more thorough discussion of what this specification does can be found in this lmer FAQ question.
From this question your model should give the fo...
14,506 | Interpreting three forms of a "mixed model" | Something should appear only in the random part when you are not particularly interested in its parameter, per se, but need to include it to avoid dependent data. E.g., if children are nested in classes, you usually want children only as a random effect.
14,507 | Is there a multiple-sample version or alternative to the Kolmogorov-Smirnov Test? | There actually are some multiple sample KS Tests. E.g., an r-sample Kolmogorov-Smirnov-Test with $r\geq 2$ which, I believe, has good power. A preprint of that beautiful paper is available here. I also know of K-Sample Analogues of the Kolmogorov-Smirnov and Cramer-V. Mises Tests (but they have less power as far as I k...
14,508 | Is there a multiple-sample version or alternative to the Kolmogorov-Smirnov Test? | There is an R package kSamples that gives you, among other things, a non-parametric k-sample Anderson-Darling test. The null hypothesis is that all k samples came from the same distribution which does not need to be specified. Maybe you can use this.
Little example on comparing Normal and Gamma-distributed samples scal...
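A minimal sketch of that Normal-vs-Gamma comparison, assuming the kSamples package is installed:

```r
library(kSamples)
set.seed(1)
x1 <- rnorm(50, mean = 1)                # Normal sample
x2 <- rgamma(50, shape = 2, rate = 2)    # Gamma sample with a similar mean
ad.test(x1, x2)   # k-sample Anderson-Darling test of a common distribution
```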
14,509 | Is there a multiple-sample version or alternative to the Kolmogorov-Smirnov Test? | A couple of approaches:
Use the pairwise p-values but adjust them for multiple comparisons using something like the Bonferroni or False Discovery Rate adjustments (the first will probably be a bit over-conservative). Then you can be confident that any that are still significantly different are probably not due to the ...
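The adjustment step can be done with base R's p.adjust; the p-values below are made up:

```r
p <- c(0.001, 0.004, 0.03, 0.20)   # hypothetical pairwise KS p-values
p.adjust(p, method = "bonferroni") # conservative family-wise control
p.adjust(p, method = "BH")         # Benjamini-Hochberg false discovery rate
```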
14,510 | What is the role of MDS in modern statistics? | In case you will accept a concise answer...
What questions does it answer? Visual mapping of pairwise dissimilarities in Euclidean (mostly) space of low dimensionality.
Which researchers are often interested in using it? Everyone who aims either to display clusters of points or to get some insight of possible latent di...
14,511 | What is the role of MDS in modern statistics? | @ttnphns has provided a good overview. I just want to add a couple of small things. Greenacre has done a good deal of work with Correspondence Analysis and how it is related to other statistical techniques (such as MDS, but also PCA and others), you might want to take a look at his stuff (for example, this presentati...
14,512 | What is the role of MDS in modern statistics? | One additional strength is that you can use MDS to analyze data for which you don't know the important variables or dimensions. The standard procedure for this would be: 1) have participants rank, sort, or directly identify similarity between objects; 2) convert the responses into dissimilarity matrix; 3) apply MDS and...
14,513 | If gauge charts are bad, why do cars have gauges? | A (real) dashboard gauge needs to be: 1) physical, and 2) read quickly under circumstances that disturb concentration. In that sense, you want a low data-to-area ratio. Not to mention that when physical gauges were invented, digital (numeric) displays didn't exist so there was no real choice.
A software dashboard is no...
14,514 | If gauge charts are bad, why do cars have gauges? | In supplement to Wayne's fine answer, Robert Kosara has a recent post on his EagerEyes blog about this very topic, Data Display vs. Data Visualization. In addition to Wayne's point that the goals of real-time visualization vs. more static displays might call for differences, he also mentions that gauges aren't very go...
14,515 | If gauge charts are bad, why do cars have gauges? | There are great answers here. I also like @whuber's comment, especially "[o]ne big problem with angles is that the comparison may depend on how the angles are oriented". Let me throw out one quick note here: it's worth remembering that all car speedometers are oriented in the same way. (What I mean is they all run c...
14,516 | If gauge charts are bad, why do cars have gauges? | Gauges are good if you need low resolution at a glance. Speedo, tach', oil temperature/pressure don't need single-digit resolution, and in a vehicle, you want to know if they are approximately right. An analog watch can be glanced at, and you know that it is about 10 minutes to 9. You don't (usually) need to know th...
14,517 | Why do we use Gaussian distributions in Variational Autoencoder? | The normal distribution is not the only distribution used for latent variables in VAEs. There are also works using the von Mises-Fisher distribution (Hyperspherical VAEs [1]), and there are VAEs using Gaussian mixtures, which is useful for unsupervised [2] and semi-supervised [3] tasks.
The normal distribution has many nice proper...
14,518 | Why do we use Gaussian distributions in Variational Autoencoder? | We use the normal distribution because it is easily reparameterized. Also, a sufficiently powerful decoder can map the normal distribution to any other distribution, so from a theoretical viewpoint, the exact choice is not important.
As for your second question, I would question your premise -- I am pretty sure weights are ...
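The reparameterization mentioned above can be illustrated in R: a N(mu, sigma^2) draw is written as a deterministic transform of standard-normal noise, which is what lets gradients flow through mu and sigma.

```r
set.seed(1)
mu    <- 0.5
sigma <- 2.0
eps   <- rnorm(1e4)          # noise, independent of the parameters
z     <- mu + sigma * eps    # z ~ N(mu, sigma^2)
c(mean(z), sd(z))            # close to (0.5, 2.0)
```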
14,519 | Why are there large coefficients for higher-order polynomial | This is a well known issue with high-order polynomials, known as Runge's phenomenon. Numerically it is associated with ill-conditioning of the Vandermonde matrix, which makes the coefficients very sensitive to small variations in the data and/or roundoff in the computations (i.e. the model is not stably identifiable). ...
14,520 | Why are there large coefficients for higher-order polynomial | The first thing you want to check is if the author is talking about raw polynomials vs. orthogonal polynomials.
For orthogonal polynomials, the coefficients are not getting "larger".
Here are two examples of 2nd and 15th order polynomial expansion. First we show the coefficients for 2nd order expansion.
summary(lm(mpg ~...
14,521 | Why are there large coefficients for higher-order polynomial | Abhishek,
you are right that improving precision of coefficients will improve accuracy.
We see that, as M increases, the magnitude of the coefficients typically gets larger. In particular for the M = 9 polynomial, the coefficients have become finely tuned to the data by developing large positive and negative values so...
14,522 | In a GLM, is the log likelihood of the saturated model always zero? | If you really meant log-likelihood, then the answer is: it's not always zero.
For example, consider Poisson data: $y_i \sim \text{Poisson}(\mu_i), i = 1, \ldots, n$. The log-likelihood for $Y = (y_1, \ldots, y_n)$ is given by:
$$\ell(\mu; Y) = -\sum_{i = 1}^n \mu_i + \sum_{i = 1}^n y_i \log \mu_i - \sum_{i = 1}^n \log(...
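Plugging the saturated fit $\hat\mu_i = y_i$ into the Poisson log-likelihood gives a quick numerical check in R; the counts are made up:

```r
# Saturated Poisson model: each fitted mean equals the observed count
y <- c(2, 0, 5, 3, 1)
sum(dpois(y, lambda = y, log = TRUE))   # generally nonzero
```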
14,523 | In a GLM, is the log likelihood of the saturated model always zero? | Zhanxiong's answer is already great (+1), but here's a quick demonstration that the log-likelihood of the saturated model is $0$ for a logistic regression. I figured I would post because I haven't seen this TeX'd up on this site, and because I just wrote these up for a lecture.
The likelihood is
$$
L(\mathbf{y} ; \math...
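The binary case can be verified numerically: with the saturated fit $\hat p_i = y_i$, each Bernoulli term has probability 1, so the log-likelihood is exactly zero (data made up):

```r
y <- c(1, 0, 0, 1, 1)
sum(dbinom(y, size = 1, prob = y, log = TRUE))   # exactly 0
```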
14,524 | In a GLM, is the log likelihood of the saturated model always zero? | @Alex: yes, that's right. At least for discrete distributions. For continuous distributions, it would come down to letting the density be equal to 1, which is not necessarily meaningful and therefore not a sensible thing to try and achieve. Slightly more generally, the log-likelihood of the saturated model gives you an upp...
14,525 | Antonym of variance | $1/\sigma^2$ is called precision. You can find it often mentioned in Bayesian software manuals for BUGS and JAGS, where it is used as a parameter for the normal distribution instead of variance. It became popular because gamma can be used as a conjugate prior for precision in the normal distribution, as noticed by Kruschke (201...
14,526 | Generate data samples from Poisson regression | The Poisson regression model assumes a Poisson distribution for $Y$ and uses the $\log$ link function. So, for a single explanatory variable $x$, it is assumed that $Y \sim P(\mu)$ (so that $E(Y) = V(Y) = \mu$) and that $\log(\mu) = \beta_0 + \beta_1 x$. Generating data according to that model easily follows. Here is a...
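A sketch of that generation recipe in R, with arbitrary coefficient values:

```r
set.seed(42)
n  <- 200
x  <- runif(n, 0, 2)
mu <- exp(0.5 + 0.8 * x)   # log link: log(mu) = b0 + b1 * x
y  <- rpois(n, mu)         # Poisson response with E(Y) = V(Y) = mu
coef(glm(y ~ x, family = poisson))   # estimates should be near (0.5, 0.8)
```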
14,527 | Generate data samples from Poisson regression | If you wanted to generate a data set that fit the model perfectly you could do something like this in R:
# y <- exp(B0 + B1 * x1 + B2 * x2)
set.seed(1234)
B0 <- 1.2 # intercept
B1 <- 1.5 # slope for x1
B2 <- -0.5 # slope for x2
y <- rpois(100, 6.5)
x2 <- seq(-0.5, 0.5,... | Generate data samples from Poisson regression | If you wanted to generate a data set that fit the model perfectly you could do something like this in R:
# y <- exp(B0 + B1 * x1 + B2 * x2)
set.seed(1234)
B0 <- 1.2 # intercept
B1 <- | Generate data samples from Poisson regression
If you wanted to generate a data set that fit the model perfectly you could do something like this in R:
# y <- exp(B0 + B1 * x1 + B2 * x2)
set.seed(1234)
B0 <- 1.2 # intercept
B1 <- 1.5 # slope for x1
B2 <- -0.5 # slope for ... | Generate data samples from Poisson regression
If you wanted to generate a data set that fit the model perfectly you could do something like this in R:
# y <- exp(B0 + B1 * x1 + B2 * x2)
set.seed(1234)
B0 <- 1.2 # intercept
B1 <- |
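Note that the snippet in the row above draws `y` at a constant rate (`rpois(100, 6.5)`), independent of the predictors. To generate data that actually follows the log-linear model $\log(\mu) = \beta_0 + \beta_1 x$ described earlier, the Poisson rate must depend on $x$; a minimal single-predictor sketch (coefficient values arbitrary):

```r
set.seed(42)
n  <- 5000
b0 <- 1.2; b1 <- 0.5
x  <- runif(n, -1, 1)
mu <- exp(b0 + b1 * x)      # log link: log(mu) = b0 + b1*x
y  <- rpois(n, mu)          # Poisson response drawn at each observation's own rate
fit <- glm(y ~ x, family = poisson)
coef(fit)                   # should land close to c(1.2, 0.5)
```

Fitting with `glm(..., family = poisson)` and checking that the coefficients are recovered is a handy sanity check on the simulation.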
14,528 | What are chunk tests? | @mark999 provided an excellent answer. In addition to jointly testing polynomial terms, you can jointly test ("chunk test") any set of variables. Suppose you had a model with competing collinear variables tricep circumference, waist, hip circumference, all measurements of body size. To get an overall body size chunk... | What are chunk tests? | @mark999 provided an excellent answer. In addition to jointly testing polynomial terms, you can jointly test ("chunk test") any set of variables. Suppose you had a model with competing collinear var | What are chunk tests?
@mark999 provided an excellent answer. In addition to jointly testing polynomial terms, you can jointly test ("chunk test") any set of variables. Suppose you had a model with competing collinear variables tricep circumference, waist, hip circumference, all measurements of body size. To get an o... | What are chunk tests?
@mark999 provided an excellent answer. In addition to jointly testing polynomial terms, you can jointly test ("chunk test") any set of variables. Suppose you had a model with competing collinear var |
14,529 | What are chunk tests? | Macro's comment is correct, as is Andy's. Here's an example.
> library(rms)
>
> set.seed(1)
> d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
> d <- within(d, y <- 1 + 2*x1 + 0.3*x2 + 0.2*x2^2 + rnorm(50))
>
> ols1 <- ols(y ~ x1 + pol(x2, 2), data=d) # pol(x2, 2) means include x2 and x2^2 terms
> ols1
Linear Regress... | What are chunk tests? | Macro's comment is correct, as is Andy's. Here's an example.
> library(rms)
>
> set.seed(1)
> d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
> d <- within(d, y <- 1 + 2*x1 + 0.3*x2 + 0.2*x2^2 + rnor | What are chunk tests?
Macro's comment is correct, as is Andy's. Here's an example.
> library(rms)
>
> set.seed(1)
> d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
> d <- within(d, y <- 1 + 2*x1 + 0.3*x2 + 0.2*x2^2 + rnorm(50))
>
> ols1 <- ols(y ~ x1 + pol(x2, 2), data=d) # pol(x2, 2) means include x2 and x2^2 terms
... | What are chunk tests?
Macro's comment is correct, as is Andy's. Here's an example.
> library(rms)
>
> set.seed(1)
> d <- data.frame(x1 = rnorm(50), x2 = rnorm(50))
> d <- within(d, y <- 1 + 2*x1 + 0.3*x2 + 0.2*x2^2 + rnor |
14,530 | Interpreting discrepancies between R and SPSS with exploratory factor analysis | First of all, I second ttnphns recommendation to look at the solution before rotation. Factor analysis as it is implemented in SPSS is a complex procedure with several steps, comparing the result of each of these steps should help you to pinpoint the problem.
Specifically you can run
FACTOR
/VARIABLES <variables>
/MIS... | Interpreting discrepancies between R and SPSS with exploratory factor analysis | First of all, I second ttnphns recommendation to look at the solution before rotation. Factor analysis as it is implemented in SPSS is a complex procedure with several steps, comparing the result of e | Interpreting discrepancies between R and SPSS with exploratory factor analysis
First of all, I second ttnphns recommendation to look at the solution before rotation. Factor analysis as it is implemented in SPSS is a complex procedure with several steps, comparing the result of each of these steps should help you to pin... | Interpreting discrepancies between R and SPSS with exploratory factor analysis
First of all, I second ttnphns recommendation to look at the solution before rotation. Factor analysis as it is implemented in SPSS is a complex procedure with several steps, comparing the result of e |
14,531 | Interpreting discrepancies between R and SPSS with exploratory factor analysis | Recently I have found that most factor analysis discrepancies between SPSS and R (with Psych package) clear up when data are treated missing-listwise in each program, the correlation matrix shows up exactly the same in each, and no oblique rotation is used. One remaining discrepancy is in the series of values tha... | Interpreting discrepancies between R and SPSS with exploratory factor analysis | Recently I have found that most factor analysis discrepancies between SPSS and R (with Psych package) clear up when data are treated missing-listwise in each program, the correlation matrix shows up exactly the same in each, and no obliqu... | Interpreting discrepancies between R and SPSS with exploratory factor analysis
Recently I have found that most factor analysis discrepancies between SPSS and R (with Psych package) clear up when data are treated missing-listwise in each program, the correlation matrix shows up exactly the same in each, and no obliqu... | Interpreting discrepancies between R and SPSS with exploratory factor analysis
Recently I have found that most factor analysis discrepancies between SPSS and R (with Psych package) clear up when data are treated missing-listwise in each program, the correlation matrix shows u
14,532 | Interpreting discrepancies between R and SPSS with exploratory factor analysis | The default rotation method in R is oblimin, so this will likely cause the difference. As a test run a PAF/oblimin in SPSS and R and you will find nearly identical results. | Interpreting discrepancies between R and SPSS with exploratory factor analysis | The default rotation method in R is oblimin, so this will likely cause the difference. As a test run a PAF/oblimin in SPSS and R and you will find nearly identical results. | Interpreting discrepancies between R and SPSS with exploratory factor analysis
The default rotation method in R is oblimin, so this will likely cause the difference. As a test run a PAF/oblimin in SPSS and R and you will find nearly identical results. | Interpreting discrepancies between R and SPSS with exploratory factor analysis
The default rotation method in R is oblimin, so this will likely cause the difference. As a test run a PAF/oblimin in SPSS and R and you will find nearly identical results. |
14,533 | Interpreting discrepancies between R and SPSS with exploratory factor analysis | This answer is additive to the ones above. As suggested by Gala in his answer, one should first determine if the solutions provided by R (e.g. fa in psych) and SPSS are different prior to rotation. If they're the same, then look at the rotation settings in each program. (For SPSS, you can find all the settings in the r... | Interpreting discrepancies between R and SPSS with exploratory factor analysis | This answer is additive to the ones above. As suggested by Gala in his answer, one should first determine if the solutions provided by R (e.g. fa in psych) and SPSS are different prior to rotation. If | Interpreting discrepancies between R and SPSS with exploratory factor analysis
This answer is additive to the ones above. As suggested by Gala in his answer, one should first determine if the solutions provided by R (e.g. fa in psych) and SPSS are different prior to rotation. If they're the same, then look at the rotat... | Interpreting discrepancies between R and SPSS with exploratory factor analysis
This answer is additive to the ones above. As suggested by Gala in his answer, one should first determine if the solutions provided by R (e.g. fa in psych) and SPSS are different prior to rotation. If |
14,534 | Interpreting discrepancies between R and SPSS with exploratory factor analysis | I know this is an old post but I ran into the same issue.
It seems this is a known issue where SPSS and R implement Promax differently.
https://link.springer.com/content/pdf/10.3758/s13428-021-01581-x.pdf
Algorithmic jingle jungle: A comparison of implementations of principal axis factoring and promax rotation in R and... | Interpreting discrepancies between R and SPSS with exploratory factor analysis | I know this is an old post but I ran into the same issue.
It seems this is a known issue where SPSS and R implement Promax differently.
https://link.springer.com/content/pdf/10.3758/s13428-021-01581-x | Interpreting discrepancies between R and SPSS with exploratory factor analysis
I know this is an old post but I ran into the same issue.
It seems this is a known issue where SPSS and R implement Promax differently.
https://link.springer.com/content/pdf/10.3758/s13428-021-01581-x.pdf
Algorithmic jingle jungle: A compari... | Interpreting discrepancies between R and SPSS with exploratory factor analysis
I know this is an old post but I ran into the same issue.
It seems this is a known issue where SPSS and R implement Promax differently.
https://link.springer.com/content/pdf/10.3758/s13428-021-01581-x |
14,535 | Interpreting discrepancies between R and SPSS with exploratory factor analysis | I do not know what causes the differences in pattern loadings, but I assume that the difference in % of explained variance is due to:
- are you perhaps interpreting the first part (of 2 or 3) of the SPSS explained variance table which actually shows results of principal component analysis. The second part shows the res... | Interpreting discrepancies between R and SPSS with exploratory factor analysis | I do not know what causes the differences in pattern loadings, but I assume that the difference in % of explained variance is due to:
- are you perhaps interpreting the first part (of 2 or 3) of the S | Interpreting discrepancies between R and SPSS with exploratory factor analysis
I do not know what causes the differences in pattern loadings, but I assume that the difference in % of explained variance is due to:
- are you perhaps interpreting the first part (of 2 or 3) of the SPSS explained variance table which actual... | Interpreting discrepancies between R and SPSS with exploratory factor analysis
I do not know what causes the differences in pattern loadings, but I assume that the difference in % of explained variance is due to:
- are you perhaps interpreting the first part (of 2 or 3) of the S |
14,536 | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more generally, a higher-dimensional plane for multiple regressors and a curved surface for non-linear regression)
In this case, the vector between obse... | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more generally, a higher-dimensi | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more generally, a higher-dimensional plane for multiple r... | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
An image might help. In this image, we see a geometric view of the fitting.
Least squares finds a solution in a plane that has the closest distance to the observation.
(more generally, a higher-dimensi |
14,537 | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | I wrote a comprehensive explanation of this question on my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations used to derive the OLS solution (see this answer).
The coefficients in ridge regression (with penalty weighting $\lambd... | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | I wrote a comprehensive explanation of this question on my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
I wrote a comprehensive explanation of this question on my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations used to derive the OLS s... | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
I wrote a comprehensive explanation of this question on my site.
It might be useful for readers.
I'll talk about the ridge regularization here because it can be shown to neatly use the same equations |
14,538 | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge regression or otherwise, you will in general move your fit away from the projection and destroy orthogonality.
A book whi... | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge regression or otherwise, ... | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
Think in geometrical terms: the OLS fit is the projection of $Y$ on the space spanned by the columns of $X$, hence the residual vector is orthogonal to that space. If you regularize, performing ridge |
14,539 | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every predictor, and hence orthogonal to linear combinations of the predictors, which is what the predicted values $\hat{y}$ ... | Why does regularization wreck orthogonality of predictions and residuals in linear regression? | One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every predictor, and hence ort... | Why does regularization wreck orthogonality of predictions and residuals in linear regression?
One way to derive the least squares estimate of $\beta$ (the vector of regression coefficients) is that it is the one and only value of $\beta$ (*) that would make the error vector orthogonal to every |
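These orthogonality claims are easy to verify numerically. A sketch with an arbitrary simulated design and an arbitrary ridge penalty: for OLS the residual is orthogonal to every column of $X$, while for ridge the normal equations gain $\lambda I$, so $X'(y - X\hat\beta_{ridge}) = \lambda\hat\beta_{ridge} \neq 0$.

```r
set.seed(7)
n <- 100
X <- cbind(1, rnorm(n), rnorm(n))
y <- X %*% c(1, 2, -1) + rnorm(n)
# OLS: residuals orthogonal to every column of X
b_ols <- solve(crossprod(X), crossprod(X, y))
r_ols <- y - X %*% b_ols
max(abs(crossprod(X, r_ols)))   # numerically zero
# Ridge: (X'X + lambda*I) b = X'y, so X'(y - Xb) = lambda*b, not zero
lambda  <- 10
b_ridge <- solve(crossprod(X) + lambda * diag(3), crossprod(X, y))
r_ridge <- y - X %*% b_ridge
stopifnot(isTRUE(all.equal(as.vector(crossprod(X, r_ridge)),
                           as.vector(lambda * b_ridge))))
```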
14,540 | For intuition, what are some real life examples of uncorrelated but dependent random variables? | In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves are uncorrelated with their own past $r_{t-1}$ if stock markets are efficient (else, you could easily and profitably predi... | For intuition, what are some real life examples of uncorrelated but dependent random variables? | In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves ar | For intuition, what are some real life examples of uncorrelated but dependent random variables?
In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves are uncorrelated with thei... | For intuition, what are some real life examples of uncorrelated but dependent random variables?
In finance, GARCH (generalized autoregressive conditional heteroskedasticity) effects are widely cited here: stock returns $r_t:=(P_t-P_{t-1})/P_{t-1}$, with $P_t$ the price at time $t$, themselves ar |
14,541 | For intuition, what are some real life examples of uncorrelated but dependent random variables? | A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean, then the other must be distant from its mean. | For intuition, what are some real life examples of uncorrelated but dependent random variables? | A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean, | For intuition, what are some real life examples of uncorrelated but dependent random variables?
A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean, then the other must be d... | For intuition, what are some real life examples of uncorrelated but dependent random variables?
A simple example is a bivariate distribution that is uniform on a doughnut-shaped area. The variables are uncorrelated, but clearly dependent - for example, if you know one variable is near its mean, |
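A quick simulation of this doughnut (annulus) example, with arbitrarily chosen radii: the sample correlation is near zero, while the strong negative relationship between $x^2$ and $y^2$ exposes the dependence.

```r
set.seed(123)
n <- 1e5
theta <- runif(n, 0, 2*pi)
r <- sqrt(runif(n, 0.9^2, 1))   # uniform over the annulus area, radii 0.9 to 1
x <- r * cos(theta)
y <- r * sin(theta)
cor(x, y)        # near zero: (Pearson-)uncorrelated
cor(x^2, y^2)    # strongly negative: clearly dependent (x^2 + y^2 = r^2 is near 1)
```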
14,542 | For intuition, what are some real life examples of uncorrelated but dependent random variables? | I found the following figure from wiki very useful for intuition. In particular, the bottom row shows examples of uncorrelated but dependent distributions.
Caption of the above plot in wiki:
Several sets of (x, y) points, with the Pearson correlation coefficient of x and y for each set. Note that the correlation ref... | For intuition, what are some real life examples of uncorrelated but dependent random variables? | I found the following figure from wiki very useful for intuition. In particular, the bottom row shows examples of uncorrelated but dependent distributions.
Caption of the above plot in wiki:
Severa | For intuition, what are some real life examples of uncorrelated but dependent random variables?
I found the following figure from wiki very useful for intuition. In particular, the bottom row shows examples of uncorrelated but dependent distributions.
Caption of the above plot in wiki:
Several sets of (x, y) points,... | For intuition, what are some real life examples of uncorrelated but dependent random variables?
I found the following figure from wiki very useful for intuition. In particular, the bottom row shows examples of uncorrelated but dependent distributions.
Caption of the above plot in wiki:
Severa |
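The zero-correlation-with-dependence pattern in that bottom row can be reproduced deterministically: for $x$ symmetric about 0, $y = x^2$ is a function of $x$ yet $\mathrm{Cov}(x, x^2) = E[x^3] = 0$.

```r
x <- seq(-1, 1, by = 0.01)   # symmetric grid around 0
y <- x^2                     # y is completely determined by x
cor(x, y)                    # numerically zero: the odd moments cancel by symmetry
```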
14,543 | For intuition, what are some real life examples of uncorrelated but dependent random variables? | There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of correlation to Pearson correlation, which in my opinion is indeed the appropriate meaning to correlation, when no other de... | For intuition, what are some real life examples of uncorrelated but dependent random variables? | There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of co | For intuition, what are some real life examples of uncorrelated but dependent random variables?
There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of correlation to Pearson cor... | For intuition, what are some real life examples of uncorrelated but dependent random variables?
There are two words that you mention in the title of your question that are usually used interchangeably, correlation and dependence, but in the body of your question you restrict the definition of co |
14,544 | Why splitting the data into the training and testing set is not enough | Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optimistic, because you are essentially reporting best-case results. As some on this site have already mentioned, optimizatio... | Why splitting the data into the training and testing set is not enough | Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optim | Why splitting the data into the training and testing set is not enough
Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optimistic, because you are essentially reporting best... | Why splitting the data into the training and testing set is not enough
Even though you are training models exclusively on the training data, you are optimizing hyperparameters (e.g. $C$ for an SVM) based on the test set. As such, your estimate of performance can be optim |
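The optimism described above can be simulated directly: choose the "best" of many skill-free models by test-set accuracy, then score the winner on fresh data (all settings here are arbitrary):

```r
set.seed(99)
n_test <- 100; n_models <- 500
y_test <- rbinom(n_test, 1, 0.5)
# 500 "models" that guess at random -- none has any real skill
preds <- matrix(rbinom(n_models * n_test, 1, 0.5), nrow = n_models)
test_acc <- apply(preds, 1, function(p) mean(p == y_test))
best <- which.max(test_acc)
# The selected model looks good purely through selection on the test set;
# on fresh data it is back to coin-flip performance.
y_new <- rbinom(n_test, 1, 0.5)
fresh_acc <- mean(rbinom(n_test, 1, 0.5) == y_new)
c(selected = test_acc[best], fresh = fresh_acc)
```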
14,545 | Why splitting the data into the training and testing set is not enough | I think it's easiest to think of things this way. There are two things that cross validation is used for, tuning the hyper parameters of a model/algorithm, and evaluating the performance of a model/algorithm.
Consider the first use as part of the actual training of the algorithm. For instance cross validating to dete... | Why splitting the data into the training and testing set is not enough | I think it's easiest to think of things this way. There are two things that cross validation is used for, tuning the hyper parameters of a model/algorithm, and evaluating the performance of a model/a | Why splitting the data into the training and testing set is not enough
I think it's easiest to think of things this way. There are two things that cross validation is used for, tuning the hyper parameters of a model/algorithm, and evaluating the performance of a model/algorithm.
Consider the first use as part of the a... | Why splitting the data into the training and testing set is not enough
I think it's easiest to think of things this way. There are two things that cross validation is used for, tuning the hyper parameters of a model/algorithm, and evaluating the performance of a model/a |
14,546 | Why splitting the data into the training and testing set is not enough | During model building you train your models on a training sample. Note that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ... or the same technique with different values of the tuning parameters, or a mixture).
Among all different models that you trained, you have to choo... | Why splitting the data into the training and testing set is not enough | During model building you train your models on a training sample. Note that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ... or the same technique with
During model building you train your models on a training sample. Note that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ... or the same technique with different values of the tuning parameters, or a m... | Why splitting the data into the training and testing set is not enough
During model building you train your models on a training sample. Note that you can train different models (i.e. different techniques like SVM, LDA, Random Forest, ... or the same technique with
14,547 | Why splitting the data into the training and testing set is not enough | Cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross validation error depends on the data set you use. The smaller the data set, the higher would be the cross validation error.
Additionally, if you have high degrees of freedom in model selection, then... | Why splitting the data into the training and testing set is not enough | Cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross validation error depends on the data set you use. The smaller the data set, the | Why splitting the data into the training and testing set is not enough
Cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross validation error depends on the data set you use. The smaller the data set, the higher would be the cross validation error.
Add... | Why splitting the data into the training and testing set is not enough
Cross-validation does not completely overcome the over-fitting problem in model selection, it just reduces it. The cross validation error depends on the data set you use. The smaller the data set, the |
14,548 | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R | None of those proposed methods have been shown by simulation studies to work. Spend your efforts formulating a complete model and then fit it. Univariate screening is a terrible approach to model formulation, and the other components of stepwise variable selection you hope to use should likewise be avoided. This has... | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R | None of those proposed methods have been shown by simulation studies to work. Spend your efforts formulating a complete model and then fit it. Univariate screening is a terrible approach to model fo | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
None of those proposed methods have been shown by simulation studies to work. Spend your efforts formulating a complete model and then fit it. Univariate screening is a terrible approach to model formulation, and the other compone... | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
None of those proposed methods have been shown by simulation studies to work. Spend your efforts formulating a complete model and then fit it. Univariate screening is a terrible approach to model fo |
14,549 | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R | Methods specified for variable selection using statistics such as P-values, like stepwise regression in the classic text Hosmer et al., should at all costs be avoided.
Recently I stumbled upon an article that was published in the International Journal of Forecasting entitled "Illusions of predictability" and a commentary on this articl... | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R | Methods specified for variable selection using statistics such as P-values, like stepwise regression in the classic text Hosmer et al., should at all costs be avoided.
Recently I stumbled upon an article that was pub | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
Methods specified for variable selection using statistics such as P-values, like stepwise regression in the classic text Hosmer et al., should at all costs be avoided.
Recently I stumbled upon an article that was published in the International Jour... | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
Methods specified for variable selection using statistics such as P-values, like stepwise regression in the classic text Hosmer et al., should at all costs be avoided.
Recently I stumbled upon an article that was pub |
14,550 | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R | I think you're trying to predict the presence of the species with a presence/background approach, which is well documented in journals such as Methods in Ecology and Evolution, Ecography, etc. Maybe the R package dismo is useful for your problem. It includes a nice vignette. Using the dismo or other similar package imp... | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R | I think you're trying to predict the presence of the species with a presence/background approach, which is well documented in journals such as Methods in Ecology and Evolution, Ecography, etc. Maybe t | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
I think you're trying to predict the presence of the species with a presence/background approach, which is well documented in journals such as Methods in Ecology and Evolution, Ecography, etc. Maybe the R package dismo is useful for... | Model building and selection using Hosmer et al. 2013. Applied Logistic Regression in R
I think you're trying to predict the presence of the species with a presence/background approach, which is well documented in journals such as Methods in Ecology and Evolution, Ecography, etc. Maybe t |
14,551 | Visualizing a spline basis | Try this, as an example for B-splines:
library(splines)  # bs() is provided by the splines package (shipped with base R)
x <- seq(0, 1, by=0.001)
spl <- bs(x,df=6)
plot(spl[,1]~x, ylim=c(0,max(spl)), type='l', lwd=2, col=1,
xlab="Cubic B-spline basis", ylab="")
for (j in 2:ncol(spl)) lines(spl[,j]~x, lwd=2, col=j)
Giving this: | Visualizing a spline basis | Try this, as an example for B-splines:
x <- seq(0, 1, by=0.001)
spl <- bs(x,df=6)
plot(spl[,1]~x, ylim=c(0,max(spl)), type='l', lwd=2, col=1,
xlab="Cubic B-spline basis", ylab="")
for (j in 2:nc | Visualizing a spline basis
Try this, as an example for B-splines:
x <- seq(0, 1, by=0.001)
spl <- bs(x,df=6)
plot(spl[,1]~x, ylim=c(0,max(spl)), type='l', lwd=2, col=1,
xlab="Cubic B-spline basis", ylab="")
for (j in 2:ncol(spl)) lines(spl[,j]~x, lwd=2, col=j)
Giving this: | Visualizing a spline basis
Try this, as an example for B-splines:
x <- seq(0, 1, by=0.001)
spl <- bs(x,df=6)
plot(spl[,1]~x, ylim=c(0,max(spl)), type='l', lwd=2, col=1,
xlab="Cubic B-spline basis", ylab="")
for (j in 2:nc |
14,552 | Visualizing a spline basis | Here's an autoplot method for the "basis" class (which both bs and ns inherit from):
library(ggplot2)
library(magrittr)
library(reshape2)
library(stringr)
autoplot.basis <- function(basis, n=1000) {
all.knots <- sort(c(attr(basis, "Boundary.knots"), attr(basis, "knots"))) %>%
unname
bounds <- range(all.k... | Visualizing a spline basis | Here's an autoplot method for the "basis" class (which both bs and ns inherit from):
library(ggplot2)
library(magrittr)
library(reshape2)
library(stringr)
autoplot.basis <- function(basis, n=1000) {
| Visualizing a spline basis
Here's an autoplot method for the "basis" class (which both bs and ns inherit from):
library(ggplot2)
library(magrittr)
library(reshape2)
library(stringr)
autoplot.basis <- function(basis, n=1000) {
all.knots <- sort(c(attr(basis,"Boundary.knots") ,attr(basis, "knots"))) %>%
unnam... | Visualizing a spline basis
Here's an autoplot method for the "basis" class (which both bs and ns inherit from):
library(ggplot2)
library(magrittr)
library(reshape2)
library(stringr)
autoplot.basis <- function(basis, n=1000) {
|
14,553 | Interpreting Granger causality test's results | Caveat: I'm not particularly well-versed in Granger causality, but I am generally statistically competent and I have read and mostly understood Judea Pearl's Causality, which I recommend for more info.
Is my interpretation directionally correct
Yes. The fact that first hypothesis was rejected and second was not means... | Interpreting Granger causality test's results | Caveat: I'm not particularly well-versed in Granger causality, but I am generally statistically competent and I have read and mostly understood Judea Pearl's Causality, which I recommend for more info | Interpreting Granger causality test's results
Caveat: I'm not particularly well-versed in Granger causality, but I am generally statistically competent and I have read and mostly understood Judea Pearl's Causality, which I recommend for more info.
Is my interpretation directionally correct
Yes. The fact that first hy... | Interpreting Granger causality test's results
Caveat: I'm not particularly well-versed in Granger causality, but I am generally statistically competent and I have read and mostly understood Judea Pearl's Causality, which I recommend for more info |
14,554 | Pitfalls of linear mixed models | This is a good question.
Here are some common pitfalls:
Using standard likelihood theory, we may derive a test to compare two nested
hypotheses, $H_0$ and $H_1$, by computing the likelihood ratio test statistic. The null distribution of this test statistic is approximately chi-squared with degrees of freedom equal to ... | Pitfalls of linear mixed models | This is a good question.
Here are some common pitfalls:
Using standard likelihood theory, we may derive a test to compare two nested
hypotheses, $H_0$ and $H_1$, by computing the likelihood ratio tes | Pitfalls of linear mixed models
This is a good question.
Here are some common pitfalls:
Using standard likelihood theory, we may derive a test to compare two nested
hypotheses, $H_0$ and $H_1$, by computing the likelihood ratio test statistic. The null distribution of this test statistic is approximately chi-squared w... | Pitfalls of linear mixed models
This is a good question.
Here are some common pitfalls:
Using standard likelihood theory, we may derive a test to compare two nested
hypotheses, $H_0$ and $H_1$, by computing the likelihood ratio tes |
14,555 | Pitfalls of linear mixed models | The common pitfall which I see is ignoring the variance of random effects. If it is large compared to residual variance or variance of dependent variable, the fit usually looks nice, but only because random effects account for all the variance. But since the graph of actual vs predicted looks nice you are inclined ... | Pitfalls of linear mixed models | The common pitfall which I see is ignoring the variance of random effects. If it is large compared to residual variance or variance of dependent variable, the fit usually looks nice, but only beca | Pitfalls of linear mixed models
The common pitfall which I see is ignoring the variance of random effects. If it is large compared to residual variance or variance of dependent variable, the fit usually looks nice, but only because random effects account for all the variance. But since the graph of actual vs predic... | Pitfalls of linear mixed models
The common pitfall which I see is ignoring the variance of random effects. If it is large compared to residual variance or variance of dependent variable, the fit usually looks nice, but only beca
14,556 | Pitfalls of linear mixed models | Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care must be taken to build an appropriate covariance structure otherwise tests of hypotheses, confidence intervals, and estimat... | Pitfalls of linear mixed models | Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care mus | Pitfalls of linear mixed models
Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care must be taken to build an appropriate covariance structure otherwise tests of hypotheses, c... | Pitfalls of linear mixed models
Modeling the variance structure is arguably the most powerful and important single feature of mixed models. This extends beyond variance structure to include correlation among observations. Care mus |
14,557 | Is random forest for regression a 'true' regression? | This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient splits, a step function with many small steps can approximate a smooth function. So this need not be a problem. If you re... | Is random forest for regression a 'true' regression? | This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient s | Is random forest for regression a 'true' regression?
This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient splits, a step function with many small steps can approximate a smoo... | Is random forest for regression a 'true' regression?
This is correct - random forests discretize continuous variables since they are based on decision trees, which function through recursive binary partitioning. But with sufficient data and sufficient s |
14,558 | Is random forest for regression a 'true' regression? | It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100 different trees with 100 leaves each, then your random forest can theoretically have 100^100 different values, which ca... | Is random forest for regression a 'true' regression? | It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100 | Is random forest for regression a 'true' regression?
It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100 different trees with 100 leaves each, then your random forest can ... | Is random forest for regression a 'true' regression?
It is discrete, but then any output in the form of a floating point number with fixed number of bits will be discrete. If a tree has 100 leaves, then it can give 100 different numbers. If you have 100 |
14,559 | Is random forest for regression a 'true' regression? | The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models conditional expectation. And a regression tree can indeed be seen as an estimator of conditional expectation.
In the leaf node... | Is random forest for regression a 'true' regression? | The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models condit | Is random forest for regression a 'true' regression?
The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models conditional expectation. And a regression tree can indeed be seen as an e... | Is random forest for regression a 'true' regression?
The answer will depend on what is your definition of regression, see Definition and delimitation of regression model. But a usual definition (or part of a definition) is that regression models condit |
14,560 | Is random forest for regression a 'true' regression? | It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of the training data; there is a nice graphical example here. | Is random forest for regression a 'true' regression? | It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of th | Is random forest for regression a 'true' regression?
It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of the training data; there is a nice graphical example here. | Is random forest for regression a 'true' regression?
It's perhaps worth adding that Random Forest models can't extrapolate outside the range of the training data, since their lowest and highest values are always going to be averages of some subset of th |
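To see why such predictions are bounded by the training range, here is a toy recursive-partitioning regressor in plain Python — a deliberately crude sketch (mean-split, fixed depth), not any library's implementation. Every prediction is a leaf mean of training targets, so it can never leave [min(y), max(y)], and only finitely many output values are possible:

```python
import statistics

def grow(xs, ys, depth=0, max_depth=3, min_leaf=2):
    """Recursive binary partitioning on 1-D data; each leaf stores a mean response."""
    if depth == max_depth or len(ys) <= min_leaf:
        return statistics.fmean(ys)
    split = statistics.fmean(xs)  # crude split point: the mean of x
    left = [(x, y) for x, y in zip(xs, ys) if x <= split]
    right = [(x, y) for x, y in zip(xs, ys) if x > split]
    if not left or not right:
        return statistics.fmean(ys)
    return (split,
            grow([p[0] for p in left], [p[1] for p in left], depth + 1, max_depth, min_leaf),
            grow([p[0] for p in right], [p[1] for p in right], depth + 1, max_depth, min_leaf))

def predict(node, x):
    while isinstance(node, tuple):
        split, lo, hi = node
        node = lo if x <= split else hi
    return node

xs = [i / 20 for i in range(21)]  # training x in [0, 1]
ys = [2 * x for x in xs]          # y = 2x, so responses lie in [0, 2]
tree = grow(xs, ys)
```

Querying the tree far outside [0, 1] still returns a value inside [0, 2]; averaging many such trees, as a forest does, cannot change that.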
14,561 | Why the default matrix norm is spectral norm and not Frobenius norm? | In general, I am unsure that the spectral norm is the most widely used. For example the Frobenius norm is used to approximate solutions in non-negative matrix factorisation or correlation/covariance matrix regularisation.
I think that part of this question stems from the terminology misdemeanour some people do (mys... | Why the default matrix norm is spectral norm and not Frobenius norm? | In general, I am unsure that the spectral norm is the most widely used. For example the Frobenius norm is used to approximate solutions in non-negative matrix factorisation or correlation/covarian | Why the default matrix norm is spectral norm and not Frobenius norm?
In general, I am unsure that the spectral norm is the most widely used. For example the Frobenius norm is used to approximate solutions in non-negative matrix factorisation or correlation/covariance matrix regularisation.
I think that part of this... | Why the default matrix norm is spectral norm and not Frobenius norm?
In general, I am unsure that the spectral norm is the most widely used. For example the Frobenius norm is used to approximate solutions in non-negative matrix factorisation or correlation/covarian
14,562 | Why the default matrix norm is spectral norm and not Frobenius norm? | A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$ due to the constraints of finite arithmetics, so that $A\tilde x \approx b$, in some suitable sense. What is it that you... | Why the default matrix norm is spectral norm and not Frobenius norm? | A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$ | Why the default matrix norm is spectral norm and not Frobenius norm?
A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$ due to the constraints of finite arithmetics, so th... | Why the default matrix norm is spectral norm and not Frobenius norm?
A part of the answer may be related to numeric computing.
When you solve the system
$$
Ax=b
$$
in finite precision, you don't get the exact answer to that problem. You get an approximation $\tilde x$ |
14,563 | Why the default matrix norm is spectral norm and not Frobenius norm? | The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constants $C_1,C_2$, which depend only on dimension (and a,b) such that:
$$C_1\|x\|_b\leq \|x\|_a\leq C_2\|x\|_b.$$
This impli... | Why the default matrix norm is spectral norm and not Frobenius norm? | The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constan | Why the default matrix norm is spectral norm and not Frobenius norm?
The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constants $C_1,C_2$, which depend only on dimension (and ... | Why the default matrix norm is spectral norm and not Frobenius norm?
The answer to this depends on the field you're in. If you're a mathematician, then all norms in finite dimensions are equivalent: for any two norms $\|\cdot\|_a$ and $\|\cdot\|_b$, there exist constan |
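The equivalence constants are easy to check numerically. For the spectral and Frobenius norms the sharp bounds are $\|A\|_2 \le \|A\|_F \le \sqrt{\operatorname{rank}(A)}\,\|A\|_2$; a plain-Python sketch on a made-up matrix (power iteration is valid here because the example matrix is symmetric positive definite):

```python
import math

A = [[2.0, 1.0], [1.0, 2.0]]  # symmetric, eigenvalues 3 and 1

fro = math.sqrt(sum(a * a for row in A for a in row))  # Frobenius norm = sqrt(10)

# spectral norm by power iteration: converges to the dominant eigenvalue, 3,
# since A is symmetric positive definite
v = [1.0, 0.3]
for _ in range(200):
    w = [A[0][0] * v[0] + A[0][1] * v[1],
         A[1][0] * v[0] + A[1][1] * v[1]]
    n = math.sqrt(w[0] ** 2 + w[1] ** 2)
    v = [w[0] / n, w[1] / n]
spec = n  # ||A v|| for a unit dominant eigenvector v
```

Here $3 \le \sqrt{10} \le \sqrt{2}\cdot 3$, as the inequality chain predicts for a rank-2 matrix.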
14,564 | Why does propensity score matching work for causal inference? | I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs when a variable affects not only the treatment assigned but also the outcomes. When a randomized experiment is performe... | Why does propensity score matching work for causal inference? | I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs | Why does propensity score matching work for causal inference?
I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs when a variable affects not only the treatment assigned bu... | Why does propensity score matching work for causal inference?
I'll try to give you an intuitive understanding with minimal emphasis on the mathematics.
The main problem with observational data and analyses that stem from it is confounding. Confounding occurs |
14,565 | Why does propensity score matching work for causal inference? | In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to adjust for more observed potential confounders than that sample size may allow regression models to incorporate. Propensi... | Why does propensity score matching work for causal inference? | In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to ad | Why does propensity score matching work for causal inference?
In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to adjust for more observed potential confounders than that sam... | Why does propensity score matching work for causal inference?
In a strict sense, propensity score adjustment has no more to do with causal inference than regression modeling does. The only real difference with propensity scores is that they make it easier to ad |
14,566 | Why does propensity score matching work for causal inference? | I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correlated with the potential outcomes $y_{0i},y_{1i}$ and with the likelihood of receiving treatment, then if you find that th... | Why does propensity score matching work for causal inference? | I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correla | Why does propensity score matching work for causal inference?
I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correlated with the potential outcomes $y_{0i},y_{1i}$ and with t... | Why does propensity score matching work for causal inference?
I recommend checking out Mostly Harmless Econometrics - they have a good explanation of this at an intuitive level.
The problem you're trying to solve is selection bias. If a variable $x_i$ is correla |
14,567 | Why does propensity score matching work for causal inference? | It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps many confounding variables, or a regression model with only one variable - the propensity score (that may or may not be a... | Why does propensity score matching work for causal inference? | It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps m | Why does propensity score matching work for causal inference?
It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps many confounding variables, or a regression model with only... | Why does propensity score matching work for causal inference?
It "works" for the same reason that regression "works" - you're controlling for all confounding factors.
You can accomplish such analytical control by a fully specified regression model with perhaps m |
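The "controlling for confounders" point can be illustrated with a toy simulation (a hypothetical setup, not from any of the answers): a binary confounder drives both treatment and outcome, so the naive mean difference is biased, while stratifying on the propensity score — here exactly known and two-valued — recovers the true effect:

```python
import random

random.seed(1)
tau = 1.0  # true treatment effect (chosen for this toy example)
rows = []
for _ in range(20000):
    z = random.random() < 0.5      # binary confounder
    e = 0.8 if z else 0.2          # propensity score P(T = 1 | Z)
    t = random.random() < e
    y = 2.0 * z + tau * t + random.gauss(0.0, 1.0)
    rows.append((z, t, y))

def mean_diff(sub):
    """Difference in mean outcome, treated minus untreated, within sub."""
    y1 = [y for _, t, y in sub if t]
    y0 = [y for _, t, y in sub if not t]
    return sum(y1) / len(y1) - sum(y0) / len(y0)

naive = mean_diff(rows)  # confounded: close to 2.2, not to tau
adjusted = 0.5 * mean_diff([r for r in rows if r[0]]) \
         + 0.5 * mean_diff([r for r in rows if not r[0]])  # stratify on the propensity
```

With a continuous propensity score one matches or stratifies on its estimate instead of conditioning exactly, but the logic is the same.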
14,568 | Does $r$-squared have a $p$-value? | In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value associated with $r^2$ "directly" using the fact that $r^2$ under the null hypothesis is distributed as $\textrm{Beta}(\f... | Does $r$-squared have a $p$-value? | In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value | Does $r$-squared have a $p$-value?
In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value associated with $r^2$ "directly" using the fact that $r^2$ under the null hypothesis ... | Does $r$-squared have a $p$-value?
In addition to the numerous (correct) comments by other users pointing out that the $p$-value for $r^2$ is identical to the $p$-value for the global $F$ test, note that you can also get the $p$-value |
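The truncated formula above is, for simple regression with an intercept, $r^2 \sim \mathrm{Beta}\!\left(\tfrac{1}{2}, \tfrac{n-2}{2}\right)$ under the null. A seeded Monte Carlo sketch in plain Python (not from the original answer) checks one consequence, $E[r^2] = 1/(n-1)$:

```python
import random
import statistics

random.seed(0)

def r_squared(xs, ys):
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy * sxy / (sxx * syy)

n, sims = 5, 20000
draws = []
for _ in range(sims):
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [random.gauss(0, 1) for _ in range(n)]  # independent of xs: H0 is true
    draws.append(r_squared(xs, ys))

mean_r2 = statistics.fmean(draws)  # theory: E[r^2] = 1/(n-1) = 0.25 under H0
```

The "direct" p-value is then the upper-tail probability of that Beta distribution at the observed $r^2$, which matches the global $F$-test p-value exactly.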
14,569 | Does $r$-squared have a $p$-value? | I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population correlation between $X$ and response $Y$
F-test for zero population R-squared, i.e. nothing of the variability of $Y$ can ... | Does $r$-squared have a $p$-value? | I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population co | Does $r$-squared have a $p$-value?
I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population correlation between $X$ and response $Y$
F-test for zero population R-squared, i.e. not... | Does $r$-squared have a $p$-value?
I hope this fourth (!) answer clarifies things further.
In simple linear regression, there are three equivalent tests:
t-test for zero population slope of covariable $X$
t-test for zero population co |
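The equivalence of the slope t-test and the F-test for $R^2$ is an algebraic identity, $t^2 = F$, which a short plain-Python check on made-up numbers confirms:

```python
import statistics

xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]  # made-up data for the check
ys = [1.1, 1.9, 3.4, 3.6, 5.2, 5.8]
n = len(xs)
mx, my = statistics.fmean(xs), statistics.fmean(ys)
sxx = sum((x - mx) ** 2 for x in xs)
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
syy = sum((y - my) ** 2 for y in ys)

b = sxy / sxx                          # OLS slope
sse = syy - b * sxy                    # residual sum of squares
t = b / (sse / (n - 2) / sxx) ** 0.5   # t statistic for H0: slope = 0

r2 = sxy * sxy / (sxx * syy)
F = (r2 / 1) / ((1 - r2) / (n - 2))    # F statistic for H0: R^2 = 0
```

Since a $t_{n-2}$ variable squared is $F_{1,\,n-2}$, the two tests return the same p-value.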
14,570 | Does $r$-squared have a $p$-value? | You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical. | Does $r$-squared have a $p$-value? | You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical. | Does $r$-squared have a $p$-value?
You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical. | Does $r$-squared have a $p$-value?
You seem to have a decent understanding to me. We could get a $p$-value for $r^2$, but since it is a (non-stochastic) function of $r$, the $p$s would be identical. |
14,571 | Does $r$-squared have a $p$-value? | There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution of a test statistic under the null hypothesis. Your title and question seems to have some confusion between Pearson cor... | Does $r$-squared have a $p$-value? | There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution | Does $r$-squared have a $p$-value?
There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution of a test statistic under the null hypothesis. Your title and question seems to have... | Does $r$-squared have a $p$-value?
There are several ways of deriving the test statistic for tests of the Pearson correlation, $\rho$. To obtain a $p$-value, it is worth emphasizing that you need both a test and a sampling distribution |
14,572 | Does $r$-squared have a $p$-value? | This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a distribution, so a $p$-value doesn't really make sense.
Getting a $p$-value for $b$ makes a lot of sense - that's what t... | Does $r$-squared have a $p$-value? | This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a | Does $r$-squared have a $p$-value?
This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a distribution, so a $p$-value doesn't really make sense.
Getting a $p$-value for $b$ m... | Does $r$-squared have a $p$-value?
This isn't quite how I would interpret things. I don't think I'd ever calculate a $p$-value for $r$ or $r^2$. $r$ and $r^2$ are qualitative measures of a model, not measures that we're comparing to a |
14,573 | Variance-covariance structure for random-effects in lme4 | The default variance-covariance structure is unstructured -- that is, the only constraint on the variance-covariance matrix for a vector-valued random effect with $n$ levels is that it is positive definite. Separate random effects terms are considered independent, however, so if you want to fit (e.g.) a model with random ...
The default variance-covariance structure is unstructured -- that is, the only constraint on the variance-covariance matrix for a vector-valued random effect with $n$ levels is that it is positive definite. Separate random effects terms are considered independent, h... | Variance-covariance structure for random-effects in lme4
The default variance-covariance structure is unstructured -- that is, the only constraint on the variance-covariance matrix for a vector-valued random effect with $n$ levels is that it is positive defini
14,574 | Variance-covariance structure for random-effects in lme4 | I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects; they are determined by the way the formula is written.
For example:
glmer(y ~ 1 + x1 + (1|g) + (0+x1|g), data=data, family="binomial")
Here there are two fixed effects that are allowed to vary ran... | Variance-covariance structure for random-effects in lme4 | I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects; they are determined by the way the formula is written.
For example:
glmer(y ~ | Variance-covariance structure for random-effects in lme4
I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects; they are determined by the way the formula is written.
For example:
glmer(y ~ 1 + x1 + (1|g) + (0+x1|g), data=data, family="binomial")
Here... | Variance-covariance structure for random-effects in lme4
I can show this by example.
Covariance terms are specified in the same formula as the fixed and random effects; they are determined by the way the formula is written.
For example:
glmer(y ~ |
14,575 | Does MLE require i.i.d. data? Or just independent parameters? | The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event }E;\theta)= {\mathbb P}(\text{observing } {\bf x};\theta).$$
Therefore, there is no assumption of independence of the o... | Does MLE require i.i.d. data? Or just independent parameters? | The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event | Does MLE require i.i.d. data? Or just independent parameters?
The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event }E;\theta)= {\mathbb P}(\text{observing } {\bf x};\theta).... | Does MLE require i.i.d. data? Or just independent parameters?
The likelihood function is defined as the probability of an event $E$ (data set ${\bf x}$) as a function of the model parameters $\theta$
$${\mathcal L}(\theta;{\bf x})\propto {\mathbb P}(\text{Event |
14,576 | Does MLE require i.i.d. data? Or just independent parameters? | (+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be produced by IID sampling.
If the dependence of the sampling can be written in the statistical model, you just write the ... | Does MLE require i.i.d. data? Or just independent parameters? | (+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be | Does MLE require i.i.d. data? Or just independent parameters?
(+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be produced by IID sampling.
If the dependence of the samplin... | Does MLE require i.i.d. data? Or just independent parameters?
(+1) Very good question.
Minor thing, MLE stands for maximum likelihood estimate (not multiple), which means that you just maximize the likelihood. This does not specify that the likelihood has to be |
14,577 | Does MLE require i.i.d. data? Or just independent parameters? | Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal googling produces papers like this one where the likelihood is given in the general form.
Another, to an extent, more int... | Does MLE require i.i.d. data? Or just independent parameters? | Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal g | Does MLE require i.i.d. data? Or just independent parameters?
Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal googling produces papers like this one where the likelihood... | Does MLE require i.i.d. data? Or just independent parameters?
Of course, Gaussian ARMA models possess a likelihood, as their covariance function can be derived explicitly. This is basically an extension of gui11ame's answer to more than 2 observations. Minimal g |
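To make the non-i.i.d. point concrete: a Gaussian AR(1) has an explicit exact likelihood (stationary density for the first observation, conditional Gaussians thereafter), and maximizing it is ordinary MLE. A plain-Python sketch with a crude grid search, on simulated data with assumed parameters (not from the original answers):

```python
import math
import random

random.seed(42)
phi_true, sigma, T = 0.6, 1.0, 500
x = [random.gauss(0.0, sigma / math.sqrt(1 - phi_true ** 2))]  # stationary draw for x[0]
for _ in range(T - 1):
    x.append(phi_true * x[-1] + random.gauss(0.0, sigma))

def loglik(phi):
    """Exact Gaussian AR(1) log-likelihood: stationary density for x[0],
    then conditional densities -- the observations are dependent, not i.i.d."""
    v0 = sigma ** 2 / (1 - phi ** 2)
    ll = -0.5 * (math.log(2 * math.pi * v0) + x[0] ** 2 / v0)
    for t in range(1, T):
        e = x[t] - phi * x[t - 1]
        ll -= 0.5 * (math.log(2 * math.pi * sigma ** 2) + e ** 2 / sigma ** 2)
    return ll

# crude grid-search MLE over the stationary region (-1, 1)
phi_hat = max([i / 1000 for i in range(-990, 991)], key=loglik)
```

Nothing in the construction requires independent observations; the joint density is simply factored with the chain rule.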
14,578 | What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators?
Basic OLS regression is a very good technique for fitting a function to a set of data. However, simple regression only fits a straight line that is constant for the entire possible range of $X$. This may not be appropriate for a given situation. For instance, data sometimes show a curvilinear relationship. This can...
14,579 | What are the advantages / disadvantages of using splines, smoothed splines, and gaussian process emulators?
Cosma Shalizi's online notes on his lecture course Advanced Data Analysis from an Elementary Point of View are quite good on this subject, looking at things from a perspective where interpolation and regression are two approaches to the same problem. I'd particularly draw your attention to the chapters on smoothing met...
14,580 | Correlated Bernoulli trials, multivariate Bernoulli distribution?
No, this is impossible whenever you have three or more coins.
The case of two coins
Let us first see why it works for two coins as this provides some intuition about what breaks down in the case of more coins.
Let $X$ and $Y$ denote the Bernoulli distributed variables corresponding to the two cases, $X \sim \mathrm{Ber...
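As a numeric sketch of the two-coin case (assuming both coins share the same marginal probability p and using the standard identity Cov(X, Y) = rho * p * (1 - p); the helper name is ours, not from the answer):

```python
def joint_pmf(p, rho):
    """Joint pmf of two Bernoulli(p) variables with correlation rho.

    Uses Cov(X, Y) = rho * p * (1 - p) for equal marginals; this is only a
    valid distribution when all four probabilities come out non-negative.
    """
    cov = rho * p * (1 - p)
    return {
        (1, 1): p * p + cov,
        (1, 0): p * (1 - p) - cov,
        (0, 1): p * (1 - p) - cov,
        (0, 0): (1 - p) ** 2 + cov,
    }

# Two fair coins with correlation -1: the outcomes are perfectly opposed,
# so (1, 1) and (0, 0) get probability 0 and the mixed outcomes get 0.5 each.
pmf = joint_pmf(0.5, -1.0)
```

For two coins any correlation in the feasible range can be realised this way; the answer's point is that the analogous construction breaks down for three or more coins.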
14,581 | Correlated Bernoulli trials, multivariate Bernoulli distribution?
The beta-binomial distribution is one solution for the count outcome of exchangeable correlated Bernoulli values (see e.g., Hisakado et al 2006). It should be possible to parameterise this distribution to give a specified correlation value, and then calculate the probability you want.
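A sketch of that parameterisation (the pmf below is the standard beta-binomial form, and rho = 1 / (a + b + 1) is the standard pairwise correlation of the underlying exchangeable Bernoullis; the helper names are ours, not from the answer):

```python
import math

def beta_binom_pmf(k, n, a, b):
    """P(K = k) for K ~ beta-binomial(n, a, b), computed via log-gammas:
    C(n, k) * B(k + a, n - k + b) / B(a, b)."""
    log_p = (
        math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)
        + math.lgamma(k + a) + math.lgamma(n - k + b) - math.lgamma(n + a + b)
        + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    )
    return math.exp(log_p)

def params_for(p, rho):
    """Choose (a, b) so the marginal success probability is p and the
    pairwise Bernoulli correlation is rho, using rho = 1 / (a + b + 1)."""
    s = 1.0 / rho - 1.0  # this is a + b
    return p * s, (1 - p) * s

a, b = params_for(p=0.5, rho=0.2)  # gives a = b = 2
probs = [beta_binom_pmf(k, 10, a, b) for k in range(11)]
# probs sums to 1, with mean count n * p = 5.
```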
14,582 | Fitting a binomial GLMM (glmer) to a response variable that is a proportion or fraction
The binomial GLMM is probably the right answer.
Especially with a small to moderate number of samples (9 and 10 in your example), the distribution of the response variable will probably be heteroscedastic (the variance will not be constant, and in particular will depend on the mean in systematic ways) and far from N...
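The mean/variance dependence mentioned above follows from the binomial variance formula; a small numeric illustration (the values of p are made up):

```python
def proportion_variance(p, n):
    """Variance of an observed proportion from n Bernoulli(p) trials: p(1-p)/n."""
    return p * (1 - p) / n

# With n = 10 trials, the variance depends strongly on the mean p:
# it is largest at p = 0.5 and shrinks toward the boundaries.
variances = [round(proportion_variance(p, 10), 4) for p in (0.1, 0.5, 0.9)]
# variances == [0.009, 0.025, 0.009]
```

This is exactly the non-constant, mean-dependent variance that makes a Gaussian model of raw proportions inappropriate and a binomial GLMM the natural choice.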
14,583 | What does log-uniformly distribution mean?
I believe it means that the log is uniformly distributed, and the variable takes values in the range $[128, 4000]$.
From a footnote of the paper:
We will use the phrase drawn geometrically from A to B for 0 < A < B to mean drawing uniformly in the log domain between log(A) and log(B), exponentiating to get a number b...
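That footnote's recipe translates directly to code (a sketch; the bounds 128 and 4000 are the range quoted above, and the helper name is ours):

```python
import math
import random

def draw_log_uniform(a, b):
    """Draw uniformly in the log domain between log(a) and log(b),
    then exponentiate, as the footnote describes."""
    return math.exp(random.uniform(math.log(a), math.log(b)))

random.seed(0)
samples = [draw_log_uniform(128, 4000) for _ in range(100_000)]
# Every draw lies in [128, 4000], and about half of the draws fall below
# the geometric midpoint sqrt(128 * 4000) (roughly 715.5), rather than
# below the arithmetic midpoint 2064 as a plain uniform would.
```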
14,584 | Is "test statistic" a value or a random variable?
The short answer is "yes".
The tradition in notation is to use an upper case letter (T in the above) to represent a random variable, and a lower case letter (t) to represent a specific value computed or observed of that random variable.
T is a random variable because it represents the results of calculating from a samp...
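The distinction can be made concrete in code (a sketch using the standard one-sample t formula; the simulated samples are made up for illustration):

```python
import random
import statistics

def t_statistic(sample, mu0=0.0):
    """One-sample t: (xbar - mu0) / (s / sqrt(n))."""
    n = len(sample)
    return (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / n ** 0.5)

random.seed(1)
# T as a random variable: a fresh value for every fresh sample drawn...
t_values = [t_statistic([random.gauss(0, 1) for _ in range(20)]) for _ in range(5)]
# ...versus t as a realized value: one fixed number computed from the one
# sample you actually observed.
observed_sample = [random.gauss(0, 1) for _ in range(20)]
t = t_statistic(observed_sample)
```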
14,585 | Is "test statistic" a value or a random variable?
A test statistic is a statistic used in making a decision about the null hypothesis.
A statistic is a realized value (e.g. t): A statistic is a numerical value that states something about a sample. As statistics are used to estimate the value of a population parameter, they are themselves values. Because (long enough)...
14,586 | Is "test statistic" a value or a random variable?
A test statistic is an observation specific to your observed data that follows a probability distribution under a given assumption. This assumption is usually called the $H_0$.
For instance, in your sample the test statistic (called t-statistic) depends on the observed data ($\bar{x}$ and $s$ are both derived from the...
14,587 | Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
As a matter of principle, there is not necessarily any tension between hypothesis testing and machine learning. As an example, if you train 2 models, it's perfectly reasonable to ask whether the models have the same or different accuracy (or another statistic of interest), and perform a hypothesis test.
But as a matter...
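One common such test for comparing two classifiers evaluated on the same test set is McNemar's test (a sketch, not from the answer; the counts b and c below are invented for illustration):

```python
import math

def mcnemar(b, c):
    """McNemar's test with continuity correction.

    b: test cases only model A classified correctly;
    c: test cases only model B classified correctly.
    Returns the chi-square statistic (1 df) and its p-value.
    """
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of chi-square(1): P(X > x) = erfc(sqrt(x / 2)).
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

stat, p = mcnemar(b=10, c=20)
# stat = (|10 - 20| - 1)^2 / 30 = 2.7; p is about 0.10, so these invented
# counts give no strong evidence that the two models' accuracies differ.
```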
14,588 | Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
This is generally because the use-case, at least historically, for hypothesis testing in statistics is often about simply making a generalization. The use-case in machine learning is to build a useful model, usually under the assumption of the corresponding generalization.
Take, for example, Fisher's Iris Flower Datas...
14,589 | Why does machine learning have metrics such as accuracy, precision or recall to prove best models, but statistics uses hypothesis tests?
Anything can be seen as a "metric", and both groups, statisticians and machine-learners, use plenty of those: accuracy, mean value, estimated parameter of a model, etc. Hypothesis testing is done on top of these "metrics" in order to measure their uncertainty.
For example, if you have 5 male and 5 female students you c...
14,590 | What are some good datasets to learn basic machine learning algorithms and why?
The data sets in the following sites are available for free. These data sets have been used to teach ML algorithms to students because for most there are descriptions with the data sets. Also, it's been mentioned which kind of algorithms are applicable.
UCI Machine Learning Repository
ML Comp
Mammo Image
Mulan
14,591 | What are some good datasets to learn basic machine learning algorithms and why?
Kaggle has a whole host of datasets you can use to practice with.
(I'm surprised it wasn't mentioned so far!)
It's got two things (among many others) that make it an invaluable resource:
Lots of clean datasets. While noise-free datasets aren't really representative of real-world datasets, they're especially sui...
14,592 | What are some good datasets to learn basic machine learning algorithms and why?
First, I'd recommend starting with the sample data that is provided with the software. Most software distributions include example data that you can use to get familiar with the algorithm without dealing with data types and wrestling the data into the right format for the algorithm. Even if you are building an algori...
14,593 | What are some good datasets to learn basic machine learning algorithms and why?
The Iris data set, hands down. It's in base R as well.
14,594 | What are some good datasets to learn basic machine learning algorithms and why?
In my opinion, you should start with small datasets which do not have too many features.
One example would be the Iris dataset (for classification). It has 3 classes, 50 samples for each class, totaling 150 data points. One excellent resource to help you explore this dataset is this video series by Data School.
Ano...
14,595 | Can the likelihood take values outside of the range [0, 1]? [duplicate]
Likelihood must be at least 0, and can be greater than 1.
Consider, for example, the likelihood for three observations from a uniform on (0, 0.1); when non-zero, the density is 10, so the product of the densities would be 1000.
Consequently log-likelihood may be negative, but it may also be positive.
[Indeed, according to s...
14,596 | Can the likelihood take values outside of the range [0, 1]? [duplicate]
The likelihood function is a product of density functions for independent samples. A density function can take any non-negative value, including values greater than 1. The log-likelihood is the logarithm of a likelihood function. If your likelihood function $L\left(x\right)$ has values in $\left(0,1\right)$ for some $x$, then the log-likelihood function ...
14,597 | Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
The original version of this answer was missing the point (that's when the answer got a couple of downvotes). The answer was fixed in October 2015.
This is a somewhat controversial topic.
It is often claimed that LOOCV has higher variance than $k$-fold CV, and that it is so because the training sets in LOOCV have more ...
14,598 | Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
From An Introduction to Statistical Learning:
When we perform LOOCV, we are in effect averaging the outputs of $n$ fitted models, each of which is trained on an almost identical set of observations; therefore, these outputs are highly (positively) correlated with each other. In contrast, when we perform $k$-fold CV with...
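The effect of that correlation on the averaged estimate can be quantified with the standard identity for the variance of a mean of equicorrelated variables (an illustration, not from the book; the numbers are made up):

```python
def var_of_mean(var, rho, n):
    """Variance of the average of n equally correlated variables,
    each with variance var and pairwise correlation rho:
    var * (1 + (n - 1) * rho) / n."""
    return var * (1 + (n - 1) * rho) / n

# Averaging n = 100 highly correlated estimates barely reduces variance...
high = var_of_mean(1.0, rho=0.9, n=100)  # 0.901
# ...while averaging weakly correlated ones reduces it far more.
low = var_of_mean(1.0, rho=0.1, n=100)   # 0.109
```

This is the mechanism the ISL quote appeals to: the $n$ LOOCV outputs are highly correlated, so averaging them reduces variance less than averaging the less-correlated outputs of $k$-fold CV.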
14,599 | Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
In simple cases I think the answer is: the grand mean (over all test cases and all folds) has the same variance for $k$-fold and LOO validation.
Simple means here: models are stable, so each of the $k$ or $n$ surrogate models yields the same prediction for the same sample (thought experiment: test surrogate models with...
14,600 | Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
There are no folds in LOOCV as in k-fold cross-validation (they can be called folds, but the name is not meaningful). In LOOCV, one instance from the whole dataset is left out as test data and all other instances are used for training. So in each iteration it will leave one instance from the dataset to test. So in a partic...
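The procedure described can be sketched as a split generator (a hypothetical helper, not from the answer):

```python
def loocv_splits(data):
    """Yield (train, held_out) pairs: each iteration leaves exactly one
    instance out for testing and trains on all the others."""
    for i in range(len(data)):
        yield data[:i] + data[i + 1:], data[i]

dataset = ["a", "b", "c", "d"]
splits = list(loocv_splits(dataset))
# 4 iterations; every instance is the held-out test case exactly once,
# and every training set contains the other 3 instances.
```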