11,601 | Mother milk of 6 Corona-positive (COVID-19) women does not contain the virus - can we make a confidence statement about this?

These tests of mother's milk are not useful for determining a confidence interval for the risk of contamination of children. Kjetil explained very nicely in his answer that we end up with a large confidence interval of [0, 0.5], and AdamO has mentioned in the comments that this actually makes no sense because the bin...

11,602 | How true is this slide on deep learning claiming that all improvements from the 1980s are only due to much more data and much faster computers?

I was browsing the AI StackExchange and ran across a very similar question: What distinguishes “Deep Learning” from other neural networks?
Since the AI StackExchange will close tomorrow (again), I'll copy the two top answers here (user contributions licensed under cc by-sa 3.0 with attribution required):
Author: momm...

11,603 | How true is this slide on deep learning claiming that all improvements from the 1980s are only due to much more data and much faster computers?

Dropout, from Hinton in 2012, is said to be the greatest improvement in deep learning of the last 10 years, because it greatly reduces overfitting.

11,604 | How true is this slide on deep learning claiming that all improvements from the 1980s are only due to much more data and much faster computers?

This is certainly a question that will provoke controversy.
When neural networks are used in deep learning, they are typically trained in ways that weren't used in the 1980s. In particular, strategies that pretrain individual layers of the neural network to recognize features at different levels are claimed to make it...

11,605 | How true is this slide on deep learning claiming that all improvements from the 1980s are only due to much more data and much faster computers?

The key is the word "deep" in deep learning. Someone (forgot ref) in the 80s proved that all non-linear functions could be approximated by a single-layer neural network with, of course, a sufficiently large number of hidden units (this is the universal approximation theorem). I think this result probably discouraged people from seeking deeper networks in the earlie...

11,606 | How true is this slide on deep learning claiming that all improvements from the 1980s are only due to much more data and much faster computers?

Not exactly; ANNs started in the 50s. Check out ML rock star Yann LeCun's slides for an authentic and comprehensive intro:
http://www.cs.nyu.edu/~yann/talks/lecun-ranzato-icml2013.pdf

11,607 | What are the four axes on a PCA biplot?

Do you mean, e.g., the plot that the following command returns?

    biplot(prcomp(USArrests, scale = TRUE))

If yes, then the top and right axes are meant to be used for interpreting the red arrows (points depicting the variables) in the plot.
If you know how the principal component analysis works, and you can read...

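To make the two coordinate systems concrete, here is a small R sketch building on the USArrests example above (pca is just an illustrative name): the bottom/left axes index the observation scores, while the top/right axes index the variable loadings drawn as arrows.

    pca <- prcomp(USArrests, scale = TRUE)
    biplot(pca)
    pca$rotation[, 1:2]  # variable loadings: the red arrows, read against the top/right axes
    head(pca$x[, 1:2])   # observation scores: the labelled points, read against the bottom/left axes
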
11,608 | What are the four axes on a PCA biplot?

I have a better visualization for the biplot. Please check the following figure.
In the experiment, I am trying to map 3D points into 2D (a simulated data set).
The trick to understanding a biplot in 2D is finding the correct angle to see the same thing in 3D. All the data points are numbered, so you can see the mapping clearly.
...

11,609 | Why bother with low-rank approximations?

A low-rank approximation $\hat{X}$ of $X$ can be decomposed into a matrix square root as $G=U_{r}\lambda_{r}^{\frac{1}{2}}$, where the eigendecomposition of $X$ is $U\lambda U^T$, thereby reducing the number of features; the rank-$r$ approximation can then be represented via $G$ as $\hat{X}=GG^T$. Note that the su...

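A minimal R sketch of this factorization, assuming a symmetric positive semi-definite $X$ (the matrix, its size, and the rank r below are all illustrative):

    set.seed(1)
    A <- matrix(rnorm(20), 5, 4)
    X <- A %*% t(A)                                      # symmetric PSD matrix
    r <- 2
    e <- eigen(X)
    G <- e$vectors[, 1:r] %*% diag(sqrt(e$values[1:r]))  # G = U_r lambda_r^(1/2)
    X_hat <- G %*% t(G)                                  # rank-r approximation GG^T
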
11,610 | Why bother with low-rank approximations?

The point of low-rank approximation is not necessarily just for performing dimension reduction.
The idea is that, based on domain knowledge, the data/entries of the matrix will somehow make the matrix low rank. But that is in the ideal case where the entries are not affected by noise, corruption, missing values, etc. The...

11,611 | Why bother with low-rank approximations?

Once you have decided the rank of the approximation (say $r<m$), you will only retain the $r$ basis vectors for future use (say, as predictors in a regression or classification problem) and not the original $m$.

11,612 | Why bother with low-rank approximations?

Two more reasons not mentioned so far:
Reducing collinearity. I believe that most of these techniques remove collinearity, which can be helpful for follow-on processing.
Our imaginations are low-rank, so it can be helpful for exploring low-rank relationships.

11,613 | Why bother with low-rank approximations?

According to "Modern Multivariate Statistical Techniques" (Izenman), reduced-rank regression covers several interesting methods as special cases, including PCA, factor analysis, canonical variate and correlation analysis, LDA, and correspondence analysis.

11,614 | Post-hoc tests after Kruskal-Wallis: Dunn's test or Bonferroni corrected Mann-Whitney tests?

You should use a proper post hoc pairwise test like Dunn's test.*
If one proceeds by moving from a rejection of Kruskal-Wallis to performing ordinary pair-wise rank sum tests (with or without multiple comparison adjustments), one runs into two problems:
the ranks that the pair-wise rank sum tests use are not the ranks...

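For concreteness, a hedged sketch of this workflow in R using the CRAN package dunn.test (the package and its interface are an assumption here, not part of the answer; the data are simulated):

    # install.packages("dunn.test")
    library(dunn.test)
    set.seed(7)
    x <- c(rnorm(10), rnorm(10, 1), rnorm(10, 2))
    g <- rep(c("A", "B", "C"), each = 10)
    kruskal.test(x, factor(g))               # omnibus Kruskal-Wallis test
    dunn.test(x, g, method = "bonferroni")   # pairwise comparisons on the same pooled ranks
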
11,615 | Can a GAN be used for data augmentation?

Yes, a GAN can be used to "hallucinate" additional data as a form of data augmentation.
See these papers, which do pretty much what you are asking:
Data Augmentation Generative Adversarial Networks
Low-Shot Learning from Imaginary Data
GAN-based Synthetic Medical Image Augmentation for increased CNN Performance in Liver ...

11,616 | Can a GAN be used for data augmentation?

If you train your GAN on dataset A and use it to augment data on B, I think the answer is yes, since it absorbs some knowledge from A. If you train your GAN on B and try to augment B, I think the GAN is useless here because there is no gain in information.

11,617 | Can a GAN be used for data augmentation?

As already mentioned, it depends on what data you train your GAN on. But it also depends on what you expect as an outcome of the GAN. Most methods focus on completely new synthetic data, but that's not the only option. This approach by Landing AI seems to be more promising than just generating new artificial data; they augm...

11,618 | Can a GAN be used for data augmentation?

(My answer will mostly focus on tabular data, as it has proven the hardest to synthesize owing to its heterogeneity and general arbitrariness.)
Yes, we can get a generative adversarial network (GAN) to generate synthetic data. An exceptional resource on this is the Synthetic Data Vault initiative, where a number of diff...

11,619 | Can a GAN be used for data augmentation?

After a long time, I would conclude the answer is no, based on a quite solid theoretical basis:
https://en.wikipedia.org/wiki/Data_processing_inequality

11,620 | Can a GAN be used for data augmentation?

For visual tasks, data augmentation can often be accomplished by rotation, scaling, or rearranging patches. These transformations do not necessarily add information, but can be useful for models to learn to generalize better.
The generator in a GAN learns a complex distribution from its training data from which you ca...

11,621 | Regression for categorical independent variables and a continuous dependent one

Just some semantics, to be clear:
dependent variable == outcome == "$y$" in regression formulas such as $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k$
independent variable == predictor == one of the "$x_k$" in regression formulas such as $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k$
So in most situations the type of regressi...

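As a minimal illustration of this setup (my own sketch, not part of the answer): R's lm() accepts categorical predictors directly by dummy-coding factors, so a continuous outcome with categorical independent variables is plain linear regression.

    fit <- lm(mpg ~ factor(cyl) + wt, data = mtcars)  # one categorical and one continuous predictor
    summary(fit)  # each factor(cyl) coefficient contrasts that level against the baseline
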
11,622 | What is the logic behind the method of moments?

A sample consisting of $n$ realizations from identically and independently distributed random variables is ergodic. In such a case, "sample moments" are consistent estimators of the theoretical moments of the common distribution, if the theoretical moments exist and are finite.
This means that
$$\hat \mu_k(n) = \mu_k(\...

11,623 | What is the logic behind the method of moments?

Econometricians call this "the analogy principle". You compute the population mean as the expected value with respect to the population distribution; you compute the estimator as the expected value with respect to the sample distribution, and it turns out to be the sample mean. You have a unified expression
$$T(F) = \...

11,624 | What is the logic behind the method of moments?

I might be wrong, but the way that I think about it is as follows:
Let's say you have samples $X_1, X_2, \dotsc, X_n$. Then the method of moments suggests that we should compare the $m$-th moment of the sample with the $m$-th moment of the population:
$$(X_1 + X_2 + \dotsm + X_n) / n = \mu$$
Here, we are averaging all the samples out w...

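A worked sketch of this moment matching in R (my own illustration, assuming a Gamma model in the shape/rate parameterization, where mean = shape/rate and variance = shape/rate^2):

    set.seed(42)
    x  <- rgamma(1000, shape = 3, rate = 2)
    m1 <- mean(x)                # first sample moment
    v  <- mean(x^2) - m1^2       # variance from the first two sample moments
    shape_hat <- m1^2 / v        # solve mean = shape/rate and var = shape/rate^2
    rate_hat  <- m1 / v
    c(shape = shape_hat, rate = rate_hat)  # should land near (3, 2)
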
11,625 | Is it possible to apply KL divergence between discrete and continuous distributions?

KL divergence is only defined on distributions over a common space. If $p$ is a distribution on $\mathbb{R}^3$ and $q$ a distribution on $\mathbb{Z}$, then $q(x)$ doesn't make sense for points $x \in \mathbb{R}^3$ and $p(z)$ doesn't make sense for points $z \in \mathbb{Z}$.
However, if you have a discrete distribution ...

11,626 | Is it possible to apply KL divergence between discrete and continuous distributions?

Yes, the KL divergence between continuous and discrete random variables is well defined. If $P$ and $Q$ are distributions on some space $\mathbb{X}$, then both $P$ and $Q$ have densities $f$, $g$ with respect to $\mu = P+Q$, and
$$D_{KL}(P,Q) = \int_{\mathbb{X}} f \log\frac{f}{g}\,d\mu.$$
For example, if $\mathbb{X} = [...

11,627 | Is it possible to apply KL divergence between discrete and continuous distributions?

Not in general. The KL divergence is
$$D_{KL}(P \,\|\, Q) = \int_{\mathcal{X}} \log \left(\frac{dP}{dQ}\right) dP,$$
provided that $P$ is absolutely continuous with respect to $Q$ and both $P$ and $Q$ are $\sigma$-finite (i.e. under conditions where $\frac{dP}{dQ}$ is well-defined).
For a 'continuous-to-discrete' KL d...

11,628 | Balanced accuracy vs F-1 score

Mathematically, b_acc is the arithmetic mean of recall_P and recall_N, and f1 is the harmonic mean of recall_P and precision_P.
Both F1 and b_acc are metrics for classifier evaluation that (to some extent) handle class imbalance. Depending on which of the two classes (N or P) outnumbers the other, each metric is outp...

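To make the two means concrete, a small R sketch computing both metrics from one illustrative 2x2 confusion matrix (the counts are made up):

    tp <- 80; fn <- 20; fp <- 30; tn <- 870
    recall_P    <- tp / (tp + fn)   # sensitivity
    recall_N    <- tn / (tn + fp)   # specificity
    precision_P <- tp / (tp + fp)
    b_acc <- (recall_P + recall_N) / 2                              # arithmetic mean
    f1    <- 2 * precision_P * recall_P / (precision_P + recall_P)  # harmonic mean
    c(balanced_accuracy = b_acc, F1 = f1)
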
11,629 | Measures of model complexity

Besides the various measures of Minimum Description Length (e.g., normalized maximum likelihood, Fisher information approximation), there are two other methods worth mentioning:
Parametric bootstrap. It's a lot easier to implement than the demanding MDL measures. A nice paper is by Wagenmakers and colleagues:
Wagenma...

11,630 | Measures of model complexity

I think it would depend on the actual model fitting procedure. For a generally applicable measure, you might consider Generalized Degrees of Freedom described in Ye 1998 -- essentially the sensitivity of change of model estimates to perturbation of observations -- which works quite well as a measure of model complexit...

11,631 | Measures of model complexity

Minimum Description Length (MDL) and Minimum Message Length (MML) are certainly worth checking out.
As far as MDL is concerned, a simple paper that illustrates the Normalized Maximum Likelihood (NML) procedure as well as the asymptotic approximation is:
S. de Rooij & P. Grünwald. An empirical study of minimum descri...

11,632 | Measures of model complexity

Minimum Description Length may be an avenue worth purs...

11,633 | Measures of model complexity

By "model complexity" one usually means the richness of the model space. Note that this definition does not depend on data. For linear models, the richness of the model space is trivially measured by the dimension of the space. This is what some authors call the "degrees of freedom" (although historically, the degre...

11,634 | Measures of model complexity

From Yaroslav's comments on Henrik's answer:
but cross-validation seems to just postpone the task of assessing complexity. If you use data to pick your parameters and your model as in cross-validation, the relevant question becomes how to estimate the amount of data needed for this "meta"-fitter to perform well
I wonder ...

11,635 | Measures of model complexity

What about information criteria for model comparison?
See e.g. http://en.wikipedia.org/wiki/Akaike_information_criterion
Model complexity here is the number of parameters of the model.

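A quick R illustration of this idea (my own sketch): AIC penalizes the log-likelihood by the parameter count, so it can be compared across models fit to the same data.

    fit1 <- lm(mpg ~ wt, data = mtcars)
    fit2 <- lm(mpg ~ wt + hp + disp, data = mtcars)
    AIC(fit1, fit2)  # AIC = -2*logLik + 2*df; lower is better
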
11,636 | Is this an appropriate method to test for seasonal effects in suicide count data?

What about a Poisson regression?
I created a data frame containing your data, plus an index t for the time (in months) and a variable monthdays for the number of days in each month.

    T <- read.table("suicide.txt", header=TRUE)
    U <- data.frame( year = as.numeric(rep(rownames(T),each=12)),
                     month = rep(colnames(T...

11,637 | Is this an appropriate method to test for seasonal effects in suicide count data?

A chi-square test is a good approach as a preliminary look at your question.
The stl decomposition can be misleading as a tool to test for the presence of seasonality. This procedure manages to return a stable seasonal pattern even if white noise (a random signal with no structure) is passed as input. Try for example:...

11,638 | Is this an appropriate method to test for seasonal effects in suicide count data?

As noted in my comment, this is a very interesting problem. Detecting seasonality is not a statistical exercise alone. A reasonable approach would be to consult theory and experts such as:
Psychologists
Psychiatrists
Sociologists
on this problem to understand "why" there would be seasonality, to supplement the data analysis...

11,639 | Is this an appropriate method to test for seasonal effects in suicide count data?

For an initial visual estimation, the following graph can be used. Plotting the monthly data with a loess curve and its 95% confidence interval, it appears that there is a mid-year rise peaking in June. Other factors may be causing the data to have a wide distribution, hence the seasonal trend may be getting masked in this raw da...

11,640 | Can a statistical test return a p-value of zero?

If you observe a sample that's impossible under the null (and if the statistic is able to detect that), you can get a p-value of exactly zero.
That can happen in real-world problems. For example, if you do an Anderson-Darling test of goodness of fit of data to a standard uniform with some data...

11,641 | Can a statistical test return a p-value of zero?

In R, the binomial test gives a p-value of 'TRUE', presumably 0, if all trials succeed and the hypothesis is 100% success, even if the number of trials is just 1:

    > binom.test(100,100,1)
            Exact binomial test
    data:  100 and 100
    number of successes = 100, number of trials = 100, p-value = TRUE   <<<< NOTE
    alternative hyp...

11,642 | How to choose a prior in Bayesian parameter estimation

As stated in the comments, the prior distribution represents prior beliefs about the distribution of the parameters.
When prior beliefs are actually available, you can:
convert them into moments (e.g. mean and variance) to fit a common distribution to these moments (e.g. Gaussian if your parameter lies on the real l...

11,643 | How to choose a prior in Bayesian parameter estimation

There is also empirical Bayes. The idea is to tune the prior to the data:
$$\max_{p(z)} \int p(\mathcal{D}|z)\,p(z)\, dz$$
While this might seem awkward at first, there are actually relations to minimum description length. This is also the typical way to estimate the kernel parameters of Gaussian processes.

11,644 | How to choose a prior in Bayesian parameter estimation

To answer the two questions above directly:
You can choose non-conjugate priors rather than conjugate priors. The problem is that if you choose non-conjugate priors, you cannot do exact Bayesian inference (simply put, you cannot derive a closed-form posterior). Rather, you need to make approximate in...

11,645 | What is the difference between lm() and rlm()?

It (rlm) is for robust linear models. It is described in Venables & Ripley. However, details of the robust calculations would not fit in a "short answer": you need to look into several papers by Ripley, Tukey, and others.
It is a form of robust regression that uses M-estimators.
Check out this paper by Ripley for more in...

11,646 | What is the difference between lm() and rlm()?

The lm function uses the ordinary least squares (OLS) method to minimize the residuals, whereas the rlm function uses M-estimators.
OLS is very sensitive to outliers; the M-estimation method is not.

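A minimal sketch contrasting the two on data with a single gross outlier (rlm comes from the MASS package shipped with R; the data are simulated):

    library(MASS)                # provides rlm()
    set.seed(1)
    x <- 1:20
    y <- 2 * x + rnorm(20)
    y[20] <- 100                 # inject one gross outlier
    coef(lm(y ~ x))              # the OLS slope is pulled toward the outlier
    coef(rlm(y ~ x))             # M-estimation downweights it
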
11,647 | What is the difference between lm() and rlm()?

Short answer:
In rlm(), points are not treated equally. The weight of each point is adjusted in an iterative process. rlm() is less sensitive to outliers, as outliers get reduced weight.
If you want a short answer for the math, I suggest an article provided by the Johns Hopkins Bloomberg School of Public Healt...

11,648 | What's the effect of scaling a loss function in deep learning? | Short answer:
It depends on the optimizer and the regularization term:
Without regularization, using SGD optimizer: scaling loss by $\alpha$ is equivalent to scaling SGD's learning rate by $\alpha$.
Without regularization, using Nadam: scaling loss by $\alpha$ has no effect.
With regularization, using either SGD or Na... | What's the effect of scaling a loss function in deep learning? | Short answer:
It depends on the optimizer and the regularization term:
Without regularization, using SGD optimizer: scaling loss by $\alpha$ is equivalent to scaling SGD's learning rate by $\alpha$.
| What's the effect of scaling a loss function in deep learning?
Short answer:
It depends on the optimizer and the regularization term:
Without regularization, using SGD optimizer: scaling loss by $\alpha$ is equivalent to scaling SGD's learning rate by $\alpha$.
Without regularization, using Nadam: scaling loss by $\al... | What's the effect of scaling a loss function in deep learning?
Short answer:
It depends on the optimizer and the regularization term:
Without regularization, using SGD optimizer: scaling loss by $\alpha$ is equivalent to scaling SGD's learning rate by $\alpha$.
|
11,649 | Are deep learning models parametric? Or non-parametric? | Deep learning models are generally parametric - in fact they have a huge number of parameters, one for each weight that is tuned during training.
As the number of weights generally stays constant, they technically have fixed degrees of freedom. However, as there are generally so many parameters they may be seen to emu... | Are deep learning models parametric? Or non-parametric? | Deep learning models are generally parametric - in fact they have a huge number of parameters, one for each weight that is tuned during training.
As the number of weights generally stays constant, th | Are deep learning models parametric? Or non-parametric?
Deep learning models are generally parametric - in fact they have a huge number of parameters, one for each weight that is tuned during training.
As the number of weights generally stays constant, they technically have fixed degrees of freedom. However, as there ... | Are deep learning models parametric? Or non-parametric?
Deep learning models are generally parametric - in fact they have a huge number of parameters, one for each weight that is tuned during training.
As the number of weights generally stays constant, th |
11,650 | Are deep learning models parametric? Or non-parametric? | A standard deep neural network (DNN) is, technically speaking, parametric since it has a fixed number of parameters. However, most DNNs have so many parameters that they could be interpreted as nonparametric; it has been proven that in the limit of infinite width, a deep neural network can be seen as a Gaussian process... | Are deep learning models parametric? Or non-parametric? | A standard deep neural network (DNN) is, technically speaking, parametric since it has a fixed number of parameters. However, most DNNs have so many parameters that they could be interpreted as nonpar | Are deep learning models parametric? Or non-parametric?
A standard deep neural network (DNN) is, technically speaking, parametric since it has a fixed number of parameters. However, most DNNs have so many parameters that they could be interpreted as nonparametric; it has been proven that in the limit of infinite width,... | Are deep learning models parametric? Or non-parametric?
A standard deep neural network (DNN) is, technically speaking, parametric since it has a fixed number of parameters. However, most DNNs have so many parameters that they could be interpreted as nonpar |
11,651 | Are deep learning models parametric? Or non-parametric? | Deep learning models should not be considered parametric. Parametric models are defined as models based on an a priori assumption about the distributions that generate the data. Deep nets do not make assumptions about the data generating process; rather, they use large amounts of data to learn a function that maps inpu... | Are deep learning models parametric? Or non-parametric? | Deep learning models should not be considered parametric. Parametric models are defined as models based on an a priori assumption about the distributions that generate the data. Deep nets do not make | Are deep learning models parametric? Or non-parametric?
Deep learning models should not be considered parametric. Parametric models are defined as models based on an a priori assumption about the distributions that generate the data. Deep nets do not make assumptions about the data generating process; rather, they use ... | Are deep learning models parametric? Or non-parametric?
Deep learning models should not be considered parametric. Parametric models are defined as models based on an a priori assumption about the distributions that generate the data. Deep nets do not make
11,652 | Are deep learning models parametric? Or non-parametric? | Deutsch and Journel (1997, pp. 16-17) opined on the misleading nature of the term "non-parametric". They suggested that ≪...the terminology "parameter-rich" model should be retained for indicator based models instead of the traditional but misleading qualifier "non-parametric".≫
"Parameter rich" may be an accurate d... | Are deep learning models parametric? Or non-parametric? | Deutsch and Journel (1997, pp. 16-17) opined on the misleading nature of the term "non-parametric". They suggested that ≪...the terminology "parameter-rich" model should be retained for indicator bas | Are deep learning models parametric? Or non-parametric?
Deutsch and Journel (1997, pp. 16-17) opined on the misleading nature of the term "non-parametric". They suggested that ≪...the terminology "parameter-rich" model should be retained for indicator based models instead of the traditional but misleading qualifier "n... | Are deep learning models parametric? Or non-parametric?
Deutsch and Journel (1997, pp. 16-17) opined on the misleading nature of the term "non-parametric". They suggested that ≪...the terminology "parameter-rich" model should be retained for indicator bas |
11,653 | Now that I've rejected the null hypothesis what's next? | You can generally continue to improve your estimate of whatever parameter you might be testing with more data. Stopping data collection once a test achieves some semi-arbitrary degree of significance is a good way to make bad inferences. That analysts may misunderstand a significant result as a sign that the job is don... | Now that I've rejected the null hypothesis what's next? | You can generally continue to improve your estimate of whatever parameter you might be testing with more data. Stopping data collection once a test achieves some semi-arbitrary degree of significance | Now that I've rejected the null hypothesis what's next?
You can generally continue to improve your estimate of whatever parameter you might be testing with more data. Stopping data collection once a test achieves some semi-arbitrary degree of significance is a good way to make bad inferences. That analysts may misunder... | Now that I've rejected the null hypothesis what's next?
You can generally continue to improve your estimate of whatever parameter you might be testing with more data. Stopping data collection once a test achieves some semi-arbitrary degree of significance |
11,654 | Now that I've rejected the null hypothesis what's next? | Note first that @Nick Stauner makes some very important arguments regarding optional stopping. If you repeatedly test the data as samples come in, stopping once a test is significant, you're all but guaranteed a significant result. However, a guaranteed result is practically worthless.
In the following, I'll present my... | Now that I've rejected the null hypothesis what's next? | Note first that @Nick Stauner makes some very important arguments regarding optional stopping. If you repeatedly test the data as samples come in, stopping once a test is significant, you're all but g | Now that I've rejected the null hypothesis what's next?
Note first that @Nick Stauner makes some very important arguments regarding optional stopping. If you repeatedly test the data as samples come in, stopping once a test is significant, you're all but guaranteed a significant result. However, a guaranteed result is ... | Now that I've rejected the null hypothesis what's next?
Note first that @Nick Stauner makes some very important arguments regarding optional stopping. If you repeatedly test the data as samples come in, stopping once a test is significant, you're all but g |
11,655 | Now that I've rejected the null hypothesis what's next? | The idea that you cannot prove a positive scientific proposition, but only disprove one, is a principle of Popper's falsificationism. I do agree that you cannot prove an effect is exactly equal to any given point value (cf., my answer here: Why do statisticians say a non-significant result means "you cannot reject the... | Now that I've rejected the null hypothesis what's next? | The idea that you cannot prove a positive scientific proposition, but only disprove one, is a principle of Popper's falsificationism. I do agree that you cannot prove an effect is exactly equal to an | Now that I've rejected the null hypothesis what's next?
The idea that you cannot prove a positive scientific proposition, but only disprove one, is a principle of Popper's falsificationism. I do agree that you cannot prove an effect is exactly equal to any given point value (cf., my answer here: Why do statisticians s... | Now that I've rejected the null hypothesis what's next?
The idea that you cannot prove a positive scientific proposition, but only disprove one, is a principle of Popper's falsificationism. I do agree that you cannot prove an effect is exactly equal to an |
11,656 | Now that I've rejected the null hypothesis what's next? | One thing I would like to add is that your question reminds me of my younger self: I wanted desperately to prove my hypothesis because I did not know how to write "the hypothesis was wrong" in a way which helped to improve the paper I was writing.
But then I realized that the "damn my absolutely lovely hypothesis cannot be... | Now that I've rejected the null hypothesis what's next? | One thing I would like to add is that your question reminds me of my younger self: I wanted desperately to prove my hypothesis because I did not know how to write "the hypothesis was wrong" in a way which | Now that I've rejected the null hypothesis what's next?
One thing I would like to add is that your question reminds me of my younger self: I wanted desperately to prove my hypothesis because I did not know how to write "the hypothesis was wrong" in a way which helped to improve the paper I was writing.
But then I realized ... | Now that I've rejected the null hypothesis what's next?
One thing I would like to add is that your question reminds me of my younger self: I wanted desperately to prove my hypothesis because I did not know how to write "the hypothesis was wrong" in a way which
11,657 | Now that I've rejected the null hypothesis what's next? | There is a method for combining probabilities across studies described here. You should not apply the formula blindly without considering the pattern of results. | Now that I've rejected the null hypothesis what's next? | There is a method for combining probabilities across studies described here. You should not apply the formula blindly without considering the pattern of results. | Now that I've rejected the null hypothesis what's next?
There is a method for combining probabilities across studies described here. You should not apply the formula blindly without considering the pattern of results. | Now that I've rejected the null hypothesis what's next?
There is a method for combining probabilities across studies described here. You should not apply the formula blindly without considering the pattern of results.
11,658 | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than once? | Let
$x$ be the top end of your range, $x=100$ in your case.
$n$ be the total number of draws, $n=25$ in your case.
For any number $y\le x$, the number of sequences of $n$ numbers with each number in the sequence $\le y$ is $y^n$. Of these sequences, the number containing no $y$s is $(y-1)^n$, and the number containing... | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than | Let
$x$ be the top end of your range, $x=100$ in your case.
$n$ be the total number of draws, $n=25$ in your case.
For any number $y\le x$, the number of sequences of $n$ numbers with each number in | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than once?
Let
$x$ be the top end of your range, $x=100$ in your case.
$n$ be the total number of draws, $n=25$ in your case.
For any number $y\le x$, the number of sequences of $n$ numbers with each number in the sequence ... | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than
Let
$x$ be the top end of your range, $x=100$ in your case.
$n$ be the total number of draws, $n=25$ in your case.
For any number $y\le x$, the number of sequences of $n$ numbers with each number in |
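A quick Monte Carlo sanity check of this counting argument (my own R sketch; the exact answer comes from summing the expression above over $y$):
set.seed(1)
ties_at_top <- replicate(1e5, {
  draws <- sample(1:100, 25, replace = TRUE)
  sum(draws == max(draws)) > 1   # TRUE when the highest value is shared
})
mean(ties_at_top)                # estimate of the tie probability, roughly 0.12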
11,659 | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than once? | I would first find the probability of having a unique winner.
The probability of having a unique winner whose number is $x$ equals $\frac{{25\choose1} (x-1)^{24}}{{100}^{25} }$, as there are 25 choices for the winner, and the remaining can have numbers ranging from 1 to $x-1$.
The winner can win with his number eq... | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than | I would first find the probability of having a unique winner.
The probability of having a unique winner whose number is $x$ equals $\frac{{25\choose1} (x-1)^{24}}{{100}^{25} }$, as there a | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than once?
I would first find the probability of having a unique winner.
The probability of having a unique winner whose number is $x$ equals $\frac{{25\choose1} (x-1)^{24}}{{100}^{25} }$, as there are 25 choices f... | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than
I would first find the probability of having a unique winner.
The probability of having a unique winner whose number is $x$ equals $\frac{{25\choose1} (x-1)^{24}}{{100}^{25} }$, as there a
11,660 | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than once? | This seems to be a very similar question to the Birthday paradox (http://en.wikipedia.org/wiki/Birthday_problem); the only difference is that in this case you don't want to match any number, only the highest number.
The first step in the calculation is to calculate the probability that none of the random numbers overlap ($p$). (s... | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than | This seems to be a very similar question to the Birthday paradox (http://en.wikipedia.org/wiki/Birthday_problem); the only difference is that in this case you don't want to match any number, only the highe | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than once?
This seems to be a very similar question to the Birthday paradox (http://en.wikipedia.org/wiki/Birthday_problem); the only difference is that in this case you don't want to match any number, only the highest number.
Th... | What's the probability that from 25 random numbers between 1 and 100, the highest appears more than
This seems to be a very similar question to the Birthday paradox (http://en.wikipedia.org/wiki/Birthday_problem); the only difference is that in this case you don't want to match any number, only the highe
11,661 | How can I align/synchronize two signals? | The question asks how to find the amount by which one time series ("expansion") lags another ("volume") when the series are sampled at regular but different intervals.
In this case both series exhibit reasonably continuous behavior, as the figures will show. This implies (1) little or no initial smoothing may be needed... | How can I align/synchronize two signals? | The question asks how to find the amount by which one time series ("expansion") lags another ("volume") when the series are sampled at regular but different intervals.
In this case both series exhibit | How can I align/synchronize two signals?
The question asks how to find the amount by which one time series ("expansion") lags another ("volume") when the series are sampled at regular but different intervals.
In this case both series exhibit reasonably continuous behavior, as the figures will show. This implies (1) lit... | How can I align/synchronize two signals?
The question asks how to find the amount by which one time series ("expansion") lags another ("volume") when the series are sampled at regular but different intervals.
In this case both series exhibit |
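As a rough illustration of the idea (a sketch under assumed inputs: t_vol/volume and t_exp/expansion are hypothetical vectors holding the sample times and values of the two series), one can resample both onto a common grid and take the peak of the sample cross-correlation:
t_common <- seq(max(min(t_vol), min(t_exp)), min(max(t_vol), max(t_exp)), by = 0.1)
v <- approx(t_vol, volume,    xout = t_common)$y   # linear interpolation onto the grid
e <- approx(t_exp, expansion, xout = t_common)$y
cc <- ccf(v, e, lag.max = 50, plot = FALSE)        # sample cross-correlation
lag_hat <- cc$lag[which.max(cc$acf)] * 0.1         # estimated lag in original time units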
11,662 | Role of n.minobsinnode parameter of GBM in R | At each step of the GBM algorithm, a new decision tree is constructed. The question when growing a decision tree is 'when to stop?'. The furthest you can go is to split each node until there is only 1 observation in each terminal node. This would correspond to n.minobsinnode=1. Alternatively, the splitting of nodes can... | Role of n.minobsinnode parameter of GBM in R | At each step of the GBM algorithm, a new decision tree is constructed. The question when growing a decision tree is 'when to stop?'. The furthest you can go is to split each node until there is only 1 | Role of n.minobsinnode parameter of GBM in R
At each step of the GBM algorithm, a new decision tree is constructed. The question when growing a decision tree is 'when to stop?'. The furthest you can go is to split each node until there is only 1 observation in each terminal node. This would correspond to n.minobsinnode... | Role of n.minobsinnode parameter of GBM in R
At each step of the GBM algorithm, a new decision tree is constructed. The question when growing a decision tree is 'when to stop?'. The furthest you can go is to split each node until there is only 1 |
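A hedged illustration with the gbm package (toy data of my choosing): with n.minobsinnode = 1 the trees may split all the way down to single-observation leaves, while the default of 10 stops splitting earlier.
library(gbm)
set.seed(1)
fit_fine   <- gbm(mpg ~ ., data = mtcars, distribution = "gaussian",
                  n.trees = 100, n.minobsinnode = 1)   # deepest splits allowed
fit_coarse <- gbm(mpg ~ ., data = mtcars, distribution = "gaussian",
                  n.trees = 100, n.minobsinnode = 10)  # default: splitting stops earlier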
11,663 | Estimating gamma distribution parameters using sample mean and std | Both the MLEs and moment based estimators are consistent and so you'd expect that in sufficiently large samples from a gamma distribution they'd tend to be quite similar. However, they won't necessarily be alike when the distribution is not close to a gamma.
Looking at the distribution of the log of the data, it is rou... | Estimating gamma distribution parameters using sample mean and std | Both the MLEs and moment based estimators are consistent and so you'd expect that in sufficiently large samples from a gamma distribution they'd tend to be quite similar. However, they won't necessari | Estimating gamma distribution parameters using sample mean and std
Both the MLEs and moment based estimators are consistent and so you'd expect that in sufficiently large samples from a gamma distribution they'd tend to be quite similar. However, they won't necessarily be alike when the distribution is not close to a g... | Estimating gamma distribution parameters using sample mean and std
Both the MLEs and moment based estimators are consistent and so you'd expect that in sufficiently large samples from a gamma distribution they'd tend to be quite similar. However, they won't necessari |
11,664 | Estimating gamma distribution parameters using sample mean and std | The estimates obtained this way are method of moments estimates. In particular, we know that $\mbox{E}(X) = \alpha \theta$ and $\mbox{Var}[X] = \alpha \theta^2$ for a gamma distribution with shape parameter $\alpha$ and scale parameter $\theta$ (see wikipedia). Solving these equations for $\alpha$ and $\theta$ yields $... | Estimating gamma distribution parameters using sample mean and std | The estimates obtained this way are method of moments estimates. In particular, we know that $\mbox{E}(X) = \alpha \theta$ and $\mbox{Var}[X] = \alpha \theta^2$ for a gamma distribution with shape par | Estimating gamma distribution parameters using sample mean and std
The estimates obtained this way are method of moments estimates. In particular, we know that $\mbox{E}(X) = \alpha \theta$ and $\mbox{Var}[X] = \alpha \theta^2$ for a gamma distribution with shape parameter $\alpha$ and scale parameter $\theta$ (see wik... | Estimating gamma distribution parameters using sample mean and std
The estimates obtained this way are method of moments estimates. In particular, we know that $\mbox{E}(X) = \alpha \theta$ and $\mbox{Var}[X] = \alpha \theta^2$ for a gamma distribution with shape par |
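In R, the moment matching being described takes two lines (a sketch on simulated data):
set.seed(1)
x <- rgamma(1000, shape = 2, scale = 3)
theta_hat <- var(x) / mean(x)             # scale estimate:  s^2 / xbar
alpha_hat <- mean(x)^2 / var(x)           # shape estimate:  xbar^2 / s^2
c(shape = alpha_hat, scale = theta_hat)   # close to (2, 3) for this sample size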
11,665 | Cross Validation (error generalization) after model selection | The key thing to remember is that for cross-validation to give an (almost) unbiased performance estimate every step involved in fitting the model must also be performed independently in each fold of the cross-validation procedure. The best thing to do is to view feature selection, meta/hyper-parameter setting and opti... | Cross Validation (error generalization) after model selection | The key thing to remember is that for cross-validation to give an (almost) unbiased performance estimate every step involved in fitting the model must also be performed independently in each fold of t | Cross Validation (error generalization) after model selection
The key thing to remember is that for cross-validation to give an (almost) unbiased performance estimate every step involved in fitting the model must also be performed independently in each fold of the cross-validation procedure. The best thing to do is to... | Cross Validation (error generalization) after model selection
The key thing to remember is that for cross-validation to give an (almost) unbiased performance estimate every step involved in fitting the model must also be performed independently in each fold of t |
11,666 | Cross Validation (error generalization) after model selection | I have been doing an extensive cross-validation analysis on a data set that cost millions to acquire, and there is no external validation set available. In this case, I performed extensive nested cross validation to ensure validity. I selected features and optimized parameters only from the respective training sets. Th... | Cross Validation (error generalization) after model selection | I have been doing an extensive cross-validation analysis on a data set that cost millions to acquire, and there is no external validation set available. In this case, I performed extensive nested cros | Cross Validation (error generalization) after model selection
I have been doing an extensive cross-validation analysis on a data set that cost millions to acquire, and there is no external validation set available. In this case, I performed extensive nested cross validation to ensure validity. I selected features and o... | Cross Validation (error generalization) after model selection
I have been doing an extensive cross-validation analysis on a data set that cost millions to acquire, and there is no external validation set available. In this case, I performed extensive nested cros |
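The nesting can be sketched as a skeleton like the following (my own R outline, assuming a data frame dat; the essential point is that feature selection and tuning only ever see the outer-training rows):
set.seed(1)
fold_id <- sample(rep(1:5, length.out = nrow(dat)))
outer_scores <- sapply(1:5, function(k) {
  train <- dat[fold_id != k, ]
  test  <- dat[fold_id == k, ]
  # ... inner CV on `train` only: select features, tune hyper-parameters ...
  # ... refit the chosen configuration on all of `train` ...
  NA  # placeholder: return that single model's score on `test`
})
mean(outer_scores)  # (almost) unbiased estimate of the whole fitting procedure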
11,667 | What is wrong with this "naive" shuffling algorithm? | It is broken, although if you perform enough shuffles it can be an excellent approximation (as the previous answers have indicated).
Just to get a handle on what's going on, consider how often your algorithm will generate shuffles of a $k$ element array in which the first element is fixed, $k \ge 2$. When permutations... | What is wrong with this "naive" shuffling algorithm? | It is broken, although if you perform enough shuffles it can be an excellent approximation (as the previous answers have indicated).
Just to get a handle on what's going on, consider how often your al | What is wrong with this "naive" shuffling algorithm?
It is broken, although if you perform enough shuffles it can be an excellent approximation (as the previous answers have indicated).
Just to get a handle on what's going on, consider how often your algorithm will generate shuffles of a $k$ element array in which the ... | What is wrong with this "naive" shuffling algorithm?
It is broken, although if you perform enough shuffles it can be an excellent approximation (as the previous answers have indicated).
Just to get a handle on what's going on, consider how often your al |
11,668 | What is wrong with this "naive" shuffling algorithm? | I think your simple algorithm will shuffle the cards correctly as the number of shuffles tends to infinity.
Suppose you have three cards: {A,B,C}. Assume that your cards begin in the following order: A,B,C. Then after one shuffle you have the following combinations:
{A,B,C}, {A,B,C}, {A,B,C} #You get this if you choose the same R... | What is wrong with this "naive" shuffling algorithm? | I think your simple algorithm will shuffle the cards correctly as the number of shuffles tends to infinity.
Suppose you have three cards: {A,B,C}. Assume that your cards begin in the following order: A,B | What is wrong with this "naive" shuffling algorithm?
I think your simple algorithm will shuffle the cards correctly as the number of shuffles tends to infinity.
Suppose you have three cards: {A,B,C}. Assume that your cards begin in the following order: A,B,C. Then after one shuffle you have the following combinations:
{A,B,C}, {A,B,C}, {A,B,C} #You get this if you choose the same R... | What is wrong with this "naive" shuffling algorithm?
I think your simple algorithm will shuffle the cards correctly as the number of shuffles tends to infinity.
Suppose you have three cards: {A,B,C}. Assume that your cards begin in the following order: A,B
11,669 | What is wrong with this "naive" shuffling algorithm? | One way to see that you won't get a perfectly uniform distribution is by divisibility. In the uniform distribution, the probability of each permutation is $1/n!$. When you generate a sequence of $t$ random transpositions, and then collect sequences by their product, the probabilities you get are of the form $A/n^{2t}$ ... | What is wrong with this "naive" shuffling algorithm? | One way to see that you won't get a perfectly uniform distribution is by divisibility. In the uniform distribution, the probability of each permutation is $1/n!$. When you generate a sequence of $t$ r | What is wrong with this "naive" shuffling algorithm?
One way to see that you won't get a perfectly uniform distribution is by divisibility. In the uniform distribution, the probability of each permutation is $1/n!$. When you generate a sequence of $t$ random transpositions, and then collect sequences by their product, ... | What is wrong with this "naive" shuffling algorithm?
One way to see that you won't get a perfectly uniform distribution is by divisibility. In the uniform distribution, the probability of each permutation is $1/n!$. When you generate a sequence of $t$ r |
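Both points are easy to check empirically. Below is my own R sketch of the naive shuffle in the pseudocode sense (one pass of $n$ random transpositions, with $i$ and $j$ drawn independently); for three cards, working through the random-transposition walk suggests the original order should appear with probability about 0.185 rather than $1/6 \approx 0.167$:
naive_pass <- function(v) {
  for (k in seq_along(v)) {
    ij <- sample(seq_along(v), 2, replace = TRUE)  # i and j, possibly equal
    v[ij] <- v[rev(ij)]                            # swap (a no-op when i == j)
  }
  v
}
set.seed(1)
res <- replicate(1e5, paste(naive_pass(c("A", "B", "C")), collapse = ""))
round(table(res) / length(res), 3)   # "ABC" comes out near 0.185, not 0.167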
11,670 | What is wrong with this "naive" shuffling algorithm? | Bear in mind I am not a statistician, but I'll put in my 2 cents.
I made a little test in R (careful, it's very slow for high numTrials, the code can probably be optimized):
numElements <- 1000
numTrials <- 5000
swapVec <- function()
{
vec.swp <- vec
for (i in 1:numElements)
{
i <- sample(1:nu... | What is wrong with this "naive" shuffling algorithm? | Bear in mind I am not a statistician, but I'll put in my 2 cents.
I made a little test in R (careful, it's very slow for high numTrials, the code can probably be optimized):
numElements <- 1000
numTrials | What is wrong with this "naive" shuffling algorithm?
Bear in mind I am not a statistician, but I'll put in my 2 cents.
I made a little test in R (careful, it's very slow for high numTrials, the code can probably be optimized):
numElements <- 1000
numTrials <- 5000
swapVec <- function()
{
vec.swp <- vec
for (i... | What is wrong with this "naive" shuffling algorithm?
Bear in mind I am not a statistician, but I'll put in my 2 cents.
I made a little test in R (careful, it's very slow for high numTrials, the code can probably be optimized):
numElements <- 1000
numTrials |
11,671 | What is wrong with this "naive" shuffling algorithm? | Here's how I am interpreting your algorithm, in pseudo code:
void shuffle(array, length, num_passes)
for (pass = 0; pass < num_passes; ++pass)
for (n = 0; n < length; ++n)
i = random_in(0, length-1)
j = random_in(0, length-1)
swap(array[i], array[j])
We can associate a run of this algorithm with... | What is wrong with this "naive" shuffling algorithm? | Here's how I am interpreting your algorithm, in pseudo code:
void shuffle(array, length, num_passes)
for (pass = 0; pass < num_passes; ++pass)
for (n = 0; n < length; ++n)
i = random_in(0, | What is wrong with this "naive" shuffling algorithm?
Here's how I am interpreting your algorithm, in pseudo code:
void shuffle(array, length, num_passes)
for (pass = 0; pass < num_passes; ++pass)
for (n = 0; n < length; ++n)
i = random_in(0, length-1)
j = random_in(0, length-1)
swap(array[i], ar... | What is wrong with this "naive" shuffling algorithm?
Here's how I am interpreting your algorithm, in pseudo code:
void shuffle(array, length, num_passes)
for (pass = 0; pass < num_passes; ++pass)
for (n = 0; n < length; ++n)
i = random_in(0, |
11,672 | GEE: choosing proper working correlation structure | Not necessarily. With small clusters, imbalanced design, and incomplete within-cluster confounder adjustment, exchangeable correlation may be more inefficient and biased relative to independence GEE. Those assumptions can be rather strong, too. However, when those assumptions are met, you get more efficient inference...
Not necessarily. With small clusters, imbalanced design, and incomplete within-cluster confounder adjustment, exchangeable correlation may be more inefficient and biased relative to independence GEE. Those assumptions can be rather strong, too. However, when those as... | GEE: choosing proper working correlation structure
Not necessarily. With small clusters, imbalanced design, and incomplete within-cluster confounder adjustment, exchangeable correlation may be more inefficient and biased relative to independence GEE
11,673 | GEE: choosing proper working correlation structure | (1) You will likely need some kind of autoregressive structure, simply because we expect measurements taken further apart to be less correlated than those taken closer together. Exchangeable would assume they are all equally correlated. But as with everything else, it depends.
(2) I think this kind of decision comes d... | GEE: choosing proper working correlation structure | (1) You will likely need some kind of autoregressive structure, simply because we expect measurements taken further apart to be less correlated than those taken closer together. Exchangeable would ass | GEE: choosing proper working correlation structure
(1) You will likely need some kind of autoregressive structure, simply because we expect measurements taken further apart to be less correlated than those taken closer together. Exchangeable would assume they are all equally correlated. But as with everything else, it ... | GEE: choosing proper working correlation structure
(1) You will likely need some kind of autoregressive structure, simply because we expect measurements taken further apart to be less correlated than those taken closer together. Exchangeable would ass |
11,674 | GEE: choosing proper working correlation structure | (0) General comments: most of the models I see on Cross Validated are far too complicated. Simplify if at all possible. It is often worth modeling with both a GEE and a mixed model to compare results.
(1) Yes. Choose exchangeable. My unambiguous answer is based on the most widely touted benefit of GEE: resilience of estimates to... | GEE: choosing proper working correlation structure | (0) General comments: most of the models I see on Cross Validated are far too complicated. Simplify if at all possible. It is often worth modeling with both a GEE and a mixed model to compare results.
(1) Yes. | GEE: choosing proper working correlation structure
(0) General comments: most of the models I see on Cross Validated are far too complicated. Simplify if at all possible. It is often worth modeling with both a GEE and a mixed model to compare results.
(1) Yes. Choose exchangeable. My unambiguous answer is based on the most widel... | GEE: choosing proper working correlation structure
(0) General comments: most of the models I see on Cross Validated are far too complicated. Simplify if at all possible. It is often worth modeling with both a GEE and a mixed model to compare results.
(1) Yes.
11,675 | GEE: choosing proper working correlation structure | You're using the wrong approach with a GEE for what you are doing, because you don't know the structure and your results will likely be confounded. Refer to Jamie Robinson on this. You need to use longitudinal TMLE (Mark van der Laan) or perhaps a GEE with IPTW weights. Not accounting for correlation does underestimate varian...
You're using the wrong approach with a GEE for what you are doing, because you don't know the structure and your results will likely be confounded. Refer to Jamie Robinson on this. You need to use longitudinal TMLE (Mark van der Laan) or perhaps a GEE with IPTW weights. Not a... | GEE: choosing proper working correlation structure
You're using the wrong approach with a GEE for what you are doing, because you don't know the structure and your results will likely be confounded. Refer to Jamie Robinson on this. You need to use long
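For reference, switching the working structure is a one-argument change in R's geepack (a generic sketch; a data frame dat with columns y, time, trt and a subject identifier is assumed):
library(geepack)
fit_ex  <- geeglm(y ~ time + trt, id = subject, data = dat,
                  family = gaussian, corstr = "exchangeable")
fit_ar1 <- geeglm(y ~ time + trt, id = subject, data = dat,
                  family = gaussian, corstr = "ar1")
summary(fit_ex)   # robust (sandwich) standard errors are reported either way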
11,676 | Stepwise AIC - Does there exist controversy surrounding this topic? | There are a few different issues here.
Probably the main issue is that model selection (whether using p-values or AICs, stepwise or all-subsets or something else) is primarily problematic for inference (e.g. getting p-values with appropriate type I error, confidence intervals with appropriate coverage). For predict... | Stepwise AIC - Does there exist controversy surrounding this topic? | There are a few different issues here.
Probably the main issue is that model selection (whether using p-values or AICs, stepwise or all-subsets or something else) is primarily problematic for infer | Stepwise AIC - Does there exist controversy surrounding this topic?
There are a few different issues here.
Probably the main issue is that model selection (whether using p-values or AICs, stepwise or all-subsets or something else) is primarily problematic for inference (e.g. getting p-values with appropriate type I ... | Stepwise AIC - Does there exist controversy surrounding this topic?
There are a few different issues here.
Probably the main issue is that model selection (whether using p-values or AICs, stepwise or all-subsets or something else) is primarily problematic for infer |
11,677 | Opinions about Oversampling in general, and the SMOTE algorithm in particular [closed] | {1} gives a list of advantages and disadvantages of cost-sensitive learning vs. sampling:
2.2 Sampling
Oversampling and undersampling can be used to alter the class distribution of the training data and both methods have been used to deal with class imbalance [1, 2, 3, 6, 10, 11]. The reason that altering the class ... | Opinions about Oversampling in general, and the SMOTE algorithm in particular [closed] | {1} gives a list of advantages and disadvantages of cost-sensitive learning vs. sampling:
2.2 Sampling
Oversampling and undersampling can be used to alter the class distribution of the training dat | Opinions about Oversampling in general, and the SMOTE algorithm in particular [closed]
{1} gives a list of advantages and disadvantages of cost-sensitive learning vs. sampling:
2.2 Sampling
Oversampling and undersampling can be used to alter the class distribution of the training data and both methods have been used... | Opinions about Oversampling in general, and the SMOTE algorithm in particular [closed]
{1} gives a list of advantages and disadvantages of cost-sensitive learning vs. sampling:
2.2 Sampling
Oversampling and undersampling can be used to alter the class distribution of the training dat
11,678 | Jenks Natural Breaks in Python: How to find the optimum number of breaks? | Jenks Natural Breaks works by optimizing the Goodness of Variance Fit, a value from 0 to 1 where 0 = No Fit and 1 = Perfect Fit. The key in selecting the number of classes is to find a balance between detecting differences and overfitting your data. To determine the optimum number of classes, I suggest you use a thre... | Jenks Natural Breaks in Python: How to find the optimum number of breaks? | Jenks Natural Breaks works by optimizing the Goodness of Variance Fit, a value from 0 to 1 where 0 = No Fit and 1 = Perfect Fit. The key in selecting the number of classes is to find a balance betwee | Jenks Natural Breaks in Python: How to find the optimum number of breaks?
Jenks Natural Breaks works by optimizing the Goodness of Variance Fit, a value from 0 to 1 where 0 = No Fit and 1 = Perfect Fit. The key in selecting the number of classes is to find a balance between detecting differences and overfitting your d... | Jenks Natural Breaks in Python: How to find the optimum number of breaks?
Jenks Natural Breaks works by optimizing the Goodness of Variance Fit, a value from 0 to 1 where 0 = No Fit and 1 = Perfect Fit. The key in selecting the number of classes is to find a balance betwee |
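The GVF loop can be written out explicitly (a sketch using R's classInt package rather than Python, to stay consistent with the other examples in this document; the 0.8 threshold is only an illustrative choice):
library(classInt)
gvf <- function(x, k) {
  brks <- classIntervals(x, n = k, style = "jenks")$brks
  cls  <- cut(x, breaks = unique(brks), include.lowest = TRUE)
  sdam <- sum((x - mean(x))^2)                                    # total squared deviation
  sdcm <- sum(tapply(x, cls, function(g) sum((g - mean(g))^2)))   # within-class deviation
  1 - sdcm / sdam
}
set.seed(1)
x <- c(rnorm(100, 0), rnorm(100, 5))
sapply(2:8, function(k) gvf(x, k))   # pick the smallest k whose GVF clears, say, 0.8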
11,679 | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sensitivity and specificity | Wojtek J. Krzanowski and David J. Hand ROC Curves for Continuous Data (2009) is a great reference for all things related to ROC curves. It collects together a number of results in what is a frustratingly broad literature base, which often uses different terminology to discuss the same topic.
Additionally, this book of... | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sen | Wojtek J. Krzanowski and David J. Hand ROC Curves for Continuous Data (2009) is a great reference for all things related to ROC curves. It collects together a number of results in what is a frustratin | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sensitivity and specificity
Wojtek J. Krzanowski and David J. Hand ROC Curves for Continuous Data (2009) is a great reference for all things related to ROC curves. It collects together a number of results in what is a frust... | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sen
Wojtek J. Krzanowski and David J. Hand ROC Curves for Continuous Data (2009) is a great reference for all things related to ROC curves. It collects together a number of results in what is a frustratin |
11,679 | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sensitivity and specificity | Let me keep the answer short, because this guide explains a lot more, and better.
Basically, you have your number of True Positives ($nTP$) and number of True Negatives ($nTN$). Also you have your AUC, A. The standard error of this A is:
$\texttt{SE}_A = \sqrt{\frac{A(1-A) + (nTP-1)(Q_1 - A^2)+(nTN-1)(Q_2 - A^2)}{nTP... | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sen | Let me keep the answer short, because this guide explains a lot more, and better.
Basically, you have your number of True Positives ($nTP$) and number of True Negatives ($nTN$). Also you have your A | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sensitivity and specificity
Let me keep the answer short, because this guide explains a lot more, and better.
Basically, you have your number of True Positives ($nTP$) and number of True Negatives ($nTN$). Also you have y... | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sen
Let me keep the answer short, because this guide explains a lot more, and better.
Basically, you have your number of True Positives ($nTP$) and number of True Negatives ($nTN$). Also you have your A
11,681 | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sensitivity and specificity | For Question 1, @Sycorax provided a comprehensive answer.
For Question 2, to the best of my knowledge, averaging predictions from subjects is incorrect.
I decided to use bootstrapping to compute p-values and compare models.
In this case, the procedure is as follows:
For N iterations:
sample 5 subjects with replacemen... | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sen | For Question 1, @Sycorax provided a comprehensive answer.
For Question 2, to the best of my knowledge, averaging predictions from subjects is incorrect.
I decided to use bootstrapping to compute p-val | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sensitivity and specificity
For Question 1, @Sycorax provided a comprehensive answer.
For Question 2, to the best of my knowledge, averaging predictions from subjects is incorrect.
I decided to use bootstrapping to compute ... | Statistical significance (p-value) for comparing two classifiers with respect to (mean) ROC AUC, sen
For Question 1, @Sycorax provided a comprehensive answer.
For Question 2, to the best of my knowledge, averaging predictions from subjects is incorrect.
I decided to use bootstrapping to compute p-val |
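A minimal version of that subject-level bootstrap in R (hypothetical inputs: auc_a and auc_b are per-subject AUCs of the two models on the same subjects):
set.seed(1)
diffs <- replicate(10000, {
  idx <- sample(seq_along(auc_a), replace = TRUE)        # resample whole subjects
  mean(auc_a[idx]) - mean(auc_b[idx])
})
p_value <- 2 * min(mean(diffs <= 0), mean(diffs >= 0))   # two-sided percentile-style p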
11,682 | Why exactly can't beta regression deal with 0s and 1s in the response variable? | Because the loglikelihood contains both $\log(x)$ and $\log(1-x)$, which are unbounded when $x=0$ or $x=1$. See equation (4) of Smithson & Verkuilen, "A Better Lemon Squeezer? Maximum-Likelihood Regression With Beta-Distributed Dependent Variables" (direct link to PDF). | Why exactly can't beta regression deal with 0s and 1s in the response variable? | Because the loglikelihood contains both $\log(x)$ and $\log(1-x)$, which are unbounded when $x=0$ or $x=1$. See equation (4) of Smithson & Verkuilen, "A Better Lemon Squeezer? Maximum-Likelihood Regr | Why exactly can't beta regression deal with 0s and 1s in the response variable?
Because the loglikelihood contains both $\log(x)$ and $\log(1-x)$, which are unbounded when $x=0$ or $x=1$. See equation (4) of Smithson & Verkuilen, "A Better Lemon Squeezer? Maximum-Likelihood Regression With Beta-Distributed Dependent V... | Why exactly can't beta regression deal with 0s and 1s in the response variable?
Because the loglikelihood contains both $\log(x)$ and $\log(1-x)$, which are unbounded when $x=0$ or $x=1$. See equation (4) of Smithson & Verkuilen, "A Better Lemon Squeezer? Maximum-Likelihood Regr |
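To see the failure and the usual workaround in practice, here is a hedged sketch with the betareg package; the compression $y'' = (y(n-1) + 0.5)/n$ is the one Smithson & Verkuilen suggest:
library(betareg)
set.seed(1)
x <- runif(100)
y <- rbeta(100, 2, 5)
y[1] <- 0                          # a single boundary value makes log(y) = -Inf
# betareg(y ~ x)                   # would fail: the response must lie in (0, 1)
n <- length(y)
y_sq <- (y * (n - 1) + 0.5) / n    # squeeze the data off the boundary
fit <- betareg(y_sq ~ x)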
11,683 | Why exactly can't beta regression deal with 0s and 1s in the response variable? | Besides the fact that the reason comes in practice from the presence of $\log(x)$ and $\log(1-x)$, I will try to complement the answer to the question by trying to frame the underlying reason why this happens.
As a matter of fact, the beta distribution is "often used to describe the distribution of a probability value" (... | Why exactly can't beta regression deal with 0s and 1s in the response variable? | Besides the fact that the reason comes in practice from the presence of $\log(x)$ and $\log(1-x)$, I will try to complement the answer to the question by trying to frame the underlying reason why this h | Why exactly can't beta regression deal with 0s and 1s in the response variable?
Besides the fact that the reason comes in practice from the presence of $\log(x)$ and $\log(1-x)$, I will try to complement the answer to the question by trying to frame the underlying reason why this happens.
As a matter of fact, the beta di... | Why exactly can't beta regression deal with 0s and 1s in the response variable?
Besides the fact that the reason comes in practice from the presence of $\log(x)$ and $\log(1-x)$, I will try to complement the answer to the question by trying to frame the underlying reason why this h
11,684 | Why doesn't Wilks' 1938 proof work for misspecified models? | R.V. Foutz and R.C. Srivastava have examined the issue in detail. Their 1977 paper "The performance of the likelihood ratio test when the model is incorrect" contains a statement of the distributional result in case of misspecification alongside a very brief sketch of the proof, while their 1978 paper "The asymptotic di... | Why doesn't Wilks' 1938 proof work for misspecified models? | R.V. Foutz and R.C. Srivastava have examined the issue in detail. Their 1977 paper "The performance of the likelihood ratio test when the model is incorrect" contains a statement of the distributional | Why doesn't Wilks' 1938 proof work for misspecified models?
R.V. Foutz and R.C. Srivastava have examined the issue in detail. Their 1977 paper "The performance of the likelihood ratio test when the model is incorrect" contains a statement of the distributional result in case of misspecification alongside a very brief sketch of the proof, while their 1978 paper "The asymptotic di... | Why doesn't Wilks' 1938 proof work for misspecified models?
R.V. Foutz and R.C. Srivastava have examined the issue in detail. Their 1977 paper "The performance of the likelihood ratio test when the model is incorrect" contains a statement of the distributional
11,685 | Why doesn't Wilks' 1938 proof work for misspecified models? | Wilks' 1938 proof doesn't work because Wilks used $J^{-1}$
as the asymptotic covariance matrix in his proof. $J^{-1}$ is the inverse of the Hessian of the negative log likelihood rather than the sandwich estimator $J^{-1} K J^{-1}$. Wilks references the $ij$th element of $J$ as $c_{ij}$ in his proof.
By making the assu... | Why doesn't Wilks' 1938 proof work for misspecified models? | Wilks' 1938 proof doesn't work because Wilks used $J^{-1}$
as the asymptotic covariance matrix in his proof. $J^{-1}$ is the inverse of the Hessian of the negative log likelihood rather than the sandw | Why doesn't Wilks' 1938 proof work for misspecified models?
Wilks' 1938 proof doesn't work because Wilks used $J^{-1}$
as the asymptotic covariance matrix in his proof. $J^{-1}$ is the inverse of the Hessian of the negative log likelihood rather than the sandwich estimator $J^{-1} K J^{-1}$. Wilks references the $ij$th... | Why doesn't Wilks' 1938 proof work for misspecified models?
Wilks' 1938 proof doesn't work because Wilks used $J^{-1}$
as the asymptotic covariance matrix in his proof. $J^{-1}$ is the inverse of the Hessian of the negative log likelihood rather than the sandw
11,686 | Is a priori power analysis essentially useless? | The basic issue here is true and fairly well known in statistics. However, his interpretation / claim is extreme. There are several issues to be discussed:
First, power doesn't change very fast with changes in $N$. (Specifically, it changes as a function of $\sqrt N$, so to halve the standard deviation of your sam... | Is a priori power analysis essentially useless? | The basic issue here is true and fairly well known in statistics. However, his interpretation / claim is extreme. There are several issues to be discussed:
First, power doesn't change very fast wi | Is a priori power analysis essentially useless?
The basic issue here is true and fairly well known in statistics. However, his interpretation / claim is extreme. There are several issues to be discussed:
First, power doesn't change very fast with changes in $N$. (Specifically, it changes as a function of $\sqrt N$... | Is a priori power analysis essentially useless?
The basic issue here is true and fairly well known in statistics. However, his interpretation / claim is extreme. There are several issues to be discussed:
First, power doesn't change very fast wi |
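The slow growth is easy to see with base R's power.t.test (illustrative numbers only):
sapply(c(25, 50, 100, 200), function(n)
  power.t.test(n = n, delta = 0.3, sd = 1)$power)
# power creeps upward: quadrupling n only halves the standard error of the mean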
11,687 | What's the correct way to test the significance of classification results | In addition to @jb.'s excellent answer, let me add that you can use McNemar's test on the same test set to determine if one classifier is significantly better than the other. This will only work for classification problems (what McNemar's original work call a "dichotomous trait") meaning that the classifiers either get... | What's the correct way to test the significance of classification results | In addition to @jb.'s excellent answer, let me add that you can use McNemar's test on the same test set to determine if one classifier is significantly better than the other. This will only work for c | What's the correct way to test the significance of classification results
In addition to @jb.'s excellent answer, let me add that you can use McNemar's test on the same test set to determine if one classifier is significantly better than the other. This will only work for classification problems (what McNemar's origina... | What's the correct way to test the significance of classification results
In addition to @jb.'s excellent answer, let me add that you can use McNemar's test on the same test set to determine if one classifier is significantly better than the other. This will only work for c |
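In R this amounts to cross-tabulating the two models' per-case correctness on the shared test set (a toy sketch; correct_a and correct_b stand in for logical vectors you would compute from your own predictions):
correct_a <- c(TRUE, TRUE,  FALSE, TRUE, FALSE, TRUE, TRUE,  FALSE)
correct_b <- c(TRUE, FALSE, FALSE, TRUE, TRUE,  TRUE, FALSE, FALSE)
tab <- table(A = correct_a, B = correct_b)   # the discordant off-diagonal cells drive the test
mcnemar.test(tab)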
11,688 | What's the correct way to test the significance of classification results | Since the distribution of classification errors is a binary distribution (either there is a misclassification or there is none) --- I'd say that using Chi-squared is not sensible.
Also, only comparing efficiencies of classifiers that work on the same datasets is sensible ---
'No free lunch theorem' states that all models have... | What's the correct way to test the significance of classification results | Since the distribution of classification errors is a binary distribution (either there is a misclassification or there is none) --- I'd say that using Chi-squared is not sensible.
Also, only comparing effici | What's the correct way to test the significance of classification results
Since the distribution of classification errors is a binary distribution (either there is a misclassification or there is none) --- I'd say that using Chi-squared is not sensible.
Also, only comparing efficiencies of classifiers that work on the same dat... | What's the correct way to test the significance of classification results
Since the distribution of classification errors is a binary distribution (either there is a misclassification or there is none) --- I'd say that using Chi-squared is not sensible.
Also, only comparing effici
11,689 | What's the correct way to test the significance of classification results | I recommend the paper by Tom Dietterich titled "Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms". Here's the paper's profile on CiteSeer. From the abstract: "This paper reviews five approximate statistical tests for determining whether one learning algorithm out-performs anot... | What's the correct way to test the significance of classification results | I recommend the paper by Tom Dietterich titled "Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms". Here's the paper's profile on CiteSeer. From the abstract: | What's the correct way to test the significance of classification results
I recommend the paper by Tom Dietterich titled "Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms". Here's the paper's profile on CiteSeer. From the abstract: "This paper reviews five approximate statisti... | What's the correct way to test the significance of classification results
I recommend the paper by Tom Dietterich titled "Approximate Statistical Tests for Comparing Supervised Classification Learning Algorithms". Here's the paper's profile on CiteSeer. From the abstract: |
11,690 | What's the correct way to test the significance of classification results | IMHO there shouldn't be any difference between the distribution of scores and the distribution of any other type of data. So basically all you have to check is whether your data is normally distributed or not (see here). Moreover, there are great books that deal thoroughly with this question (see here) (i.e. in short: they all test... | What's the correct way to test the significance of classification results | IMHO there shouldn't be any difference between the distribution of scores and the distribution of any other type of data. So basically all you have to check is whether your data is normally distributed or not | What's the correct way to test the significance of classification results
IMHO there shouldn't be any difference between the distribution of scores and the distribution of any other type of data. So basically all you have to check is whether your data is normally distributed or not (see here). Moreover, there are great books that... | What's the correct way to test the significance of classification results
IMHO there shouldn't be any difference between the distribution of scores and the distribution of any other type of data. So basically all you have to check is whether your data is normally distributed or not
11,691 | What's the correct way to test the significance of classification results | There is no single test that is appropriate for all situations; I can recommend the book "Evaluating Learning Algorithms" by Nathalie Japkowicz and Mohak Shah, Cambridge University Press, 2011. The fact that a book of almost 400 pages can be written on this topic suggests it isn't a straight-forward issue. I have oft... | What's the correct way to test the significance of classification results | There is no single test that is appropriate for all situations; I can recommend the book "Evaluating Learning Algorithms" by Nathalie Japkowicz and Mohak Shah, Cambridge University Press, 2011. The f | What's the correct way to test the significance of classification results
There is no single test that is appropriate for all situations; I can recommend the book "Evaluating Learning Algorithms" by Nathalie Japkowicz and Mohak Shah, Cambridge University Press, 2011. The fact that a book of almost 400 pages can be wri... | What's the correct way to test the significance of classification results
There is no single test that is appropriate for all situations; I can recommend the book "Evaluating Learning Algorithms" by Nathalie Japkowicz and Mohak Shah, Cambridge University Press, 2011. The f |
11,692 | What is the distribution of the difference of two-t-distributions | The sum of two independent t-distributed random variables is not t-distributed. Hence you cannot talk about degrees of freedom of this distribution, since the resulting distribution does not have any degrees of freedom in a sense that t-distribution has. | What is the distribution of the difference of two-t-distributions | The sum of two independent t-distributed random variables is not t-distributed. Hence you cannot talk about degrees of freedom of this distribution, since the resulting distribution does not have any | What is the distribution of the difference of two-t-distributions
The sum of two independent t-distributed random variables is not t-distributed. Hence you cannot talk about degrees of freedom of this distribution, since the resulting distribution does not have any degrees of freedom in a sense that t-distribution has. | What is the distribution of the difference of two-t-distributions
The sum of two independent t-distributed random variables is not t-distributed. Hence you cannot talk about degrees of freedom of this distribution, since the resulting distribution does not have any |
11,693 | What is the distribution of the difference of two-t-distributions | I agree with the answers above: the difference of two independent t-distributed random variables is not t-distributed. But I want to add some ways of calculating this.
The easiest way of calculating this is using a Monte Carlo method. In R, for example, you randomly sample 100,000 numbers from the first t distribution, then y... | What is the distribution of the difference of two-t-distributions | I agree with the answers above: the difference of two independent t-distributed random variables is not t-distributed. But I want to add some ways of calculating this.
The easiest way of calculating this i | What is the distribution of the difference of two-t-distributions
I agree with the answers above: the difference of two independent t-distributed random variables is not t-distributed. But I want to add some ways of calculating this.
The easiest way of calculating this is using a Monte Carlo method. In R, for example, you r... | What is the distribution of the difference of two-t-distributions
I agree with the answers above: the difference of two independent t-distributed random variables is not t-distributed. But I want to add some ways of calculating this.
The easiest way of calculating this i
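The R recipe above is truncated; here is a minimal equivalent sketch in Python (the degrees of freedom df1 = 5 and df2 = 8 are hypothetical examples, not values from the original answer):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
df1, df2 = 5, 8                 # hypothetical degrees of freedom
n = 100_000                     # sample size, as in the answer

x = stats.t.rvs(df1, size=n, random_state=rng)   # draws from the first t distribution
y = stats.t.rvs(df2, size=n, random_state=rng)   # draws from the second t distribution
diff = x - y                    # empirical distribution of the difference

print(diff.mean(), diff.std())  # inspect; a histogram of diff shows the shape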
11,694 | Creating an index of quality from multiple variables to enable rank ordering | The proposed approach may give a reasonable result, but only by accident. At this distance--that is, taking the question at face value, with the meanings of the variables disguised--some problems are apparent:
It is not even evident that each variable is positively related to "quality." For example, what if a 10 for... | Creating an index of quality from multiple variables to enable rank ordering | The proposed approach may give a reasonable result, but only by accident. At this distance--that is, taking the question at face value, with the meanings of the variables disguised--some problems are | Creating an index of quality from multiple variables to enable rank ordering
The proposed approach may give a reasonable result, but only by accident. At this distance--that is, taking the question at face value, with the meanings of the variables disguised--some problems are apparent:
It is not even evident that eac... | Creating an index of quality from multiple variables to enable rank ordering
The proposed approach may give a reasonable result, but only by accident. At this distance--that is, taking the question at face value, with the meanings of the variables disguised--some problems are |
11,695 | Creating an index of quality from multiple variables to enable rank ordering | Has anyone looked at Russell G. Congalton, 'Review of Assessing the Accuracy of Classifications of Remotely Sensed Data' (1990)? It describes a technique known as an error matrix for varying matrices, and also a term he uses called 'normalizing data', whereby one gets all the different vectors and 'normalizes' or sets them to equ... | Creating an index of quality from multiple variables to enable rank ordering | Has anyone looked at Russell G. Congalton, 'Review of Assessing the Accuracy of Classifications of Remotely Sensed Data' (1990)? It describes a technique known as an error matrix for varying matrices, and also a t | Creating an index of quality from multiple variables to enable rank ordering
Has anyone looked at Russell G. Congalton, 'Review of Assessing the Accuracy of Classifications of Remotely Sensed Data' (1990)? It describes a technique known as an error matrix for varying matrices, and also a term he uses called 'normalizing data', wh... | Creating an index of quality from multiple variables to enable rank ordering
Has anyone looked at Russell G. Congalton, 'Review of Assessing the Accuracy of Classifications of Remotely Sensed Data' (1990)? It describes a technique known as an error matrix for varying matrices, and also a t
11,696 | Creating an index of quality from multiple variables to enable rank ordering | One other thing you did not discuss is the scale of the measurements. V1 and V5 look like they are of rank order and the others seem not to be. So standardization may be skewing the score. So you may be better off transforming all of the variables into ranks, and determining a weighting for each variable, since it is highly unl... | Creating an index of quality from multiple variables to enable rank ordering | One other thing you did not discuss is the scale of the measurements. V1 and V5 look like they are of rank order and the others seem not to be. So standardization may be skewing the score. So you may be be | Creating an index of quality from multiple variables to enable rank ordering
One other thing you did not discuss is the scale of the measurements. V1 and V5 look like they are of rank order and the others seem not to be. So standardization may be skewing the score. So you may be better off transforming all of the variables into... | Creating an index of quality from multiple variables to enable rank ordering
One other thing you did not discuss is the scale of the measurements. V1 and V5 look like they are of rank order and the others seem not to be. So standardization may be skewing the score. So you may be be
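A minimal sketch of the rank transform this answer suggests, under stated assumptions: the 10 x 5 data matrix and the equal placeholder weights are hypothetical, and scipy.stats.rankdata does the per-variable ranking:

import numpy as np
from scipy.stats import rankdata

X = np.random.default_rng(1).normal(size=(10, 5))  # hypothetical 10 items x 5 variables
ranks = np.column_stack([rankdata(X[:, j]) for j in range(X.shape[1])])  # rank each variable separately

weights = np.full(X.shape[1], 1 / X.shape[1])  # placeholder equal weights; the weighting is the open question
score = ranks @ weights                        # combined rank-based score
order = np.argsort(-score)                     # rank ordering, best item first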
11,697 | Creating an index of quality from multiple variables to enable rank ordering | I had a similar problem recently and thought I'd add my approach to the nice answers. I think that, in order to find a simple way to determine which variable leads to the best ranking, one could transform your problem into a grid-search approach:
Basically, use a combined score for the ranking which is composed as such:
Final_score... | Creating an index of quality from multiple variables to enable rank ordering | I had a similar problem recently and thought I'd add my approach to the nice answers. I think that, in order to find a simple way to determine which variable leads to the best ranking, one could transform your
I had a similar problem recently and thought I'd add my approach to the nice answers. I think that, in order to find a simple way to determine which variable leads to the best ranking, one could transform your problem into a grid-search approach:
Basicall... | Creating an index of quality from multiple variables to enable rank ordering
I had a similar problem recently and thought I'd add my approach to the nice answers. I think that, in order to find a simple way to determine which variable leads to the best ranking, one could transform your
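A minimal sketch of the grid-search idea, with loud assumptions: the answer is truncated here, so the scoring criterion (Spearman correlation against a hypothetical reference ranking named reference) and the coarse weight grid are illustrative choices, not the original author's:

import itertools
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
X = rng.normal(size=(20, 3))        # hypothetical items x variables
reference = rng.permutation(20)     # hypothetical known-good ranking to score against

best_w, best_rho = None, -np.inf
grid = [0.0, 0.25, 0.5, 0.75, 1.0]  # coarse grid of candidate weights
for w in itertools.product(grid, repeat=X.shape[1]):
    if not any(w):
        continue                    # skip the all-zero weight vector
    final_score = X @ np.array(w)   # Final_score = w1*V1 + w2*V2 + w3*V3
    rho, _ = spearmanr(final_score, reference)
    if rho > best_rho:
        best_w, best_rho = w, rho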
11,698 | Creating an index of quality from multiple variables to enable rank ordering | Following up on Ralph Winters' answer, you might use PCA (principal component analysis) on the matrix of suitably standardized scores. This will give you a "natural" weight vector that you can use to combine future scores.
Do this also after all scores have been transformed into ranks. If the results are very similar,... | Creating an index of quality from multiple variables to enable rank ordering | Following up on Ralph Winters' answer, you might use PCA (principal component analysis) on the matrix of suitably standardized scores. This will give you a "natural" weight vector that you can use to | Creating an index of quality from multiple variables to enable rank ordering
Following up on Ralph Winters' answer, you might use PCA (principal component analysis) on the matrix of suitably standardized scores. This will give you a "natural" weight vector that you can use to combine future scores.
Do this also after ... | Creating an index of quality from multiple variables to enable rank ordering
Following up on Ralph Winters' answer, you might use PCA (principal component analysis) on the matrix of suitably standardized scores. This will give you a "natural" weight vector that you can use to |
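A minimal sketch of this PCA idea, assuming a hypothetical 50 x 5 score matrix; the loadings of the first principal component act as the "natural" weight vector:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.default_rng(3).normal(size=(50, 5))  # hypothetical score matrix
Z = StandardScaler().fit_transform(X)              # suitably standardized scores

pca = PCA(n_components=1).fit(Z)
weights = pca.components_[0]                       # loadings of the first component
index = Z @ weights                                # combined quality index for ranking

Repeating this after replacing Z with column-wise ranks implements the comparison the second sentence of the answer suggests.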
11,699 | Classifier vs model vs estimator | estimator: This isn't a word with a rigorous definition but it is usually associated with finding a current value in data. If we didn't explicitly count the change in our pocket we might use an estimate. That said, in machine learning it is most frequently used in conjunction with parameter estimation or density estimatio... | Classifier vs model vs estimator | estimator: This isn't a word with a rigorous definition but it is usually associated with finding a current value in data. If we didn't explicitly count the change in our pocket we might use an estimate. | Classifier vs model vs estimator
estimator: This isn't a word with a rigorous definition but it is usually associated with finding a current value in data. If we didn't explicitly count the change in our pocket we might use an estimate. That said, in machine learning it is most frequently used in conjunction with paramete... | Classifier vs model vs estimator
estimator: This isn't a word with a rigorous definition but it is usually associated with finding a current value in data. If we didn't explicitly count the change in our pocket we might use an estimate.
11,700 | Classifier vs model vs estimator | An estimator is any object that learns from data; it may be a classification, regression or clustering algorithm or a transformer that extracts/filters useful features from raw data.
From scikit-learn documentation. | Classifier vs model vs estimator | An estimator is any object that learns from data; it may be a classification, regression or clustering algorithm or a transformer that extracts/filters useful features from raw data.
From scikit-lear | Classifier vs model vs estimator
An estimator is any object that learns from data; it may be a classification, regression or clustering algorithm or a transformer that extracts/filters useful features from raw data.
From scikit-learn documentation. | Classifier vs model vs estimator
An estimator is any object that learns from data; it may be a classification, regression or clustering algorithm or a transformer that extracts/filters useful features from raw data.
From scikit-lear |
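To make the scikit-learn definition concrete, a tiny illustration of the shared fit() interface (the toy data are made up; LogisticRegression and StandardScaler are real scikit-learn estimators):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

clf = LogisticRegression().fit(X, y)   # a classifier is an estimator
scaler = StandardScaler().fit(X)       # so is a transformer
print(clf.predict([[1.5]]), scaler.transform([[1.5]]))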