idx | question | answer |
|---|---|---|
9,201 | Statistical podcasts | You may be interested in the following link: http://www.ats.ucla.edu/stat/seminars/ where UCLA's Statistical Computing unit has very nice screencasts available. I have found them very useful in the past. They function essentially as lectures. Top-quality teaching. |
9,202 | Statistical podcasts | Another good podcast is In Our Time by the BBC. It's a weekly podcast (off air for the summer) that deals with topics in History, Religion and Science. I would say that about 1 in 12 episodes deals with Mathematics and Statistics. Take a look at the podcast archive for Science subjects. |
9,203 | Statistical podcasts | Check out my podcast, www.learningmachines101.com, which covers topics in statistical machine learning. |
9,204 | Statistical podcasts | I also just realized that Freakonomics has a podcast. |
9,205 | Statistical podcasts | Keith Bower has a number of statistics-related podcasts. They're pretty good and help get the concepts down. You can get them on iTunes or his website: keithbower.com. |
9,206 | Statistical podcasts | I haven't listened to the most recent episodes, but I find Talking Machines (http://www.thetalkingmachines.com/) to be really good. It's hosted by Prof. Ryan Adams and reporter Katherine Gorman. |
9,207 | Statistical podcasts | Not So Standard Deviations (https://soundcloud.com/nssd-podcast) |
9,208 | Statistical podcasts | Simply Statistics is a blog about statistics and has several podcasts: http://simplystatistics.org/category/podcast/
From their About page: "We are three biostatistics professors (Jeff Leek, Roger Peng, and Rafa Irizarry) who are fired up about the new era where data are abundant and statisticians are scientists. Why “Simpl... |
9,209 | Statistical podcasts | A podcast about using R for doing statistics: http://www.r-podcast.org/ |
9,210 | Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers? | Both of these senses of percentile, quartile, and so on are in widespread use. It’s easiest to illustrate the difference with quartiles:
the “divider” sense — there are 3 quartiles, which are the values dividing the distribution (or sample) into 4 equal parts:
   1   2   3
---|---|---|---
(Sometimes this is used wit... |
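The two senses can be made concrete in code. A minimal sketch (my illustration, using numpy; not part of the original answer):

```python
import numpy as np

data = np.arange(1, 13)  # a small sample: 1, 2, ..., 12

# The "divider" sense: the 3 quartiles are the cut points at 25%, 50%, 75%.
q1, q2, q3 = np.percentile(data, [25, 50, 75])
print(q1, q2, q3)  # 3.75 6.5 9.25 under numpy's default (linear) interpolation

# The other sense would instead label the 4 groups the dividers create:
# values up to q1, (q1, q2], (q2, q3], and above q3.
```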
9,211 | Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers? | Take this answer with a grain of salt -- it started out fairly wrong and I am still deciding what to do with it.
The question is partly about language and usage, whereas this answer focuses on mathematics. I hope that the mathematics will provide a framework for understanding different usages.
One nice way to treat th... |
9,212 | Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers? | There are other ways to calculate percentiles; what follows is not the only one. Taken from this source.
The meaning of percentile can be captured by stating that the $p$th percentile of a distribution is a number such that approximately $p$ percent ($p\%$) of the values in the distribution are equal to or less tha... |
9,213 | Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers? | I was taught that an observation in the $n$th percentile was greater than $n\%$ of observations in the dataset under consideration, which to me implies that there is no 0th or 100th percentile. No observation can be greater than 100% of observations because it forms part of that 100% (and a similar logic applies in the case... |
9,214 | Are there 99 percentiles, or 100 percentiles? And are they groups of numbers, or dividers or pointers to individual numbers? | Note: I will accept somebody else's answer rather than mine, but I do see some useful comments, so I'm just writing an answer that mentions those.
Based on Nick's answer ("-iles" terminology for the top half a percent), it seems that the terms are ambiguous, and I suppose (based on my understanding of that post), better t... |
9,215 | Do neural networks learn a function or a probability density function? | Strictly speaking, neural networks are fitting a non-linear function.
They can be interpreted as fitting a probability density function if suitable activation functions are chosen and certain conditions are respected (values must be positive and $\leq 1$, etc.). But that is a question of how you choose to interpret ... |
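As a concrete illustration of the conditions mentioned above (a sketch of mine, not from the answer): a softmax output layer forces a network's outputs to be positive and sum to 1, so they can be read as class probabilities.

```python
import numpy as np

def softmax(z):
    # Subtracting the max is for numerical stability; the output is
    # positive and sums to 1, so it can be interpreted as probabilities.
    e = np.exp(z - np.max(z))
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # raw network outputs for 3 classes
p = softmax(logits)
print(p, p.sum())  # ~[0.659 0.242 0.099], sums to 1.0
```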
9,216 | Do neural networks learn a function or a probability density function? | Generally, neural networks are not used to model complete probability densities; their focus is just to model the mean of a distribution (or, in a deterministic situation, simply a non-linear function). Nevertheless, it is very possible to model complete probability densities via neural networks. One easy way to do this is... |
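One common way to go beyond the mean, offered here as a hedged guess at the truncated "easy way" above, is to give the network two output heads, a mean and a log-variance, and train with the Gaussian negative log-likelihood:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    # Negative log-likelihood of y under N(mu, exp(log_var)), up to a constant.
    return 0.5 * (log_var + (y - mu) ** 2 / np.exp(log_var))

# Pretend mu and log_var are a network's two output heads for one input x:
mu, log_var = 1.3, np.log(0.25)
print(gaussian_nll(1.0, mu, log_var))
```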
9,217 | Do neural networks learn a function or a probability density function? | My dissenting answer is that in most impressive practical applications (those where they get the most coverage in the media, for instance) it's neither the function nor the probabilities: they implement stochastic decision making.
On the surface it looks like NNs are just fitting the function, cue the universal approx... |
9,218 | No regularisation term for bias unit in neural network | Overfitting usually requires the output of the model to be sensitive to small changes in the input data (i.e. to exactly interpolate the target values, you tend to need a lot of curvature in the fitted function). The bias parameters don't contribute to the curvature of the model, so there is usually little point in re... |
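A minimal sketch of what "not regularising the bias" looks like in a loss function (illustrative, not taken from the answer): the L2 penalty covers the weights only.

```python
import numpy as np

def ridge_loss(w, b, X, y, lam):
    # Squared error plus an L2 penalty on the weights only: b shifts the
    # fit up or down but adds no curvature, so it is left unpenalised.
    resid = X @ w + b - y
    return np.mean(resid ** 2) + lam * np.sum(w ** 2)
```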
9,219 | No regularisation term for bias unit in neural network | The motivation behind L2 (or L1) is that by restricting the weights and constraining the network, you are less likely to overfit. It makes little sense to restrict the bias terms, since the bias inputs are fixed (e.g. b = 1); the biases act like neuron intercepts, which it makes sense to give higher flexibility. |
9,220 | No regularisation term for bias unit in neural network | Weights determine the slopes of the activation functions. Regularization reduces the weights and hence the slopes of the activation functions. This reduces the model variance and the overfitting effect. The biases have no influence on the slopes of the activation functions. However, they have an influence on the position of th... |
9,221 | No regularisation term for bias unit in neural network | I would add that the bias term is often initialized with a mean of 1 rather than of 0, so we might want to regularize it in a way that does not let it get too far from a constant value like 1, such as using $\frac{1}{2}(b-1)^2$ rather than $\frac{1}{2}b^2$.
Maybe replacing the $-1$ part by a subtraction of the mean of the biases could... |
9,222 | No regularisation term for bias unit in neural network | The tutorial says "applying weight decay to the bias units usually makes only a small difference to the final network", so if it does not help, then you can stop doing it to eliminate one hyperparameter. If you think regularizing the offset would help in your setup, then cross-validate it; there's no harm in trying. |
9,223 | Is it essential to do normalization for SVM and Random Forest? | The answer to your question depends on what similarity/distance function you plan to use (in SVMs). If it's simple (unweighted) Euclidean distance, then if you don't normalize your data you are unwittingly giving some features more importance than others.
For example, if your first dimension ranges from 0-10, and seco... |
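A minimal sketch of the usual fix (assuming scikit-learn; my example, not the answer's): standardise the features inside a pipeline so each dimension contributes on a comparable scale and the scaling is learned from the training data only.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# StandardScaler removes each feature's mean and scales to unit variance,
# so no single feature dominates the Euclidean distances in the RBF kernel.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(clf.score(X_te, y_te))
```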
9,224 | Is it essential to do normalization for SVM and Random Forest? | Random Forest is invariant to monotonic transformations of individual features; translations or per-feature scalings will not change anything for the Random Forest. SVM will probably do better if your features have roughly the same magnitude, unless you know a priori that some feature is much more important than other... |
9,225 | What is the relationship between regression and linear discriminant analysis (LDA)? | I take it that the question is about LDA and linear (not logistic) regression.
There is a considerable and meaningful relation between linear regression and linear discriminant analysis. In case the dependent variable (DV) consists of just 2 groups, the two analyses are actually identical. Despite that computations are ... |
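A quick numerical check of the two-group identity claimed above (my sketch with simulated data): the OLS coefficients from regressing a 0/1 group indicator on the predictors are proportional to Fisher's discriminant direction $S_W^{-1}(\bar{x}_1 - \bar{x}_0)$, so their component-wise ratio is constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 3
X0 = rng.normal(0.0, 1.0, (n, p))        # group 0
X1 = rng.normal(0.8, 1.0, (n, p))        # group 1, shifted mean
X = np.vstack([X0, X1])
y = np.r_[np.zeros(n), np.ones(n)]

# OLS of the 0/1 group label on the predictors (with intercept).
beta = np.linalg.lstsq(np.c_[np.ones(2 * n), X], y, rcond=None)[0][1:]

# Fisher's direction: pooled within-class covariance inverse times mean gap.
Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
w = np.linalg.solve(Sw, X1.mean(axis=0) - X0.mean(axis=0))

print(beta / w)  # the same constant in every component (up to rounding)
```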
9,226 | What is the relationship between regression and linear discriminant analysis (LDA)? | Here is a reference to one of Efron's papers: The Efficiency of Logistic Regression Compared to Normal Discriminant Analysis, 1975.
Another relevant paper is Ng & Jordan, 2001, On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes. And here is an abstract of a comment on it... |
9,227 | What is the relationship between regression and linear discriminant analysis (LDA)? | The purpose of this answer is to explain the exact mathematical relationship between linear discriminant analysis (LDA) and multivariate linear regression (MLR). It will turn out that the correct framework is provided by reduced rank regression (RRR).
We will show that LDA is equivalent to RRR of the whitened class ind... |
9,228 | What is the relationship between regression and linear discriminant analysis (LDA)? | Linear regression and linear discriminant analysis are very different. Linear regression relates a dependent variable to a set of independent predictor variables. The idea is to find a function linear in the parameters that best fits the data. It does not even have to be linear in the covariates. Linear discriminan... |
9,229 | Understanding complete separation for logistic regression [duplicate] | Here's a visual explanation of (1).
Imagine that you have a perfectly separated set of points, with the separation occurring at zero in the picture (so a clump of $y=0$s to the left of zero and a clump of $y=1$s to the right).
The sequence of curves I plotted is
$$\frac{1}{1 + e^{-x}}, \frac{1}{1 + e^{-2x}}, \frac{1}{1 + e^{-3x}}, \ldots$$ |
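The picture the answer describes can be reproduced with a few lines (my sketch; matplotlib assumed):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)
for k in (1, 2, 5, 20):
    plt.plot(x, 1 / (1 + np.exp(-k * x)), label=f"slope {k}")

# With perfectly separated data, the likelihood keeps improving as the
# slope grows, so the fitted curve steepens toward a step function and
# the maximum likelihood estimate of the slope diverges.
plt.legend()
plt.show()
```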
9,230 | Is correlation equivalent to association? | No; correlation is not equivalent to association. However, the meaning of correlation is dependent upon context.
The classical statistics definition is, to quote from Kotz and Johnson's Encyclopedia of Statistical Sciences, "a measure of the strength of the linear relationship between two random variables". In ma... |
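A standard illustration of the distinction (my example, not from the answer): a variable can be a deterministic function of another, so they are maximally associated, while the Pearson correlation is near zero because the relationship is not linear.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 10_000)
y = x ** 2  # y is completely determined by x, but not linearly

print(np.corrcoef(x, y)[0, 1])  # approximately 0
```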
9,231 | Is correlation equivalent to association? | I don't see much point in trying to disentangle the terms "correlation" and "association." After all, Pearson himself (and others) developed a measure of nonlinear relationship which they named the "correlation ratio." |
9,232 | Is correlation equivalent to association? | There seems to be a misunderstanding of association. Measures of association (effect size) are inherent in quantitative analysis, not qualitative. |
9,233 | Is correlation equivalent to association? | I would say that correlation applies to quantitative data and association to qualitative data, and both have no obligatory causal relationship. |
9,234 | Is correlation equivalent to association? | The idea that the weight (of a man) is not correlated with the height (because the corresponding function is of 3rd degree, not linear) seems very strange to me. Linear correlation should be treated as a special case of association. |
9,235 | Is correlation equivalent to association? | Correlation and association are different. Correlation describes three types of relationship: positive, negative, and uncorrelated. It also describes the magnitude of the correlation, from 0 to 1 and from -1 to 0. Association reveals neither the type of the relationship nor its magnitude. |
9,236 | Is correlation equivalent to association? | As far as the linearity is concerned, the responses by Tim and Nick Cox covered it completely. Where I thought I might be able to contribute is a clean way to think about the difference between association and correlation.
Association --- measures how closely related two variables are (i.e. whether they are dependent o... |
9,237 | Number of features vs. number of observations | What you've hit on here is the curse of dimensionality, or the p >> n problem (where p is the number of predictors and n the number of observations). There have been many techniques developed over the years to solve this problem. You can use AIC or BIC to penalize models with more predictors. You can choose random sets of variables and assess th... |
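One concrete penalization approach for the p >> n setting described above (a sketch with simulated data, assuming scikit-learn; the lasso is just one of the many techniques alluded to):

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
n, p = 50, 500                   # far more predictors than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = 2.0                   # only 5 predictors truly matter
y = X @ beta + rng.normal(size=n)

# The cross-validated L1 penalty drives most coefficients exactly to zero.
model = LassoCV(cv=5).fit(X, y)
print(np.sum(model.coef_ != 0))  # typically close to the 5 true predictors
```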
9,238 | Number of features vs. number of observations | I suspect that no such rules of thumb will be generally applicable. Consider a problem with two Gaussian classes centered on $\vec{+1}$ and $\vec{-1}$, both with covariance matrix of $0.000001 \cdot \vec{I}$. In that case, you only need two samples, one from either class, to get perfect classification, almost regardless of ... |
9,239 | Number of features vs. number of observations | You are probably under the impression left by classical modelling, which is vulnerable to Runge-paradox-like problems and thus requires some parsimony tuning in post-processing.
However, in the case of machine learning, the idea of including robustness as an aim of model optimization is just the core of the whole domain (of... |
9,240 | Statistical interpretation of Maximum Entropy Distribution | This isn't really my field, so some musings:
I will start with the concept of surprise. What does it mean to be surprised?
Usually, it means that something happened that was not expected to happen. So, surprise is a probabilistic concept and can be explicated as such (I. J. Good has written about that). See also Wik... |
9,241 | Statistical interpretation of Maximum Entropy Distribution | Perhaps not exactly what you are after, but in Rissanen, J., Stochastic Complexity in Statistical Inquiry, World Scientific, 1989, p. 41, there is an interesting connection between maximum entropy, the normal distribution, and the central limit theorem. Among all densities with mean zero and standard deviation $\sigma$, the no... |
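For reference, the result being alluded to can be stated as follows (a standard fact, written out here rather than taken from the truncated answer): among all densities $f$ with mean zero and standard deviation $\sigma$, differential entropy satisfies
$$h(f) = -\int f(x)\log f(x)\,dx \;\le\; \tfrac{1}{2}\log\left(2\pi e\,\sigma^2\right),$$
with equality exactly when $f$ is the $\mathcal{N}(0, \sigma^2)$ density.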
9,242 | Statistical interpretation of Maximum Entropy Distribution | While not an expert in information theory and maximum entropy, I've been interested in it for a while.
The entropy is a measure of the uncertainty of a probability distribution that was derived according to a set of criteria. It and related measures characterize probability distributions. And, it's the unique measu... |
9,243 | Statistical interpretation of Maximum Entropy Distribution | You might want to have a look at the Wallis derivation:
https://en.wikipedia.org/wiki/Principle_of_maximum_entropy#The_Wallis_derivation
It has the advantage of being strictly combinatorial in nature, making no reference to information entropy as a measure of 'uncertainty', 'uninformativeness', or any other imprecisel... |
9,244 | R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed] | Implementation:
The topicmodels package provides an interface to the GSL C and C++ code for topic models by Blei et al. and Phan et al. For the former it uses variational EM; for the latter, Gibbs sampling. See http://www.jstatsoft.org/v40/i13/paper. The package works well with the utilities from the tm package.
The ... |
9,245 | R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed] | +1 for topicmodels. @Momo's answer is very comprehensive. I'd just add that topicmodels takes input as document-term matrices, which are easily made with the tm package or using Python. The lda package uses a more esoteric form of input (based on Blei's LDA-C) and I've had no luck using the built-in functions to conver... |
9,246 | R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed] | The R Structural Topic Model (STM) package by Molly Roberts, Brandon Stewart and Dustin Tingley is also a great choice. Built on top of the tm package, it's a general framework for topic modeling with document-level covariate information.
http://structuraltopicmodel.com/
The STM package includes a series of methods (g... |
9,247 | R packages for performing topic modeling / LDA: just `topicmodels` and `lda` [closed] | I have used all three libraries (topicmodels, lda, and stm); not all of them work with n-grams. The topicmodels library is good with its estimation, and it also works with n-grams. But if anyone is working only with unigrams, the practitioner may prefer stm, as it gives structured output. |
9,248 | Correcting p values for multiple tests where tests are correlated (genetics) | This is actually a hot topic in genome-wide association studies (GWAS)! I am not sure the method you are thinking of is the most appropriate in this context. Pooling of p-values was described by some authors, but in a different context (replication studies or meta-analysis; see e.g. (1) for a recent review). Combining SNP ... |
9,249 | Correcting p values for multiple tests where tests are correlated (genetics) | Using a method like Bonferroni is fine; the problem is that if you have many tests you are not likely to find many "discoveries".
You can go with the FDR approach for dependent tests (see here for details); the problem is that I am not sure whether you can say upfront that your correlations are all positive.
In R you can ... |
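A minimal sketch of an FDR correction that remains valid under dependence (Benjamini-Yekutieli; Python's statsmodels assumed, and the p-values are made up):

```python
import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.array([0.001, 0.008, 0.039, 0.041, 0.20, 0.74])

# "fdr_by" (Benjamini-Yekutieli) controls the FDR under arbitrary dependence;
# "fdr_bh" (Benjamini-Hochberg) assumes independence or positive dependence.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_by")
print(reject, p_adj)
```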
9,250 | Correcting p values for multiple tests where tests are correlated (genetics) | I think multivariate normal models are being used to model the correlated p-values and to get the right type of multiple testing corrections. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers (PLoS Genet, 2009) talks about them and also gives other references. It soun... |
9,251 | Correcting p values for multiple tests where tests are correlated (genetics) | I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Andrea Foulkes in his book Applied Statistical Genetics with R (2009). Unlike a whole bunch of other articles and books, he specifically considers regressions. Besides other methods he ... | Correcting p values for multiple tests where tests are correlated (genetics) | I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Andrea Foulkes in his book Applied Statistical Genetics with R (2009) | Correcting p values for multiple tests where tests are correlated (genetics)
I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Andrea Foulkes in his book Applied Statistical Genetics with R (2009). Unlike a whole bunch of other articles a... | Correcting p values for multiple tests where tests are correlated (genetics)
I am looking for a working solution for exactly the same problem. The best I found is the Null Unrestricted Bootstrap introduced by Andrea Foulkes in his book Applied Statistical Genetics with R (2009)
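For readers without the book, here is a generic permutation-based max-T adjustment, related in spirit to the resampling idea above but not necessarily Foulkes' exact procedure; permuting the phenotype preserves the correlation among markers, which is the whole point (toy data throughout):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 200, 50
X = rng.normal(size=(n, m))              # toy genotype-like matrix
y = 0.4 * X[:, 0] + rng.normal(size=n)   # phenotype associated with marker 0

def abs_t(X, y):
    # |t| of the simple-regression slope for each column of X.
    Xc, yc = X - X.mean(0), y - y.mean()
    sxx = (Xc ** 2).sum(0)
    b = Xc.T @ yc / sxx
    rss = ((yc[:, None] - Xc * b) ** 2).sum(0)
    return np.abs(b) / np.sqrt(rss / (len(y) - 2) / sxx)

obs = abs_t(X, y)
max_null = np.array([abs_t(X, rng.permutation(y)).max() for _ in range(1000)])
p_adj = (1 + (max_null[None, :] >= obs[:, None]).sum(1)) / (1000 + 1)
print(p_adj[:3])  # family-wise adjusted p-values for the first three markers
```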
9,252 | How to know whether the data is linearly separable? | There are several methods to find whether the data is linearly separable; some of them are highlighted in this paper (1). Assuming two classes in the dataset, the following are a few methods to find whether they are linearly separable:
Linear programming: Defines an objective function subject to constraints that... | How to know whether the data is linearly separable? | There are several methods to find whether the data is linearly separable; some of them are highlighted in this paper (1). Assuming two classes in the dataset, the following are a few methods to fi
There are several methods to find whether the data is linearly separable; some of them are highlighted in this paper (1). Assuming two classes in the dataset, the following are a few methods to find whether they are linearly separable:
Linear programming: Defines... | How to know whether the data is linearly separable?
There are several methods to find whether the data is linearly separable; some of them are highlighted in this paper (1). Assuming two classes in the dataset, the following are a few methods to fi
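A sketch of the linear-programming check mentioned above: separability is equivalent to the feasibility of $y_i(w'x_i + b) \ge 1$ for all $i$, which a solver can test directly (toy data; the helper name is this example's own):

```python
import numpy as np
from scipy.optimize import linprog

def is_linearly_separable(X, y):
    # X: (n, d) features; y: labels in {-1, +1}.
    # Feasibility of y_i (w.x_i + b) >= 1 for all i, with a dummy (zero) objective.
    n, d = X.shape
    # Constraint rows: -y_i * ([x_i, 1] . [w, b]) <= -1  <=>  y_i(w.x_i + b) >= 1
    A_ub = -y[:, None] * np.hstack([X, np.ones((n, 1))])
    res = linprog(np.zeros(d + 1), A_ub=A_ub, b_ub=-np.ones(n),
                  bounds=[(None, None)] * (d + 1), method="highs")
    return res.status == 0  # status 0 = feasible, 2 = infeasible

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-3, 1, (20, 2)), rng.normal(3, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
print(is_linearly_separable(X, y))  # True for these well-separated blobs
```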
9,253 | How to know whether the data is linearly separable? | I assume you are talking about a 2-class classification problem. In this case, there's a line that separates your two classes, and any classic algorithm should be able to find it when it converges.
In practice, you have to train and test on the same data. If there's such a line, then you should get close to 100% accuracy or 100... | How to know whether the data is linearly separable? | I assume you are talking about a 2-class classification problem. In this case, there's a line that separates your two classes, and any classic algorithm should be able to find it when it converges.
In practic | How to know whether the data is linearly separable?
I assume you are talking about a 2-class classification problem. In this case, there's a line that separates your two classes, and any classic algorithm should be able to find it when it converges.
In practice, you have to train and test on the same data. If there's such a li... | How to know whether the data is linearly separable?
I assume you are talking about a 2-class classification problem. In this case, there's a line that separates your two classes, and any classic algorithm should be able to find it when it converges.
In practic |
9,254 | How to know whether the data is linearly separable? | Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ \min_{w,b} \; \lVert w \rVert^2 $$
$$ \text{s.t.} \;\; \forall i,\ (w'x_{i} + b)y_{i} \ge 1 $$
If our data is linearly separable, all the inequality constraints will be satisfied. Notice that $w'x + b$ simply in... | How to know whether the data is linearly separable? | Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ \min_{w,b} \; \lVert w \rVert^2 $$
$$ \text{s.t.} \;\; \forall i,\ (w'x_{i} + b)y_{i} \ge 1 | How to know whether the data is linearly separable?
Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ \min_{w,b} \; \lVert w \rVert^2 $$
$$ \text{s.t.} \;\; \forall i,\ (w'x_{i} + b)y_{i} \ge 1 $$
If our data is linearly separable, all the inequality constraint... | How to know whether the data is linearly separable?
Consider the hard margin SVM formulation, which tries to find a hyperplane that strictly separates the data.
$$ \min_{w,b} \; \lVert w \rVert^2 $$
$$ \text{s.t.} \;\; \forall i,\ (w'x_{i} + b)y_{i} \ge 1
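A practical sketch tying the last two answers together: a linear SVM with a very large C approximates the hard-margin problem above, and scoring on the training data itself (as the previous answer suggests) flags separability; the data and the C value are made up:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (25, 2)), rng.normal(2, 0.5, (25, 2))])
y = np.array([0] * 25 + [1] * 25)
# Very large C leaves essentially no slack, mimicking the hard-margin SVM.
clf = SVC(kernel="linear", C=1e9).fit(X, y)
print(clf.score(X, y))  # 1.0 on the training set suggests linear separability
```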
9,255 | Why maximum likelihood and not expected likelihood? | The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the posterior distribution as your estimator. There are cases where using a flat prior can get you into trouble because you don'... | Why maximum likelihood and not expected likelihood? | The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the poste | Why maximum likelihood and not expected likelihood?
The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the posterior distribution as your estimator. There are cases where using a ... | Why maximum likelihood and not expected likelihood?
The method proposed (after normalizing the likelihood to be a density) is equivalent to estimating the parameters using a flat prior for all the parameters in the model and using the mean of the poste |
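A tiny numeric sketch of the equivalence stated above (all settings made up): for a Binomial(n=10, k=7) likelihood on a parameter grid, the MLE is k/n, while the mean of the normalized likelihood is the flat-prior posterior mean (k+1)/(n+2):

```python
import numpy as np
from scipy.stats import binom

theta = np.linspace(0.001, 0.999, 9999)  # parameter grid
lik = binom.pmf(7, 10, theta)            # likelihood of 7 successes in 10 trials
mle = theta[np.argmax(lik)]              # ~0.700 = 7/10
post = lik / lik.sum()                   # normalize the likelihood to a density on the grid
post_mean = (theta * post).sum()         # ~0.667 = (7+1)/(10+2), flat-prior posterior mean
print(mle, post_mean)
```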
9,256 | Why maximum likelihood and not expected likelihood? | One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integrating the likelihood times each parameter.
Another reason is that with exponential families, maximum likelihood estimation c... | Why maximum likelihood and not expected likelihood? | One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integratin | Why maximum likelihood and not expected likelihood?
One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integrating the likelihood times each parameter.
Another reason is that with e... | Why maximum likelihood and not expected likelihood?
One reason is that maximum likelihood estimation is easier: you set the derivative of the likelihood w.r.t. the parameters to zero and solve for the parameters. Taking an expectation means integratin |
9,257 | Why maximum likelihood and not expected likelihood? | There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood: Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as MLE, but in some examples where it is different, it is arguably better, or at least different in an interesting way.
N... | Why maximum likelihood and not expected likelihood? | There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as | Why maximum likelihood and not expected likelihood?
There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood: Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as MLE, but in some examples where it is different, it is arguably bet... | Why maximum likelihood and not expected likelihood?
There is an interesting paper proposing to maximize not the observed likelihood, but the expected likelihood Expected Maximum Log Likelihood Estimation. In many examples this gives the same results as |
9,258 | Why maximum likelihood and not expected likelihood? | This approach exists; it is called the minimum contrast estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655 | Why maximum likelihood and not expected likelihood? | This approach exists; it is called the minimum contrast estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655 | Why maximum likelihood and not expected likelihood?
This approach exists; it is called the minimum contrast estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655 | Why maximum likelihood and not expected likelihood?
This approach exists; it is called the minimum contrast estimator. An example of a related paper (see also the references therein):
https://arxiv.org/abs/0901.0655 |
9,259 | Inter-rater reliability for ordinal or interval data | The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up). Extensions for the case of multiple raters exist (2, pp. 284–291). In the case of ordinal data, you can use the weight... | Inter-rater reliability for ordinal or interval data | The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up) | Inter-rater reliability for ordinal or interval data
The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up). Extensions for the case of multiple raters exist (2, pp. 284–291)... | Inter-rater reliability for ordinal or interval data
The Kappa ($\kappa$) statistic is a quality index that compares observed agreement between 2 raters on a nominal or ordinal scale with agreement expected by chance alone (as if raters were tossing up) |
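A short sketch of plain and weighted kappa (made-up ordinal ratings from two raters; quadratic weights penalize larger disagreements more, as is common for ordinal scales):

```python
from sklearn.metrics import cohen_kappa_score

rater1 = [1, 2, 3, 3, 2, 1, 4, 4]
rater2 = [1, 2, 2, 3, 3, 1, 4, 3]
print(cohen_kappa_score(rater1, rater2))                       # unweighted kappa
print(cohen_kappa_score(rater1, rater2, weights="quadratic"))  # weighted kappa for ordinal data
```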
9,260 | Inter-rater reliability for ordinal or interval data | The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of the ICC, see:
Intraclass correlations: uses in assessing rater reliability (Shrout, Fleiss, 1979) | Inter-rater reliability for ordinal or interval data | The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of | Inter-rater reliability for ordinal or interval data
The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of the ICC, see:
Intraclass correlations: uses in assessing rater re... | Inter-rater reliability for ordinal or interval data
The Intraclass correlation may be used for ordinal data. But there are some caveats, primarily that the raters cannot be distinguished. For more on this and how to choose among different versions of |
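A sketch of computing the ICC variants from Shrout & Fleiss (1979), assuming the third-party pingouin package is available; the long-format layout and column names are this example's own:

```python
import pandas as pd
import pingouin as pg  # assumed dependency

# One row per (subject, rater) rating -- made-up data.
df = pd.DataFrame({
    "subject": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5],
    "rater":   ["A", "B"] * 5,
    "score":   [4, 5, 2, 2, 5, 4, 3, 3, 1, 2],
})
icc = pg.intraclass_corr(data=df, targets="subject", raters="rater", ratings="score")
print(icc[["Type", "ICC"]])  # the different ICC forms discussed in the reference
```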
9,261 | Multi-layer perceptron vs deep neural network | One can consider the multi-layer perceptron (MLP) to be a subset of deep neural networks (DNNs), but the terms are often used interchangeably in the literature.
The assumption that perceptrons are named based on their learning rule is incorrect. The classical "perceptron update rule" is one of the ways that can be used to train it. The e... | Multi-layer perceptron vs deep neural network | One can consider the multi-layer perceptron (MLP) to be a subset of deep neural networks (DNNs), but the terms are often used interchangeably in the literature.
The assumption that perceptrons are named based on their l | Multi-layer perceptron vs deep neural network
One can consider the multi-layer perceptron (MLP) to be a subset of deep neural networks (DNNs), but the terms are often used interchangeably in the literature.
The assumption that perceptrons are named based on their learning rule is incorrect. The classical "perceptron update rule" is one o... | Multi-layer perceptron vs deep neural network
One can consider the multi-layer perceptron (MLP) to be a subset of deep neural networks (DNNs), but the terms are often used interchangeably in the literature.
The assumption that perceptrons are named based on their l |
9,262 | Multi-layer perceptron vs deep neural network | Good question: note that in the field of Deep Learning things are not always as clear-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find definitions as rigorous as in Mathematics. Anyway, the multilayer perceptron is a specific feed-forward neural network arc... | Multi-layer perceptron vs deep neural network | Good question: note that in the field of Deep Learning things are not always as clear-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find de | Multi-layer perceptron vs deep neural network
Good question: note that in the field of Deep Learning things are not always as clear-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find definitions as rigorous as in Mathematics. Anyway, the multilayer perceptron... | Multi-layer perceptron vs deep neural network
Good question: note that in the field of Deep Learning things are not always as clear-cut and clearly defined as in Statistical Learning (also because there's a lot of hype), so don't expect to find de
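For concreteness, a minimal MLP, i.e. the specific feed-forward architecture the answer refers to, as a scikit-learn sketch (toy data; the layer sizes are arbitrary choices):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
# Two fully connected hidden layers: the defining trait of an MLP.
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
mlp.fit(X, y)
print(mlp.score(X, y))
```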
9,263 | Multi-layer perceptron vs deep neural network | I want to add that, according to what I have read in many posts:
There are many different architectures of DNNs, like MLPs (Multi-Layer Perceptrons) and CNNs (Convolutional Neural Networks). Different types of DNNs are designed to solve different types of problems.
MLPs are the classical type of NN, which is used for:
Tab... | Multi-layer perceptron vs deep neural network | I want to add that, according to what I have read in many posts:
There are many different architectures of DNNs, like MLPs (Multi-Layer Perceptrons) and CNNs (Convolutional Neural Networks). Different typ | Multi-layer perceptron vs deep neural network
I want to add that, according to what I have read in many posts:
There are many different architectures of DNNs, like MLPs (Multi-Layer Perceptrons) and CNNs (Convolutional Neural Networks). Different types of DNNs are designed to solve different types of problems.
MLPs are... | Multi-layer perceptron vs deep neural network
I want to add that, according to what I have read in many posts:
There are many different architectures of DNNs, like MLPs (Multi-Layer Perceptrons) and CNNs (Convolutional Neural Networks). Different typ
9,264 | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | It is worth appreciating that the divergence is in the types of variables, and most notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with different groups, and we attempt to determine whether the measurement of a continuous variable differs between groups. On t... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | It is worth appreciating that the divergence is in the types of variables, and most notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with dif | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
It is worth appreciating that the divergence is in the types of variables, and most notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with different groups, and we attemp... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
It is worth appreciating that the divergence is in the types of variables, and most notably the types of explanatory variables. In the typical ANOVA we have a categorical variable with dif
9,265 | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA is a special case of regression. There is nothing that an ANOVA can tell you that regression cannot itself derive. Th... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA is a special case of regres... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
ANOVA and OLS regression are mathematically identical in cases where your predictors are categorical (in terms of the inferences you are drawing from the test statistic). To put it another way, ANOVA |
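A sketch of the equivalence claimed above (made-up data): an OLS fit with a dummy-coded categorical predictor reproduces the one-way ANOVA F test exactly:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 30),
    "y": np.concatenate([rng.normal(m, 1, 30) for m in (0.0, 0.5, 1.0)]),
})
fit = smf.ols("y ~ C(group)", data=df).fit()  # regression with dummy coding
print(anova_lm(fit))                          # same F and p as a one-way ANOVA
```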
9,266 | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provides this test for you. With regression, the categorical variable is represented by 2 or more dummy variables, depending o... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provid | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provides this test for you. With r... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The main benefit of ANOVA over regression, in my opinion, is in the output. If you are interested in the statistical significance of the categorical variable (factor) as a block, then ANOVA provid
9,267 | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of several covariates (though this can also be easily accomplished through ANCOVA when you are interested in including just ... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies? | The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of s | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of several covariates (though th... | ANOVA vs multiple linear regression? Why is ANOVA so commonly used in experimental studies?
The major advantage of linear regression is that it is robust to the violation of homogeneity of variance when sample sizes across groups are unequal. Another is that it facilitates the inclusion of s |
9,268 | Can we use MLE to estimate Neural Network weights? | MLE estimates of artificial neural network weights (ANN) certainly are possible; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is the same as the negative log-likelihood of a binomial model. For regression problems, residual square error is used, which... | Can we use MLE to estimate Neural Network weights? | MLE estimates of artificial neural network weights (ANN) certainly are possible; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is th | Can we use MLE to estimate Neural Network weights?
MLE estimates of artificial neural network weights (ANN) certainly are possible; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is the same as the negative log-likelihood of a binomial model. For regres... | Can we use MLE to estimate Neural Network weights?
MLE estimates of artificial neural network weights (ANN) certainly are possible; indeed, it's entirely typical. For classification problems, a standard objective function is cross-entropy, which is th |
9,269 | Can we use MLE to estimate Neural Network weights? | In classification problems, maximizing the likelihood is the most common way to train a neural network (both supervised and unsupervised models).
In practice, we usually minimize the negative log-likelihood (equivalent to MLE). The only constraint on using the negative log-likelihood is having an output layer that can be i... | Can we use MLE to estimate Neural Network weights? | In classification problems, maximizing the likelihood is the most common way to train a neural network (both supervised and unsupervised models).
In practice, we usually minimize the negative log-like | Can we use MLE to estimate Neural Network weights?
In classification problems, maximizing the likelihood is the most common way to train a neural network (both supervised and unsupervised models).
In practice, we usually minimize the negative log-likelihood (equivalent to MLE). The only constraint on using the negative log-... | Can we use MLE to estimate Neural Network weights?
In classification problems, maximizing the likelihood is the most common way to train a neural network (both supervised and unsupervised models).
In practice, we usually minimize the negative log-like |
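A numeric check of the point both answers make, that for binary labels the cross-entropy training loss is the negative log-likelihood of a Bernoulli model (made-up labels and predicted probabilities):

```python
import numpy as np
from sklearn.metrics import log_loss

y = np.array([1, 0, 1, 1])           # observed labels
p = np.array([0.9, 0.2, 0.7, 0.6])   # predicted P(y=1) from some model
nll = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))  # Bernoulli negative log-likelihood
print(np.isclose(nll, log_loss(y, p)))  # True: cross-entropy == negative log-likelihood
```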
9,270 | Is power analysis necessary in Bayesian Statistics? | Power is about the long run probability of p < 0.05 (alpha) in studies when the effect does exist in the population. In Bayes the evidence from study A feeds into priors for study B, etc. on down the line. Therefore, power as is defined in frequentist statistics doesn't really exist.
That said, it doesn't mean a justif... | Is power analysis necessary in Bayesian Statistics? | Power is about the long run probability of p < 0.05 (alpha) in studies when the effect does exist in the population. In Bayes the evidence from study A feeds into priors for study B, etc. on down the | Is power analysis necessary in Bayesian Statistics?
Power is about the long run probability of p < 0.05 (alpha) in studies when the effect does exist in the population. In Bayes the evidence from study A feeds into priors for study B, etc. on down the line. Therefore, power as is defined in frequentist statistics doesn... | Is power analysis necessary in Bayesian Statistics?
Power is about the long run probability of p < 0.05 (alpha) in studies when the effect does exist in the population. In Bayes the evidence from study A feeds into priors for study B, etc. on down the |
9,271 | Is power analysis necessary in Bayesian Statistics? | You can perform hypothesis tests with Bayesian statistics. For example, you could conclude an effect is greater than zero if more than 95% of the posterior density is greater than zero. Or alternatively, you could employ some form of binary decision based on Bayes factors.
Once you establish such a decision-making system... | Is power analysis necessary in Bayesian Statistics? | You can perform hypothesis tests with Bayesian statistics. For example, you could conclude an effect is greater than zero if more than 95% of the posterior density is greater than zero. Or alternatively | Is power analysis necessary in Bayesian Statistics?
You can perform hypothesis tests with Bayesian statistics. For example, you could conclude an effect is greater than zero if more than 95% of the posterior density is greater than zero. Or alternatively, you could employ some form of binary decision based on Bayes facto... | Is power analysis necessary in Bayesian Statistics?
You can perform hypothesis tests with Bayesian statistics. For example, you could conclude an effect is greater than zero if more than 95% of the posterior density is greater than zero. Or alternatively
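A simulation sketch of the idea above (every setting is made up): fix a posterior-based decision rule, then estimate its long-run hit rate when a true effect exists, which plays the role of power:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
true_effect, sigma, n, sims = 0.3, 1.0, 50, 2000
hits = 0
for _ in range(sims):
    x = rng.normal(true_effect, sigma, n)
    # Flat prior, known sigma: the posterior of the mean is N(xbar, sigma^2 / n).
    p_pos = 1 - norm.cdf(0, loc=x.mean(), scale=sigma / np.sqrt(n))
    hits += p_pos > 0.95          # decision rule: P(effect > 0 | data) > 0.95
print(hits / sims)                # estimated "Bayesian power" of the rule
```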
9,272 | Is power analysis necessary in Bayesian Statistics? | This issue leads to a lot of misunderstandings because people use Bayesian stats to ask frequentist questions. For example, people want to determine if variant B is better than variant A. They can answer this question with Bayesian stats by determining if the 95% highest density interval of the difference between tho... | Is power analysis necessary in Bayesian Statistics? | This issue leads to a lot of misunderstandings because people use Bayesian stats to ask frequentist questions. For example, people want to determine if variant B is better than variant A. They can a | Is power analysis necessary in Bayesian Statistics?
This issue leads to a lot of misunderstandings because people use Bayesian stats to ask frequentist questions. For example, people want to determine if variant B is better than variant A. They can answer this question with Bayesian stats by determining if the 95% hi... | Is power analysis necessary in Bayesian Statistics?
This issue leads to a lot of misunderstandings because people use Bayesian stats to ask frequentist questions. For example, people want to determine if variant B is better than variant A. They can a |
9,273 | Is power analysis necessary in Bayesian Statistics? | The need for a power analysis in a clinical trial for example is to be able to calculate/estimate how many participants to recruit to have a chance of finding a treatment effect (of a given minimum size) if it exists. It isn't feasible to recruit an endless number of patients, first because of time constraints and seco... | Is power analysis necessary in Bayesian Statistics? | The need for a power analysis in a clinical trial for example is to be able to calculate/estimate how many participants to recruit to have a chance of finding a treatment effect (of a given minimum si | Is power analysis necessary in Bayesian Statistics?
The need for a power analysis in a clinical trial for example is to be able to calculate/estimate how many participants to recruit to have a chance of finding a treatment effect (of a given minimum size) if it exists. It isn't feasible to recruit an endless number of ... | Is power analysis necessary in Bayesian Statistics?
The need for a power analysis in a clinical trial for example is to be able to calculate/estimate how many participants to recruit to have a chance of finding a treatment effect (of a given minimum si |
9,274 | Testing Classification on Oversampled Imbalance Data [duplicate] | A few comments:
Option (1) is a very bad idea. Copies of the same point may end up in both the training and test sets. This allows the classifier to cheat, because when trying to make predictions on the test set the classifier will already have seen identical points in the train set. The whole point of having a tes... | Testing Classification on Oversampled Imbalance Data [duplicate] | A few comments:
Option (1) is a very bad idea. Copies of the same point may end up in both the training and test sets. This allows the classifier to cheat, because when trying to make predictions | Testing Classification on Oversampled Imbalance Data [duplicate]
A few comments:
Option (1) is a very bad idea. Copies of the same point may end up in both the training and test sets. This allows the classifier to cheat, because when trying to make predictions on the test set the classifier will already have seen i... | Testing Classification on Oversampled Imbalance Data [duplicate]
A few comments:
Option (1) is a very bad idea. Copies of the same point may end up in both the training and test sets. This allows the classifier to cheat, because when trying to make predictions
9,275 | Testing Classification on Oversampled Imbalance Data [duplicate] | The second (2) option is the right way of doing it. The samples you create with the oversampling techniques are not real examples but synthetic ones. These are not valid for testing purposes, but they are still OK for training. They are intended to modify the behavior of the classifier without modifying the al... | Testing Classification on Oversampled Imbalance Data [duplicate] | The second (2) option is the right way of doing it. The samples you create with the oversampling techniques are not real examples but synthetic ones. These are not valid for testing purpos | Testing Classification on Oversampled Imbalance Data [duplicate]
The second (2) option is the right way of doing it. The samples you create with the oversampling techniques are not real examples but synthetic ones. These are not valid for testing purposes, but they are still OK for training. They are intended ... | Testing Classification on Oversampled Imbalance Data [duplicate]
The second (2) option is the right way of doing it. The samples you create with the oversampling techniques are not real examples but synthetic ones. These are not valid for testing purpos
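A sketch of option (2) done safely, splitting before oversampling so nothing synthetic can leak into the test set; it assumes the third-party imbalanced-learn package, and all data is synthetic toy data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE  # assumed dependency (imbalanced-learn)

X, y = make_classification(n_samples=1000, weights=[0.95], random_state=0)
# Split FIRST, so the test set stays untouched and contains only real points.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = SMOTE(random_state=0).fit_resample(X_tr, y_tr)  # oversample training data only
print(np.bincount(y_tr), np.bincount(y_res))
```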
9,276 | Testing Classification on Oversampled Imbalance Data [duplicate] | Do not do either one of these two approaches. Unbalanced data is not a problem, and oversampling will not solve a non-problem. Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?
This Meta.CV thread contains a curated list of useful links on imbalanced data. | Testing Classification on Oversampled Imbalance Data [duplicate] | Do not do either one of these two approaches. Unbalanced data is not a problem, and oversampling will not solve a non-problem. Are unbalanced datasets problematic, and (how) does oversampling (purport | Testing Classification on Oversampled Imbalance Data [duplicate]
Do not do either one of these two approaches. Unbalanced data is not a problem, and oversampling will not solve a non-problem. Are unbalanced datasets problematic, and (how) does oversampling (purport to) help?
This Meta.CV thread contains a curated list ... | Testing Classification on Oversampled Imbalance Data [duplicate]
Do not do either one of these two approaches. Unbalanced data is not a problem, and oversampling will not solve a non-problem. Are unbalanced datasets problematic, and (how) does oversampling (purport |
9,277 | How do I intentionally design an overfitting neural network? | If you have a network with two layers of modifiable weights you can form arbitrary convex decision regions, where the lowest level neurons divide the input space into half-spaces and the second layer of neurons performs an "AND" operation to determine whether you are in the right sides of the half-spaces defining the c... | How do I intentionally design an overfitting neural network? | If you have a network with two layers of modifiable weights you can form arbitrary convex decision regions, where the lowest level neurons divide the input space into half-spaces and the second layer | How do I intentionally design an overfitting neural network?
If you have a network with two layers of modifiable weights you can form arbitrary convex decision regions, where the lowest level neurons divide the input space into half-spaces and the second layer of neurons performs an "AND" operation to determine whether... | How do I intentionally design an overfitting neural network?
If you have a network with two layers of modifiable weights you can form arbitrary convex decision regions, where the lowest level neurons divide the input space into half-spaces and the second layer |
9,278 | How do I intentionally design an overfitting neural network? | Memorization
For absolute overfitting, you want a network that is technically capable to memorize all the examples, but fundamentally not capable of generalization. I seem to recall a story about someone training a predictor of student performance that got great results in the first year but was an absolute failure in ... | How do I intentionally design an overfitting neural network? | Memorization
For absolute overfitting, you want a network that is technically capable to memorize all the examples, but fundamentally not capable of generalization. I seem to recall a story about some | How do I intentionally design an overfitting neural network?
Memorization
For absolute overfitting, you want a network that is technically capable to memorize all the examples, but fundamentally not capable of generalization. I seem to recall a story about someone training a predictor of student performance that got gr... | How do I intentionally design an overfitting neural network?
Memorization
For absolute overfitting, you want a network that is technically capable to memorize all the examples, but fundamentally not capable of generalization. I seem to recall a story about some |
9,279 | How do I intentionally design an overfitting neural network? | Generally speaking, if you train for a very large number of epochs, and if your network has enough capacity, the network will overfit. So, to ensure overfitting: pick a network with a very high capacity, and then train for many many epochs. Don't use regularization (e.g., dropout, weight decay, etc.).
Experiments hav... | How do I intentionally design an overfitting neural network? | Generally speaking, if you train for a very large number of epochs, and if your network has enough capacity, the network will overfit. So, to ensure overfitting: pick a network with a very high capac | How do I intentionally design an overfitting neural network?
Generally speaking, if you train for a very large number of epochs, and if your network has enough capacity, the network will overfit. So, to ensure overfitting: pick a network with a very high capacity, and then train for many many epochs. Don't use regula... | How do I intentionally design an overfitting neural network?
Generally speaking, if you train for a very large number of epochs, and if your network has enough capacity, the network will overfit. So, to ensure overfitting: pick a network with a very high capac |
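A sketch of the recipe above (high capacity, no regularization, many epochs, small noisy data; all settings are arbitrary choices):

```python
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Tiny, noisy dataset + huge capacity + no weight decay => memorization.
X, y = make_classification(n_samples=30, n_features=20, n_informative=2,
                           flip_y=0.3, random_state=0)  # label noise invites overfitting
net = MLPClassifier(hidden_layer_sizes=(512, 512), alpha=0.0,  # alpha=0: no regularization
                    max_iter=5000, random_state=0).fit(X, y)
print(net.score(X, y))  # typically ~1.0 on the training data; generalization will be poor
```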
9,280 | How do I intentionally design an overfitting neural network? | Here are some things that I think might help.
If you are free to change the network architecture, try using a large but shallower network. Layers help a network learn higher-level features, and by the last layer the features are abstract enough for the network to "make sense of them". By forcing training on a shallower ... | How do I intentionally design an overfitting neural network? | Here are some things that I think might help.
If you are free to change the network architecture, try using a large but shallower network. Layers help a network learn higher-level features, and by the | How do I intentionally design an overfitting neural network?
Here are some things that I think might help.
If you are free to change the network architecture, try using a large but shallower network. Layers help a network learn higher-level features, and by the last layer the features are abstract enough for the network... | How do I intentionally design an overfitting neural network?
Here are some things that I think might help.
If you are free to change the network architecture, try using a large but shallower network. Layers help a network learn higher-level features, and by the
9,281 | How do I intentionally design an overfitting neural network? | I like your question a lot.
People often talk about overfitting, but maybe not many people realize that intentionally designing an overfitting model is not a trivial task! Especially with a large amount of data.
In the past, the data size was often limited. For example, a couple hundred data points. Then it is easy to h... | How do I intentionally design an overfitting neural network? | I like your question a lot.
People often talk about overfitting, but maybe not many people realize that intentionally designing an overfitting model is not a trivial task! Especially with a large amo | How do I intentionally design an overfitting neural network?
I like your question a lot.
People often talk about overfitting, but maybe not many people realize that intentionally designing an overfitting model is not a trivial task! Especially with a large amount of data.
In the past, the data size was often limited. F... | How do I intentionally design an overfitting neural network?
I like your question a lot.
People often talk about overfitting, but maybe not many people realize that intentionally designing an overfitting model is not a trivial task! Especially with a large amo
9,282 | How do I intentionally design an overfitting neural network? | Just reduce the training set to a few or even 1 example.
It's a good, simple way to test your code for some obvious bugs.
Otherwise, no, there's no magical architecture that always overfits. This is "by design." Machine learning algorithms that overfit easily aren't normally useful. | How do I intentionally design an overfitting neural network? | Just reduce the training set to a few or even 1 example.
It's a good, simple way to test your code for some obvious bugs.
Otherwise, no, there's no magical architecture that always overfits. This is " | How do I intentionally design an overfitting neural network?
Just reduce the training set to a few or even 1 example.
It's a good, simple way to test your code for some obvious bugs.
Otherwise, no, there's no magical architecture that always overfits. This is "by design." Machine learning algorithms that overfit easily... | How do I intentionally design an overfitting neural network?
Just reduce the training set to a few or even 1 example.
It's a good, simple way to test your code for some obvious bugs.
Otherwise, no, there's no magical architecture that always overfits. This is " |
9,283 | How do I intentionally design an overfitting neural network? | According to the Open AI paper Deep Double Descent, you need to have just a large enough neural network for a given dataset. Presumably this makes the NN powerful enough to perfectly learn the training data, but small enough that you don't get the generalisation effect of a large network. The paper is empirical, so the... | How do I intentionally design an overfitting neural network? | According to the Open AI paper Deep Double Descent, you need to have just a large enough neural network for a given dataset. Presumably this makes the NN powerful enough to perfectly learn the trainin | How do I intentionally design an overfitting neural network?
According to the Open AI paper Deep Double Descent, you need to have just a large enough neural network for a given dataset. Presumably this makes the NN powerful enough to perfectly learn the training data, but small enough that you don't get the generalisat... | How do I intentionally design an overfitting neural network?
According to the Open AI paper Deep Double Descent, you need to have just a large enough neural network for a given dataset. Presumably this makes the NN powerful enough to perfectly learn the trainin |
9,284 | How do I intentionally design an overfitting neural network? | If you're given a lot of freedom in the algorithm design, you can do the following:
train one huge but shallow (and probably non-convolutional; you really want it very powerful but very stupid) neural network to memorize the training set perfectly, as suggested by @Peteris and @Wololo (his solution has converted me). ... | How do I intentionally design an overfitting neural network? | If you're given a lot of freedom in the algorithm design, you can do the following:
train one huge but shallow (and probably non-convolutional; you really want it very powerful but very stupid) neura | How do I intentionally design an overfitting neural network?
If you're given a lot of freedom in the algorithm design, you can do the following:
train one huge but shallow (and probably non-convolutional; you really want it very powerful but very stupid) neural network to memorize the training set perfectly, as sugges... | How do I intentionally design an overfitting neural network?
If you're given a lot of freedom in the algorithm design, you can do the following:
train one huge but shallow (and probably non-convolutional; you really want it very powerful but very stupid) neura
9,285 | The origin of the term "regularization" | Similar to Matthew Gunn's contribution, this is also not really an answer, but more of a plausible candidate.
I also first heard of the term "regularization" in the context of Tikhonov Regularization, and in particular in the context of (linear) inverse problems in geophysics. Interestingly, while I had thought that wa... | The origin of the term "regularization" | Similar to Matthew Gunn's contribution, this is also not really an answer, but more of a plausible candidate.
I also first heard of the term "regularization" in the context of Tikhonov Regularization, | The origin of the term "regularization"
Similar to Matthew Gunn's contribution, this is also not really an answer, but more of a plausible candidate.
I also first heard of the term "regularization" in the context of Tikhonov Regularization, and in particular in the context of (linear) inverse problems in geophysics. In... | The origin of the term "regularization"
Similar to Matthew Gunn's contribution, this is also not really an answer, but more of a plausible candidate.
I also first heard of the term "regularization" in the context of Tikhonov Regularization, |
9,286 | The origin of the term "regularization" | This is part answer, part long comment. An incomplete list of candidates:
Tikhonov, Andrey. "Solution of incorrectly formulated problems and the regularization method." Soviet Math. Dokl.. Vol. 5. 1963. Tikhonov is known for Tikhonov regularization (also known as ridge regression).
There's a concept of regularization ... | The origin of the term "regularization" | This is part answer, part long comment. An incomplete list of candidates:
Tikhonov, Andrey. "Solution of incorrectly formulated problems and the regularization method." Soviet Math. Dokl.. Vol. 5. 19 | The origin of the term "regularization"
This is part answer, part long comment. An incomplete list of candidates:
Tikhonov, Andrey. "Solution of incorrectly formulated problems and the regularization method." Soviet Math. Dokl.. Vol. 5. 1963. Tikhonov is known for Tikhonov regularization (also known as ridge regressio... | The origin of the term "regularization"
This is part answer, part long comment. An incomplete list of candidates:
Tikhonov, Andrey. "Solution of incorrectly formulated problems and the regularization method." Soviet Math. Dokl.. Vol. 5. 19 |
9,287 | The origin of the term "regularization" | Most simply, the term survived the natural evolution of scientific terms because it captures the core goal of the technique: from a bunch of solutions to an ill-posed problem, it chooses the solutions which are regular, that is,
according to rule
(free dictionary's definition)
This is also used in common language ... | The origin of the term "regularization" | Most simply, the term survived the natural evolution of scientific terms because it captures the core goal of the technique: from a bunch of solutions to an ill-posed problem, it chooses the solutions | The origin of the term "regularization"
Most simply, the term survived the natural evolution of scientific terms because it captures the core goal of the technique: from a bunch of solutions to an ill-posed problem, it chooses the solutions which are regular, that is,
according to rule
(free dictionary's definitio... | The origin of the term "regularization"
Most simply, the term survived the natural evolution of scientific terms because it captures the core goal of the technique: from a bunch of solutions to an ill-posed problem, it chooses the solutions |
9,288 | Is variation the same as variance? | Here's a full wikipedia article discussing this topic: http://en.wikipedia.org/wiki/Statistical_dispersion
As described by others in the comments here, the short answer is: no, variation $\ne$ variance. Synonyms for "variation" are spread, dispersion, scatter and variability. It's just a way of talking about the behavi... | Is variation the same as variance? | Here's a full wikipedia article discussing this topic: http://en.wikipedia.org/wiki/Statistical_dispersion
As described by others in the comments here, the short answer is: no, variation $\ne$ varianc | Is variation the same as variance?
Here's a full wikipedia article discussing this topic: http://en.wikipedia.org/wiki/Statistical_dispersion
As described by others in the comments here, the short answer is: no, variation $\ne$ variance. Synonyms for "variation" are spread, dispersion, scatter and variability. It's jus... | Is variation the same as variance?
Here's a full wikipedia article discussing this topic: http://en.wikipedia.org/wiki/Statistical_dispersion
As described by others in the comments here, the short answer is: no, variation $\ne$ varianc |
9,289 | Is variation the same as variance? | Variation may be understood best as a general term for a class of different concepts, of which variance $(\sigma^2)$ is only one. Levine and Roos (1997) also consider standard-deviation $(\sigma)$ a variation concept, among others.
To demonstrate why the distinction might be important, compare also the coefficient-of-v... | Is variation the same as variance? | Variation may be understood best as a general term for a class of different concepts, of which variance $(\sigma^2)$ is only one. Levine and Roos (1997) also consider standard-deviation $(\sigma)$ a v | Is variation the same as variance?
Variation may be understood best as a general term for a class of different concepts, of which variance $(\sigma^2)$ is only one. Levine and Roos (1997) also consider standard-deviation $(\sigma)$ a variation concept, among others.
To demonstrate why the distinction might be important... | Is variation the same as variance?
Variation may be understood best as a general term for a class of different concepts, of which variance $(\sigma^2)$ is only one. Levine and Roos (1997) also consider standard-deviation $(\sigma)$ a v |
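A tiny numeric illustration of why the distinction matters (made-up data, population formulas): variance and standard deviation change under rescaling, while the coefficient of variation does not:

```python
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
print(np.var(x), np.std(x))              # variance = 4.0, SD = 2.0: two dispersion measures
print(np.std(x) / np.mean(x))            # coefficient of variation = 0.4
print(np.std(10 * x) / np.mean(10 * x))  # still 0.4: CV is scale-free, var/SD are not
```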
9,290 | Earth Mover's Distance (EMD) between two Gaussians | $\DeclareMathOperator\EMD{\mathrm{EMD}}
\DeclareMathOperator\E{\mathbb{E}}
\DeclareMathOperator\Var{Var}
\DeclareMathOperator\N{\mathcal{N}}
\DeclareMathOperator\tr{\mathrm{tr}}
\newcommand\R{\mathbb R}$The
earth mover's distance can be written as $\EMD(P, Q) = \inf \E \lVert X - Y \rVert$, where the infimum is taken o... | Earth Mover's Distance (EMD) between two Gaussians | $\DeclareMathOperator\EMD{\mathrm{EMD}}
\DeclareMathOperator\E{\mathbb{E}}
\DeclareMathOperator\Var{Var}
\DeclareMathOperator\N{\mathcal{N}}
\DeclareMathOperator\tr{\mathrm{tr}}
\newcommand\R{\mathbb | Earth Mover's Distance (EMD) between two Gaussians
$\DeclareMathOperator\EMD{\mathrm{EMD}}
\DeclareMathOperator\E{\mathbb{E}}
\DeclareMathOperator\Var{Var}
\DeclareMathOperator\N{\mathcal{N}}
\DeclareMathOperator\tr{\mathrm{tr}}
\newcommand\R{\mathbb R}$The
earth mover's distance can be written as $\EMD(P, Q) = \inf \E... | Earth Mover's Distance (EMD) between two Gaussians
$\DeclareMathOperator\EMD{\mathrm{EMD}}
\DeclareMathOperator\E{\mathbb{E}}
\DeclareMathOperator\Var{Var}
\DeclareMathOperator\N{\mathcal{N}}
\DeclareMathOperator\tr{\mathrm{tr}}
\newcommand\R{\mathbb |
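For reference alongside the truncated derivation above: the closely related 2-Wasserstein distance (which puts $\lVert X - Y \rVert^2$ inside the infimum) has a well-known closed form between Gaussians, $W_2^2 = \lVert m_1 - m_2 \rVert^2 + \mathrm{tr}\big(\Sigma_1 + \Sigma_2 - 2(\Sigma_2^{1/2}\Sigma_1\Sigma_2^{1/2})^{1/2}\big)$; a numeric sketch of that standard formula (not necessarily what the truncated answer derives):

```python
import numpy as np
from scipy.linalg import sqrtm

def w2_gaussian(m1, S1, m2, S2):
    rS2 = sqrtm(S2)
    cross = np.real(sqrtm(rS2 @ S1 @ rS2))  # real part guards tiny imaginary noise
    return np.sqrt(np.sum((m1 - m2) ** 2) + np.trace(S1 + S2 - 2 * cross))

m1, S1 = np.zeros(2), np.eye(2)
m2, S2 = np.ones(2), 2 * np.eye(2)
print(w2_gaussian(m1, S1, m2, S2))  # ~1.53 for these toy parameters
```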
9,291 | Who to follow on github to learn about best practice in data analysis? | Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but admittedly blind) trust in his best practices, particularly with respect to his own packages.
Plus, you get an early heads ... | Who to follow on github to learn about best practice in data analysis? | Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but adm | Who to follow on github to learn about best practice in data analysis?
Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but admittedly blind) trust in his best practices, parti... | Who to follow on github to learn about best practice in data analysis?
Hadley Wickham. He has several exploratory data analysis projects on Github that you can look at (e.g., "data-baby-names"), and given the awesomeness of ggplot2/plyr/reshape, I have a default (but adm |
9,292 | Who to follow on github to learn about best practice in data analysis? | I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R project;
log4r, a logging system. | Who to follow on github to learn about best practice in data analysis? | I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R project;
l | Who to follow on github to learn about best practice in data analysis?
I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R project;
log4r, a logging system. | Who to follow on github to learn about best practice in data analysis?
I also follow John Myles White's GitHub repository. There are several data-oriented projects, but also interesting stuff for R developers:
ProjectTemplate, a template system for building R project;
l |
9,293 | Who to follow on github to learn about best practice in data analysis? | Diego Valle Jones. His Github, especially analysis of homicides in Mexico is really interesting. | Who to follow on github to learn about best practice in data analysis? | Diego Valle Jones. His Github, especially analysis of homicides in Mexico is really interesting. | Who to follow on github to learn about best practice in data analysis?
Diego Valle Jones. His Github, especially analysis of homicides in Mexico is really interesting. | Who to follow on github to learn about best practice in data analysis?
Diego Valle Jones. His Github, especially analysis of homicides in Mexico is really interesting. |
9,294 | Who to follow on github to learn about best practice in data analysis? | If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlando (@ignaciorlando). They are great at that. Although you may be looking for something broader in terms of data analy... | Who to follow on github to learn about best practice in data analysis? | If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlan | Who to follow on github to learn about best practice in data analysis?
If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlando (@ignaciorlando). They are great at that. Alth... | Who to follow on github to learn about best practice in data analysis?
If you are dealing with clinical data (e.g., medical imaging, EMR, or physiologic monitoring data), you can follow Ramesh Sridharan (@rameshvs), Matteo Fumagalli (@mfumagalli), and José Ignacio Orlan |
9,295 | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value? | This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValidated. There is no context-free way to decide whether model metrics such as $R^2$ are good or not. At the extremes, it is ... | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value? | This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValida | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValidated. There is no context-f... | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This answer will mostly focus on $R^2$, but most of this logic extends to other metrics such as AUC and so on.
This question can almost certainly not be answered well for you by readers at CrossValida |
9,296 | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value? | This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists (63 responded) to find out what diagnostic plots and goodness-of-fit statistics they used, which were the most important... | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value? | This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists ( | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists (63 responded) to find out... | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
This problem comes up in my field of hydrology when assessing how well models predict streamflow from rainfall and climate data. Some researchers (Chiew and McMahon, 1993) surveyed 93 hydrologists (
9,297 | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value? | Just to add to the great answers above - in my experience, evaluation metrics and diagnostic tools are as good and honest as the person using them. That is, if you understand the mathematics behind them, then you can likely artificially increase them to make your model appear better without increasing its actual utilit... | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value? | Just to add to the great answers above - in my experience, evaluation metrics and diagnostic tools are as good and honest as the person using them. That is, if you understand the mathematics behind th | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
Just to add to the great answers above - in my experience, evaluation metrics and diagnostic tools are as good and honest as the person using them. That is, if you understand the mathematics behind them, then you can likely ar... | Is my model any good, based on the diagnostic metric ($R^2$/ AUC/ accuracy/ RMSE etc.) value?
Just to add to the great answers above - in my experience, evaluation metrics and diagnostic tools are as good and honest as the person using them. That is, if you understand the mathematics behind th |
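One standard instance of that warning (an editorial sketch with made-up data, not from the original answer): in-sample $R^2$ can only go up as predictors are added, so appending pure-noise columns "improves" the metric without improving the model at all.

```python
# In-sample R^2 inflates as pure-noise predictors are appended, even though
# the model's real predictive value is unchanged. Illustrative data only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=(n, 1))
y = 2.0 * x[:, 0] + rng.normal(size=n)              # one real signal + noise

for n_noise in (0, 10, 50, 90):
    noise = rng.normal(size=(n, n_noise))           # junk predictors
    X = np.hstack([x, noise])
    r2 = LinearRegression().fit(X, y).score(X, y)   # in-sample R^2
    print(f"{n_noise:2d} noise columns -> R^2 = {r2:.3f}")
```

Evaluating on held-out data (or at least reporting adjusted $R^2$) defends against this particular manipulation.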
9,298 | What is the statistical model behind the SVM algorithm? | You can often write a model that corresponds to a loss function (here I'm going to talk about SVM regression rather than SVM-classification; it's particularly simple)
For example, in a linear model, if your loss function is $\sum_i g(\varepsilon_i) = \sum_i g(y_i-x_i'\beta)$ then minimizing that will correspond to maxi... | What is the statistical model behind the SVM algorithm? | You can often write a model that corresponds to a loss function (here I'm going to talk about SVM regression rather than SVM-classification; it's particularly simple)
For example, in a linear model, i | What is the statistical model behind the SVM algorithm?
You can often write a model that corresponds to a loss function (here I'm going to talk about SVM regression rather than SVM-classification; it's particularly simple)
For example, in a linear model, if your loss function is $\sum_i g(\varepsilon_i) = \sum_i g(y_i-... | What is the statistical model behind the SVM algorithm?
You can often write a model that corresponds to a loss function (here I'm going to talk about SVM regression rather than SVM-classification; it's particularly simple)
For example, in a linear model, i |
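To make that correspondence concrete (an editorial sketch with simulated data, not part of the original answer): take $g$ to be the $\varepsilon$-insensitive loss of SVM regression, $g(e) = \max(0, |e| - \varepsilon)$. Minimizing $\sum_i g(y_i - x_i'\beta)$ is then maximum likelihood under an error density proportional to $\exp(-g(\varepsilon_i))$:

```python
# Minimizing the epsilon-insensitive loss sum_i g(y_i - x_i' beta) is the same
# as maximizing a likelihood whose error density is proportional to exp(-g(e)).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + 1 feature
beta_true = np.array([1.0, 3.0])
y = X @ beta_true + rng.laplace(scale=0.5, size=n)

eps = 0.1  # half-width of the insensitive tube

def objective(beta):
    e = y - X @ beta
    return np.maximum(0.0, np.abs(e) - eps).sum()   # negative log-likelihood + const

beta_hat = minimize(objective, x0=np.zeros(2), method="Nelder-Mead").x
print("estimated beta:", beta_hat)                   # close to beta_true
```

(An actual SVM adds a ridge penalty on $\beta$, which in this likelihood reading plays the role of a Gaussian prior.)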
9,299 | What is the statistical model behind the SVM algorithm? | I think someone already answered your literal question, but let me clear up a potential confusion.
Your question is somewhat similar to the following:
I have this function $f(x) = \ldots$ and I'm wondering what differential equation it is a solution to?
In other words, it certainly has a valid answer (perhaps even a ... | What is the statistical model behind the SVM algorithm? | I think someone already answered your literal question, but let me clear up a potential confusion.
Your question is somewhat similar to the following:
I have this function $f(x) = \ldots$ and I'm won | What is the statistical model behind the SVM algorithm?
I think someone already answered your literal question, but let me clear up a potential confusion.
Your question is somewhat similar to the following:
I have this function $f(x) = \ldots$ and I'm wondering what differential equation it is a solution to?
In other... | What is the statistical model behind the SVM algorithm?
I think someone already answered your literal question, but let me clear up a potential confusion.
Your question is somewhat similar to the following:
I have this function $f(x) = \ldots$ and I'm won |
9,300 | Statistical methods for data where only a minimum/maximum value is known | This is referred to as current status data. You get one cross-sectional view of the data, and regarding the response, all you know is that at the observed age of each subject, the event (in your case: transitioning from A to B) has happened or not. This is a special case of interval censoring.
To formally define it, l... | Statistical methods for data where only a minimum/maximum value is known | This is referred to as current status data. You get one cross-sectional view of the data, and regarding the response, all you know is that at the observed age of each subject, the event (in your case: | Statistical methods for data where only a minimum/maximum value is known
This is referred to as current status data. You get one cross-sectional view of the data, and regarding the response, all you know is that at the observed age of each subject, the event (in your case: transitioning from A to B) has happened or not... | Statistical methods for data where only a minimum/maximum value is known
This is referred to as current status data. You get one cross-sectional view of the data, and regarding the response, all you know is that at the observed age of each subject, the event (in your case:
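A small sketch of what estimation looks like with such data (an editorial illustration, not from the original answer): each subject contributes an inspection age $t_i$ and a status indicator $d_i = \mathbf{1}\{T_i \le t_i\}$, and the nonparametric MLE of $F(t) = P(T \le t)$ is the isotonic regression of the $d_i$ on the $t_i$ (the pool-adjacent-violators solution), available directly in scikit-learn:

```python
# Current status data: one inspection age per subject plus a yes/no indicator.
# The NPMLE of the event-time CDF is isotonic regression of d_i on t_i.
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(2)
n = 500
event_age = rng.exponential(scale=10.0, size=n)   # latent transition ages T_i
inspect_age = rng.uniform(0.0, 30.0, size=n)      # one inspection age per subject
d = (event_age <= inspect_age).astype(float)      # 1 if already transitioned

iso = IsotonicRegression(y_min=0.0, y_max=1.0, increasing=True)
F_hat = iso.fit_transform(inspect_age, d)         # NPMLE of F at each t_i

for t in (5.0, 10.0, 20.0):                        # compare with the true CDF
    i = np.argmin(np.abs(inspect_age - t))
    print(f"t={t:5.1f}  F_hat={F_hat[i]:.2f}  true={1 - np.exp(-t / 10.0):.2f}")
```

Note that the latent ages are never observed individually; only the pair $(t_i, d_i)$ enters the fit.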