8,001
Dimensionality reduction (SVD or PCA) on a large, sparse matrix
First of all, you really do want to center the data. If not, the geometric interpretation of PCA shows that the first principal component will be close to the vector of means and all subsequent PCs will be orthogonal to it, which will prevent them from approximating any PCs that happen to be close to that first vector....
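As a minimal sketch of this point (my own toy example, assuming scipy; the data are synthetic): the centered matrix $X - \mathbf{1}\mu^\top$ can be represented implicitly as a LinearOperator, so the sparse matrix is never densified, and a truncated SVD of that operator gives the PCs of the centered data.

```python
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import LinearOperator, svds

# Toy sparse data matrix (n observations x p features); any CSR matrix works.
X = sparse_random(1000, 500, density=0.01, format="csr", random_state=0)
mu = np.asarray(X.mean(axis=0)).ravel()          # dense column means, length p

# Represent the centered matrix X - 1*mu' implicitly, without densifying X.
def matvec(v):
    return X @ v - (mu @ v) * np.ones(X.shape[0])

def rmatvec(u):
    return X.T @ u - mu * u.sum()

Xc = LinearOperator(X.shape, matvec=matvec, rmatvec=rmatvec, dtype=float)

# Truncated SVD of the centered matrix = PCA; rows of Vt are the top PCs.
U, s, Vt = svds(Xc, k=10)
```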
8,002
What stop-criteria for agglomerative hierarchical clustering are used in practice?
The following Wikipedia entry actually does a pretty good job of explaining the most popular and relatively simple methods: Determining the number of clusters in a data set The Elbow Method heuristic described there is probably the most popular due to its simple explanation (amount of variance explained by number of ...
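A minimal sketch of the Elbow Method heuristic (my own illustration, assuming scikit-learn and synthetic blob data): compute the within-cluster sum of squares for each candidate number of clusters and look for the bend where further splits stop helping much.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Within-cluster sum of squares for each candidate k; the "elbow" is
# where adding another cluster stops reducing WSS appreciably.
for k in range(2, 10):
    labels = AgglomerativeClustering(n_clusters=k).fit_predict(X)
    wss = sum(((X[labels == c] - X[labels == c].mean(axis=0)) ** 2).sum()
              for c in range(k))
    print(k, round(wss, 1))
```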
8,003
What stop-criteria for agglomerative hierarchical clustering are used in practice?
It is rather difficult to provide a clear-cut solution about how to choose the "best" number of clusters in your data, whatever the clustering method you use, because Cluster Analysis seeks to isolate groups of statistical units (whether it be individuals or variables) for exploratory or descriptive purpose, essentiall...
8,004
What stop-criteria for agglomerative hierarchical clustering are used in practice?
I recently became fond of the clustergram visualization method (implemented in R). I use it as an extra method to assess a "good" number of clusters. Extending it to other clustering methods is not so hard (I actually did it, just didn't get to publish the code)
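The original clustergram is an R function; below is a simplified Python sketch of the same idea (a rough re-creation on synthetic data, not the author's code): for each k, every observation is plotted at the mean of a one-dimensional summary over its cluster, so the branching pattern shows how clusters split as k grows.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=1)
summary = X.mean(axis=1)              # 1-D summary of each observation
ks = range(1, 8)
paths = np.empty((len(X), len(ks)))

# For each k, replace every observation by the mean summary of its cluster;
# following an observation's path across k shows how clusters split.
for j, k in enumerate(ks):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    means = np.array([summary[labels == c].mean() for c in range(k)])
    paths[:, j] = means[labels]

plt.plot(list(ks), paths.T, color="grey", alpha=0.2)
plt.xlabel("number of clusters k")
plt.ylabel("cluster mean of summary")
plt.show()
```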
8,005
Real-life examples of common distributions
Wikipedia has a page that lists many probability distributions with links to more detail about each distribution. You can look through the list and follow the links to get a better feel for the types of applications that the different distributions are commonly used for. Just remember that these distributions are used...
8,006
Real-life examples of common distributions
Buy and read at least the first 6 chapters (first 218 pages) of William J. Feller "An Introduction to Probability Theory and Its Applications, Vol. 2" http://www.amazon.com/dp/0471257095/ref=rdr_ext_tmb . At least read all of the Problems for Solution, and preferably try solving as many as you can. You don't need to ...
8,007
Real-life examples of common distributions
Some common probability distributions; From here Uniform distribution (discrete) - You rolled one die and the probability of landing on any of 1, 2, 3, 4, 5 and 6 is equal. (from here) Uniform distribution (continuous) - You sprayed some very fine powder toward a wall. For a small area on the wall, the chances of falling d...
8,008
Real-life examples of common distributions
Asymptotic theory leads to the normal distribution, the extreme value types, the stable laws and the Poisson. The exponential and the Weibull tend to come up as parametric time to event distributions. In the case of the Weibull it is an extreme value type for the minimum of a sample. Related to the parametric models...
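A small simulation illustrating the extreme-value point for minima (my own toy example, assuming numpy/scipy): the rescaled minimum of $n$ iid Uniform(0,1) draws converges to an Exponential(1), which is a Weibull with shape 1.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 500, 5000

# n * min(U_1, ..., U_n) for Uniform(0,1) draws is approximately
# Exponential(1) -- a Weibull with shape 1, the extreme-value law for minima.
m = n * rng.uniform(size=(reps, n)).min(axis=1)
print(stats.kstest(m, "expon"))   # should typically not reject for large n
```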
8,009
Real-life examples of common distributions
Just to add to the other excellent answers. The Poisson distribution is useful whenever we have counting variables, as others have mentioned. But much more should be said! The Poisson arises asymptotically from a binomially distributed variable, when $n$ (the number of Bernoulli experiments) increases without bounds, a...
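A quick numerical check of this limit (an illustrative sketch, assuming scipy): the Binomial$(n, \lambda/n)$ pmf approaches the Poisson$(\lambda)$ pmf as $n$ grows.

```python
import numpy as np
from scipy import stats

lam = 3.0
k = np.arange(15)

# Max pointwise gap between Binomial(n, lam/n) and Poisson(lam) pmfs
# shrinks as n grows.
for n in (10, 100, 10000):
    err = np.abs(stats.binom.pmf(k, n, lam / n) - stats.poisson.pmf(k, lam)).max()
    print(n, err)
```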
8,010
Real-life examples of common distributions
Recently published research suggests that human performance is NOT normally distributed, contrary to common thought. Data from four fields were analyzed: (1) Academics in 50 disciplines, based on publishing frequency in the most pre-eminent discipline-specific journals. (2) Entertainers, such as actors, musicians and...
8,011
Real-life examples of common distributions
Cauchy distribution is often used in finance to model asset returns. Also noteworthy are Johnson’s Bounded and Unbounded distributions due to their flexibility (I’ve applied them in modeling asset prices, electricity generation and hydrology).
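As a quick illustration of why the Cauchy is a natural heavy-tail model (a toy simulation of my own, not from the answer): its running mean never settles down, because no finite mean exists.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_cauchy(100_000)

# The law of large numbers fails for the Cauchy: the running mean keeps
# jumping no matter how many draws accumulate.
running_mean = np.cumsum(x) / np.arange(1, len(x) + 1)
print(running_mean[[99, 999, 9999, 99999]])
```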
8,012
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
My personal appraisal of his arguments: Here he talks about using $p$ as evidence for the Null, whereas his thesis is that $p$ can't be used as evidence against the Null. So, I think this argument is largely irrelevant. I think this is a misunderstanding. Fisherian $p$ testing follows strongly in the idea of Popper's ...
8,013
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
The reason that arguments like Johansson's are recycled so often seems to be related to the fact that P-values are indices of the evidence against the null but are not measures of the evidence. The evidence has more dimensions than any single number can measure, and so there are always aspects of the relationship betwee...
8,014
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
Adding to @Momo's nice answer: Do not forget multiplicity. Given many independent p-values, and sparse non-trivial effect sizes, the smallest p-values are from the null, with probability tending to $1$ as the number of hypotheses increases. So if you tell me you have a small p-value, the first thing I want to know is h...
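A toy simulation of the multiplicity point (the counts and effect size below are illustrative choices of mine): with many true nulls and a few modest real effects, the very smallest p-values usually come from the nulls.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
m_null, m_alt, effect = 100_000, 50, 1.5   # sparse, modest effects

z = np.concatenate([rng.normal(0, 1, m_null),          # true nulls
                    rng.normal(effect, 1, m_alt)])     # real effects
p = stats.norm.sf(z)                                   # one-sided p-values

# With this many hypotheses, the extreme order statistics of the null
# z-scores tend to beat the modest real effects.
order = np.argsort(p)[:10]
print("nulls among the 10 smallest p-values:", (order < m_null).sum())
```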
8,015
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011
Is Johansson talking about p-values from two different experiments? If so, comparing p-values may be like comparing apples to lamb chops. If experiment "A" involves a huge number of samples, even a small inconsequential difference may be statistically significant. If experiment "B" involves only a few samples, an im...
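A sketch of the apples-to-lamb-chops point (synthetic data, my own illustration): the same tiny true effect yields a tiny p-value with a huge sample and a large one with a small sample.

```python
import numpy as np
from scipy import stats

# Identical (tiny) true effect, very different sample sizes: the big
# experiment produces a small p-value, the small one does not, so the two
# p-values are not directly comparable as "amounts of evidence".
rng = np.random.default_rng(7)
for n in (20, 200_000):
    x = rng.normal(0.01, 1.0, n)          # true mean 0.01, sd 1
    t, p = stats.ttest_1samp(x, 0.0)
    print(f"n={n:7d}  p={p:.3g}")
```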
8,016
Algorithm to dynamically monitor quantiles
The P2 algorithm is a nice find. It works by making several estimates of the quantile, updating them periodically, and using quadratic (not linear, not cubic) interpolation to estimate the quantile. The authors claim quadratic interpolation works better in the tails than linear interpolation and cubic would get too f...
8,017
Algorithm to dynamically monitor quantiles
I think whuber's suggestion is great and I would try that first. However, if you find you really can't accommodate the $O(\sqrt N)$ storage or it doesn't work out for some other reason, here is an idea for a different generalization of P2. It's not as detailed as what whuber suggests - more like a research idea instead ...
8,018
Algorithm to dynamically monitor quantiles
Press et al., Numerical Recipes 8.5.2 "Single-pass estimation of arbitrary quantiles", p. 435, give a C++ class IQAgent which updates a piecewise-linear approximate cdf.
8,019
Algorithm to dynamically monitor quantiles
I'd look at quantile regression. You can use it to determine a parametric estimate of whichever quantiles you want to look at. It makes no assumptions regarding normality, so it handles heteroskedasticity pretty well and can be used on a rolling-window basis. It's basically an L1-norm penalized regression, so it's not t...
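As a concrete starting point, quantile regression is available in Python via statsmodels' QuantReg (a sketch on synthetic heteroskedastic data; the rolling-window part is left out):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 500)
y = 2 + 0.5 * x + rng.normal(0, 1 + 0.3 * x)   # heteroskedastic noise

X = sm.add_constant(x)
# Fit the 0.9 conditional quantile; no normality assumption is needed.
res = sm.QuantReg(y, X).fit(q=0.9)
print(res.params)
```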
8,020
Algorithm to dynamically monitor quantiles
This can be adapted from algorithms that determine the median of a dataset online. For more information, see this stackoverflow post - https://stackoverflow.com/questions/1387497/find-median-value-from-a-growing-set
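One classic approach from that thread, sketched here in Python (the linked post discusses several variants): keep a max-heap of the lower half and a min-heap of the upper half, so the median is always available at the heap tops.

```python
import heapq

class RunningMedian:
    """Online median via two heaps: max-heap of the lower half
    (stored negated) and min-heap of the upper half."""
    def __init__(self):
        self.lo, self.hi = [], []

    def add(self, x):
        heapq.heappush(self.lo, -x)
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        if len(self.hi) > len(self.lo):      # rebalance: lo keeps the extra
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2

rm = RunningMedian()
for v in [5, 1, 9, 3, 7]:
    rm.add(v)
print(rm.median())   # 5
```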
8,021
Algorithm to dynamically monitor quantiles
It is possible to estimate (and track) quantiles on an on-line basis (the same applies to the parameters of a quantile regression). In essence, this boils down to stochastic gradient descent on the check-loss function which defines quantile-regression (quantiles being represented by a model containing only an intercep...
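A minimal sketch of this stochastic-gradient quantile tracker (intercept-only check loss; the learning rate and data below are illustrative): each observation above the current estimate pushes it up by lr·τ, each one below pushes it down by lr·(1−τ).

```python
import numpy as np

def track_quantile(stream, tau=0.95, lr=0.01):
    """Track the tau-quantile online via gradient descent on the check loss."""
    q = 0.0
    for x in stream:
        q += lr * (tau - (x < q))   # (x < q) is 0 or 1
        yield q

rng = np.random.default_rng(0)
est = list(track_quantile(rng.normal(size=100_000), tau=0.95))
print(est[-1], "vs true 0.95-quantile of N(0,1):", 1.6449)
```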
8,022
What is the standard error of the sample standard deviation?
Let $\mu_4 = E(X-\mu)^4$. Then, the formula for the SE of $s^2$ is: $$ se(s^2) = \sqrt{ \frac{1}{n}\left(\mu_4 -\frac{n-3}{n-1} \sigma^4\right)} $$ This is an exact formula, valid for any sample size and distribution, and is proved on page 438 of Rao (1973), assuming that $\mu_4$ is finite. The formula you gave in...
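A Monte Carlo check of the exact formula (my own toy setup, using Exponential(1) data, for which $\sigma^2 = 1$ and $\mu_4 = 9$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 30, 200_000
x = rng.exponential(scale=1.0, size=(reps, n))   # skewed, so mu4 matters

# Exponential(1): sigma^2 = 1 and mu4 = E(X - mu)^4 = 9.
sigma4, mu4 = 1.0, 9.0
se_formula = np.sqrt((mu4 - (n - 3) / (n - 1) * sigma4) / n)

s2 = x.var(axis=1, ddof=1)            # sample variances across replications
print(se_formula, s2.std(ddof=1))     # should agree closely
```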
8,023
How to train and validate a neural network model in R?
Max Kuhn's caret Manual - Model Building is a great starting point. I would think of the validation stage as occurring within the caret train() call, since it is choosing your hyperparameters of decay and size via bootstrapping or some other approach that you can specify via the trControl parameter. I call the data se...
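For readers working in Python rather than R, a rough scikit-learn analogue of this caret workflow (my own sketch, with hidden_layer_sizes/alpha standing in for caret's size/decay): the "validation" happens inside the search, and the held-out test set is touched only once at the end.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Cross-validation inside the search picks the hyperparameters; the test
# set is reserved for a single final assessment.
search = GridSearchCV(
    make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    param_grid={"mlpclassifier__hidden_layer_sizes": [(3,), (5,), (10,)],
                "mlpclassifier__alpha": [1e-4, 1e-2, 1.0]},
    cv=5)
search.fit(X_tr, y_tr)
print(search.best_params_, search.score(X_te, y_te))
```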
8,024
What is the fiducial argument and why has it not been accepted?
I am surprised that you don't consider us authorities. Here is a good reference: Encyclopedia of Biostatistics, Volume 2, page 1526; article titled "Fisher, Ronald Aylmer." Starting at the bottom of the first column on the page and going through most of the second column the authors Joan Fisher Box (R. A. Fisher's dau...
8,025
What is the fiducial argument and why has it not been accepted?
Fiducial inference sometimes interprets likelihoods as probabilities for the parameter $\theta$. That is, $M(x)L(\theta|x)$, provided that $M(x)$ is finite, is interpreted as a probability density function for $\theta$ in which $L(\theta|x)$ is the likelihood function of $\theta$ and $M(x)=(\int_{-\infty}^{\infty}L(\th...
8,026
What is the fiducial argument and why has it not been accepted?
Just to add to what is said, there was controversy between Fisher and Neyman about significance testing and interval estimation. Neyman defined confidence intervals while Fisher introduced fiducial intervals. They argued differently about their construction but the constructed intervals were usually the same. So the...
8,027
What is the fiducial argument and why has it not been accepted?
TL;DR The fiducial argument has not been accepted because the idea doesn't work. The fiducial distribution is disguised as something that looks like a probability distribution (and people might have wanted it to behave like a probability distribution) but it is not the same as a probability distribution. It is only a f...
8,028
What is the fiducial argument and why has it not been accepted?
In a large undergraduate class of engineering intro stats at Georgia Tech, when discussing confidence intervals for the population mean with variance known, one student asked me (in the language of MATLAB): "Can I calculate the interval as > norminv([alpha/2,1-alpha/2], barX, sigma/sqrt(n))?" In translation: could he t...
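In Python the student's call reads as follows (scipy; the numbers here are made up for illustration). Numerically it reproduces the standard z-interval $\bar x \pm z_{1-\alpha/2}\,\sigma/\sqrt{n}$, which is exactly what makes the fiducial reading tempting.

```python
import numpy as np
from scipy import stats

alpha, xbar, sigma, n = 0.05, 10.2, 2.0, 25

# Python rendering of the MATLAB call in the anecdote:
# norminv([alpha/2, 1-alpha/2], barX, sigma/sqrt(n))
ci = stats.norm.ppf([alpha / 2, 1 - alpha / 2],
                    loc=xbar, scale=sigma / np.sqrt(n))
print(ci)
```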
8,029
What are the differences between Logistic Function and Sigmoid Function?
Yes, the sigmoid function is a special case of the Logistic function when $L=1$, $k=1$, $x_0 =0$. If you play around with the parameters (Wolfram Alpha), you will see that $L$ is the maximum value the function can take. $e^{-k(x-x_0)}$ is always greater than or equal to 0, so the maximum point is achieved when it is 0, a...
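A two-line numerical confirmation of the special case (illustrative):

```python
import numpy as np

def logistic(x, L=1.0, k=1.0, x0=0.0):
    """General logistic function L / (1 + exp(-k (x - x0)))."""
    return L / (1.0 + np.exp(-k * (x - x0)))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = np.linspace(-6, 6, 7)
# With L=1, k=1, x0=0 the two functions coincide.
print(np.allclose(logistic(x), sigmoid(x)))   # True
```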
8,030
What are the differences between Logistic Function and Sigmoid Function?
The logistic function is: $$ f(x) = \frac{K}{1+Ce^{-rx}} $$ where $C$ is the constant from integration, $r$ is the proportionality constant, and $K$ is the threshold limit. Assuming the limits are between $0$ and $1$, we get $\frac{1}{1+e^{-x}}$ which is the sigmoid function.
8,031
What are the differences between Logistic Function and Sigmoid Function?
I would like to invert the accepted answer: rather than "the sigmoid function is a special case of the logistic function", I'd say "the logistic function is a special case of the sigmoid function". Every S-shaped, monotonically increasing function confined between bounds $a$ and $b$ is a sigmoid function.
8,032
Differences between a statistical model and a probability model?
A Probability Model consists of the triplet $(\Omega,{\mathcal F},{\mathbb P})$, where $\Omega$ is the sample space, ${\mathcal F}$ is a $\sigma$-algebra (events) and ${\mathbb P}$ is a probability measure on ${\mathcal F}$. Intuitive explanation. A probability model can be interpreted as a known random variable $X$. F...
8,033
Can PCA be applied for time series data?
One approach could be to take the first time differences of your 12 variables to ensure stationarity. Then calculate the $12\times12$ covariance matrix and perform PCA on it. This will be some sort of average PCA over the whole time span, and will not say anything about how the different timelags affect each other. But...
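A bare-bones numpy sketch of this recipe (my own illustration, using 12 synthetic random-walk series as in the question):

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 500, 12
X = np.cumsum(rng.normal(size=(T, p)), axis=0)   # 12 nonstationary series

dX = np.diff(X, axis=0)           # first differences -> roughly stationary
C = np.cov(dX, rowvar=False)      # 12 x 12 covariance of the differences

# PCA = eigendecomposition of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]
print(eigvals[order][:3])         # variance along the top 3 components
```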
8,034
Can PCA be applied for time series data?
Yes, PCA on time series is performed all the time in financial engineering (quantitative finance) and neurology. In financial engineering, the data matrix is constructed with assets (e.g., stocks) in columns which represent the features, and the rows representing e.g. days (or objects) for end-of-day trading. Thus, ...
8,035
How many stickers do I need to complete my FIFA Panini album?
That is a beautiful Coupon Collector's Problem, with a little twist introduced by the fact that stickers come in packs of 5. If the stickers were bought individually the results are known, as you can see here. All the estimates for a 90% upper bound for individually-bought stickers are also upper bounds for the problem ...
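A direct Monte Carlo attack on the twist (a sketch of mine; it assumes 424 distinct stickers, the album size used in the answer below, and packs of 5 distinct stickers):

```python
import numpy as np

rng = np.random.default_rng(0)
N_STICKERS, PACK = 424, 5

def packs_to_complete():
    have = np.zeros(N_STICKERS, dtype=bool)
    packs = 0
    while not have.all():
        # A pack holds 5 distinct stickers drawn uniformly from the set.
        have[rng.choice(N_STICKERS, size=PACK, replace=False)] = True
        packs += 1
    return packs

sims = [packs_to_complete() for _ in range(2000)]
print("mean packs:", np.mean(sims),
      "| 90% of collectors finish within:", np.percentile(sims, 90), "packs")
```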
8,036
How many stickers do I need to complete my FIFA Panini album?
The other day I came across a paper that addresses a closely related question: http://www.unige.ch/math/folks/velenik/Vulg/Paninimania.pdf If I have understood it correctly, the expected number of packs you would need to buy would be: $\binom{424}{5}\sum_{j=1}^{424}\left(-1\right)^{j+1}\frac{\binom{424}{j}}{\binom{424}...
8,037
Interpretation of mean absolute scaled error (MASE)
In the linked blog post, Rob Hyndman calls for entries to a tourism forecasting competition. Essentially, the blog post serves to draw attention to the relevant IJF article, an ungated version of which is linked to in the blog post. The benchmarks you refer to - 1.38 for monthly, 1.43 for quarterly and 2.28 for yearly ...
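For reference, MASE itself is nearly a one-liner (a sketch with made-up numbers, following Hyndman's definition: out-of-sample MAE scaled by the in-sample MAE of the naive method):

```python
import numpy as np

def mase(y_train, y_test, y_pred, m=1):
    """Mean absolute scaled error: forecast MAE divided by the in-sample
    MAE of the (seasonal) naive method at lag m; m=1 is the plain naive."""
    scale = np.mean(np.abs(y_train[m:] - y_train[:-m]))
    return np.mean(np.abs(y_test - y_pred)) / scale

# MASE < 1 means the forecast beat the in-sample naive method on MAE.
y_train = np.array([10.0, 12, 11, 13, 12, 14])
print(mase(y_train, np.array([13.0, 15]), np.array([12.5, 14])))
```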
8,038
Interpretation of mean absolute scaled error (MASE)
Not an answer, but a plot following Stephan Kolassa's call to "look at these series". Kaggle tourism1 has 518 yearly time series, for which we want to predict the last 4 values: The plot shows the errors from the "naive" constant predictor, here $5^{th}$ last: $Error4(y) \equiv \frac{1}{4} \sum_{\text{last } 4} |y_...
8,039
(Why) Has Kohonen-style SOM fallen out of favor?
I think you are on to something by noting the influence of what the machine learning community currently touts as the 'best' algorithms for dimensionality reduction. While t-SNE has shown its efficacy in competitions, such as the Merck Viz Challenge, I personally have had success implementing SOM for both feature extraction and ...
8,040
(Why) Has Kohonen-style SOM fallen out of favor?
I have done research on comparing SOMs with t-SNE and more and also proposed an improvement on SOM that takes it to a new level of efficiency. Please check it out here and let me know your feedback. Would love to get some idea on what people think about it and if it is worth publishing in python for people to use. IEEE...
8,041
(Why) Has Kohonen-style SOM fallen out of favor?
My subjective view is that SOMs are less well known and perceived as being less 'sexy' than many other methods, but are still highly relevant for certain classes of problems. It may well be the case that they would have a significant contribution to make if they were more widely used. They are invaluable in the early s...
8,042
How to build the final model and tune probability threshold after nested cross-validation?
Nested cross validation explained without nesting Here's how I see (nested) cross validation and model building. Note that I'm a chemist and, like you, look at the model building process from the application side (see below). My main point here is that, from my point of view, I don't need a dedicated nested variety of cross valid...
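A compact scikit-learn rendering of this view (my own sketch): the inner search is just part of model training, the outer cross-validation evaluates the whole procedure, and the production model comes from running that same procedure once on all the data.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, random_state=0)

# Inner loop: hyperparameter optimization, part of "training the model".
inner = GridSearchCV(SVC(), param_grid={"C": [0.1, 1, 10]}, cv=5)

# Outer loop: estimates the performance of the *whole procedure*
# (including the tuning), not of one particular fitted model.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())

# The production model: run the same procedure once on all the data.
final_model = inner.fit(X, y).best_estimator_
```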
8,043
How to build the final model and tune probability threshold after nested cross-validation?
So, first of all, this is an answer based on the one by @cbeleites above, this one here, and the question itself (all these contributions helped me understand). There is nothing original in it, and although it makes sense to me, I am still a student in this topic so I am not 100% sure of it. Therefore, any feedback is...
8,044
How to build the final model and tune probability threshold after nested cross-validation?
I think your following understanding is reasonable: Now, the links I've posted suggest that "the way to think of cross-validation is as estimating the performance obtained using a method for building a model, rather than for estimating the performance of a model". Given that, how should I interpret the result...
8,045
Should final (production ready) model be trained on complete data or just on training set?
You will almost always get a better model after refitting on the whole sample. But as others have said you have no validation. This is a fundamental flaw in the data splitting approach. Not only is data splitting a lost opportunity to directly model sample differences in an overall model, but it is unstable unless y...
8,046
Should final (production ready) model be trained on complete data or just on training set?
Unless you're limiting yourself to a simple class of convex models/loss functions, you're considerably better off keeping a final test split. Here's why: Let's say you collect iid sample pairs from your data generating distribution, some set of (x, y). You then split this up into a training and test set, and train a mo...
8,047
Should final (production ready) model be trained on complete data or just on training set?
You don't need to retrain. When you report your results, you always report test-data results, because they give a much better understanding: with a test data set we can see more accurately how well a model is likely to perform on out-of-sample data.
8,048
lme and lmer comparison
UPDATE JUNE 2016: Please see Ben's blog entry describing his current thoughts on accomplishing this in lme4: Braindump 01 June 2016 If you prefer Bayesian methods, the brms package's brm supports some correlation structures: CRAN brms page. (Note especially: "As of brms version 0.6.0, the AR structure refers to autoreg...
8,049
lme and lmer comparison
To answer your questions directly, and NB this is years after the original post! Yep there are still correlation structures that nlme handles which lme4 will not handle. However, for as long as nlme allows the user to define general corstrs and lme4 does not, this will be the case. This has surprisingly little practic...
8,050
Difference between Hidden Markov models and Particle Filter (and Kalman Filter)
It will be helpful to distinguish the model from the inference you want to make with it, because standard terminology now mixes the two. The model is the part where you specify the nature of: the hidden space (discrete or continuous), the hidden state dynamics (linear or non-linear), the nature of the observations (typica...
8,051
Why do smaller weights result in simpler models in regularization?
If you use regularization you're not only minimizing the in-sample error: you're targeting the bound $OutOfSampleError \le InSampleError + ModelComplexityPenalty$. More precisely, $J_{aug}(h(x),y,\lambda,\Omega)=J(h(x),y)+\frac{\lambda}{2m}\Omega$ for a hypothesis $h \in H$, where $\lambda$ is some parameter, usually $\lambda \in (0,1)$, $m$...
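As a quick numerical illustration of the complexity term (my sketch, mapping $\lambda$ to scikit-learn's alpha): the fitted weight norm, i.e. the $\Omega$ in $J_{aug}$, shrinks as the penalty grows.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.RandomState(0)
X = rng.normal(size=(100, 20))
y = X @ rng.normal(size=20) + rng.normal(size=100)

for lam in [0.01, 1.0, 10.0, 100.0]:
    w = Ridge(alpha=lam).fit(X, y).coef_
    print(f"lambda={lam:6.2f}  ||w||^2 = {np.sum(w**2):8.3f}")  # monotonically shrinking
```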
8,052
Why do smaller weights result in simpler models in regularization?
I'm not sure if I really know what I'm talking about, but I'll give it a shot. It isn't so much having small weights that prevents overfitting (I think); it is more the fact that regularizing more strongly reduces the model space. In fact you can regularize around 10000000 if you wanted to by taking the L2 norm of you...
8,053
Why do smaller weights result in simpler models in regularization?
Story: My grandma walks, but doesn't climb. Some grandmas do. One grandma was famous for climbing Kilimanjaro. That dormant volcano is big. It is 16,000 feet above its base. (Don't hate my imperial units.) It also has glaciers on the top, sometimes. If you climb in a year when there is no glacier, and you get to t...
8,054
Why do smaller weights result in simpler models in regularization?
A simple intuition is the following. Remember that for regularization the features should be standardized so that they have approximately the same scale. Let's say that the function being minimised is only the sum of squared errors, $SSE$. Adding more features will likely reduce this $SSE$, especially if the feature is selected ...
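A small numpy sketch of this trade-off (my construction): adding a pure-noise feature lowers the $SSE$ a little, while $\|w\|^2$, the quantity the penalty charges for, grows.

```python
import numpy as np

rng = np.random.RandomState(0)
n = 100
x1 = rng.normal(size=n)
noise_feat = rng.normal(size=n)        # irrelevant feature
y = 2 * x1 + rng.normal(size=n)

def sse_and_norm(X):
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ w) ** 2), np.sum(w ** 2)

for X, name in [(x1[:, None], "x1 only"),
                (np.column_stack([x1, noise_feat]), "x1 + noise")]:
    sse, norm = sse_and_norm(X)
    print(f"{name:11s}  SSE = {sse:7.2f}   ||w||^2 = {norm:.4f}")
```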
8,055
Why do smaller weights result in simpler models in regularization?
By adding Gaussian noise to the input, the learning model will behave like an L2-penalty regularizer. To see why, consider a linear regression where i.i.d. noise is added to the features. The loss will now be a function of the errors plus a contribution from the weight norm. See the derivation: https://www.youtube.com/watch?v=qw...
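For reference, the key step sketched under the usual assumptions (linear predictor $w^\top x$, noise $\varepsilon \sim N(0,\sigma^2 I)$ independent of $x$):
$$\mathbb{E}_{\varepsilon}\big[(y - w^\top(x+\varepsilon))^2\big] = (y - w^\top x)^2 - 2(y - w^\top x)\,w^\top\mathbb{E}[\varepsilon] + \mathbb{E}\big[(w^\top\varepsilon)^2\big] = (y - w^\top x)^2 + \sigma^2\lVert w\rVert^2,$$
since $\mathbb{E}[\varepsilon]=0$ and $\mathbb{E}[(w^\top\varepsilon)^2] = w^\top\mathbb{E}[\varepsilon\varepsilon^\top]\,w = \sigma^2\lVert w\rVert^2$: averaged over the noise, the loss is the clean squared error plus an L2 penalty with weight $\sigma^2$.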
8,056
How and why does Batch Normalization use moving averages to track the accuracy of the model as it trains?
When using batch normalization, the first thing we have to understand is that it works in two different ways in training and testing. In training we need to calculate the mini-batch mean in order to normalize the batch. In inference we just apply pre-calculated mini-batch statistics. So for the 2nd part, how to calcul...
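A minimal numpy sketch of the two modes (my illustration; the learned scale and shift parameters are omitted for brevity):

```python
import numpy as np

class BatchNorm1D:
    def __init__(self, dim, momentum=0.9, eps=1e-5):
        self.momentum, self.eps = momentum, eps
        self.running_mean = np.zeros(dim)
        self.running_var = np.ones(dim)

    def __call__(self, x, training):
        if training:
            mean, var = x.mean(axis=0), x.var(axis=0)
            # keep exponential moving averages for later use at test time
            self.running_mean = self.momentum * self.running_mean + (1 - self.momentum) * mean
            self.running_var = self.momentum * self.running_var + (1 - self.momentum) * var
        else:
            mean, var = self.running_mean, self.running_var
        return (x - mean) / np.sqrt(var + self.eps)

bn = BatchNorm1D(3)
for _ in range(100):                               # "training" phase
    bn(np.random.randn(32, 3) * 2 + 5, training=True)
print(bn.running_mean)                             # close to the true mean [5 5 5]
```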
8,057
How and why does Batch Normalization use moving averages to track the accuracy of the model as it trains?
They are talking about batch normalization, which they have described for the training procedure but not for inference. This is a process of normalizing the hidden units using sample means, etc. In this section they explain what to do for the inference stage, when you are just making predictions (i.e. after training has ...
8,058
How and why does Batch Normalization use moving averages to track the accuracy of the model as it trains?
In the paper you referenced, the suggested test time behavior is to compute sample mean and variance for each feature using a large number of training images rather than using a running average. This block of code

    running_mean = momentum * running_mean + (1 - momentum) * sample_mean
    running_var = momentum * running_v...
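For contrast, a sketch of the aggregation over many batches (my code; the $m/(m-1)$ factor is the unbiased-variance correction the paper applies):

```python
import numpy as np

rng = np.random.RandomState(0)
batches = [rng.randn(64, 3) * 2 + 5 for _ in range(200)]   # stand-in activations

means = np.stack([b.mean(axis=0) for b in batches])
vars_ = np.stack([b.var(axis=0) for b in batches])          # biased per-batch variances

m = batches[0].shape[0]                                     # batch size
pop_mean = means.mean(axis=0)
pop_var = (m / (m - 1)) * vars_.mean(axis=0)                # unbiased correction

print(pop_mean, pop_var)                                    # roughly [5 5 5] and [4 4 4]
```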
8,059
Machine learning techniques for parsing strings?
This can be seen as a sequence labeling problem, in which you have a sequence of tokens and want to give a classification for each one. You can use hidden Markov models (HMM) or conditional random fields (CRF) to solve the problem. There are good implementations of HMM and CRF in an open-source package called Mallet. I...
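To show the HMM side in miniature, here is a toy Viterbi decoder (my illustration, not Mallet; the states and all scores are made up):

```python
import numpy as np

states = ["STREET", "CITY", "ZIP"]

def viterbi(log_emit, log_trans, log_start):
    """log_emit: (T, S) per-token label scores; returns the best label sequence."""
    T, S = log_emit.shape
    score = log_start + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans           # cand[i, j]: come from i, go to j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):                   # follow back-pointers
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

# Fake scores for the tokens "1600 Pennsylvania Ave Washington 20500"
log_emit = np.log(np.array([[.8, .1, .1], [.7, .2, .1], [.8, .1, .1],
                            [.2, .7, .1], [.1, .1, .8]]))
log_trans = np.log(np.array([[.6, .3, .1], [.1, .6, .3], [.1, .2, .7]]))
print(viterbi(log_emit, log_trans, np.log(np.array([.8, .1, .1]))))
```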
8,060
Machine learning techniques for parsing strings?
This sounds like a problem to be solved with bidirectional LSTM classification. You tag each character of the sample as one category, for example

street: 1
city: 2
province: 3
postcode: 4
country: 5

1600 Pennsylvania Ave, Washington, DC 20500 USA
111111111111111111111, 2222222222, 33 44444 555

Now, train your classifi...
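A minimal PyTorch sketch of such a character tagger (my illustration; the layer sizes, 128-character vocabulary, and tag count are assumptions):

```python
import torch
import torch.nn as nn

class CharTagger(nn.Module):
    def __init__(self, n_chars=128, emb=32, hidden=64, n_tags=6):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)    # 2x: forward + backward states

    def forward(self, chars):                       # chars: (batch, seq_len) int codes
        h, _ = self.lstm(self.emb(chars))
        return self.out(h)                          # (batch, seq_len, n_tags)

model = CharTagger()
x = torch.randint(0, 128, (1, 40))                  # one 40-character address
print(model(x).shape)                               # torch.Size([1, 40, 6]); train with
                                                    # a per-character cross-entropy loss
```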
8,061
Machine learning techniques for parsing strings?
I had to solve a very similar problem: validating whether an address is valid or invalid. Typically addresses have the structure "1600 Pennsylvania Ave, Washington DC, 20500". A string such as "I went down 2000 steps and reached Pennsylvania Ave in Washington DC." is not a valid address. This can be solved by classificati...
8,062
Machine learning techniques for parsing strings?
This is a bit of a hack that does not require your own solution: reverse geocoding. This can either give you cleaner data or actually do all the work for you. For example, here's some Stata code with geocode3 from SSC, which uses Google. I guess this is similar to Fuzzy Gazetteer. The first address is pretty messy, the...
8,063
Negative binomial regression question - is it a poor model?
I dispute the assertions from several points of view: i) While the canonical link may well be 'problematic', it's not immediately obvious that someone will be interested in that link - whereas, for example, the log-link in the Poisson is often both convenient and natural, and so people are often interested in that. Eve...
8,064
Degrees of freedom of $\chi^2$ in Hosmer-Lemeshow test
Hosmer D.W., Lemeshow S. (1980), A goodness-of-fit test for the multiple logistic regression model. Communications in Statistics, A10, 1043-1069 show that: If the model is a logistic regression model and the $p$ parameters are estimated by maximum likelihood and the $G$ groups are defined on the estimated probabi...
8,065
Degrees of freedom of $\chi^2$ in Hosmer-Lemeshow test
The theorem that you refer to (the usual reduction of degrees of freedom due to estimated parameters) has been mostly advocated by R.A. Fisher. In 'On the interpretation of Chi Square from Contingency Tables, and the Calculation of P' (1922) he argued for using the $(R-1)(C-1)$ rule, and in 'The g...
8,066
Can you be 93,75% confident from a random sample of only five from a population of 10 000?
Let's ignore the numbers for a bit. If we draw five observations from the population, the probability that all five observations are above the median is $\left({1\over 2}\right)^5 = 1/32 = 0.03125$, and similarly for the probability that all five observations are below the median. As the events "above the median" and...
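A quick simulation confirming the arithmetic (my sketch; it samples with replacement, which is negligible at this population size):

```python
import numpy as np

rng = np.random.RandomState(0)
population = rng.lognormal(size=10_000)      # any continuous population works
median = np.median(population)

idx = rng.randint(0, population.size, size=(100_000, 5))
samples = population[idx]                    # 100k samples of size 5
covered = (samples.min(axis=1) < median) & (samples.max(axis=1) > median)
print(covered.mean())                        # close to 1 - 2*(1/2)^5 = 0.9375
```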
8,067
Can you be 93,75% confident from a random sample of only five from a population of 10 000?
Yes, this really works, under certain conditions, with a couple of caveats. Random selection: you can't just ask any 5 people; they would need to be randomly selected from the population whose median you want an interval for. Understanding what a confidence interval means: the interval for a parameter will have a cert...
8,068
Can you be 93,75% confident from a random sample of only five from a population of 10 000?
The other answers have this exactly correct, but I'll explain why it seems so surprising. The trick is that the way the problem is posed hides the goalposts a little bit. We know we have a tiny sample and a high-confidence CI, but the problem sort of glosses over the fact that when choosing even just 5 individuals, the...
8,069
Is there a plateau-shaped distribution?
You may be looking for the distribution known under the names of generalized normal (version 1), Subbotin distribution, or exponential power distribution. It is parametrized by location $\mu$, scale $\sigma$ and shape $\beta$ with pdf $$ \frac{\beta}{2\sigma\Gamma(1/\beta)} \exp\left[-\left(\frac{|x-\mu|}{\sigma}\right)^{\beta}\right] $$ ...
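This family ships with scipy as scipy.stats.gennorm (shape parameter $\beta$), so it is easy to see the top flatten as $\beta$ grows; a quick sketch:

```python
import numpy as np
from scipy.stats import gennorm

x = np.linspace(-2, 2, 9)
for beta in [2, 4, 8]:                       # beta=2 is the Gaussian
    print(f"beta={beta}:", np.round(gennorm.pdf(x, beta), 3))
```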
8,070
Is there a plateau-shaped distribution?
@StrongBad's comment is a really good suggestion. The sum of a uniform RV and a Gaussian RV can give you exactly what you're looking for if you pick the parameters right. And it actually has a reasonably nice closed-form solution. The pdf of this variable is given by the expression: $$\dfrac{1}{4a}\left[\mathrm{erf}\left(\frac{x+a}{\sigma\sqrt{2}}\right)-\mathrm{erf}\left(\frac{x-a}{\sigma\sqrt{2}}\right)\right]$$ (writing the uniform as supported on $[-a,a]$ and the Gaussian standard deviation as $\sigma$) ...
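A numerical check of this closed form (my sketch, with the same $a$ and $\sigma$ convention as above):

```python
import numpy as np
from scipy.special import erf

a, sigma = 2.0, 0.5                          # uniform half-width, Gaussian sd

def pdf(x):
    return (erf((x + a) / (sigma * np.sqrt(2)))
            - erf((x - a) / (sigma * np.sqrt(2)))) / (4 * a)

x = np.linspace(-6, 6, 2001)
print("integral:", np.trapz(pdf(x), x))      # ~ 1.0
print("plateau height:", pdf(0.0), "vs 1/(2a) =", 1 / (2 * a))
```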
8,071
Is there a plateau-shaped distribution?
There's an infinite number of "plateau-shaped" distributions. Were you after something more specific than "in between the Gaussian and the uniform"? That's somewhat vague. Here's one easy one: you could always stick a half-normal at each end of a uniform: You can control the "width" of the uniform relative to the sca...
8,072
Is there a plateau-shaped distribution?
See my "Devil's tower" distribution in here [1]: $f(x) = 0.3334$, for $|x| < 0.9399$; $f(x) = 0.2945/x^2$, for $0.9399 \leq |x| < 2.3242$; and $f(x) = 0$, for $2.3242 \leq |x|$. The "slip-dress"distribution is even more interesting. It is easy to construct distributions having whatever shape you want. [1]: Westfall, ...
8,073
Is there a plateau-shaped distribution?
Lots of nice answers. The solution proffered here has 2 features: (i) that it has a particularly simple functional form, and (ii) that the resulting distribution necessarily produces a plateau-shaped pdf (not just as a special case). I'm not sure if this already has a name in the literature, but absent same, let us cal...
8,074
Is there a plateau-shaped distribution?
Another one (EDIT: I simplified it now. EDIT2: I simplified it even further, though now the picture doesn't really reflect this exact equation): $$f(x) = \frac{1}{3 \cdot \alpha} \cdot \log{\left( \frac{\cosh{\left(\alpha \cdot a\right)}+ \cosh{\left(\alpha \cdot x\right)}} {\cosh{\left(\alpha \cdot b\right)}...
8,075
Is there a plateau-shaped distribution?
If you are looking for something very simple, with a central plateau and the sides of a triangle distribution, you can for instance combine N triangle distributions, N depending on the desired ratio between the plateau and the descent. Why triangles? Because their sampling functions already exist in most languages. You...
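One way to read this in code (my interpretation: shifted triangles with spacing equal to their half-width tile into a flat top, so sample a triangle uniformly and then sample from it):

```python
import numpy as np

rng = np.random.RandomState(0)
N, w = 5, 1.0                                 # five triangles, half-width 1
modes = np.arange(N) * w                      # evenly spaced peaks

k = rng.randint(0, N, size=100_000)           # which triangle each draw uses
samples = rng.triangular(modes[k] - w, modes[k], modes[k] + w)

hist, _ = np.histogram(samples, bins=40, density=True)
print(np.round(hist, 2))                      # roughly flat away from the two ramps
```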
8,076
Is there a plateau-shaped distribution?
Here's a pretty one: the product of two logistic functions.

    (1/B) * 1/(1+exp(A*(x-B))) * 1/(1+exp(-A*(x+B)))

This has the benefit of not being piecewise. B adjusts the width and A adjusts the steepness of the drop-off. Shown below are B=1:6 with A=2. Note: I haven't taken the time to figure out how to properly normali...
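Since the constant is left open above, it is easy to get numerically (a quick sketch): divide by the integral of the unnormalized curve.

```python
import numpy as np

A, B = 2.0, 3.0

def f(x):
    return 1 / (1 + np.exp(A * (x - B))) / (1 + np.exp(-A * (x + B)))

x = np.linspace(-15, 15, 10_001)
Z = np.trapz(f(x), x)                        # normalizing constant
print("Z =", Z, "-> normalized integral:", np.trapz(f(x) / Z, x))
```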
8,077
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
It's called the Gambler's fallacy.
8,078
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
The first sentence of this question incorporates another (related) fallacy: "As we all know, if you flip a coin that has an equal chance of landing heads as it does tails, then if you flip the coin many times, half the time you will get heads and half the time you will get tails." No, we won't get that, we won...
8,079
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
This fallacy has many names. 1) It's probably best known as the Gambler's fallacy. 2) It's also sometimes called the 'law of small numbers' (also see here), because it relates to the idea that the population characteristics must be reflected in small samples - which I think is a neat name for its contrast with the law...
8,080
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
Just to note that if you get a huge run of heads or tails in a row, you may be better off revisiting your prior assumption that the coin was fair.
8,081
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
Are you thinking of 'stochastic'? The flip of a fair coin (or the roll of a fair die) is stochastic (i.e. independent) in the sense that it does not depend on a previous flip of the coin. Assuming a fair coin, the fact that the coin had been flipped a hundred times with a hundred heads resulting does not change the fact ...
8,082
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
Adding on to Glen_b's and Alecos's responses, let's define $X_n$ to be the number of heads in the first $n$ trials. A familiar result using the normal approximation to the binomial is that $X_n$ is approximately $N(n/2, \sqrt{n/4})$. Now, before observing the first 100 tosses, your friend is correct that there is a g...
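For a sense of scale, exact binomial probabilities (my addition, using scipy): the chance of exactly $n/2$ heads shrinks like $1/\sqrt{n}$ even as the fraction of heads concentrates at $1/2$.

```python
from scipy.stats import binom

for n in [10, 100, 1000]:
    print(n, binom.pmf(n // 2, n, 0.5))      # 0.246, 0.0796, 0.0252
```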
8,083
What is the name of the statistical fallacy whereby outcomes of previous coin flips influence beliefs about subsequent coin flips?
You are referring to the Gambler's fallacy, although this is not entirely correct. Indeed, if phrased as "given an assumed fair coin and one observes a given sequence of outcomes, what is the estimate of the elementary probabilities of the coin", this becomes more apparent. Indeed the "fallacy" is related only to (assumed)...
8,084
What can cause PCA to worsen results of a classifier?
Consider a simple case, lifted from a terrific and undervalued article "A Note on the Use of Principal Components in Regression". Suppose you only have two (scaled and de-meaned) features, denote them $x_1$ and $x_2$, with positive correlation equal to 0.5, aligned in $X$, and a third response variable $Y$ you wish to ...
8,085
What can cause PCA to worsen results of a classifier?
There is a simple geometric explanation. Try the following example in R and recall that the first principal component maximizes variance.

    library(ggplot2)
    n <- 400
    z <- matrix(rnorm(n * 2), nrow = n, ncol = 2)
    y <- sample(c(-1, 1), size = n, replace = TRUE)

    # PCA helps
    df.good <- data.frame(
      y = as.factor(y), ...
8,086
What can cause PCA to worsen results of a classifier?
PCA is linear; it hurts when you want to see non-linear dependencies. PCA on images as vectors: A non-linear algorithm (NLDR) which reduces images to 2 dimensions, rotation and scale: More information: http://en.wikipedia.org/wiki/Nonlinear_dimensionality_reduction
8,087
What can cause PCA to worsen results of a classifier?
Suppose a simple case with 3 independent variables $x_1,x_2,x_3$ and the output $y$, and suppose now that $x_3=y$, so you should be able to get a zero-error model. Suppose now that in the training set the variation of $y$ is very small, and so also the variation of $x_3$. Now if you run PCA and you decide to select o...
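A concrete version of this scenario (my sketch, assuming scikit-learn): the first component is dominated by the high-variance noise features and carries essentially none of the predictive $x_3$.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
y = 0.01 * rng.normal(size=500)              # target with tiny variance
X = np.column_stack([rng.normal(size=500),   # x1: pure noise, large variance
                     rng.normal(size=500),   # x2: pure noise, large variance
                     y])                     # x3 = y exactly

pca = PCA(n_components=1).fit(X)
print("loading on x3:", pca.components_[0][2])                        # ~ 0
print("corr(PC1, y):", np.corrcoef(pca.transform(X)[:, 0], y)[0, 1])  # ~ 0
```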
8,088
What can cause PCA to worsen results of a classifier?
I see the question already has an accepted answer but wanted to share this paper that talks about using PCA for feature transformation before classification. The take-home message (which is visualised beautifully in @vqv's answer) is: Principal Component Analysis (PCA) is based on extracting the axes on which data s...
8,089
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
There's a difference between not looking and therefore not seeing any X, and looking and not seeing any X. The latter is 'evidence', the former is not. So the hypothesis under test is "There is a unicorn in that field behind the hill." Alice stays where she is and doesn't look. If there is a unicorn in the field, Alice...
8,090
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
Bayes says he’s wrong. If observation $O$ would provide support for theory $T$, $$ P(T|O) > P(T) $$ then failure to observe that, which I denote $\bar O$, must disfavor $T$, $$ P(T|\bar O) < P(T) $$ Note that we required no special assumptions. To see this, note that by Bayes’ theorem, the first inequality implies $$...
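The omitted steps are the standard ones; sketching them: by Bayes' theorem, $P(T\mid O) > P(T)$ is equivalent to $P(O\mid T) > P(O)$, and taking complements,
$$P(O\mid T) > P(O) \;\Longleftrightarrow\; 1 - P(\bar O\mid T) > 1 - P(\bar O) \;\Longleftrightarrow\; P(\bar O\mid T) < P(\bar O) \;\Longleftrightarrow\; P(T\mid \bar O) < P(T),$$
where the last step applies Bayes' theorem again (assuming $0 < P(O) < 1$ and $P(T) > 0$).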
8,091
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
There's an important point missing here, but it's not strictly speaking a statistical one. Cosmologists can't run experiments. Absence of evidence in cosmology means there's no evidence available to us here on or near earth, observing the cosmos through instruments. Experimental scientists have a lot more freedom to g...
8,092
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
$ {\newcommand\P[1]{\operatorname{P} \left(#1\right) }} {\newcommand\PC[2]{\P{#1 \, \middle| \, #2}}} {\newcommand\A[0]{\text{no evidence}}} {\newcommand\B[0]{\text{absence}}} {\newcommand\PA[0]{\P{\A}}} {\newcommand\PB[0]{\P{\B}}} {\newcommand\PAB[0]{\PC{\A}{\B}}} {\newcommand\PBA[0]{\PC{\B}{\A}}} $tl;dr– Absence-of-...
8,093
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
I think that @Nat's answer is good, but understates the importance of the case where $P(\mbox{absence}) \approx P(\mbox{absence} | \mbox{no evidence})$ (or not-quite-but-almost-equivalently $P(\text{no evidence}|\text{absence}) \approx P(\text{no evidence})$). The big problem here is that, as a general rule, it is not ...
8,094
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
Suppose absence of evidence were not evidence of absence, i.e. $$P(\text{absence} \mid \text{no evidence}) = P(\text{absence}).$$ Then by Bayes' theorem we would have $$P(\text{absence}) = \frac{P(\text{no evidence} \mid \text{absence})\,P(\text{absence})}{P(\text{no evidence})}.$$ Multiplying both sides by $P(\text{no evidence})/P(\text{absence})$ (assuming $P(\text{absence}) > 0$) gives $$P(\text{no evidence}) = P(\text{no evidence} \mid \text{absence}),$$ which means absen...
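A quick sanity check of that algebra with toy numbers of my own: if we build a joint distribution in which absence and "no evidence" are independent, the assumed equality and the derived one hold together, exactly as the manipulation predicts.

    from fractions import Fraction

    # Toy joint distribution in which absence and "no evidence" are independent.
    p_absence = Fraction(3, 10)
    p_noev = Fraction(7, 10)
    p_both = p_absence * p_noev                # P(absence and no evidence)

    p_absence_given_noev = p_both / p_noev     # the assumption: equals P(absence)
    p_noev_given_absence = p_both / p_absence  # the conclusion: equals P(no evidence)

    assert p_absence_given_noev == p_absence
    assert p_noev_given_absence == p_noev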
8,095
Absence of evidence is not evidence of absence: What does Bayesian probability have to say about it?
I believe this is a very philosophical question, and I doubt Bayesian theory is applicable to it. What do we mean by a "probability" of a dragon in the garage, a teapot between Earth and Mars, or extraterrestrial life? They either exist or they don't; their existence is not the realisation of a random variable. To drive the idea to the e...
8,096
Shouldn't the joint probability of 2 independent events be equal to zero?
There is a difference between

- independent events: $\mathbb P(A \cap B) = \mathbb P(A)\,\mathbb P(B)$, i.e. $\mathbb P(A \mid B) = \mathbb P(A)$, so knowing one happened gives no information about whether the other happened;
- mutually disjoint events: $\mathbb P(A \cap B) = 0$, i.e. $\mathbb P(A \mid B) = 0$, so knowing one ...
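A concrete check of both definitions on a fair die (the events below are my own illustration, not from the answer):

    from fractions import Fraction

    def prob(event):
        # Probability of an event (a set of faces) under a fair six-sided die.
        return Fraction(len(event), 6)

    A = {2, 4, 6}     # "roll is even"
    B = {1, 2, 3, 4}  # "roll is at most 4"
    C, D = {1}, {2}   # "roll is 1", "roll is 2"

    # Independent: P(A and B) = P(A) * P(B) = 1/3, which is not zero.
    assert prob(A & B) == prob(A) * prob(B) == Fraction(1, 3)

    # Disjoint: P(C and D) = 0, even though P(C) * P(D) = 1/36 is not.
    assert prob(C & D) == 0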
8,097
Shouldn't the joint probability of 2 independent events be equal to zero?
What I understood from your question is that you might have confused independent events with disjoint events.

disjoint events: Two events are called disjoint or mutually exclusive if they cannot both happen. For instance, if we roll a die, the outcomes 1 and 2 are disjoint since they cannot both occur. On the other ha...
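A short simulation sketch of the die example (again my own illustration): a single roll can never show 1 and 2 at once, while the parities of two separate rolls co-occur at about the product of their individual frequencies.

    import random

    random.seed(0)
    n = 100_000
    disjoint_hits = 0  # one roll shows 1 AND shows 2 -- impossible
    indep_hits = 0     # first roll even AND second roll even -- independent

    for _ in range(n):
        r1, r2 = random.randint(1, 6), random.randint(1, 6)
        if r1 == 1 and r1 == 2:
            disjoint_hits += 1
        if r1 % 2 == 0 and r2 % 2 == 0:
            indep_hits += 1

    print(disjoint_hits / n)  # exactly 0: disjoint events never happen together
    print(indep_hits / n)     # close to 0.25 = (1/2) * (1/2)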
8,098
Shouldn't the joint probability of 2 independent events be equal to zero?
The confusion of the OP lies in the notions of disjoint events and independent events. One simple and intuitive description of independence is: A and B are independent if knowing that A happened gives you no information about whether or not B happened. Or in other words, A and B are independent if knowing that A ...
8,099
In Regression Analysis, why do we call independent variables "independent"?
If we pull back from today's emphasis on machine learning and recall how much of statistical analysis was developed for controlled experimental studies, the phrase "independent variables" makes a good deal of sense. In controlled experimental studies, the choices of a drug and its concentrations, or the choices of a fe...
8,100
In Regression Analysis, why do we call independent variables "independent"?
In many ways, "independent variable" is an unfortunate choice. The variables need not be independent of each other, and of course need not be independent of the dependent variable $Y$. In teaching and in my book Regression Modeling Strategies I use the word predictor. In some situations that word is not strong enoug...