19,901
Question about a normal equation proof
It is easy to show (try it yourself, for an arbitrary number of points $n$) that the inverse of $X^T X$ exists if there are at least two distinct $x$-values (predictors) in the sample set. Only if all your data share the same value $x_i = x$ (i.e., the points are stacked in the $y$-direction, along a vertical line) does any line drawn through their mean $\overline{y}$ have an arbitrary slope (regression coefficient), so that the LSE regression line is then not unique.
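A quick numpy check of this claim (my own sketch, not part of the answer): for a line fit with an intercept, $X^T X$ is invertible exactly when the sample contains at least two distinct $x$-values.

```python
import numpy as np

# Design matrix for a line fit: an intercept column plus the x-values.
# Two distinct x-values -> X^T X has full rank (invertible);
# identical x-values -> X^T X is singular and the slope is not identifiable.
def xtx_rank(xs):
    X = np.column_stack([np.ones(len(xs)), np.asarray(xs, dtype=float)])
    return np.linalg.matrix_rank(X.T @ X)

print(xtx_rank([1.0, 2.0, 2.0]))  # 2: invertible, unique LSE line
print(xtx_rank([3.0, 3.0, 3.0]))  # 1: singular, arbitrary slope
```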
19,902
Question about a normal equation proof
In typical regression, $X$ is skinny (more rows than columns) and therefore certainly not invertible, though it may be left invertible. It's straightforward to prove (ask if you need help) that if $X$ is skinny and left invertible, then $X^T X$ is invertible. In that case there is exactly one solution. And if $X$ doesn't have full column rank, then $X^T X$ is not full rank either, and you have an underdetermined system.
19,903
Variance of product of k correlated random variables
More information on this topic than you probably require can be found in Goodman (1962), "The Variance of the Product of K Random Variables", which derives formulae for both independent and potentially correlated random variables, along with some approximations. An earlier paper (Goodman, 1960) derives the formula for the product of exactly two random variables, which is somewhat simpler (though still pretty gnarly), so that might be a better place to start if you want to understand the derivation. For completeness, though, it goes like this.

Two variables

Assume the following:

- $x$ and $y$ are two random variables
- $X$ and $Y$ are their (non-zero) expectations
- $V(x)$ and $V(y)$ are their variances
- $\delta_x = (x-X)/X$ (and likewise for $\delta_y$)
- $D_{i,j} = E \left[ (\delta_x)^i (\delta_y)^j\right]$
- $\Delta_x = x-X$ (and likewise for $\Delta_y$)
- $E_{i,j} = E\left[(\Delta_x)^i (\Delta_y)^j\right]$
- $G(x)$ is the squared coefficient of variation, $V(x)/X^2$ (and likewise for $G(y)$)

Then:
$$V(xy) = (XY)^2\left[G(y) + G(x) + 2D_{1,1} + 2D_{1,2} + 2D_{2,1} + D_{2,2} - D_{1,1}^2\right]$$
or equivalently:
$$V(xy) = X^2V(y) + Y^2V(x) + 2XYE_{1,1} + 2XE_{1,2} + 2YE_{2,1} + E_{2,2} - E_{1,1}^2$$

More than two variables

The 1960 paper leaves this as an exercise for the reader (which appears to have motivated the 1962 paper!). The notation is similar, with a few extensions:

- $(x_1, x_2, \ldots, x_k)$ are the random variables instead of $x$ and $y$
- $M = E\left( \prod_{i=1}^k x_i \right)$
- $A = \left(M / \prod_{i=1}^k X_i\right) - 1$
- $s_i = 0$, $1$, or $2$ for $i = 1, 2, \ldots, k$
- $u$ = number of 1's in $(s_1, s_2, \ldots, s_k)$
- $m$ = number of 2's in $(s_1, s_2, \ldots, s_k)$
- $D(u,m) = 2^u - 2$ for $m=0$ and $2^u$ for $m>0$
- $C(s_1, s_2, \ldots, s_k) = D(u,m) \cdot E \left( \prod_{i=1}^k \delta_{x_i}^{s_i} \right)$
- $\sum_{s_1 \cdots s_k}$ indicates summation over the $3^k - k - 1$ sets of $(s_1, s_2, \ldots, s_k)$ for which $2m + u > 1$

Then, at long last:
$$ V\left(\prod_{i=1}^k x_i\right) = \prod X_i^2 \left( \sum_{s_1 \cdots s_k} C(s_1, s_2, \ldots, s_k) - A^2\right)$$

See the papers for details and slightly more tractable approximations!
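As a sanity check (my own sketch, not from Goodman's papers): the second form of the two-variable formula is an algebraic identity in the central moments, so it holds exactly, up to floating-point error, when all moments are computed about the sample means (ddof = 0), even for correlated variables.

```python
import numpy as np

# Correlated bivariate normal sample, means and covariance chosen arbitrarily.
rng = np.random.default_rng(42)
z = rng.multivariate_normal([2.0, 3.0], [[1.0, 0.6], [0.6, 2.0]], size=10_000)
x, y = z[:, 0], z[:, 1]

X, Y = x.mean(), y.mean()
dx, dy = x - X, y - Y
E = lambda i, j: np.mean(dx**i * dy**j)    # central mixed moments E_{i,j}

lhs = np.var(x * y)                        # V(xy) with ddof=0
rhs = (X**2 * E(0, 2) + Y**2 * E(2, 0) + 2 * X * Y * E(1, 1)
       + 2 * X * E(1, 2) + 2 * Y * E(2, 1) + E(2, 2) - E(1, 1)**2)
assert np.isclose(lhs, rhs)                # exact identity on sample moments
```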
19,904
Variance of product of k correlated random variables
Just to add to the awesome answer of Matt Krause (this is in fact easily derivable from it): if $x$ and $y$ are independent, then \begin{equation*} \begin{split} E_{1,1} &= E[(x-E[x])(y-E[y])] = {\rm Cov}(x,y) = 0\\ E_{1,2} &= E[(x-E[x])(y-E[y])^2] \\ &= E[x-E[x]]\,E[(y-E[y])^2] \\ &= (E[x]-E[x])E[(y-E[y])^2]=0\\ E_{2,1} &= 0 \text{ (by symmetry)}\\ E_{2,2} &= E[(x-E[x])^2(y-E[y])^2]\\ &= E[(x-E[x])^2]\,E[(y-E[y])^2]\\ &= V[x]V[y]\\ V[xy] &= E[x]^2 V[y] + E[y]^2 V[x] + V[x]V[y] \end{split} \end{equation*}
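A Monte Carlo sketch of the independent-variable result above (my own illustration, with arbitrarily chosen normal distributions):

```python
import numpy as np

# Independent x ~ N(2, 3) and y ~ N(5, 7); the special-case formula gives
# V[xy] = E[x]^2 V[y] + E[y]^2 V[x] + V[x] V[y] = 4*7 + 25*3 + 21 = 124.
rng = np.random.default_rng(1)
n = 200_000
x = rng.normal(2.0, np.sqrt(3.0), n)   # E[x] = 2, V[x] = 3
y = rng.normal(5.0, np.sqrt(7.0), n)   # E[y] = 5, V[y] = 7, independent of x

theory = 2.0**2 * 7.0 + 5.0**2 * 3.0 + 3.0 * 7.0   # = 124
assert np.isclose(np.var(x * y), theory, rtol=0.05)
```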
19,905
Variance of product of k correlated random variables
In addition to the general formula given by Matt, it may be worth noting that there is a somewhat more explicit formula for zero-mean Gaussian random variables. It follows from Isserlis' theorem; see also Higher moments for the centered multivariate normal distribution.

Suppose that $(x_1, \ldots, x_k)$ follows a multivariate normal distribution with mean 0 and covariance matrix $\Sigma$. If the number of variables $k$ is odd, $E\left(\prod_i x_i\right) = 0$ and $$V\left(\prod_i x_i\right) = E\left( \prod_i x_i^2\right) = \sum \prod \tilde{\Sigma}_{i,j}$$ where $\sum \prod$ means the sum over all partitions of $\{1, \ldots, 2k\}$ into $k$ disjoint pairs $\{i, j\}$, with each term being a product of the corresponding $k$ $\tilde{\Sigma}_{i,j}$'s, and where $$\tilde{\Sigma} = \left( \begin{array}{cc} \Sigma & \Sigma \\ \Sigma & \Sigma \end{array} \right)$$ is the covariance matrix for $(x_1, \ldots, x_k, x_1, \ldots, x_k)$. If $k$ is even, $$V\left(\prod_i x_i\right) = \sum \prod \tilde{\Sigma}_{i,j} - \left(\sum \prod \Sigma_{i,j}\right)^2.$$

In the case $k = 2$ we get $$V(x_1x_2) = \Sigma_{1,1} \Sigma_{2,2} + 2 (\Sigma_{1,2})^2 - \Sigma_{1,2}^2 = \Sigma_{1,1} \Sigma_{2,2} + (\Sigma_{1,2})^2.$$ If $k = 3$ we get $$V(x_1x_2x_3) = \sum \Sigma_{i,j}\Sigma_{k,l}\Sigma_{r,t},$$ where there are 15 terms in the sum.

It is, in fact, possible to implement the general formula. The most difficult part appears to be the computation of the required partitions. In R, this can be done with the function setparts from the package partitions. Using this package it was no problem to generate the 2,027,025 partitions for $k = 8$; the 34,459,425 partitions for $k = 9$ could also be generated, but not the 654,729,075 partitions for $k = 10$ (on my 16 GB laptop).

A couple of other things are worth noting. First, for Gaussian variables with non-zero mean it should be possible to derive an expression from Isserlis' theorem as well. Second, it is unclear (to me) whether the above formula is robust against deviations from normality, that is, whether it can be used as an approximation even when the variables are not multivariate normally distributed. Third, though the formulas above are correct, it is questionable how much the variance tells you about the distribution of the products. Even for $k = 2$ the distribution of the product is quite leptokurtic, and for larger $k$ it quickly becomes extremely leptokurtic.
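A Monte Carlo sketch of the $k = 2$ formula for zero-mean Gaussians (covariance matrix chosen arbitrarily for illustration):

```python
import numpy as np

# For zero-mean bivariate Gaussians, V(x1 x2) = Sigma_11 Sigma_22 + Sigma_12^2.
rng = np.random.default_rng(7)
Sigma = np.array([[2.0, 0.8], [0.8, 1.5]])
z = rng.multivariate_normal([0.0, 0.0], Sigma, size=500_000)

theory = Sigma[0, 0] * Sigma[1, 1] + Sigma[0, 1]**2   # 3.0 + 0.64 = 3.64
assert np.isclose(np.var(z[:, 0] * z[:, 1]), theory, rtol=0.05)
```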
19,906
Relations and differences between time-series analysis and statistical signal processing?
As a signal is by definition a time series, there is significant overlap between the two. I would expect a book on time-series analysis to be either a mathematical treatment or a business/commercial treatment, while a book on statistical signal processing is likely to make heavy use of mathematics but to be interested in the problems of signal analysis, classification, noise reduction, and other problems relevant to engineering / applied science. Statistical signal processing uses the language and techniques of mathematical time-series analysis, but also introduces into the problem domain many concepts and techniques from electrical engineering: signal-to-noise ratio, dynamic range, and time/frequency domain transforms. In my view, time-series analysis is a mathematical field, which then has applications wherever time series tend to crop up. Those fields then develop techniques that are specialised for their problem domains, with a specialised body of knowledge. As time series arise in business and economics, there is an industry of material on time-series forecasting, trend analysis, etc. Much of this 'commercial' application is not present in the material on statistical signal processing, in part because the nature of the two kinds of time series is very different: signals are continuous in both time and the measured variable (e.g. voltage, intensity, etc.), whereas most business time series are taken over a discrete time domain (days, weeks, months, quarters, years).
19,907
Relations and differences between time-series analysis and statistical signal processing?
@AKE's answer above is very good, but one additional comment I would make is that while there are major overlaps, the differences between signal processing and time-series analysis often arise from the types of data being considered. Signal processing usually considers the analysis of a 'raw' signal, in the sense that the signal needs to be processed heavily to extract 'features': descriptive parameters that allow the signal to be meaningfully interpreted. For example, the origins of statistical signal processing lie in the development of radar technology; the raw radar sensor signal needed to be heavily processed and enhanced to allow the operator to make any sense of it and obtain 'useful' data. The extraction of such interpretable parameters can often be the end goal, though often those parameters are in turn used to perform prediction/classification. In contrast, time-series analysis often considers the long-term trends and variations of individual (or groups of) parameters (such as financial or economic indicators). Such time-series analysis is often used to predict future behavior. Usually the pre-processing of the time-series parameters (predictors) is of secondary importance to building a predictive or explanatory model.
19,908
How to define a distribution such that draws from it correlate with a draw from another pre-specified distribution?
You can define it in terms of a data-generating mechanism. For example, if $X \sim F_{X}$ and $$ Y = \rho X + \sqrt{1 - \rho^{2}} Z $$ where $Z \sim F_{X}$ and is independent of $X$, then $$ {\rm cov}(X,Y) = {\rm cov}(X, \rho X) = \rho \cdot {\rm var}(X). $$ Also note that ${\rm var}(Y) = {\rm var}(X)$, since $Z$ has the same distribution as $X$. Therefore, $$ {\rm cor}(X,Y) = \frac{ {\rm cov}(X,Y) }{ \sqrt{ {\rm var}(X)\,{\rm var}(Y) } } = \frac{ \rho \cdot {\rm var}(X) }{ {\rm var}(X) } = \rho. $$ So if you can generate data from $F_{X}$, you can generate a variate $Y$ that has a specified correlation $\rho$ with $X$. Note, however, that the marginal distribution of $Y$ will only be $F_{X}$ in the special case where $F_{X}$ is the normal distribution (or some other stable distribution). This is because sums of independent normally distributed variables are again normal; that is not a general property of distributions. In the general case, you will have to derive the distribution of $Y$ by computing the convolution of the (appropriately scaled) densities of $\rho X$ and $\sqrt{1-\rho^{2}}\,Z$.
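A short numpy sketch of this mechanism, taking $F_X$ to be an exponential distribution purely for illustration: the correlation comes out as $\rho$, but (since the exponential is not stable) the marginal of $Y$ is no longer $F_X$.

```python
import numpy as np

# Y = rho*X + sqrt(1 - rho^2)*Z with X, Z i.i.d. from F_X gives cor(X, Y) = rho.
rng = np.random.default_rng(0)
rho = 0.6
n = 200_000
x = rng.exponential(scale=1.0, size=n)    # X ~ F_X (exponential, for example)
z = rng.exponential(scale=1.0, size=n)    # Z ~ F_X, independent of X
y = rho * x + np.sqrt(1 - rho**2) * z

assert np.isclose(np.corrcoef(x, y)[0, 1], rho, atol=0.01)
# Note: the marginal of y is NOT exponential here, as the answer warns.
```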
19,909
Reversible jump MCMC code (Matlab or R)
RJMCMC was introduced by Peter Green in a 1995 paper that is a citation classic. He wrote a Fortran program called AutoRJ for automatic RJMCMC; his page on this links to David Hastie's C program AutoMix. There's a list of freely available software for various RJMCMC algorithms in Table 1 of a 2005 paper by Scott Sisson. A Google search also finds some pseudocode from a group at the University of Glasgow that may be useful in understanding the principles if you want to program it yourself.
19,910
Reversible jump MCMC code (Matlab or R)
The book Bayesian Analysis for Population Ecology by King et al. describes RJMCMC in the context of population ecology. I found their description very clear, and they provide the R code in the appendix. The book also has an associated webpage, but some of the code found in the book isn't on the website.
19,911
Reversible jump MCMC code (Matlab or R)
Just to add one detail to @onestop's answer: I found the C software released by Olivier Cappé (CT/RJ-MCMC) very helpful for understanding the reversible jump MCMC algorithm (in particular how to design the probabilities for the birth-death and split-merge moves). The link to the source code is: http://perso.telecom-paristech.fr/~cappe/Code/CTRJ_mix/About/
19,912
Reversible jump MCMC code (Matlab or R)
Jailin Ai gives a fairly nice presentation of RJMCMC (though it hews very closely to Green's original paper), together with attendant R code, as part of his master's thesis at Leeds. He also gives an in-depth example of change-point problems, which are also covered in Green's 1995 paper. Find the thesis and the code here: http://www1.maths.leeds.ac.uk/~voss/projects/2011-RJMCMC/
19,913
Reversible jump MCMC code (Matlab or R)
Nando de Freitas provides demos of the use of the reversible jump MCMC algorithm for neural network parameter estimation. This model treats the number of neurons, the model parameters, the regularization parameters, and the noise parameters as random variables to be estimated. The code and the write-up are available here: http://www.cs.ubc.ca/~nando/software.html
19,914
Why is the median of an even number of samples the arithmetic mean?
One of the first and best uses of the median is to identify the location (central value) of a dataset or distribution in a robust way. From this perspective it doesn't matter how you choose among alternative values when there is more than one median: they are all equally good. However, the very meaning of "central value" is questionable when a distribution is not symmetric. (See Why is median age a better statistic than mean age? for some discussion.) But it does have a meaning when the distribution is symmetric near its middle. (The sense of "distribution" includes any dataset, which we may consider to be equivalent to its empirical distribution.) This means there is some number $\epsilon \gt 0$ for which the distribution is symmetric, or at least very nearly so, for all values within a distance $\epsilon$ of its median. Since "distance" if measured on the scale of the data would be data-dependent, let's measure it in terms of probability. That is, let us say a distribution $F$ (the cumulative function) is "symmetric near a median $\tilde \mu$" when there exists $\epsilon \gt 0$ for which $|F(\tilde\mu+\delta)-1/2|\le \epsilon$ implies $F(\tilde\mu+\delta) + F(\tilde\mu-\delta)=1.$ See https://stats.stackexchange.com/a/29010/919 for a more precise and general approach using a similar formula. The idea is that the shape of the middle part of the distribution comprising a total probability (or proportion) of $2\epsilon$ is symmetric, but using values $\epsilon \lt 1/2$ permits asymmetry to appear in the tails of the distribution. 
For discrete distributions (such as empirical distributions) perfect symmetry will rarely be the case due to the jumps in $F$ of size $1/n$ for datasets of size $n.$ We should therefore be willing to relax this requirement to "approximately symmetric in its center" when we have chosen a value $\lambda \ge 0$ and a median $\tilde\mu$ for which $$|F(\tilde\mu+\delta)-1/2|\le \epsilon \text{ implies }|F(\tilde\mu+\delta) + F(\tilde\mu-\delta)-1| \lt \frac{\lambda}{n}$$ $\lambda$ measures the degree of asymmetry and a value of $\lambda = 1$ (or even a little larger) ought to be acceptable. One basic technique of data analysis is re-expression (also known as transformation). Depending on the application and meaning of the data, we seek a "natural" or "simple" function $f,$ such as a power (or Box-Cox transformation), which makes the re-expressed data approximately symmetric in its center. If this can be achieved, it permits simplified descriptions and attendant insights, because we can summarize the distribution by giving a single central value that accounts for much of what the data are doing. After that we can focus on describing how its tails might depart from the central part of the distribution: how long or heavy they appear and their degree of asymmetry. Once we have found such a transformation (it needn't be unique) and applied it to re-express the data, the arithmetic mean of the extreme possible values of the median is a meaningful and appropriate value for the center of a distribution, because it is invariant under reversal of the data -- which, due to the approximate central symmetry, does not appreciably alter the shape of the central part of the distribution. Thus, any proposed median exceeding the arithmetic mean is just as valid as a proposed median falling equally far below the arithmetic mean. The middle is the intuitively obvious choice. 
John Tukey exploits a generalization of this idea to develop robust ways to identify and measure the asymmetry of any positive distribution and select a Box-Cox parameter that will make it approximately symmetric in its center. Writing $F_{-}$ for the distribution of the reversed data (that is, $F_{-}(x)$ is the proportion of the data equal to or greater than $x$), you study how the mid-quantiles $\mu_q(F)=(F^{-1}(q) + F_{-}^{-1}(q))/2$ vary with $q,$ for a suitably chosen series of $q$ that probe the tails: usually $q = 1/2, 1/4, 1/8, 1/16, \cdots.$ These mid-quantiles will not vary for as long as the central $1-2q$ portion of the distribution is symmetric. The arithmetic mean of the two middle values in an even-size dataset is precisely $\mu_{1/2}(F).$ For the details, see Tukey's book EDA (Addison-Wesley 1977). My post at https://stats.stackexchange.com/a/582120/919 provides a worked example.
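Tukey's mid-quantile idea is easy to try out. The sketch below is my own illustration in Python/NumPy (the function name `mid_quantile` is mine, and I use the equivalent form $(Q(q)+Q(1-q))/2$ with linearly interpolated quantiles); it confirms that $\mu_{1/2}$ is exactly the mean of the two middle values of an even-size dataset:

```python
import numpy as np

def mid_quantile(x, q):
    # (F^{-1}(q) + F_-^{-1}(q)) / 2: the average of the q-quantile of the
    # data and the q-quantile of the reversed data, computed here as the
    # average of the q and (1 - q) quantiles.
    return (np.quantile(x, q) + np.quantile(x, 1 - q)) / 2

x = np.array([1.0, 2.0, 3.0, 7.0, 8.0, 9.0])   # even-size sample

# mu_{1/2} is exactly the mean of the two middle values: (3 + 7) / 2 = 5.
print(mid_quantile(x, 0.5))

# Probing the tails with q = 1/2, 1/4, 1/8: constant mid-quantiles
# indicate that the central portion of the distribution is symmetric.
for q in (0.5, 0.25, 0.125):
    print(q, mid_quantile(x, q))
```

For this symmetric sample every mid-quantile coincides, which is Tukey's diagnostic for central symmetry; a trend in the mid-quantiles as $q$ shrinks would signal asymmetry in the tails.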
19,915
Why is the median of an even number of samples the arithmetic mean?
When this happens, I see two options.

1. Report the median as an interval, which gives us infinitely-many medians instead of just one. (This is not even a confidence interval or credible interval for a point estimate.)
2. Figure out a way to pick one value.

Picking the arithmetic mean of the middle two points is just one way of doing the estimation. Advantages include the ease with which it is computed and explained. Disadvantages could include bias when there is a skewed distribution. There are always many ways of estimating a quantity. In fact, the quantile function in R software has at least nine options for computing quantiles like the median. If you have reason to believe that an arithmetic mean of the middle two values has inferior properties to some other form of estimation, you are free to argue why your alternative is better. Plenty of people have seen other estimators that are not optimal for them and have proposed alternatives (e.g., the James-Stein estimator).

EDIT The following simulation shows the claim by John Madden in the comments that all values in the interval minimize the absolute deviation function, whose minimizer is one way to define the median.

set.seed(2023)
y <- c(1, 2, 3, 7, 8, 9)
candidate_medians <- seq(2, 8, 0.01)
median_loss <- rep(NA, length(candidate_medians))
for (i in 1:length(candidate_medians)){
  median_loss[i] <- mean(abs(y - candidate_medians[i]))
}
plot(candidate_medians, median_loss)

All values $M$ between the middle data values, $3$ and $7$ (so $M\in[3,7]$), give equal and minimizing values of $L(y, M) = \frac{1}{n}\sum_{i=1}^n\big\vert y_i-M\big\vert$.
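The nine quantile types really do disagree about the median of an even-size sample. A quick illustration, here in Python/NumPy rather than R for brevity (NumPy's `method` names map onto the R type numbers; NumPy 1.22 or later is assumed):

```python
import numpy as np

y = np.array([1, 2, 3, 7, 8, 9])

# "linear" (R's default, type 7) averages the two middle values...
print(np.quantile(y, 0.5, method="linear"))        # 5.0
# ...while "inverted_cdf" (R's type 1) returns the lower middle value.
print(np.quantile(y, 0.5, method="inverted_cdf"))  # 3
```

So the arithmetic mean of the middle two values is one convention among several, not a mathematical necessity.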
19,916
Different usage of the term "Bias" in stats/machine learning
I will give you a run-down of the terminology used in statistics, which I think is sensible terminology. Cases (1) and (2) do refer to bias in the usual statistical sense, (4) refers to something closely related, and (3) is just misleading renaming of an object that already has a perfectly sensible name. Just to quickly dispose of (3), I note that the term $\beta_0$ in regression is called the "intercept" term, not the "bias". Unless there are good contextual reasons to refer to it as the "bias term", that language is highly misleading. (The link you give for this usage is just a CV.SE question that was closed for lack of clarity, so not really evidence of widespread usage. I have never seen the intercept in regression referred to as the bias term.)

Estimator bias and the "bias-variance trade-off": When we "fit" a statistical model we are essentially just estimating the unknown parameters in that model. As you note in case (2), the bias of an estimator is defined as the difference between the expected value of the estimator and the value of the parameter it is estimating: $$\text{Bias}(\hat{\theta}, \theta) = \mathbb{E}(\hat{\theta}) - \theta.$$ When looking at the performance of estimators we often measure this by the mean squared error, which is the expected value of the squared deviation of the estimator from the value it is estimating: $$\text{MSE}(\hat{\theta}, \theta) = \mathbb{E} \Big( (\hat{\theta}-\theta)^2 \Big).$$ One of the properties of the mean squared error is that it can be decomposed as: $$\text{MSE}(\hat{\theta}, \theta) = \mathbb{V}(\hat{\theta}) + \text{Bias}(\hat{\theta}, \theta)^2.$$ If we examine a model for an observable value $y = f(x, \theta) + \varepsilon$ composed of a regression term and an error term, we likewise have: $$\mathbb{E} \Big( (y-f(x, \hat{\theta}))^2 \Big) = \mathbb{V}(f(x,\hat{\theta})) + \text{Bias}(f(x,\hat{\theta}), f(x,\theta))^2 + \sigma_\varepsilon^2.$$ Now, if we examine the class of estimators with some fixed mean squared error, we can see that there must be a trade-off between the bias and variance in these estimators --- a lower bias corresponds to a higher variance and vice versa. Often when we have competing methods of estimating parameters (i.e., alternative ways to "fit" the model) we focus on those that have the optimal mean squared error, and within this class we see that there is a choice between methods that have higher bias but lower variance, and methods that have low (or no) bias but have higher variance. As you point out in your case (1), it is common in discussions of machine-learning to refer to a general "bias-variance trade-off" in the choice of model fitting methods and the use of training data. While this discussion is often quite broad and esoteric, ultimately it derives from the statistical decomposition shown here. It is therefore referring to "bias" in the standard statistical sense. Consequently, cases (1) and (2) both refer to bias in its usual statistical definition.
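A small simulation, my own illustration rather than part of the original discussion, makes both the decomposition and the trade-off concrete, using the classic biased (divide by $n$) and unbiased (divide by $n-1$) estimators of a normal variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, true_var = 5, 200_000, 1.0

# reps independent samples of size n from N(0, 1), whose variance is 1.
samples = rng.normal(0.0, 1.0, size=(reps, n))
mle = samples.var(axis=1, ddof=0)  # biased MLE: divide by n
unb = samples.var(axis=1, ddof=1)  # unbiased:   divide by n - 1

for name, est in (("MLE (1/n)", mle), ("unbiased (1/(n-1))", unb)):
    bias = est.mean() - true_var
    mse = np.mean((est - true_var) ** 2)
    # The decomposition MSE = Var + Bias^2 holds (here it is exact, since
    # est.var() is the population variance of the simulated estimates).
    print(f"{name}: bias={bias:+.3f}, var+bias^2={est.var() + bias**2:.3f}, mse={mse:.3f}")
```

Note that the biased MLE attains the lower MSE here (theoretically about 0.36 versus 0.5 at $n=5$): accepting some bias buys a larger reduction in variance, which is the trade-off in miniature.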
Informative sampling (so-called "biased" data): In discussions of sampling you might sometimes run across references to "bias" in the data or sampling method. Statisticians generally do not use this language (except sometimes as shorthand) because they recognise that bias is a property of an estimator, and so it only occurs from the combination of a sampling method and an inference method. When data gives information in a non-standard way we say that it is an "informative" sampling method and we try to take this into account in our inferences. If the inference method does not properly take account of this information then this gives us a biased estimator, but the bias comes from the combination of the sampling method and the inference method --- it is not the data per se that is biased. This area of statistics is quite subtle, so I will describe it through an example. Suppose you have a small community with ten children, and you want to know the average number of children per family (only for the families with at least one child). Suppose that of these ten children, nine are from the same family (all siblings) and one is an only child (no siblings), which means that the true average is five. Suppose you sample all the children and ask each of them how many siblings they have, then use this data to estimate the average number of children per family. A naive estimator would be to take the average number of siblings per child and then add one to get an estimate of the average number of children per family. If you use that inference method, you get a substantial overestimate: $$\hat{\theta} = \tfrac{1}{10} (9 \cdot 8 + 1 \cdot 0) + 1 = 7.2+1 = 8.2.$$ The problem here is that we are sampling over children rather than over families, and so we are more likely to select a child from a larger family; the average over all children gives proportionately more weight to families of larger size.
This is an example of an "informative" sampling method, where the naive estimator leads to large bias in estimating the true quantity of interest. (The technical name for this type of sampling is probability proportional to size (PPS) sampling.) Note here that it is the use of a particular type of (erroneous) estimator that leads to the bias. If we take account of the fact that we are using PPS sampling, and account for this in our estimator, we can get rid of this bias. As you can see from this example, it is not quite right to say that the data themselves are biased --- the sampling method is unusual, and it leads us to substantial bias if we treat it as if it were giving us direct information on the quantity of interest, but the bias occurs from our failure to account for the nature of the sampling mechanism in our estimator. In statistical discussion we use the term "informative sampling" to describe this kind of sampling mechanism, but "bias" remains a property of the estimator. Consequently, whether or not we have bias is determined by the combination of the sampling method and the estimator.
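To make the two-family community concrete, here is a short sketch (Python, my own illustration; the 1/size weighting is one standard way to correct for PPS selection, a ratio of Horvitz-Thompson-style estimators):

```python
import numpy as np

# One family with 9 children, one family with 1 child.
family_sizes = np.array([9, 1])
true_mean = family_sizes.mean()              # 5.0 children per family

# PPS sampling over children: each child reports its own family's size.
reports = np.repeat(family_sizes, family_sizes)   # nine 9s and one 1

# Naive estimator: average number of siblings per child, plus one.
naive = (reports - 1).mean() + 1             # 8.2 -- badly biased upward

# Weight each child's report by 1 / (family size) to undo the
# size-proportional selection; the weighted mean recovers the truth.
w = 1.0 / reports
corrected = (w * reports).sum() / w.sum()    # 5.0 -- bias removed

print(naive, corrected)
```

The bias disappears once the estimator accounts for the sampling mechanism, which is exactly the point: the data never changed, only the estimator did.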
19,917
Different usage of the term "Bias" in stats/machine learning
They all refer to something being "non-neutral", but, apart from that, I wouldn't say they are related. To my understanding, (2) refers to the computational method, irrespective of the data. For example MLE vs. unbiased estimator of the variance, whether you use $N$ or $(N-1)$ as the denominator. (4), on the other hand, is about the sampling process. Regarding (3), based on my professional experience and acquaintance with machine learning history, I believe the usage of "bias" with the meaning of "intercept" comes from electronics: The early research on what we today call "machine learning", in the 1950's-60's, often involved building specialised hardware or, later, simulating that hardware in computers. "Bias" in electronics means the intentional shift of the operating voltage away from zero, in order to achieve desired response characteristics (typically: linearity) of a component, like transistor or vacuum tube. This is probably also the reason why the intercept in machine learning is often denoted by $b$ and the predictor coefficients ("weights" in neural network terminology) by $\textbf{w}$.
19,918
Different usage of the term "Bias" in stats/machine learning
(2) What is bias? As you correctly defined in (2), bias is the difference between the estimator and its true value in expectation. (4) This is an application of the definition. Making things a bit explicit here can help. What we want to estimate is:

E[sin(x) | full_period]

But instead we estimated using:

E[sin(x) | quarter_period]

Clearly:

E[sin(x) | full_period] != E[sin(x) | quarter_period]

in general. Thus our estimator is wrong. It's wrong because we incorrectly sampled on a different set of conditions. But it still fits the definition of bias. (3) and (1) (3) is interesting. I am not sure why historically the term 'bias' originated in linear regression. If I simulated data from a linear regression model with a non-zero intercept and then built a linear regression model from its output data, clearly my non-zero 'bias' term is what we want--so it's not biased according to our definition of bias. This suggests it does indeed reflect a different meaning. It seems by 'bias' in this context, we are really trying to hint at this idea of how well the model is influenced by data. That is, the larger we make our intercept, the less impact the data has on our predictions. This would also capture that notion. Research Paper I also found this research paper while answering this question (have not read it) that attempts to distinguish the bias definitions used in machine learning: https://arxiv.org/pdf/2004.00686.pdf
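The full-period versus quarter-period gap is easy to see numerically. A quick Monte Carlo sketch (Python, my own illustration; I assume uniform sampling on each interval, with the quarter period taken as [0, pi/2]):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Target: E[sin(X)] with X uniform over the full period [0, 2*pi] -> 0.
full = np.sin(rng.uniform(0.0, 2.0 * np.pi, N)).mean()

# Sampling only the first quarter period [0, pi/2] instead:
# E[sin(X)] there is 2/pi, about 0.64, a systematic overestimate.
quarter = np.sin(rng.uniform(0.0, np.pi / 2.0, N)).mean()

print(full, quarter)
```

No amount of extra data from the quarter period fixes this: the estimator converges to 2/pi, not to 0, which is what makes it a bias rather than mere noise.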
19,919
Different usage of the term "Bias" in stats/machine learning
They all mean a systematic (as opposed to a random) deviation from a target, when repeatedly applying a (sampling and/or estimation) procedure. The target differs in the mentioned examples:

1. Target is the value of the response in a sample from the target population that was not part of the training sample.
2. Target is the true value of the estimated quantity. (Also goes for point 1, actually.)
3. Target is zero.
4. "Biased data" should read "biased sample". Target is the (multivariate) distribution of values in the target population.
19,920
Different usage of the term "Bias" in stats/machine learning
To me this is an excellent question; I would be very proud if I had asked it. I will not analyze the bias in the bias-variance trade-off here, just the bias function. Bias is often related to any of these:

- sampling method
- estimator
- model
- inference method

Sampling method

Both the quality and quantity of sampling may influence the later bias, but the sampling method itself will not produce the bias. In short, $N$ -- the number of elements in your sample -- or $n$ -- the number of samples -- represents the quantity part, and quality would be one of the techniques:

- random sampling
- stratified sampling
- systematic sampling

It depends on the problem what kind of sampling we choose, but again, the sampling method may be a cause of bias, not the instant creator of the bias, because data is innocent.

Estimator

How an estimator relates to a model I don't quite understand, but I will take a shot and check your comments later:

- An estimator is a parameter estimator and can show the bias in parameter estimation.
- A model is what produces the prediction and can show the bias of a prediction.

We can call an estimator the decision rule for selecting parameters. I learned that there are biased and unbiased estimators. The bias (or bias function) of an estimator is the difference between this estimator's expected value and the true value of the parameter being estimated. We usually use thetas for general parameters, so:

$$ \text{Bias}(\hat{\theta}, \theta) = \mathbb{E}(\hat{\theta}) - \theta $$

Bias can also be measured with respect to the median rather than the mean (expected value), in which case one distinguishes median-unbiasedness from the usual mean-unbiasedness property. An estimator with zero bias is called unbiased. In statistics, "bias" is an objective property of an estimator, closely related to the notion of degrees of freedom. At this moment my understanding is: the more degrees of freedom, the more bias, the more estimation errors we have.

Model

There are many ways you can define what a model is, but here I will just consider models as being able to predict the feature of interest. In that respect, the bias tells us how far off from the true value our predictions are. I will use this image to save some words. There is another bias in the model, where we say bias exists because hypothesis functions are biased toward a particular kind of solution. In other words, bias is inherent to a model (but I would simply call this the bias of the estimator).

Inference method

The classical inference method (I will just stick with that) computes probabilities from multiple hypotheses in order to determine their acceptability. This method usually analyses two hypotheses at a time. This inference provides quantitative information about a sensor observation in the form of a probability. As I see it, we cannot create bias here. We may change and adapt to different confidence intervals, but this will not produce bias or errors.
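The idea that "bias is inherent to a model" (a hypothesis space biased toward a particular kind of solution) can be illustrated by comparing a deliberately too-simple model with an adequate one on the same data-generating process. Everything below is my own toy setup:

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(0.0, 1.0, 20)
true_at_1 = 2.0  # the target: the true mean response 2*x at x = 1

preds_const, preds_lin = [], []
for _ in range(5000):
    y = 2.0 * x + rng.normal(0.0, 0.1, x.size)
    preds_const.append(y.mean())               # constant model: hypothesis space too restricted
    slope, intercept = np.polyfit(x, y, 1)
    preds_lin.append(slope * 1.0 + intercept)  # linear model: matches the truth

bias_const = np.mean(preds_const) - true_at_1  # systematic: ~ -1, no matter how much data
bias_lin = np.mean(preds_lin) - true_at_1      # ~ 0
```

The constant model's predictions at $x=1$ are systematically off by about $-1$ in expectation; no amount of averaging over training sets removes it, which is what distinguishes this model bias from random estimation error.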
Why is this linear mixed model singular?
As you have discovered, this happens when one of the variance components is estimated as zero. This typically has one of two explanations:

- the random effects structure is over-fitted, usually because of too many random slopes
- one or more variance components are actually very close to zero and there is insufficient data to estimate them above zero

Obviously the first scenario is not the case with your data, since you only have random intercepts. So it is likely that the actual variation of the random intercepts for group_id is very close to zero. If it is, then with only 5 groups, the software may not be able to estimate a variance above zero.

A good place to start is by plotting the data. We can already see that the variation in the means of the groups is small compared to the variation within the groups. We can investigate this more formally in (at least) three ways.

First, let us look at the means of the data in each group:

    library(tidyverse)
    data %>% group_by(group_id) %>% summarize(mean = mean(y))
    ## 1  8.85
    ## 2  9.65
    ## 3  8.84
    ## 4  7.70
    ## 5  8.17

Note that there is fairly small variation among all the groups, and that the means of groups 1 and 3 are almost identical. Let us remove group 1 and see what happens:

    data %>%
      filter(group_id != 1) %>%
      lmer(y ~ 1 + (1|group_id), data = .) %>%
      summary()
    ## Random effects:
    ##  Groups   Name        Variance Std.Dev.
    ##  group_id (Intercept) 0.03789  0.1947
    ##  Residual             6.75636  2.5993
    ## Number of obs: 40, groups: group_id, 4
    ## Fixed effects:
    ##             Estimate Std. Error t value
    ## (Intercept)   8.5885     0.4224   20.34

So the model converges without singularity, but the variance component for group_id is very small, as we suspected.

Next, we can add some additional variance to the group_id component. The problem with doing this is that with only 5 groups, if we were to sample 5 observations from, say, rnorm(5, 0, 1) (with a standard deviation of 1), the sample standard deviation is likely to be not close to 1 and the mean is likely to be not close to zero. A good approach to solve this is Monte Carlo simulation (basically, just do it many times and take averages). Here we will do 100 simulations:

    n.sim <- 100
    simvec_rint <- numeric(n.sim)  # vector to hold the random intercept variances
    simvec_fint <- numeric(n.sim)  # vector to hold the fixed intercepts

    for (i in 1:n.sim) {
      set.seed(i)
      data$y1 <- data$y + rep(rnorm(5, 0, 1), each = 10)
      m0 <- lmer(y1 ~ 1 + (1|group_id), data = data)
      if (!isSingular(m0)) {
        # If the model is not singular, extract the random and fixed effects
        VarCorr(m0) %>% as.data.frame() %>% pull(vcov) %>% nth(1) -> simvec_rint[i]
        summary(m0) %>% coef() %>% as.vector() %>% nth(1) -> simvec_fint[i]
      } else {
        simvec_rint[i] <- simvec_fint[i] <- NA
      }
    }

So we have added random noise to the groups with a variance of 1. The Monte Carlo estimates are:

    > mean(simvec_rint, na.rm = TRUE)
    [1] 1.116416
    > mean(simvec_fint, na.rm = TRUE)
    [1] 8.637063

Note that:

- The variance of the random intercepts is 1.12. However, we added variance equal to 1 to the groups, so this implies that the variance of the random intercepts in the original data is close to zero, as we suspected.
- The fixed intercept is 8.64, which is basically the same as in the model fitted to the original data.

Lastly, let us look at a model without random effects, which is simply an ANOVA:

    > aov(y ~ group_id, data = data) %>% summary()
                Df Sum Sq Mean Sq F value Pr(>F)
    group_id     4   22.0   5.489   0.763  0.555
    Residuals   45  323.6   7.190

So there is very little evidence that the means of the 5 groups differ from each other. Another way to look at this is with:

    > lm(y ~ group_id, data = data) %>% summary()
    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept)   8.8510     0.8479  10.438 1.33e-13 ***
    group_id2     0.7950     1.1992   0.663    0.511
    group_id3    -0.0150     1.1992  -0.013    0.990
    group_id4    -1.1490     1.1992  -0.958    0.343
    group_id5    -0.6810     1.1992  -0.568    0.573

So there is also very little evidence that groups 2, 3, 4, and 5 have means different from group 1. Both of these models are consistent with there being very small variation of the random intercepts in the mixed model.

So, to sum up: due to the combination of the small number of groups and the small estimated variation between groups, the software is unable to estimate the random intercept variation above zero, and hence the model is singular, although the model estimates seem to be reliable.
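A language-agnostic way to see why this happens (lmer itself fits by (RE)ML, so this is only a simplified stand-in) is the method-of-moments ANOVA estimator of the between-group variance: when the true between-group variance is tiny relative to the within-group variance and there are only 5 groups, the raw estimate frequently comes out negative, and so gets truncated to zero. A rough Python sketch, with variance values that mimic the scale of the data above but are my own choices:

```python
import numpy as np

rng = np.random.default_rng(3)
k, n, reps = 5, 10, 2000          # 5 groups of 10 observations, as in the question
sigma_b2, sigma_w2 = 0.1, 7.0     # tiny between-group, large within-group variance

negative = 0
for _ in range(reps):
    means = rng.normal(0.0, np.sqrt(sigma_b2), k)
    y = means[:, None] + rng.normal(0.0, np.sqrt(sigma_w2), (k, n))
    msb = n * y.mean(axis=1).var(ddof=1)   # between-group mean square
    msw = y.var(axis=1, ddof=1).mean()     # pooled within-group mean square
    if (msb - msw) / n < 0:                # method-of-moments estimate of sigma_b2
        negative += 1

print(negative / reps)  # a large fraction of fits would hit the zero boundary
```

Under this toy setup roughly half the simulated datasets give a negative raw estimate, which is the same boundary problem that shows up in lmer as a singular fit.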
Why eigenvectors reveal the groups in Spectral Clustering
This is a great, and subtle, question. Before we turn to your algorithm, let us first observe the similarity matrix $S$. It is symmetric and, if your data form convex clusters (see below) and the points are suitably enumerated, it is close to a block-diagonal matrix. That is because points within a cluster tend to have a high similarity, and points from different clusters a low one. Below is an example for the popular "Iris" data set (there is a noticeable overlap between the second and the third cluster, therefore the two blocks are somewhat connected).

You can decompose this matrix into its eigenvectors and the associated eigenvalues. This is called "spectral decomposition", because it is conceptually similar to decomposing light or sound into elementary frequencies and their associated amplitudes. The definition of an eigenvector is:

$$ A \cdot e = e \cdot \lambda $$

with $A$ being a matrix, $e$ an eigenvector and $\lambda$ its corresponding eigenvalue. We can collect all eigenvectors as columns in a matrix $E$, and the eigenvalues in a diagonal matrix $\Lambda$, so it follows:

$$ A \cdot E = E \cdot \Lambda $$

Now, there is a degree of freedom when choosing eigenvectors. Their direction is determined by the matrix, but their size is arbitrary: if $A \cdot e = e \cdot \lambda$, and $f = 7 \cdot e$ (or whatever scaling of $e$ you like), then $A \cdot f = f \cdot \lambda$, too. It is therefore common to scale eigenvectors so that their length is one ($\lVert e \rVert_2 = 1$). Also, for symmetric matrices, the eigenvectors are orthogonal:

$$ e^i \cdot e^j = \Bigg\{ \begin{array}{lcr} 1 & \text{ for } & i = j \\ 0 & \text{ otherwise } & \end{array} $$

or, in matrix form:

$$ E \cdot E^T = I $$

Plugging this into the above matrix definition of eigenvectors leads to:

$$ A = E \cdot \Lambda \cdot E^T $$

which you can also write down, in an expanded form, as:

$$ A = \sum_i \lambda_i \cdot e^i \cdot (e^i)^T $$

(if it helps you, you can think here of the dyads $e^i \cdot (e^i)^T$ as the "elementary frequencies" and of the $\lambda_i$ as the "amplitudes" of the spectrum).

Let us go back to our Iris similarity matrix and look at its spectrum, starting with the first three eigenvectors. In the first eigenvector, the first 50 components, corresponding to the first cluster, are all non-zero (negative), while the remaining components are almost exactly zero. In the second eigenvector, the first 50 components are zero, and the remaining 100 are non-zero; these 100 correspond to the "supercluster" containing the two overlapping clusters, 2 and 3. The third eigenvector has both positive and negative components: it splits the "supercluster" into the two clusters, based on the sign of its components. Taking each eigenvector to represent an axis in the feature space, and each component as a point, we can plot them in 3D.

To see how this is related to the similarity matrix, we can take a look at the individual terms of the above sum. The first term, $\lambda_1 \cdot e^1 \cdot (e^1)^T$, almost perfectly corresponds to the first "block" in the matrix (and the first cluster in the data set). The second and the third cluster overlap, so the second term, $\lambda_2 \cdot e^2 \cdot (e^2)^T$, corresponds to a "supercluster" containing the two, and the third eigenvector splits it into the two subclusters (notice the negative values!). You get the idea.

Now, you might ask why your algorithm needs the transition matrix $P$ instead of working directly on the similarity matrix. The similarity matrix shows these nice blocks only for convex clusters. For non-convex clusters, it is preferable to define them as sets of points separated from other points. The algorithm you describe (Algorithm 7.2, p. 129 in the book?) is based on the random walk interpretation of clustering (there is also a similar, but slightly different, graph cut interpretation). If you interpret your points (data, observations) as nodes in a graph, each entry $p_{ij}$ in the transition matrix $P$ gives you the probability that, starting at node $i$, the next step in the random walk brings you to node $j$. The matrix $P$ is simply a scaled similarity matrix, so that its elements, row-wise (you can do it column-wise, too), are probabilities, i.e. they sum to one. If the points form clusters, then a random walk through them will spend much time inside clusters and only occasionally jump from one cluster to another. Taking $P$ to the power of $m$ shows you how likely you are to land at each point after taking $m$ random steps. A suitably high $m$ will again lead to a block-matrix-like structure. If $m$ is too small, the blocks will not have formed yet, and if it is too large, $P^m$ will already be close to converging to the steady state. But the block structure remains preserved in the eigenvectors of $P$.
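The block/eigenvector story can be reproduced on toy data in a few lines. This is only a sketch with my own choices of kernel and cluster positions (not the book's algorithm): with two well-separated clusters, the leading eigenvector of the row-stochastic $P$ is constant, and the sign pattern of the second eigenvector recovers the grouping.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two well-separated 1-D clusters of 10 points each
x = np.concatenate([rng.normal(0.0, 0.2, 10), rng.normal(3.0, 0.2, 10)])

# Gaussian similarity matrix (near block-diagonal) and row-stochastic P
S = np.exp(-(x[:, None] - x[None, :]) ** 2)
P = S / S.sum(axis=1, keepdims=True)

# Leading eigenvector of P is constant (eigenvalue 1); the second is roughly
# constant within each block, with opposite signs across the two blocks
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
v2 = vecs[:, order[1]].real

labels = (v2 > 0).astype(int)
print(labels)  # the first 10 entries share one label, the last 10 the other
```

The same sign structure survives in $P^m$ for moderate $m$, since powering $P$ only raises the eigenvalues to the $m$-th power while leaving the eigenvectors unchanged.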
Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression
I found these posts particularly helpful:

- How to derive the least square estimator for multiple linear regression?
- Relationship between SVD and PCA. How to use SVD to perform PCA?
- http://www.math.miami.edu/~armstrong/210sp13/HW7notes.pdf

If $X$ is an $n \times p$ matrix, then the matrix $X(X^TX)^{-1}X^T$ defines a projection onto the column space of $X$. Intuitively, you have an overdetermined system of equations, but you still want to use it to define a linear map $\mathbb{R}^p \rightarrow \mathbb{R}$ that maps the rows $x_i$ of $X$ to something close to the values $y_i$, $i\in \{1,\dots,n\}$. So we settle for sending $X$ to the closest thing to $y$ that can be expressed as a linear combination of your features (the columns of $X$).

As far as an interpretation of $(X^TX)^{-1}$ goes, I don't have an amazing answer yet. I know you can think of $(X^TX)$ as basically being the covariance matrix of the dataset.
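The projection claim is easy to verify numerically. A small sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.normal(size=(8, 3))   # n = 8 observations, p = 3 features
y = rng.normal(size=8)

H = X @ np.linalg.inv(X.T @ X) @ X.T   # the projection ("hat") matrix
y_hat = H @ y

print(np.allclose(H @ H, H))                # idempotent: projecting twice changes nothing
print(np.allclose(X.T @ (y - y_hat), 0.0))  # residual is orthogonal to col(X)
```

Both checks print True: $H$ is idempotent and the residual $y - \hat{y}$ is perpendicular to every column of $X$, which is exactly what it means for $\hat{y}$ to be the closest point to $y$ in the column space.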
Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression
Geometric viewpoint

Geometrically, the $n$-dimensional vectors $\mathbf{y}$ and $\mathbf{X}\boldsymbol\beta$ are points in an $n$-dimensional space $V$, where $\mathbf{X}\boldsymbol\beta$ also lies in the subspace $W$ spanned by the vectors $\mathbf{x_1}, \mathbf{x_2}, \cdots, \mathbf{x_m}$.

Two types of coordinates

For this subspace $W$ we can imagine two different types of coordinates:

The $\boldsymbol{\beta}$ are like coordinates for a regular coordinate space. A vector $\mathbf{z}$ in the space $W$ is a linear combination of the vectors $\mathbf{x_i}$:

$$\mathbf{z} = \beta_1 \mathbf{x_1} + \beta_2 \mathbf{x_2} + \cdots + \beta_m \mathbf{x_m}$$

The $\boldsymbol{\alpha}$ are not coordinates in the regular sense, but they do define a point in the subspace $W$. Each $\alpha_i$ relates to the perpendicular projection onto the vector $\mathbf{x_i}$. If we use unit vectors $\mathbf{x_i}$ (for simplicity), then the "coordinate" $\alpha_i$ of a vector $\mathbf{z}$ can be expressed as:

$$\alpha_i = \mathbf{x_i^T} \mathbf{z}$$

and the set of all coordinates as:

$$\boldsymbol{\alpha} = \mathbf{X^T} \mathbf{z}$$

Mapping between coordinates $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$

For $\mathbf{z} = \mathbf{X}\boldsymbol{\beta}$, the expression for the "coordinates" $\boldsymbol\alpha$ becomes a conversion from coordinates $\boldsymbol\beta$ to "coordinates" $\boldsymbol\alpha$:

$$\boldsymbol{\alpha} = \mathbf{X^T} \mathbf{X}\boldsymbol{\beta}$$

You can see $(\mathbf{X^T} \mathbf{X})_{ij}$ as expressing how much each $\mathbf{x_i}$ projects onto each $\mathbf{x_j}$. The geometric interpretation of $(\mathbf{X^T} \mathbf{X})^{-1}$ is then the map from projection "coordinates" $\boldsymbol{\alpha}$ back to linear coordinates $\boldsymbol{\beta}$:

$$\boldsymbol{\beta} = (\mathbf{X^T} \mathbf{X})^{-1}\boldsymbol{\alpha}$$

The expression $\mathbf{X^Ty}$ gives the projection "coordinates" of $\mathbf{y}$, and $(\mathbf{X^T} \mathbf{X})^{-1}$ turns them into $\boldsymbol{\beta}$.

Note: the projection "coordinates" of $\mathbf{y}$ are the same as the projection "coordinates" of $\mathbf{\hat{y}}$, since $(\mathbf{y-\hat{y}}) \perp \mathbf{X}$.
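A quick numerical check of this coordinate-conversion reading (random data; just a sketch):

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(10, 3))
y = rng.normal(size=10)

alpha = X.T @ y                          # projection "coordinates" of y
beta = np.linalg.solve(X.T @ X, alpha)   # (X^T X)^{-1} converts them to linear coordinates
y_hat = X @ beta

# y and y_hat share the same projection "coordinates",
# since (y - y_hat) is perpendicular to every column of X
print(np.allclose(X.T @ y_hat, alpha))
```

This prints True, confirming the closing note: projecting $y$ or its fit $\hat{y}$ onto the columns of $X$ gives the same $\boldsymbol\alpha$.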
Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression
Geometric viewpoint A geometric viewpoint can be like the n-dimensional vectors $y$ and $X\beta$ being points in n-dimensional-space $V$. Where $X\beta$ is also in the subspace $W$ spanned by the vect
Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression Geometric viewpoint A geometric viewpoint can be like the n-dimensional vectors $y$ and $X\beta$ being points in n-dimensional-space $V$. Where $X\beta$ is also in the subspace $W$ spanned by the vectors $x_1, x_2, \cdots, x_m$. Two types of coordinates For this subspace $W$ we can imagine two different types of coordinates: The $\boldsymbol{\beta}$ are like coordinates for a regular coordinate space. The vector $z$ in the space $W$ are the linear combination of the vectors $\mathbf{x_i}$ $$z = \boldsymbol{\beta_1} \mathbf{x_1} + \boldsymbol{\beta_2} \mathbf{x_1} + .... \boldsymbol{\beta_m} \mathbf{x_m} $$ The $\boldsymbol{\alpha}$ are not coordinates in the regular sense, but they do define a point in the subspace $W$. Each $\alpha_i$ relates to the perpendicular projections onto the vectors $x_i$. If we use unit vectors $x_i$ (for simplicity) then the "coordinates" $\alpha_i$ for a vector $z$ can be expressed as: $$\alpha_i = \mathbf{x_i^T} \mathbf{z}$$ and the set of all coordinates as: $$\boldsymbol{\alpha} = \mathbf{X^T} \mathbf{z}$$ Mapping between coordinates $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ for $\mathbf{z} = \mathbf{X}\boldsymbol{\beta}$ the expression of "coordinates" $\alpha$ becomes a conversion from coordinates $\beta$ to "coordinates" $\alpha$ $$\boldsymbol{\alpha} = \mathbf{X^T} \mathbf{X}\boldsymbol{\beta}$$ You could see $(\mathbf{X^T} \mathbf{X})_{ij}$ as expressing how much each $x_i$ projects onto the other $x_j$ Then the geometric interpretation of $(\mathbf{X^T} \mathbf{X})^{-1}$ can be seen as the map from vector projection "coordinates" $\boldsymbol{\alpha}$ to linear coordinates $\boldsymbol{\beta}$. $$\boldsymbol{\beta} = (\mathbf{X^T} \mathbf{X})^{-1}\boldsymbol{\alpha}$$ The expression $\mathbf{X^Ty}$ gives the projection "coordinates" of $\mathbf{y}$ and $(\mathbf{X^T} \mathbf{X})^{-1}$ turns them into $\boldsymbol{\beta}$. 
Note: the projection "coordinates" of $\mathbf{y}$ are the same as projection "coordinates" of $\mathbf{\hat{y}}$ since $(\mathbf{y-\hat{y}}) \perp \mathbf{X}$.
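To make this mapping concrete, here is a small numerical sketch in NumPy (the design matrix X and response y below are made up for illustration). Computing the projection "coordinates" $\alpha = X^Ty$ and converting them with $(X^TX)^{-1}$ reproduces the least-squares $\beta$, and the fitted vector has the same projection coordinates as $y$:

```python
import numpy as np

# Hypothetical design matrix (columns x_1, x_2) and response y
X = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
y = np.array([1.0, 3.0, 2.0, 5.0])

# Projection "coordinates" of y onto the columns of X
alpha = X.T @ y

# (X^T X)^{-1} converts them into linear coordinates beta
beta = np.linalg.inv(X.T @ X) @ alpha

# Same answer as the standard least-squares solver
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta, beta_lstsq))  # True

# X @ beta has the same projection coordinates as y, because the
# residual y - X @ beta is orthogonal to the columns of X
print(np.allclose(X.T @ (X @ beta), alpha))  # True
```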
19,925
Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression
Assuming you're familiar with the simple linear regression: $$y_i=\alpha+\beta x_i+\varepsilon_i$$ and its solution: $$\beta=\frac{\mathrm{cov}[x_i,y_i]}{\mathrm{var}[x_i]}$$ It's easy to see how $X'y$ corresponds to the numerator above and $X'X$ maps to the denominator. Since we're dealing with matrices, the order matters. $X'X$ is a KxK matrix and $X'y$ is a Kx1 vector. Hence the order is: $(X'X)^{-1}X'y$
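A quick numerical check of this correspondence (a NumPy sketch on made-up data, not part of the original answer): the cov/var slope and the second component of $(X'X)^{-1}X'y$ coincide once an intercept column is included.

```python
import numpy as np

# Made-up data for a simple linear regression y = a + b*x + noise
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 + 3.0 * x + rng.normal(size=50)

# Textbook simple-regression slope: cov(x, y) / var(x)
beta_simple = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Matrix form: (X'X)^{-1} X'y with an intercept column
X = np.column_stack([np.ones_like(x), x])
beta_matrix = np.linalg.solve(X.T @ X, X.T @ y)

print(np.isclose(beta_simple, beta_matrix[1]))  # True
```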
19,926
What is Bayesian Deep Learning?
Going off of your NIPS workshop link, Yee Whye Teh had a keynote speech at NIPS on Bayesian Deep Learning (video: https://www.youtube.com/watch?v=LVBvJsTr3rg, slides: http://csml.stats.ox.ac.uk/news/2017-12-08-ywteh-breiman-lecture/). I think at some point in the talk, Teh summarized Bayesian deep learning as applying the Bayesian framework to ideas from deep learning (like learning a posterior over the weights of a neural network), and deep Bayesian learning as applying ideas from deep learning to the Bayesian framework (like deep Gaussian processes or deep exponential families). There are of course ideas that straddle the line between the two concepts, like variational autoencoders. When most people say Bayesian deep learning, they usually mean either of the two, and that's reflected in the accepted papers at the workshop you linked (along with the workshop the previous year). While the ideas go back to Neal's work on Bayesian learning of neural networks in the 90's (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.446.9306&rep=rep1&type=pdf), and there's been work over the years since then, probably one of the more important recent papers would be the original variational autoencoder paper (https://arxiv.org/pdf/1312.6114.pdf).
19,927
What is Bayesian Deep Learning?
I would suggest that you first get a good grasp of what is the underlying probabilistic model in a traditional Bayesian Neural Network. In the following, some terms will be written with a boldface. Please, try googling those terms to find more detailed information. This is just a basic overview. I hope it helps. Let's consider the case of regression in feedforward neural networks and establish some notation. Let $(x_1,\dots,x_p) =: \left(z^{(0)}_1,\dots,z^{(0)}_{N_0}\right)$ denote the values of the predictors at the input layer. The values of the units in the inner layers will be denoted by $\left(z^{(\ell)}_1,\dots,z^{(\ell)}_{N_\ell}\right)$, for $\ell=1,\dots,L-1$. Finally, we have the output layer $(y_1,\dots,y_k) =:\left(z^{(L)}_1,\dots,z^{(L)}_{N_L}\right)$. The weights and bias of unit $i$ at layer $\ell$ will be denoted by $w^{(\ell)}_{ij}$ and $b^{(\ell)}_i$, respectively, for $\ell=1,\dots,L$, $i=1\dots,N_\ell$, and $j=1,\dots,N_{\ell-1}$. Let $g^{(\ell)}_i : \mathbb{R}^{N_{\ell-1}} \to \mathbb{R}$ be the activation function for unit $i$ at layer $\ell$, for $\ell=1,\dots,L$ and $i=1\dots,N_\ell$. Commonly used activation functions are the logistic, ReLU (aka positive part), and tanh. Now, for $\ell=1,\dots,L$, define the layer transition functions $$ G^{(\ell)} : \mathbb{R}^{N_{\ell-1}} \to \mathbb{R}^{N_\ell} : \left(z^{(\ell-1)}_1,\dots,z^{(\ell-1)}_{N_{\ell-1}} \right) \mapsto \left( z^{(\ell)}_1,\dots,z^{(\ell)}_{N_\ell} \right), $$ in which $$ z^{(\ell)}_i = g^{(\ell)}_i\!\left( \sum_{j=1}^{N_{\ell-1}} w^{(\ell)}_{ij} z^{(\ell-1)}_j + b^{(\ell)}_i\right), $$ for $i=1,\dots,N_{\ell}$. 
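As a minimal sketch of the notation above (plain Python/NumPy rather than any particular framework; the layer sizes, weights, and activations are made-up toy values), a layer transition function and a two-layer composition can be written as:

```python
import numpy as np

def layer(z, W, b, g):
    """One layer transition G^(l): z^(l)_i = g(sum_j W_ij z_j + b_i)."""
    return g(W @ z + b)

relu = lambda u: np.maximum(u, 0.0)      # a common inner activation
identity = lambda u: u                   # linear output for regression

rng = np.random.default_rng(1)
# A toy network: p=3 inputs -> N_1=4 hidden units -> k=2 outputs
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

def G(x):
    # Composition G_theta = G^(2) o G^(1)
    return layer(layer(x, W1, b1, relu), W2, b2, identity)

x = np.array([0.5, -1.0, 2.0])
print(G(x).shape)  # (2,)
```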
Denoting the set of weights and biases of all units in all layers by $\theta$, that is $$ \theta = \left\{ w^{(\ell)}_{ij},b^{(\ell)}_i : \ell=1,\dots,L \,;\, i=1\dots,N_\ell \,;\, j=1,\dots,N_{\ell-1} \right\}, $$ our neural network is the family of functions $G_\theta : \mathbb{R}^p\to\mathbb{R}^k$ obtained by composition of the layer transition functions: $$ G_\theta = G^{(L)} \circ G^{(L-1)} \circ \dots \circ G^{(1)}. $$ There are no probabilities involved in the above description. The purpose of the original neural network business is function fitting. The "deep" in Deep Learning stands for the existence of many inner layers in the neural networks under consideration. Given a training set $\{ (\mathbf{x}_i,\mathbf{y}_i) \in \mathbb{R}^p\times\mathbb{R}^k : i = 1,\dots,n \}$, we try to minimize the objective function $$ \sum_{i=1}^n \lVert \mathbf{y}_i-G_\theta(\mathbf{x}_i) \rVert^2, $$ over $\theta$. For some vector of predictors $\mathbf{x}^*$ in the test set, the predicted response is simply $G_{\hat{\theta}}(\mathbf{x}^*)$, in which $\hat{\theta}$ is the solution found for the minimization problem. The gold standard for this minimization is gradient descent, with the gradients computed by backpropagation, as implemented by the TensorFlow library using the parallelization facilities available in modern GPU's (for your projects, check out the Keras interface). Also, there is now hardware available encapsulating these tasks (TPU's). Since the neural network is in general overparameterized, to avoid overfitting some form of regularization is added to the recipe, for instance summing a ridge-like penalty to the objective function, or using dropout during training. Geoffrey Hinton (aka Deep Learning Godfather) and collaborators invented many of these things. Success stories of Deep Learning are everywhere.
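To illustrate the minimization itself, here is a toy NumPy implementation (my own sketch, not from the answer) of full-batch gradient descent with hand-coded backpropagation for a one-hidden-layer network; the data and hyperparameters are made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy training set (n=200, p=1, k=1); the target is a nonlinear function of x
X = rng.uniform(-2, 2, size=(200, 1))
Y = np.sin(X)
n = X.shape[0]

# One hidden layer (8 tanh units), linear output
W1, b1 = rng.normal(size=(8, 1)) * 0.5, np.zeros((8, 1))
W2, b2 = rng.normal(size=(1, 8)) * 0.5, np.zeros((1, 1))

def loss():
    H = np.tanh(W1 @ X.T + b1)
    return np.mean((Y.T - (W2 @ H + b2)) ** 2)

lr = 0.02
initial = loss()
for _ in range(500):
    # Forward pass
    A = W1 @ X.T + b1          # (8, n) pre-activations
    H = np.tanh(A)             # (8, n) hidden activations
    P = W2 @ H + b2            # (1, n) predictions
    R = P - Y.T                # residuals
    # Backward pass: gradients of the mean squared error
    gW2 = 2 * R @ H.T / n
    gb2 = 2 * R.mean(axis=1, keepdims=True)
    dH = W2.T @ (2 * R / n)
    dA = dH * (1 - H ** 2)     # tanh'(a) = 1 - tanh(a)^2
    gW1 = dA @ X
    gb1 = dA.sum(axis=1, keepdims=True)
    # Gradient descent step
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print(loss() < initial)  # True: the objective decreased
```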
Probabilities were introduced into the picture in the late 80's and early 90's with the proposal of a Gaussian likelihood $$ L_{\mathbf{x},\mathbf{y}}(\theta,\sigma^2)\propto \sigma^{-n} \exp\left(-\frac{1}{2\sigma^2} \sum_{i=1}^n \lVert \mathbf{y}_i-G_\theta(\mathbf{x}_i) \rVert^2\right), $$ and a simple (possibly simplistic) Gaussian prior, supposing a priori independence of all weights and biases in the network: $$ \pi(\theta,\sigma^2) \propto \exp\left( -\frac{1}{2\sigma_0^2} \sum_{\ell=1}^L \sum_{i=1}^{N_\ell} \left( \left(b^{(\ell)}_i\right)^2 + \sum_{j=1}^{N_{\ell-1}} \left(w^{(\ell)}_{ij}\right)^2 \right) \right) \times \pi(\sigma^2).$$ Therefore, the marginal priors for the weights and biases are normal distributions with zero mean and common variance $\sigma_0^2$. This original joint model can be made much more involved, with the trade-off of making inference harder. Bayesian Deep Learning faces the difficult task of sampling from the corresponding posterior distribution. After this is accomplished, predictions are made naturally with the posterior predictive distribution, and the uncertainties involved in these predictions are fully quantified. The holy grail in Bayesian Deep Learning is the construction of an efficient and scalable solution. Many computational methods have been used in this quest: Metropolis-Hastings and Gibbs sampling, Hamiltonian Monte Carlo, and, more recently, Variational Inference. Check out the NIPS conference videos for some success stories: http://bayesiandeeplearning.org/
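Under the Gaussian likelihood and prior above, the unnormalized log posterior is straightforward to evaluate for a given $\theta$. The following NumPy sketch (made-up toy network and data; $\sigma^2$ is treated as known for simplicity, so $\pi(\sigma^2)$ drops out) shows the quantity that MCMC or variational methods would target:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 2))                            # n=50 inputs, p=2
Y = X @ np.array([1.0, -1.0]) + 0.3 * rng.normal(size=50)

def unpack(theta):
    # theta holds W1 (4x2), b1 (4), w2 (4), b2 (scalar), flattened: 17 values
    return (theta[:8].reshape(4, 2), theta[8:12], theta[12:16], theta[16])

def G(theta, X):
    # One hidden tanh layer, linear output (a tiny G_theta)
    W1, b1, w2, b2 = unpack(theta)
    return np.tanh(X @ W1.T + b1) @ w2 + b2

def log_posterior(theta, sigma2=0.3 ** 2, sigma0_2=1.0):
    # Gaussian log-likelihood + independent Gaussian log-prior, up to a constant
    resid = Y - G(theta, X)
    loglik = -0.5 * len(Y) * np.log(sigma2) - 0.5 * np.sum(resid ** 2) / sigma2
    logprior = -0.5 * np.sum(theta ** 2) / sigma0_2
    return loglik + logprior

theta = rng.normal(size=17)
print(np.isfinite(log_posterior(theta)))  # True
```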
19,928
Why do researchers in economics use linear regression for binary response variables?
This blog post on Dave Giles' econometrics blog mostly outlines the disadvantages of the Linear Probability Model (LPM). However, he does include a short list of reasons why researchers choose to use it:

It's computationally simpler.
It's easier to interpret the "marginal effects".
It avoids the risk of mis-specification of the "link function".
There are complications with Logit or Probit if you have endogenous dummy regressors.
The estimated marginal effects from the LPM, Logit and Probit models are usually very similar, especially if you have a large sample size.

I don't know that the LPM is all that commonly used compared with logit or probit, but some of those reasons above are sensible to me.
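The last point -- that LPM and logit marginal effects are usually very similar in large samples -- is easy to check on simulated data. This NumPy sketch (made-up data-generating process; the logit is fitted by a hand-rolled Newton-Raphson rather than a packaged routine) compares the OLS slope with the logit average marginal effect:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50000
x = rng.normal(size=n)
p_true = 1 / (1 + np.exp(-(0.2 + 0.8 * x)))   # true logit model
y = rng.binomial(1, p_true)
X = np.column_stack([np.ones(n), x])

# Linear probability model: plain OLS on the 0/1 outcome
beta_lpm = np.linalg.solve(X.T @ X, X.T @ y)

# Logit fitted by Newton-Raphson (iteratively reweighted least squares)
b = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ b))
    W = p * (1 - p)
    b += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (y - p))

# Average marginal effect of x under the fitted logit model
p = 1 / (1 + np.exp(-X @ b))
ame_logit = np.mean(p * (1 - p)) * b[1]

# The two slope estimates agree closely at this sample size
print(abs(beta_lpm[1] - ame_logit) < 0.02)  # True
```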
19,929
Why do researchers in economics use linear regression for binary response variables?
I had similar questions when reading papers from other fields, and asked a lot of questions related to this, such as this one in the Education Data Mining community: Why use squared loss on probabilities instead of logistic loss? Here I will present a lot of personal opinions. I feel the loss function does not matter too much in many practical use cases. Some researchers may know more about squared loss and build a system on it; it will still work and solve real-world problems. The researchers may never have heard of logistic loss or hinge loss, or never have wanted to try them. Further, they may not be interested in finding the optimal mathematical model, but in solving real problems that no one has attempted to solve before. This is another example: if you check the answers to my question, all of them are sort of similar. What are the impacts of choosing different loss functions in classification to approximate 0-1 loss More thoughts: a machine learning researcher may spend a lot of time on what model to choose and how to optimize it. This is because a machine learning researcher may not have the ability to collect more data / get more measures, and a machine learning researcher's job is developing better math, not solving a specific real-world problem better. On the other hand, in the real world, if the data is better, it beats everything. So choosing a neural network or a random forest may not matter too much. All of these models are similar tools to a person who wants to use machine learning to solve real-world problems. A person not interested in developing math or tools may spend more time on using specific domain knowledge to make the system better, as I mentioned in the comment. And if one is sloppy with the math, he/she will still be able to build something that works.
19,930
ICC as expected correlation between two randomly drawn units that are in the same group
It may be easiest to see the equivalence if you consider a case where there are only two individuals per group. So, let's go through a specific example (I'll use R for this):

dat <- read.table(header=TRUE, text = "
group person y
1 1 5
1 2 6
2 1 3
2 2 2
3 1 7
3 2 9
4 1 2
4 2 2
5 1 3
5 2 5
6 1 6
6 2 9
7 1 4
7 2 2
8 1 8
8 2 7")

So, we have 8 groups with 2 individuals each. Now let's fit the random-effects ANOVA model:

library(nlme)
res <- lme(y ~ 1, random = ~ 1 | group, data=dat, method="ML")

And finally, let's compute the ICC:

getVarCov(res)[1] / (getVarCov(res)[1] + res$sigma^2)

This yields: 0.7500003 (it's 0.75 to be exact, but there is some slight numerical imprecision in the estimation procedure here). Now let's reshape the data from the long format into the wide format:

dat <- as.matrix(reshape(dat, direction="wide", v.names="y", idvar="group", timevar="person"))

It looks like this now:

   group y.1 y.2
1      1   5   6
3      2   3   2
5      3   7   9
7      4   2   2
9      5   3   5
11     6   6   9
13     7   4   2
15     8   8   7

And now compute the correlation between y.1 and y.2:

cor(dat[,2], dat[,3])

This yields: 0.8161138. Wait, what? What's going on here? Shouldn't it be 0.75? Not quite! What I have computed above is not the ICC (intraclass correlation coefficient), but the regular Pearson product-moment correlation coefficient, which is an interclass correlation coefficient. Note that in the long-format data, it is entirely arbitrary who is person 1 and who is person 2 -- the pairs are unordered. You could reshuffle the data within groups and you would get the same results. But in the wide-format data, it is not arbitrary who is listed under y.1 and who is listed under y.2. If you were to switch around some of the individuals, you would get a different correlation (except if you were to switch around all of them -- then this is equivalent to cor(dat[,3], dat[,2]), which of course still gives you 0.8161138).

What Fisher pointed out is a little trick to get the ICC with the wide-format data. Have every pair be included twice, in both orders, and then compute the correlation:

dat <- rbind(dat, dat[,c(1,3,2)])
cor(dat[,2], dat[,3])

This yields: 0.75. So, as you can see, the ICC is really a correlation coefficient -- for the "unpaired" data of two individuals from the same group. If there were more than two individuals per group, you can still think of the ICC in that way, except that there would be more ways of creating pairs of individuals within groups. The ICC is then the correlation between all possible pairings (again in an unordered way).
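Fisher's pair-doubling trick is easy to reproduce outside R as well. Here is a NumPy translation (the pair values are copied from the example above):

```python
import numpy as np

# The same 8 groups of 2 individuals from the example above
pairs = np.array([[5, 6], [3, 2], [7, 9], [2, 2],
                  [3, 5], [6, 9], [4, 2], [8, 7]], dtype=float)

# Fisher's trick: include every pair twice, once in each order
doubled = np.vstack([pairs, pairs[:, ::-1]])
icc = np.corrcoef(doubled[:, 0], doubled[:, 1])[0, 1]
print(icc)  # 0.75
```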
19,931
ICC as expected correlation between two randomly drawn units that are in the same group
@Wolfgang already gave a great answer. I want to expand on it a little to show that you can also arrive at the estimated ICC of 0.75 in his example dataset by literally implementing the intuitive algorithm of randomly selecting many pairs of $y$ values -- where the members of each pair come from the same group -- and then simply computing their correlation. And then this same procedure can easily be applied to datasets with groups of any size, as I'll also show. First we load @Wolfgang's dataset (not shown here). Now let's define a simple R function that takes a data.frame and returns a single randomly selected pair of observations from the same group:

get_random_pair <- function(df){
  # select a random row
  i <- sample(nrow(df), 1)
  # select a random other row from the same group
  # (the call to rep() here is admittedly odd, but it's to avoid unwanted
  # behavior when the first argument to sample() has length 1)
  j <- sample(rep(setdiff(which(df$group==df[i,"group"]), i), 2), 1)
  # return the pair of y-values
  c(df[i,"y"], df[j,"y"])
}

Here's an example of what we get if we call this function 10 times on @Wolfgang's dataset:

test <- replicate(10, get_random_pair(dat))
t(test)
#       [,1] [,2]
#  [1,]    9    6
#  [2,]    2    2
#  [3,]    2    4
#  [4,]    3    5
#  [5,]    3    2
#  [6,]    2    4
#  [7,]    7    9
#  [8,]    5    3
#  [9,]    5    3
# [10,]    3    2

Now to estimate the ICC, we just call this function a large number of times and then compute the correlation between the two columns.

random_pairs <- replicate(100000, get_random_pair(dat))
cor(t(random_pairs))
#           [,1]      [,2]
# [1,] 1.0000000 0.7493072
# [2,] 0.7493072 1.0000000

This same procedure can be applied, with no modifications at all, to datasets with groups of any size. For example, let's create a dataset consisting of 100 groups of 100 observations each, with the true ICC set to 0.75 as in @Wolfgang's example.
set.seed(12345)
group_effects <- scale(rnorm(100))*sqrt(4.5)
errors <- scale(rnorm(100*100))*sqrt(1.5)
dat <- data.frame(group = rep(1:100, each=100),
                  person = rep(1:100, times=100),
                  y = rep(group_effects, each=100) + errors)

stripchart(y ~ group, data=dat, pch=20, col=rgb(0,0,0,.1), ylab="group")

Estimating the ICC based on the variance components from a mixed model, we get:

library("lme4")
mod <- lmer(y ~ 1 + (1|group), data=dat, REML=FALSE)
summary(mod)
# Random effects:
#  Groups   Name        Variance Std.Dev.
#  group    (Intercept) 4.502    2.122
#  Residual             1.497    1.223
# Number of obs: 10000, groups: group, 100

4.502/(4.502 + 1.497)
# 0.7504584

And if we apply the random pairing procedure, we get

random_pairs <- replicate(100000, get_random_pair(dat))
cor(t(random_pairs))
#           [,1]      [,2]
# [1,] 1.0000000 0.7503004
# [2,] 0.7503004 1.0000000

which closely agrees with the variance component estimate. Note that while the random pairing procedure is kind of intuitive, and didactically useful, the method illustrated by @Wolfgang is actually a lot smarter. For a dataset like this one of size 100*100, the number of unique within-group pairings (not including self-pairings) is 495,000 -- a big but not astronomical number -- so it is totally possible for us to compute the correlation of the fully exhausted set of all possible pairings, rather than needing to sample randomly from the dataset.

Here's a function to retrieve all possible pairings for the general case with groups of any size:

get_all_pairs <- function(df){
  # do this for every group and combine the results into a matrix
  do.call(rbind, by(df, df$group, function(group_df){
    # get all possible pairs of indices
    i <- expand.grid(seq(nrow(group_df)), seq(nrow(group_df)))
    # remove self-pairings
    i <- i[i[,1] != i[,2],]
    # return a 2-column matrix of the corresponding y-values
    cbind(group_df[i[,1], "y"], group_df[i[,2], "y"])
  }))
}

Now if we apply this function to the 100*100 dataset and compute the correlation, we get:

cor(get_all_pairs(dat))
#           [,1]      [,2]
# [1,] 1.0000000 0.7504817
# [2,] 0.7504817 1.0000000

Which agrees well with the other two estimates, and compared to the random pairing procedure, is much faster to compute, and should also be a more efficient estimate in the sense of having less variance.
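For completeness, here is a Python/NumPy analogue of the exhaustive all-pairs computation (my own translation, not code from the answer). For groups of size two the set of all ordered within-group pairings reduces to the pair-doubling trick, so the 8-group example again gives 0.75:

```python
import numpy as np
from itertools import permutations

def all_pairs_icc(groups):
    """ICC as the correlation over all ordered within-group pairs (i != j)."""
    left, right = [], []
    for g in groups:
        for a, b in permutations(g, 2):  # every ordered pair within the group
            left.append(a)
            right.append(b)
    return np.corrcoef(left, right)[0, 1]

# The 8 groups of 2 from @Wolfgang's example
groups = [[5, 6], [3, 2], [7, 9], [2, 2], [3, 5], [6, 9], [4, 2], [8, 7]]
print(all_pairs_icc(groups))  # 0.75
```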
19,932
Why standardization of the testing set has to be performed with the mean and sd of the training set?
When you center and scale a variable in the training data using the mean and sd of that variable calculated on the training data, you are essentially creating a brand-new variable. Then you are doing, say, a regression on that brand new variable. To use that new variable to predict for the validation and/or test datasets, you have to create the same variable in those data sets. Subtracting a different number and dividing by a different number does not create the same variable.
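To make this concrete, here is a small sketch in Python (purely illustrative -- the data and variable names are made up, not from the answer above) of learning the transformation on the training data and reusing it to create the same variable in the test data:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = rng.normal(loc=10.0, scale=3.0, size=1000)
x_test = rng.normal(loc=10.0, scale=3.0, size=200)

# learn the transformation from the training data only
mu, sd = x_train.mean(), x_train.std()

# apply the SAME transformation to both sets, so the new variable
# has one single definition: z = (x - mu) / sd
z_train = (x_train - mu) / sd
z_test = (x_test - mu) / sd

# standardizing the test set by its OWN statistics would subtract a
# different number and divide by a different number, i.e. define a
# different variable than the one the model was fit on
z_test_other = (x_test - x_test.mean()) / x_test.std()
```

A model fit on `z_train` can only be applied to `z_test`, not to `z_test_other`.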
19,933
Why standardization of the testing set has to be performed with the mean and sd of the training set?
Let me explain a different way. Suppose you had distance measured in m. So X = distance in meters. But that is cumbersome, because some of the values of X are 50,000. So you create a new variable, X1 = distance in km. You obtain the values of X1 by dividing X by 1000. Now you build a model based on X1. You must also create X1 in your test data by dividing X by 1000. If you divide by 1001, or by 5,000, you aren't creating X1, you're creating X2, which has a completely different definition. Any model built based on X1 in the training data will not work if you use X2 in the test data instead of X1. Now, centering and scaling is creating a new variable. You do not have to use the mean and sd of the training data. You could use the mean and sd of the whole dataset before splitting off into training vs. test. You could use the mean and sd of the test data. You could use a number kind of close to the mean and kind of close to the sd. You don't have to perfectly center and scale to create more stability in the design matrix. Bottom line is, if you create X1 = (X-5)/2.865, and then build a model with X1 as a predictor using training data, then if you create X2 = (X-5.375)/2 in your test data, and then act like it's the same new variable as X1, your model will not perform as well as it should, and it is an inappropriate use of the model.
19,934
Why standardization of the testing set has to be performed with the mean and sd of the training set?
Why should we use the mean and std of the training dataset to standardize the test dataset? It is possible that the mean and std of the test dataset are such that, after standardizing it with these values, some test data points end up with the same values as some (different) train data points in the standardized train dataset (standardized by its own mean and std). See here for an example that demonstrates this. Conversely, the test dataset could contain data points that are also contained in the train dataset; if we standardize the ones in the test dataset by the mean and std of the test dataset, and the ones in the train dataset by the mean and std of the train dataset, they will end up with different values (assuming that the mean and std differ between the train and test datasets). Although these two situations are somewhat hypothetical, they demonstrate that performing different transformations on two different data points can yield the same data point, whereas two identical data points (one from the train dataset and one from the test dataset) can end up different after the transformations.
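Here is a tiny numeric illustration of both situations (my own made-up numbers): the raw value 4 appears in both datasets but receives two different z-scores, while the different raw values 6 (train) and 10 (test) receive the same z-score.

```python
import statistics

train = [0.0, 2.0, 4.0, 6.0]   # mean 3, population sd sqrt(5)
test = [4.0, 6.0, 8.0, 10.0]   # mean 7, same spread

def standardize(x, data):
    # standardize x by the mean and (population) sd of `data`
    mu = statistics.fmean(data)
    sd = statistics.pstdev(data)
    return (x - mu) / sd

# same raw observation, two different standardized values
z4_train = standardize(4.0, train)   # (4 - 3)/sqrt(5) ~ 0.447
z4_test = standardize(4.0, test)     # (4 - 7)/sqrt(5) ~ -1.342

# two different raw observations, the same standardized value
z6_train = standardize(6.0, train)    # (6 - 3)/sqrt(5) ~ 1.342
z10_test = standardize(10.0, test)    # (10 - 7)/sqrt(5) ~ 1.342
```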
19,935
What is the distribution for the maximum (minimum) of two independent normal random variables?
The max of two non-identical Normals can be expressed as an Azzalini skew-Normal distribution. See, for instance, a 2007 working paper/presentation by Balakrishnan:

N. Balakrishnan, "A Skewed Look at Bivariate and Multivariate Order Statistics", working paper / presentation (2007).

A recent paper (Nadarajah and Kotz -- viewable here) gives some properties of $\max(X,Y)$:

Nadarajah, S. and Kotz, S. (2008), "Exact Distribution of the Max/Min of Two Gaussian Random Variables", IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 16(2), February 2008.

For earlier work, see:

A. P. Basu and J. K. Ghosh, "Identifiability of the multinormal and other distributions under competing risks model," J. Multivariate Anal., vol. 8, pp. 413-429, 1978.
H. N. Nagaraja and N. R. Mohan, "On the independence of system life distribution and cause of failure," Scandinavian Actuarial J., pp. 188-198, 1982.
Y. L. Tong, The Multivariate Normal Distribution. New York: Springer-Verlag, 1990.

One can also use a computer algebra system to automate the calculation. For example, given $X \sim N(\mu_1, \sigma_1^2)$ with pdf $f(x)$, and $Y \sim N(\mu_2, \sigma_2^2)$ with pdf $g(y)$, the pdf of $Z = \max(X,Y)$ is
$$f_Z(z) = f(z)\,\frac{1}{2}\left[1+\text{Erf}\left(\frac{z-\mu_2}{\sigma_2\sqrt{2}}\right)\right] + g(z)\,\frac{1}{2}\left[1+\text{Erf}\left(\frac{z-\mu_1}{\sigma_1\sqrt{2}}\right)\right],$$
where I am using the Maximum function from the mathStatica package of Mathematica, and Erf denotes the error function.
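The density of the maximum of two independent variables can be written $f_Z(z) = f(z)\,G(z) + g(z)\,F(z)$, where $F$ and $G$ are the cdfs of $X$ and $Y$. Here is a short Python sketch (my own, purely illustrative) that evaluates this density for two normals, with the normal cdf written through the error function, and checks numerically that it integrates to 1:

```python
import math

def norm_pdf(z, mu, s):
    return math.exp(-0.5 * ((z - mu) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

def norm_cdf(z, mu, s):
    # Phi((z - mu)/s) expressed through the error function
    return 0.5 * (1.0 + math.erf((z - mu) / (s * math.sqrt(2.0))))

def pdf_max(z, mu1, s1, mu2, s2):
    # density of Z = max(X, Y) for independent X, Y:
    # f_Z(z) = f(z) G(z) + g(z) F(z)
    return (norm_pdf(z, mu1, s1) * norm_cdf(z, mu2, s2)
            + norm_pdf(z, mu2, s2) * norm_cdf(z, mu1, s1))

# crude Riemann sum over a wide grid: the density should integrate to ~1
step = 0.01
total = sum(pdf_max(-10.0 + step * k, 0.0, 1.0, 1.0, 2.0) * step
            for k in range(3000))
```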
19,936
What is the distribution for the maximum (minimum) of two independent normal random variables?
I'm surprised that in the previous answers the most interesting property is not mentioned: the cumulative distribution function of the maximum is the product of the respective cumulative distribution functions. This is because $\max(X,Y) \le z$ exactly when both $X \le z$ and $Y \le z$, so for independent $X$ and $Y$,
$$F_{\max(X,Y)}(z) = P(X \le z)\,P(Y \le z) = F_X(z)\,F_Y(z).$$
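A quick Monte Carlo illustration of this property (my own sketch; the parameter values are arbitrary): the empirical cdf of $\max(X,Y)$ at a point matches the product of the two normal cdfs at that point.

```python
import math
import random

def norm_cdf(z, mu, s):
    # normal cdf via the error function
    return 0.5 * (1.0 + math.erf((z - mu) / (s * math.sqrt(2.0))))

mu1, s1 = 0.0, 1.0
mu2, s2 = 2.0, 0.5

random.seed(42)
draws = [max(random.gauss(mu1, s1), random.gauss(mu2, s2))
         for _ in range(100_000)]

z = 2.0
# empirical P(max(X, Y) <= z) vs the product of the marginal cdfs
empirical = sum(d <= z for d in draws) / len(draws)
product = norm_cdf(z, mu1, s1) * norm_cdf(z, mu2, s2)
# the two agree up to Monte Carlo error
```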
19,937
How exactly do Bayesians define (or interpret?) probability?
I believe that most 'frequentists' and 'Bayesians' would rigorously define probability in the same way: via Kolmogorov's axioms and measure theory, modulo some issues about finite vs countable additivity, depending on who you're talking to. So in terms of 'symbols' I reckon you'll likely find more or less the same definition across the board. Everyone agrees on how probabilities behave. I would say the primary difference is in the interpretation of what probabilities are. My (tongue-in-cheek militant Bayesian) preferred interpretation is that probabilities are coherent representations of information about events. 'Coherent' here has a technical meaning: it means that if I represent my information about the world in terms of probabilities and then use those probabilities to size my bets on the occurrence or nonoccurrence of any given event, I am assured that I can not be made a sure loser by agents betting against me. Note that this involves no notion of 'long-run relative frequency'; indeed, I can coherently represent my information about a one-off event - like the sun exploding tomorrow - via the language of probability. On the other hand, it seems more difficult (or arguably less natural) to talk about the event "the sun will explode tomorrow" in terms of long-run relative frequency. For a deep dive on this question I'd refer you to the first chapter of Jay Kadane's excellent (and free) Principles of Uncertainty. UPDATE: I wrote a relatively informal blog post that illustrates coherence.
19,938
How exactly do Bayesians define (or interpret?) probability?
As already noted by others, there is no specific Bayesian definition of probability. There is only one way of defining probability, i.e. it's a real number assigned to some event by a probability measure that follows the axioms of probability. If there were different definitions of probability, we wouldn't be able to use it consistently, since different people would understand different things behind it. While there is only one way we define it, there are multiple ways to interpret probability. Probability is a mathematical concept, not related in any way to the real world (quoting de Finetti, "probability does not exist"). To apply it to the real world we need to translate, or interpret, the mathematics into real-world happenings. There are multiple different ways to interpret probability, even different interpretations among Bayesians (check Interpretations of Probability in the Stanford Encyclopedia of Philosophy for a review). The one that is most commonly associated with Bayesian statistics is the subjectivist view, also known as personalistic probability. In the subjectivist view, probability is a degree of belief, or degree of confirmation. It measures how much someone considers something believable. It can be analyzed, or observed, most clearly in terms of betting behavior (de Finetti, 1937; see also Savage, 1972; Kemeny, 1955): Let us suppose that an individual is obliged to evaluate the rate $p$ at which he would be ready to exchange the possession of an arbitrary sum $S$ (positive or negative) dependent on the occurrence of a given event $E$, for the possession of the sum $pS$; we will say by definition that this number $p$ is the measure of the degree of probability attributed by the individual considered to the event $E$, or, more simply, that $p$ is the probability of $E$ (according to the individual considered; this specification can be implicit if there is no ambiguity).
Betting is one of the situations where one needs to quantify how "likely" he believes something to be, and the measure of such belief is clearly a probability: translating such belief into numbers leads to a measure of belief, i.e. probability. Bruno de Finetti, one of the major figures among subjectivists, notices that the subjectivist view is coherent with the axioms of probability and needs to follow them: If we acknowledge only, first, that one uncertain event can only appear to us (a) equally probable, (b) more probable, or (c) less probable than another; second, that an uncertain event always seems to us more probable than an impossible event and less probable than a necessary event; and finally, third, that when we judge an event $E'$ more probable than an event $E$, which is itself more probable than an event $E''$, then the event $E'$ can only appear more probable than $E''$ (transitive property), it will suffice to add to these three evidently trivial axioms a fourth, itself of a purely qualitative nature, in order to construct rigorously the whole theory of probability. The fourth axiom tells us that inequalities are preserved in logical sums: if $E$ is incompatible with $E_1$ and with $E_2$, then $E_1 \lor E$ will be more or less probable than $E_2 \lor E$, or they will be equally probable, according to whether $E_1$ is more or less probable than $E_2$, or they are equally probable. More generally, it may be deduced from this that two inequalities, such as $$ E_1 \text{ is more probable than } E_2,\\ E_1' \text{ is more probable than } E_2',$$ can be added to give $$ E_1 \lor E_1' \text{ is more probable than } E_2 \lor E_2' $$ provided that the events added are incompatible with each other ($E_1$ with $E_1'$, $E_2$ with $E_2'$). Similar points are made by multiple other authors, like Kemeny (1955) and Savage (1972), who, like de Finetti, draw connections between the axioms and the subjectivist view of probability.
They also show that such a measure of belief needs to be consistent with the axioms of probability (so if it looks like a probability and quacks like a probability...). Moreover, Cox (1946) shows that probability can be thought of as an extension of formal logic that goes beyond binary true and false, allowing for uncertainties. As you can see, this has nothing to do with frequencies. Of course, if you observe that nicotine smokers die of cancer more often than non-smokers, rationally you would assume such a death to be more believable for a smoker, so the frequency interpretation does not contradict the subjectivist view. What makes this interpretation appealing is that it can also be applied to cases that have nothing to do with frequencies (e.g. the probability that Donald Trump wins the 2016 US presidential election, the probability that there are other intelligent lifeforms somewhere in space besides us, etc.). When adopting the subjectivist view you can consider such cases in a probabilistic manner and build statistical models of such scenarios (see the example of election forecasting by FiveThirtyEight, which is consistent with thinking about probability as measuring degree of belief based on the available evidence). This makes the interpretation very broad (some say, overly broad), so we can flexibly adapt probabilistic thinking to different problems. Yes, it is subjective, but de Finetti (1931) notices that since the frequentist definition is based on multiple unrealistic assumptions, it is not thereby a more "rational" interpretation.

de Finetti, B. (1937/1980). La Prévision: Ses Lois Logiques, Ses Sources Subjectives. [Foresight. Its Logical Laws, Its Subjective Sources.] Annales de l'Institut Henri Poincaré, 7, 1-68.
Kemeny, J. (1955). Fair Bets and Inductive Probabilities. Journal of Symbolic Logic, 20, 263-273.
Savage, L.J. (1972). The Foundations of Statistics. Dover.
Cox, R.T. (1946). Probability, Frequency and Reasonable Expectation. American Journal of Physics, 14(1), 1-13.
de Finetti, B. (1931/1989). Probabilism: A Critical Essay on the Theory of Probability and on the Value of Science. Erkenntnis, 31, 169-223.
How exactly do Bayesians define (or interpret?) probability?
As already noted by others, there is no specific Bayesian definition of probability. There is only one way of defining probability, i.e. it's a real number assigned to some event by a probability meas
How exactly do Bayesians define (or interpret?) probability? As already noted by others, there is no specific Bayesian definition of probability. There is only one way of defining probability, i.e. it's a real number assigned to some event by a probability measure, that follows the axioms of probability. If there were different definitions of probability, we wouldn't be able use it consistently, since different people would understand different things behind it. While there is only one way we define it, there are multiple ways to interpret the probability. Probability is a mathematical concept, not related anyhow to real world (quoting de Finetti, "probability does not exist"). To apply it to real world we need to translate, or interpret, the mathematics into real world happenings. There are multiple different ways to interpret the probability, even different interpretations among Bayesians (check Interpretations of Probability in Stanford Encyclopedia of Philosophy for a review). The one that is most commonly associated with Bayesian statistics is subjectivist view, also known as personalistic probability. In subjectivist view, probability is a degree of belief, or degree of confirmation. It measures how much someone considers something believable. It can be analyzed, or observed, most clearly in terms of betting behavior (de Finetti, 1937; see also Savage, 1976; Kemeny, 1955): Let us suppose that an individual is obliged to evaluate the rate $p$ at which he would be ready to exchange the possession of an arbitrary sum $S$ (positive or negative) dependent on the occurrence of a given event $E$, for the possession of the sum $pS$; we will say by definition that this number $p$ is the measure of the degree of probability attributed by the individual considered to the event $E$, or, more simply, that $p$ is the probability of $E$ (according to the individual considered; this specification can be implicit if there is no ambiguity). 
Betting is one of the situations where one needs to quantify how "likely" he believes something to be and the measure of such belief is clearly a probability. Translating such belief to numbers, least to measure of belief, i.e. probability. Bruno de Finetti, one of the major figures among subjectivists, notices that the subjectivist view is coherent with axioms of probability and it needs to follow them: If we acknowledge only, first that one uncertain event can only appear to us (a) equally probable, (b) more probable, or (c) less probable then another; second that an uncertain event always seems to us more probable then an impossible event and less probable then a necessary event; and finally, third that when we judge an event $E'$ more probable then event $E$, which is itself more probable then an event $E''$, then event $E'$ can only appear more probable then $E''$ (transitive property), it will suffice to add to there three evidently trivial axioms a fourth, itself of purely qualitative nature, in order to construct rigorously the whole theory of probability. The fourth axiom tells us that inequalities are preserved in logical sums: if $E$ is incompatible with $E_1$ and with $E_2$, then $E_1 \lor E$ will be more or less probable then $E_2 \lor E$, or they will be equally probable, according to wherever $E_1$ is more or less probable then $E_2$, or they are equally probable. More generally, it may be deduced from this that two inequalities, such as $$ E_1 \text{ is more probable then } E_2,\\ E_1' \text{ is more probable then } E_2',$$ can be added to give $$ E_1 \lor E_1' \text{ is more probable then } E_2 \lor E_2' $$ provided that the events added are incompatible with each other ($E_1$ with $E_1'$, $E_2$ with $E_2'$). Similar points are made by multiple different authors, like Kemeny (1955), or Savage (1972), who like de Finetti draw connections between the axioms and subjectivist view of probability. 
They also show that such measure of belief needs to be consistent with the axioms of probability (so if it looks like a probability and quacks like a probability...). Moreover, Cox (1946) shows that probability can be thought as an extension of formal logic that goes beyond binary true and false, allowing for uncertainties. As you can see, this has nothing to do with frequencies. Of course, if you observe that nicotine smokers die of cancer more often then non-smokers, rationally you would assume such death to be more believable for a smoker, so frequency interpretation does not contradict the subjectivist view. What makes such interpretation appealing is that it can be applied also to cases that have nothing to do with frequencies (e.g. the probability that Donald Trump wins the 2016 US presidential election, the probability that there are other intelligent lifeforms somewhere in the space besides us etc). When adopting subjectivist view you can consider such cases in probabilistic manner and build statistical models of such scenarios (see example of election forecasting by FiveThirtyEight, that is consistent with thinking about probability as measuring degree of belief based on the available evidence). This makes such interpretation very broad (some say, overly broad), so we can flexibly adapt probabilistic thinking to different problems. Yes, it is subjective, but de Finetti (1931) notices that as frequentist definition is based on multiple unrealistic assumptions, it does not make it more "rational" interpretation. de Finetti, B. (1937/1980). La Prévision: Ses Lois Logiques, Ses Sources Subjectives. [Foresight. Its Logical Laws, Its Subjective Sources.] Annales de l'Institut Henri Poincaré, 7, 1-68. Kemeny, J. (1955). Fair Bets and Inductive Probabilities. Journal of Symbolic Logic, 20, 263-273. Savage, L.J. (1972). The foundations of statistics. Dover. Cox, R.T. (1946). Probability, frequency and reasonable expectation. 
American Journal of Physics, 14(1), 1-13. de Finetti, B. (1931/1989). 'Probabilism: A critical essay on the theory of probability and on the value of science'. Erkenntnis, 31, 169-223.
19,939
How exactly do Bayesians define (or interpret?) probability?
I'll try to be incredibly clear with my terminology. As you did, we'll focus on one coin, $X \sim Bernoulli(p)$, so $Pr(X=1) = p$. Bayesians and frequentists both view $X$ as a random variable and they share the same views about the probability distribution $Pr(X)$. However, Bayesians also use probability distributions to model their uncertainty about a fixed parameter, in this case $p$. If we now let $x_1, x_2, \dots \sim Bernoulli(p)$ and define $h_n = \sum_{i=1}^n x_i$, as you pointed out $$ \lim_{n\rightarrow \infty} \frac{h_n}{n}= p. $$ This is relevant because $h_n/n$ is the MLE for $p$. However, notice that for any positive numbers $a,b$ (in fact they don't even need to be positive): $$ \lim_{n\rightarrow \infty} \frac{h_n+a}{n+a+b}= p. $$ One drawback of the estimator $h_n/n$ is that for small $n$ it might be crazy. The most extreme example of this is when $n = 1$: our estimate of $p$ will be $0$ or $1$. What if we set $a=b=5$ and use the second estimator? If we get a $1$ on the first flip our updated estimate is $6/11$, greater than $50\%$ but not as extreme as $1$. This more restrained estimate can be easily derived by expressing our uncertainty about $p$ in the form of a prior (and eventually posterior) distribution. If you would like to look up this example in depth, it is known as the Beta-Binomial. It involves putting a Beta prior on the parameter of a Binomial distribution, and taking the expectation of the resulting posterior.
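The contrast between the two estimators can be sketched in a few lines (a sketch; the Beta(5, 5) prior and the single observed head follow the example above, and the function names are mine):

```python
import numpy as np

def mle(heads, n):
    # raw maximum-likelihood estimate h_n / n
    return heads / n

def posterior_mean(heads, n, a, b):
    # posterior mean under a Beta(a, b) prior on p: (h_n + a) / (n + a + b)
    return (heads + a) / (n + a + b)

# one flip, one head: the MLE jumps to 1, the posterior mean stays moderate
print(mle(1, 1))                   # 1.0
print(posterior_mean(1, 1, 5, 5))  # 6/11, about 0.545

# with lots of data both estimators agree (both converge to p)
rng = np.random.default_rng(0)
p = 0.7
flips = rng.binomial(1, p, size=100_000)
h, n = flips.sum(), flips.size
print(abs(mle(h, n) - posterior_mean(h, n, 5, 5)))  # negligible
```

The prior acts like 10 imaginary flips (5 heads, 5 tails) whose influence washes out as the real data accumulates.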
19,940
Applying PCA to test data for classification purposes
PCA is a dimension reduction tool, not a classifier. In Scikit-Learn, all classifiers and estimators have a predict method which PCA does not. You need to fit a classifier on the PCA-transformed data. Scikit-Learn has many classifiers. Here is an example of using a decision tree on PCA-transformed data. I chose the decision tree classifier as it works well for data with more than two classes, which is the case with the iris dataset.

from sklearn.decomposition import PCA
from sklearn.tree import DecisionTreeClassifier
from sklearn.datasets import load_iris

# load data
iris = load_iris()

# initiate PCA and classifier
pca = PCA()
classifier = DecisionTreeClassifier()

# transform / fit
X_transformed = pca.fit_transform(iris.data)
classifier.fit(X_transformed, iris.target)

# predict "new" data
# (I'm faking it here by using the original data)
newdata = iris.data

# transform new data using the already fitted pca
# (don't re-fit the pca)
newdata_transformed = pca.transform(newdata)

# predict labels using the trained classifier
pred_labels = classifier.predict(newdata_transformed)

Scikit-Learn has a convenient tool called Pipeline which lets you chain together transformers and a final classifier:

# you can make this a lot easier using Pipeline
from sklearn.pipeline import Pipeline

# fits PCA, transforms data and fits the decision tree classifier
# on the transformed data
pipe = Pipeline([('pca', PCA()), ('tree', DecisionTreeClassifier())])
pipe.fit(iris.data, iris.target)
pipe.predict(newdata)

This is especially useful when doing cross-validation, as it prevents you from accidentally re-fitting ANY step of the pipeline on your testing dataset:

# note: the old sklearn.cross_validation module is long deprecated
from sklearn.model_selection import cross_val_score
print(cross_val_score(pipe, iris.data, iris.target))
# e.g. [ 0.96078431  0.90196078  1. ]

By the way, you may not even need to use PCA to get good classification results. The iris dataset doesn't have many dimensions and decision trees will already perform well on the untransformed data.
19,941
Applying PCA to test data for classification purposes
If you want to apply PCA to new data, you must have fit a model first on some training dataset. What is the model, you ask? It is the mean vector you subtracted from the dataset, the variances you used to "whiten" each data vector, and the learned mapping matrix. So in order to map a new dataset into the same space as the training data, you first subtract the mean, whiten it, and map it with the mapping matrix.
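A minimal numpy sketch of this "store the model, then re-apply it" idea (variable names are mine; whitening here simply divides by the per-feature standard deviation):

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))

# --- "fit" on the training data: store mean, scale and mapping matrix ---
mu = X_train.mean(axis=0)
sigma = X_train.std(axis=0)
Z = (X_train - mu) / sigma
# right singular vectors of the centered/whitened data are the principal axes
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
W = Vt.T  # learned mapping matrix

# --- "transform" new data re-using the stored model (never refit!) ---
X_new = rng.normal(size=(10, 3))
X_new_mapped = ((X_new - mu) / sigma) @ W
```

This is exactly what `pca.transform` does internally: it re-uses the statistics learned by `fit` instead of recomputing them from the new data.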
19,942
What is the difference between the Mann-Whitney and Wilcoxon rank-sum test? [duplicate]
First of all, it might be useful to remember that the Mann-Whitney test is also called the Wilcoxon rank-sum test. Since it is the same test, there is no need to explain the difference ;) A good answer to the common question about the difference between the W statistic and the U statistic is given here: Is the W statistic output by wilcox.test() in R the same as the U statistic? The Mann-Whitney/Wilcoxon rank-sum test (hereafter MWW test) is available in R through the function wilcox.test (with paired=FALSE), which uses the [dprq]wilcox functions. However, people sometimes mistake the MWW test for the Wilcoxon signed-rank test. The difference comes from the assumptions. In the MWW test you are interested in the difference between two independent populations (null hypothesis: they are the same; alternative: there is a difference), while in the Wilcoxon signed-rank test you are interested in testing the same hypothesis but with paired/matched samples. For example, the Wilcoxon signed-rank test would be used if you had replicate (repeated) measurements between different time points/plates/... since it is the same sample but measured at different times/on different plates. The Wilcoxon signed-rank test is available in R through the wilcox.test function (with paired=TRUE), which uses the [dprq]signrank functions. Another implementation of the MWW/Wilcoxon signed-rank tests can be found in the coin package through the wilcox_test function.
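The same distinction is easy to see in Python's scipy (a sketch with simulated data; R's wilcox.test(..., paired=FALSE) corresponds to mannwhitneyu, and wilcox.test(..., paired=TRUE) to wilcoxon):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# two independent samples -> Mann-Whitney / Wilcoxon rank-sum test
x = rng.normal(0.0, 1.0, size=30)
y = rng.normal(0.5, 1.0, size=30)
u_stat, p_indep = stats.mannwhitneyu(x, y, alternative="two-sided")

# paired measurements (same subjects before/after) -> Wilcoxon signed-rank test
before = rng.normal(0.0, 1.0, size=30)
after = before + rng.normal(0.3, 0.5, size=30)
w_stat, p_paired = stats.wilcoxon(before, after)
```

Using the rank-sum test on paired data throws away the pairing and usually loses power, which is why the distinction matters in practice.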
19,943
Subjectivity in Frequentist Statistics
I often hear the claim that Bayesian statistics can be highly subjective.

So do I. But notice that there's a major ambiguity in calling something subjective.

Subjectivity (both senses)

Subjective can mean (at least) one of:

- depends on the idiosyncrasies of the researcher
- explicitly concerned with the state of knowledge of an individual

Bayesianism is subjective in the second sense because it always offers a way to update beliefs, represented by probability distributions, by conditioning on information. (Note that whether those beliefs are beliefs that some subject actually has, or just beliefs that a subject could have, is irrelevant to deciding whether they are 'subjective'.)

The main argument being that inference depends on the choice of a prior

Actually, if a prior represents your personal belief about something then you almost certainly didn't choose it any more than you chose most of your beliefs. And if it represents somebody's beliefs then it can be a more or less accurate representation of those beliefs, so ironically there will be a rather 'objective' fact about how well it represents them.

(even though one could use the principle of indifference or maximum entropy to choose a prior).

One could, though this doesn't tend to generalize very smoothly to continuous domains. Also, it's arguably impossible to be flat or 'indifferent' in all parameterisations at once (though I've never been quite sure why you'd want to be).

In comparison, the claim goes, frequentist statistics is in general more objective. How much truth is there in this statement?

So how might we evaluate this claim? I suggest that in the second sense of subjective it's mostly correct, and in the first sense of subjective it's probably false.

Frequentism as subjective (second sense)

Some historical detail is helpful to map the issues. For Neyman and Pearson there is only inductive behaviour, not inductive inference, and all statistical evaluation works with long-run sampling properties of estimators. (Hence alpha and power analysis, but not p-values.) That's pretty unsubjective in both senses. Indeed it's possible, and I think quite reasonable, to argue along these lines that Frequentism is actually not an inference framework at all but rather a collection of evaluation criteria for all possible inference procedures that emphasises their behaviour in repeated application. Simple examples would be consistency, unbiasedness, etc. This makes it obviously unsubjective in sense 2. However, it also risks being subjective in sense 1 when we have to decide what to do when those criteria do not apply (e.g. when there isn't an unbiased estimator to be had) or when they apply but contradict each other.

Fisher offered a less unsubjective Frequentism that is interesting. For Fisher, there is such a thing as inductive inference, in the sense that a subject, the scientist, makes inferences on the basis of a data analysis done by the statistician. (Hence p-values, but not alpha and power analysis.) However, the decisions about how to behave, whether to carry on with research, etc. are made by the scientist on the basis of her understanding of domain theory, not by the statistician applying the inference paradigm. Because of this Fisherian division of labour, both the subjectiveness (sense 2) and the individual subject (sense 1) sit on the science side, not the statistical side. Legalistically speaking, Fisher's Frequentism is subjective; it's just that the subject who is subjective is not the statistician.

There are various syntheses of these available, both the barely coherent mix of the two you find in applied statistics textbooks and more nuanced versions, e.g. the 'Error Statistics' pushed by Deborah Mayo. This latter is pretty unsubjective in sense 2, but highly subjective in sense 1, because the researcher has to use scientific judgement - Fisher style - to figure out which error probabilities matter and should be tested.

Frequentism as subjective (first sense)

So is Frequentism less subjective in the first sense? It depends. Any inference procedure can be riddled with idiosyncrasies as actually applied. So perhaps it's more useful to ask whether Frequentism encourages a less subjective (first sense) approach. I doubt it - I think the self-conscious application of subjective (second sense) methods leads to less subjective (first sense) outcomes, but it can be argued either way.

Assume for a moment that subjectiveness (first sense) sneaks into an analysis via 'choices'. Bayesianism does seem to involve more 'choices'. In the simplest case the choices tally up as: one set of potentially idiosyncratic assumptions for the Frequentist (the likelihood function or equivalent) and two sets for the Bayesian (the likelihood and a prior over the unknowns). However, Bayesians know they're being subjective (in the second sense) about all these choices, so they are liable to be more self-conscious about the implications, which should lead to less subjectiveness (in the first sense). In contrast, if one looks up a test in a big book of tests, then one could get the feeling that the result is less subjective (first sense), but arguably that's a result of substituting some other subject's understanding of the problem for one's own. It's not clear that one has gotten less subjective this way, but it might feel that way. I think most would agree that that's unhelpful.
19,944
Subjectivity in Frequentist Statistics
The subjectivity in frequentist approaches is rampant in the application of inference. When you test a hypothesis you set a confidence level, say 95% or 99%. Where does this come from? It doesn't come from anywhere but your own preferences or a prevailing practice in your field. Bayesian priors matter very little on large datasets, because as you update them with the data, the posterior distribution floats away from your prior as more and more data is processed. Having said that, Bayesians start from a subjective definition of probabilities, beliefs, etc. This makes them different from frequentists, who think in terms of objective probabilities. In small datasets this makes a difference. UPDATE: I hope you hate philosophy as much as I do, but they have some interesting thoughts from time to time; consider subjectivism. How do I know that I'm really on SE? What if it's my dream? etc. :)
19,945
GLM with continuous data piled up at zero
Clumping at 0 is called "zero inflation". By far the most common cases are count models, leading to zero-inflated Poisson and zero-inflated negative binomial regression. However, there are also ways to model zero inflation with real positive values (e.g. a zero-inflated gamma model). See Min and Agresti (2002), Modeling nonnegative data with clumping at zeros: a survey, for a review of these methods.
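A quick numpy simulation makes the "clump at zero" concrete (a sketch; the mixing weight `pi` and rate `lam` are arbitrary choices of mine). A plain Poisson(3) puts only exp(-3), about 5%, of its mass at zero, while the zero-inflated version piles up far more:

```python
import numpy as np

rng = np.random.default_rng(1)
n, lam, pi = 10_000, 3.0, 0.4   # pi = extra ("structural") mass at zero

# zero-inflated Poisson draw: with probability pi emit a structural zero,
# otherwise draw from Poisson(lam)
structural = rng.random(n) < pi
counts = np.where(structural, 0, rng.poisson(lam, size=n))

frac_zero = (counts == 0).mean()   # about pi + (1 - pi) * exp(-lam) ~ 0.43
poisson_zero = np.exp(-lam)        # what a plain Poisson predicts, ~ 0.05
```

It is exactly this excess of zeros over what the base distribution can produce that the two-component zero-inflated models are built to absorb.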
19,946
GLM with continuous data piled up at zero
As discussed elsewhere on the site, ordinal regression (e.g., proportional odds, proportional hazards, probit) is a flexible and robust approach. Discontinuities are allowed in the distribution of $Y$, including extreme clumping. Nothing is assumed about the distribution of $Y$ for a single $X$. Zero inflated models make far more assumptions than semi-parametric models. For a full case study see my course handouts Chapter 15 at http://hbiostat.org/rms . One great advantage of ordinal models for continuous $Y$ is that you don't need to know how to transform $Y$ before the analysis.
19,947
GLM with continuous data piled up at zero
The suggestion of using a zero-inflated Poisson model is an interesting start. It has the benefit of jointly modeling the probability of having any illness-related costs as well as the process generating those costs should you have any illness. It has the limitation that it imposes some strict structure on the shape of the outcome, conditional upon having accrued any costs (e.g. a specific mean-variance relationship and a positive integer outcome... the latter of which can be relaxed for some modeling purposes). If you are okay with treating the illness-related admission process and the costs-conditional-upon-admission process independently, you can extend this by first modeling the binary process: yes/no, did you accrue any costs related to illness? This is a simple logistic regression model and allows you to evaluate risk factors and prevalence. Given that, you can restrict the analysis to the subset of individuals having accrued any costs and model the actual cost process using a host of modeling techniques. Poisson is good; quasi-Poisson would be better (accounting for small unmeasured sources of covariation in the data and departures from model assumptions). But the sky's the limit when modeling the continuous cost process. If you absolutely need to model the correlation of parameters across the two processes, you can use bootstrap SE estimates. I see no reason why this would be invalid, but would be curious to hear others' input if this might be wrong. In general, I think those are two separate questions and should be treated as such to have valid inference.
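A sketch of that two-part ("hurdle") approach with simulated cost data — all variable names and the log-normal cost model below are illustrative assumptions of mine, not from the original answer:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression, LinearRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 80, size=n)
X = age.reshape(-1, 1)

# simulate: probability of accruing any cost rises with age;
# cost given any admission is log-normal, also rising with age
p_any = 1.0 / (1.0 + np.exp(-(-4.0 + 0.06 * age)))
any_cost = rng.random(n) < p_any
cost = np.where(any_cost,
                np.exp(5.0 + 0.02 * age + rng.normal(0.0, 0.5, size=n)),
                0.0)

# part 1: logistic regression on yes/no any cost (risk factors, prevalence)
part1 = LogisticRegression().fit(X, any_cost)

# part 2: model costs only among those with cost > 0 (here: log-linear)
pos = cost > 0
part2 = LinearRegression().fit(X[pos], np.log(cost[pos]))
```

Each part can be interpreted on its own, which is exactly the "two separate questions" framing the answer advocates; a joint zero-inflated model would instead tie the two together in one likelihood.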
19,948
How to derive errors in neural network with the backpropagation algorithm?
I'm going to answer your question about the $\delta_i^{(l)}$, but remember that your question is a sub-question of a larger one, namely why: $$\nabla_{ij}^{(l)} = \sum_k \theta_{ki}^{(l+1)}\delta_k^{(l+1)}*(a_i^{(l)}(1-a_i^{(l)})) * a_j^{(l-1)}$$ Reminder about the steps in neural networks:
Step 1: forward propagation (calculation of the $a_{i}^{(l)}$).
Step 2a: backward propagation: calculation of the errors $\delta_{i}^{(l)}$.
Step 2b: backward propagation: calculation of the gradient $\nabla_{ij}^{(l)}$ of $J(\Theta)$ using the errors $\delta_{i}^{(l+1)}$ and the $a_{i}^{(l)}$.
Step 3: gradient descent: calculate the new $\theta_{ij}^{(l)}$ using the gradients $\nabla_{ij}^{(l)}$.
First, to understand what the $\delta_i^{(l)}$ are, what they represent and why Andrew Ng is talking about them, you need to understand what Andrew is actually doing at that point and why we do all these calculations: he's calculating the gradient $\nabla_{ij}^{(l)}$ of $\theta_{ij}^{(l)}$ to be used in the gradient descent algorithm. The gradient is defined as: $$\nabla_{ij}^{(l)} = \dfrac {\partial C} {\partial \theta_{ij}^{(l)}}$$ As we can't really evaluate this formula directly, we are going to modify it using TWO MAGIC TRICKS to arrive at a formula we can actually calculate.
This final usable formula is: $$\nabla_{ij}^{(l)} = \theta^{(l+1)^T}\delta^{(l+1)}.*(a_i^{(l)}(1-a_i^{(l)})) * a_j^{(l-1)}$$ Note: here the mapping from the 1st layer to the 2nd layer is denoted theta2, and so on, instead of theta1 as in Andrew Ng's Coursera course. To arrive at this result, the FIRST MAGIC TRICK is that we can write the gradient $\nabla_{ij}^{(l)}$ of $\theta_{ij}^{(l)}$ using $\delta_i^{(l)}$: $$\nabla_{ij}^{(l)} = \delta_i^{(l)} * a_j^{(l-1)}$$ With $\delta_i^{(L)}$ defined (for the last layer $L$ only) as: $$ \delta_i^{(L)} = \dfrac {\partial C} {\partial z_i^{(L)}}$$ And then the SECOND MAGIC TRICK, using the relation between $\delta_i^{(l)}$ and $\delta_i^{(l+1)}$ to define the other indexes: $$\delta_i^{(l)} = \theta^{(l+1)^T}\delta^{(l+1)}.*(a_i^{(l)}(1-a_i^{(l)})) $$ And as I said, we can finally write a formula for which we know all the terms: $$\nabla_{ij}^{(l)} = \theta^{(l+1)^T}\delta^{(l+1)}.*(a_i^{(l)}(1-a_i^{(l)})) * a_j^{(l-1)}$$ DEMONSTRATION of the FIRST MAGIC TRICK: $\nabla_{ij}^{(l)} = \delta_i^{(l)} * a_j^{(l-1)}$. We defined: $$\nabla_{ij}^{(l)} = \dfrac {\partial C} {\partial \theta_{ij}^{(l)}}$$ Here, the generalized chain rule provides a way to write that equality as: $$\nabla_{ij}^{(l)} = \sum_k \dfrac {\partial C} {\partial z_k^{(l)}} * \dfrac {\partial z_k^{(l)}} {\partial \theta_{ij}^{(l)}}$$ Note that this change is not obvious, but it becomes clearer with the following intuition: the effect of the weight $\theta_{ij}^{(l)}$ on the cost function $C$ passes through every weighted input $z_k^{(l)}$ that $\theta_{ij}^{(l)}$ influences.
However, as:
$$ z_k^{(l)} = \sum_m \theta_{km}^{(l)} * a_m^{(l-1)} $$
(here $m$ and $j$ index units in layer $l-1$, while $k$ and $i$ index units in layer $l$), we can then write:
$$\dfrac {\partial z_k^{(l)}} {\partial \theta_{ij}^{(l)}} = \dfrac {\partial}{\partial \theta_{ij}^{(l)}} \sum_m \theta_{km}^{(l)} * a_m^{(l-1)}$$
Because of the linearity of differentiation [$(u + v)' = u' + v'$], we can write:
$$\dfrac {\partial z_k^{(l)}} {\partial \theta_{ij}^{(l)}} = \sum_m\dfrac {\partial\theta_{km}^{(l)}} {\partial \theta_{ij}^{(l)}}* a_m^{(l-1)} $$
with:
$$\text{if } (k,m) \neq (i,j), \ \ \dfrac {\partial\theta_{km}^{(l)}} {\partial \theta_{ij}^{(l)}}* a_m^{(l-1)} = 0 $$
$$\text{if } (k,m) = (i,j), \ \ \dfrac {\partial\theta_{km}^{(l)}} {\partial \theta_{ij}^{(l)}}* a_m^{(l-1)} = \dfrac {\partial\theta_{ij}^{(l)}} {\partial \theta_{ij}^{(l)}}* a_j^{(l-1)} = a_j^{(l-1)} $$
Then for $k = i$ (otherwise the derivative is clearly zero):
$$\dfrac {\partial z_i^{(l)}} {\partial \theta_{ij}^{(l)}} = \dfrac {\partial\theta_{ij}^{(l)}} {\partial \theta_{ij}^{(l)}}* a_j^{(l-1)} + \sum_{m \neq j}\dfrac {\partial\theta_{im}^{(l)}} {\partial \theta_{ij}^{(l)}}* a_m^{(l-1)} = a_j^{(l-1)} + 0 $$
Finally, for $k = i$:
$$\dfrac {\partial z_i^{(l)}} {\partial \theta_{ij}^{(l)}} = a_j^{(l-1)}$$
As a result, we can write our first expression of the gradient $\nabla_{ij}^{(l)}$:
$$\nabla_{ij}^{(l)} = \dfrac {\partial C} {\partial z_i^{(l)}} * \dfrac {\partial z_i^{(l)}} {\partial \theta_{ij}^{(l)}}$$
Which is equivalent to:
$$\nabla_{ij}^{(l)} = \dfrac {\partial C} {\partial z_i^{(l)}} * a_j^{(l-1)}$$
Or:
$$\nabla_{ij}^{(l)} = \delta_i^{(l)} * a_j^{(l-1)}$$
DEMONSTRATION OF THE SECOND MAGIC TRICK: $\delta_i^{(l)} = \theta^{(l+1)^T}\delta^{(l+1)}.*(a_i^{(l)}(1-a_i^{(l)})) $, or:
$$\delta^{(l)} = \theta^{(l+1)^T}\delta^{(l+1)}.*(a^{(l)}(1-a^{(l)})) $$
Remember that we posed:
$$ \delta^{(l)} = \dfrac {\partial C} {\partial z^{(l)}} \ \ \ \text{and} \ \ \ \delta_i^{(l)} = \dfrac {\partial C} {\partial z_i^{(l)}}$$
Again, the chain rule for higher dimensions enables us to write:
$$ \delta_i^{(l)} = \sum_k \dfrac {\partial C} {\partial z_k^{(l+1)}} \dfrac {\partial z_k^{(l+1)}} {\partial z_i^{(l)}}$$
Replacing $\dfrac {\partial C} {\partial z_k^{(l+1)}}$ by $\delta_k^{(l+1)}$, we have:
$$ \delta_i^{(l)} = \sum_k \delta_k^{(l+1)} \dfrac {\partial z_k^{(l+1)}} {\partial z_i^{(l)}}$$
Now, let's focus on $\dfrac {\partial z_k^{(l+1)}} {\partial z_i^{(l)}}$. We have:
$$ z_k^{(l+1)} = \sum_j \theta_{kj}^{(l+1)} * a_j^{(l)} = \sum_j \theta_{kj}^{(l+1)} * g(z_j^{(l)}) $$
We then differentiate this expression with respect to $z_i^{(l)}$:
$$ \dfrac {\partial z_k^{(l+1)}} {\partial z_i^{(l)}} = \dfrac {\partial \sum_j \theta_{kj}^{(l+1)} * g(z_j^{(l)}) }{\partial z_i^{(l)}} $$
Because of the linearity of differentiation, we can write:
$$ \dfrac {\partial z_k^{(l+1)}} {\partial z_i^{(l)}} = \sum_j \theta_{kj}^{(l+1)} * \dfrac {\partial g(z_j^{(l)}) }{\partial z_i^{(l)}} $$
If $j \neq i$, then $\dfrac {\partial \theta_{kj}^{(l+1)} * g(z_j^{(l)})} {\partial z_i^{(l)}} = 0 $. As a consequence:
$$ \dfrac {\partial z_k^{(l+1)}} {\partial z_i^{(l)}} = \theta_{ki}^{(l+1)} * \dfrac {\partial g(z_i^{(l)}) }{\partial z_i^{(l)}} $$
And then:
$$ \delta_i^{(l)} = \sum_k \delta_k^{(l+1)} \theta_{ki}^{(l+1)} * \dfrac { \partial g(z_i^{(l)}) }{\partial z_i^{(l)}}$$
As $g'(z) = g(z)(1-g(z))$, we have:
$$ \delta_i^{(l)} = \sum_k \delta_k^{(l+1)} \theta_{ki}^{(l+1)} * g(z_i^{(l)})(1-g(z_i^{(l)})) $$
And as $g(z_i^{(l)}) = a_i^{(l)}$, we have:
$$ \delta_i^{(l)} = \sum_k \delta_k^{(l+1)} \theta_{ki}^{(l+1)} * a_i^{(l)}(1-a_i^{(l)}) $$
And finally, using the vectorized notation:
$$\nabla_{ij}^{(l)} = [\theta^{(l+1)^T}\delta^{(l+1)}.*(a_i^{(l)}(1-a_i^{(l)}))] * [a_j^{(l-1)}]$$
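The two "magic trick" formulas can be checked numerically. The sketch below is my own illustration, not from the course: it uses a tiny bias-free sigmoid network with a squared-error cost (an assumption; the course uses cross-entropy, but the delta recursion is identical), computes the gradients with the derived formulas, and compares them against central finite differences.

```python
import numpy as np

rng = np.random.default_rng(1)
g = lambda z: 1 / (1 + np.exp(-z))   # sigmoid activation

# Tiny network: 3 inputs -> 4 hidden -> 2 outputs, no bias terms.
theta2 = rng.normal(size=(4, 3))     # maps layer 1 -> layer 2
theta3 = rng.normal(size=(2, 4))     # maps layer 2 -> layer 3
x = rng.normal(size=3)
y = np.array([1.0, 0.0])

def cost(t2, t3):
    # C = 0.5 * ||a3 - y||^2 for given weights.
    return 0.5 * np.sum((g(t3 @ g(t2 @ x)) - y) ** 2)

# Forward pass.
a2 = g(theta2 @ x)
a3 = g(theta3 @ a2)

# Backward pass: delta^L = dC/dz^L, then the "second magic trick".
delta3 = (a3 - y) * a3 * (1 - a3)
delta2 = (theta3.T @ delta3) * a2 * (1 - a2)

# "First magic trick": grad_ij^(l) = delta_i^(l) * a_j^(l-1).
grad3 = np.outer(delta3, a2)
grad2 = np.outer(delta2, x)

# Finite-difference gradients for every weight.
eps = 1e-6
num2 = np.zeros_like(theta2)
num3 = np.zeros_like(theta3)
for i in range(4):
    for j in range(3):
        tp = theta2.copy(); tp[i, j] += eps
        tm = theta2.copy(); tm[i, j] -= eps
        num2[i, j] = (cost(tp, theta3) - cost(tm, theta3)) / (2 * eps)
for i in range(2):
    for j in range(4):
        tp = theta3.copy(); tp[i, j] += eps
        tm = theta3.copy(); tm[i, j] -= eps
        num3[i, j] = (cost(theta2, tp) - cost(theta2, tm)) / (2 * eps)
```

`grad2`/`grad3` (backprop) and `num2`/`num3` (finite differences) agree to numerical precision, which is exactly what the two demonstrations above establish algebraically.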
19,949
ROC and multiROC analysis: how to calculate optimal cutpoint?
To elaborate on Frank Harrell's answer, what the Epi package did was to fit a logistic regression, and make a ROC curve with outcome predictions of the following form: $$ outcome = \frac {1}{1+e^{-(\beta_0 + \beta_1 s100b + \beta_2 ndka)}} $$ In your case, the fitted values are $\beta_0$ (intercept) = -2.379, $\beta_1$ (s100b) = 5.334 and $\beta_2$ (ndka) = 0.031. As you want your predicted outcome to be 0.312 (the "optimal" cutoff), you can then substitute this as (hope I didn't introduce errors here): $$ 0.312 = \frac {1}{1+e^{-(-2.379 + 5.334 s100b + 0.031 ndka)}} $$ Taking the logit of both sides, $\log(0.312/0.688) \approx -0.791$, and moving the intercept over gives: $$ 1.588214 = 5.334 s100b + 0.031 ndka $$ or: $$ s100b = \frac{1.588214 - 0.031 ndka}{5.334} $$ Any pair of (s100b, ndka) values that satisfies this equality is "optimal". Bad luck for you: there are infinitely many such pairs. For instance, (0.29, 1), (0, 51.2), etc. Even worse, most of them don't make any sense. What does the pair (-580, 10000) mean? Nothing! In other words, you can't establish cut-offs on the inputs - you have to do it on the outputs, and that's the whole point of the model.
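The algebra above can be verified in a few lines (coefficients and the 0.312 cutoff are taken from the answer; the helper `s100b_for` is just for illustration):

```python
import math

b0, b1, b2 = -2.379, 5.334, 0.031   # intercept, s100b, ndka coefficients
p_cut = 0.312                       # the "optimal" predicted risk

# Any (s100b, ndka) pair on this line yields exactly p_cut:
# logit(p_cut) = b0 + b1*s100b + b2*ndka
rhs = math.log(p_cut / (1 - p_cut)) - b0   # the 1.588214 constant

def s100b_for(ndka):
    # One of the infinitely many "optimal" pairs, for a given ndka.
    return (rhs - b2 * ndka) / b1

# Check: plugging a pair back into the logistic model recovers p_cut.
ndka = 10.0
s100b = s100b_for(ndka)
p = 1 / (1 + math.exp(-(b0 + b1 * s100b + b2 * ndka)))
```

Every choice of `ndka` gives a different "optimal" `s100b`, which is exactly why no single input cutoff exists.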
19,950
ROC and multiROC analysis: how to calculate optimal cutpoint?
It is not appropriate to seek cutoffs on input variables, but instead only on the output (e.g., predicted risk from a multivariable model). That is because the cutoff for x1 would depend on the continuous value of x2. And seeking a cutpoint on $\hat{Y}$, to obtain optimum decisions, requires a utility/loss/cost function and this has nothing to do with ROC curves.
19,951
ROC and multiROC analysis: how to calculate optimal cutpoint?
I would guess lr.eta is the linear predictor—the logit—from the fitted model, as $\eta$ is a commonly used symbol for it; or, if not, the probability from the fitted model. (Turns out it's the latter: see https://stackoverflow.com/a/38532555/1864816.) You can check the code in ROC. In any case you'll be able to calculate it from the model coefficients for any number of predictors. (Note that it won't be a cut-off for each predictor separately, but a function of all predictors.) Your first sentence should say (as evidenced by the graphs) that you're looking for where the sum of sensitivity & specificity is maximized. But why is this "optimal"? Does a false positive result have the same import as a false negative result? See here.
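For a single predicted score, the cut-off maximizing the sum of sensitivity and specificity can be found by brute force over candidate thresholds; the simulated score distributions below are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical scores: model outputs for diseased vs. healthy subjects.
pos = rng.beta(4, 2, 300)    # scores of true positives
neg = rng.beta(2, 4, 300)    # scores of true negatives

# For each candidate cut-off, sensitivity + specificity (Youden's J + 1).
cuts = np.unique(np.concatenate([pos, neg]))
sens = np.array([(pos >= c).mean() for c in cuts])
spec = np.array([(neg < c).mean() for c in cuts])
best = cuts[np.argmax(sens + spec)]
```

Note this maximizes sens + spec on the model's *output*, and only counts as "optimal" under the implicit assumption that false positives and false negatives cost the same.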
19,952
ROC and multiROC analysis: how to calculate optimal cutpoint?
You can find the threshold at which the true positive rate (tpr) intersects the true negative rate (tnr); this will be the point at which the sum of the false positives and false negatives is at a minimum.
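A quick illustration with simulated scores (the normal score distributions are an assumption): scan cut-offs for the point where tpr and tnr cross, i.e. the equal-error-rate threshold.

```python
import numpy as np

rng = np.random.default_rng(3)
pos = rng.normal(1.0, 1.0, 500)    # scores of true positives
neg = rng.normal(-1.0, 1.0, 500)   # scores of true negatives

cuts = np.linspace(-3, 3, 601)
tpr = np.array([(pos >= c).mean() for c in cuts])
tnr = np.array([(neg < c).mean() for c in cuts])

# Cut-off where the two curves cross, and how close they get there.
cross = cuts[np.argmin(np.abs(tpr - tnr))]
gap = np.min(np.abs(tpr - tnr))
```

With these symmetric distributions the crossing point lands near 0, midway between the two group means.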
19,953
Linear discriminant analysis and Bayes rule: classification
Classification in LDA goes as follows (Bayes' rule approach). [About extraction of discriminants one might look here.] According to Bayes' theorem, the sought-for probability that we're dealing with class $k$ while currently observing point $x$ is $P(k|x) = P(k)*P(x|k) / P(x)$, where $P(k)$ – unconditional (background) probability of class $k$; $P(x)$ – unconditional (background) probability of point $x$; $P(x|k)$ – probability of presence of point $x$ in class $k$, given that the class being dealt with is $k$. Since $P(x)$ is the same for every class, the denominator can be omitted. Thus, $P(k|x) \propto P(k)*P(x|k)$. $P(k)$ is the prior (pre-analytical) probability that the native class for $x$ is $k$; $P(k)$ is specified by the user. Usually by default all classes receive equal $P(k)$ = 1/number_of_classes. In order to compute $P(k|x)$, i.e. the posterior (post-analytical) probability that the native class for $x$ is $k$, one should know $P(x|k)$. $P(x|k)$ - a probability per se - can't be found, for the discriminants, the main product of LDA, are continuous, not discrete, variables. The quantity expressing $P(x|k)$ in this case and proportional to it is the probability density (the PDF). Hereby we need to compute the PDF for point $x$ in class $k$, $PDF(x|k)$, in the $p$-dimensional normal distribution formed by the values of the $p$ discriminants. [See Wikipedia Multivariate normal distribution] $$PDF(x|k) = \frac {e^{-d/2}} {(2\pi)^{p/2}\sqrt{\bf |S|}}$$ where $d$ – squared Mahalanobis distance [See Wikipedia Mahalanobis distance] in the discriminants' space from point $x$ to the class centroid; $\bf S$ – covariance matrix between the discriminants, observed within that class. Compute $PDF(x|k)$ this way for each of the classes. $P(k)*PDF(x|k)$ for point $x$ and class $k$ expresses the sought-for $P(k)*P(x|k)$ for us.
But with the above reserve that the PDF isn't a probability per se, only proportional to it, we should normalize $P(k)*PDF(x|k)$, dividing by the sum of $P(k)*PDF(x|k)$ over all classes. For example, if there are 3 classes in all, $k$, $l$, $m$, then $P(k|x) = P(k)*PDF(x|k) / [P(k)*PDF(x|k)+P(l)*PDF(x|l)+P(m)*PDF(x|m)]$ Point $x$ is assigned by LDA to the class for which $P(k|x)$ is the highest. Note. This was the general approach. Many LDA programs by default use the pooled within-class matrix $\bf S$ for all classes in the formula for the PDF above. If so, the formula simplifies greatly because such $\bf S$ in LDA is the identity matrix (see the bottom footnote here), and hence $\bf |S|=1$ and $d$ turns into squared euclidean distance (reminder: the pooled within-class $\bf S$ we are talking about is covariances between the discriminants, - not between the input variables, which matrix is usually designated as $\bf S_w$). Addition. Before the above Bayes rule approach to classification was introduced to LDA, Fisher, the LDA pioneer, proposed computing the now so-called Fisher's linear classification functions to classify points in LDA. For point $x$ the function score of belonging to class $k$ is the linear combination $b_{kv1}V1_x+b_{kv2}V2_x+...+Const_k$, where $V1, V2,...V_p$ are the predictor variables in the analysis. Coefficient $b_{kv}=(n-g)\sum_w^p{s_{vw}\bar{V}_{kw}}$, $g$ being the number of classes and $s_{vw}$ being the element of the inverse of the pooled within-class scatter matrix of the $p$ $V$-variables. $Const_k=\log(P(k))-(\sum_v^p{b_{kv}\bar{V}_{kv}})/2$. Point $x$ gets assigned to the class for which its score is the highest.
Classification results obtained by this Fisher's method (which bypasses the eigendecomposition used to extract the discriminants) are identical to those obtained by the Bayes method only if the pooled within-class covariance matrix is used with the Bayes method based on discriminants (see "Note" above) and all the discriminants are used in the classification. The Bayes method is more general because it also allows using separate within-class matrices.
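The equivalence described in the last paragraph can be checked numerically. The sketch below (simulated 2-D data, my own illustration) works directly in the space of the input variables with the pooled within-class covariance, computes the Bayes posteriors $P(k|x)$ and the Fisher linear classification scores, and confirms that they assign every test point to the same class:

```python
import numpy as np

rng = np.random.default_rng(4)
n_per = 100
means = np.array([[0.0, 0.0], [3.0, 1.0], [1.0, 4.0]])   # 3 class centroids
cov = np.array([[1.0, 0.3], [0.3, 1.0]])                  # shared covariance

L = np.linalg.cholesky(cov)
Xs = [m + rng.normal(size=(n_per, 2)) @ L.T for m in means]

# Sample class means and pooled within-class covariance.
mhat = np.array([X.mean(axis=0) for X in Xs])
Sw = sum((X - X.mean(0)).T @ (X - X.mean(0)) for X in Xs) / (3 * n_per - 3)
Swi = np.linalg.inv(Sw)
prior = np.full(3, 1 / 3)   # equal priors P(k)

def bayes_posterior(x):
    # P(k)*PDF(x|k) with pooled S; the shared (2pi)^{p/2} sqrt|S| cancels
    # in the normalization, so only the Mahalanobis distances matter.
    d = np.array([(x - m) @ Swi @ (x - m) for m in mhat])
    num = prior * np.exp(-d / 2)
    return num / num.sum()

def fisher_scores(x):
    # Fisher linear classification functions: b_k = S^{-1} mu_k,
    # Const_k = log P(k) - mu_k' S^{-1} mu_k / 2.
    b = mhat @ Swi
    const = np.log(prior) - 0.5 * np.sum(b * mhat, axis=1)
    return b @ x + const

test_points = rng.normal(size=(50, 2)) * 2 + 1
agree = all(int(np.argmax(bayes_posterior(x))) == int(np.argmax(fisher_scores(x)))
            for x in test_points)
```

The agreement is exact: dropping the class-independent quadratic term from the log posterior leaves precisely the Fisher score, so both rules induce the same argmax.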
19,954
Linear discriminant analysis and Bayes rule: classification
Assume equal weights for the two error types in a two-class problem. Suppose the two classes have multivariate class-conditional densities of the classification variables. Then for any observed vector $x$ and class-conditional densities $f_1(x)$ and $f_2(x)$ the Bayes rule will classify $x$ as belonging to group 1 if $f_1(x) \geq f_2(x)$ and as class 2 otherwise. The Bayes rule turns out to be a linear discriminant classifier if $f_1$ and $f_2$ are both multivariate normal densities with the same covariance matrix. Of course, in order to discriminate usefully, the mean vectors must be different. A nice presentation of this can be found in Duda and Hart, Pattern Classification and Scene Analysis (1973; the book has recently been revised, but I particularly like the presentation in the original edition).
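A minimal sketch of this Bayes rule (the identity covariance, the two mean vectors, and the equal priors are all assumptions chosen for illustration):

```python
import math

def density(x, mean):
    """Bivariate normal density with identity covariance (shared by both classes)."""
    d2 = sum((xi - mi) ** 2 for xi, mi in zip(x, mean))
    return math.exp(-d2 / 2) / (2 * math.pi)  # 2-dimensional case

def bayes_classify(x, mean1, mean2):
    """Assign x to group 1 iff f1(x) >= f2(x) (equal priors, equal error costs)."""
    return 1 if density(x, mean1) >= density(x, mean2) else 2

mean1, mean2 = (0.0, 0.0), (2.0, 2.0)
near_origin = bayes_classify((0.5, 0.5), mean1, mean2)  # closer to mean1
near_other  = bayes_classify((2.0, 1.9), mean1, mean2)  # closer to mean2
```

Because the covariance matrices are equal, the set where $f_1(x) = f_2(x)$ is a hyperplane (here the line $x + y = 2$), which is why the resulting classifier is linear.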
Bag of words vs vector space model?
Bag-of-words and vector space model refer to different aspects of characterizing a body of text such as a document. They are described well in the textbook "Speech and Language Processing" by Jurafsky and Martin, 2009, in section 23.1 on information retrieval. A more terse reference is "Introduction to Information Retrieval" by Manning, Raghavan, and Schütze, 2008, in the section on "The vector space model for scoring". Bag-of-words refers to what kind of information you can extract from a document (namely, unigram words). Vector space model refers to the data structure for each document (namely, a feature vector of term & term weight pairs). Both aspects complement each other. More specifically: Bag-of-words: For a given document, you extract only the unigram words (aka terms) to create an unordered list of words. No POS tag, no syntax, no semantics, no position, no bigrams, no trigrams. Only the unigram words themselves, making for a bunch of words to represent the document. Thus: Bag-of-words. Vector space model: Given the bag of words that you extracted from the document, you create a feature vector for the document, where each feature is a word (term) and the feature's value is a term weight. The term weight might be: a binary value (with 1 indicating that the term occurred in the document, and 0 indicating that it did not); a term frequency value (indicating how many times the term occurred in the document); or a TF-IDF value (e.g. a small floating-point number like 1.23). The entire document is thus a feature vector, and each feature vector corresponds to a point in a vector space. The model for this vector space is such that there is an axis for every term in the vocabulary, and so the vector space is V-dimensional, where V is the size of the vocabulary. The vector should then conceptually also be V-dimensional with a feature for every vocabulary term. 
However, because the vocabulary can be large (on the order of V=100,000s of terms), a document's feature vector typically will contain only the terms that occurred in that document and omit the terms that did not. Such a feature vector is considered sparse. An example vector representation of a document thus might look like this: DOCUMENT_ID_42 LABEL_POLITICS a 55 ability 1 about 5 absent 2 abuse 1 access 1 accompanied 1 accompanying 2 according 2 account 1 accounted 1 accurate 1 acknowledge 4 activities 1 actual 1 actually 2 administering 1 ... where this example vector has a document id (e.g. 42), a ground-truth label (e.g. politics) and a list of features and feature values comprising term & term frequency pairs. Here, it can be seen that the word "absent" occurred 2 times in this document.
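A tiny sketch of both ideas together -- a bag-of-words extractor and a sparse term-frequency vector (the toy tokenizer and the example sentence are made up):

```python
from collections import Counter

def bag_of_words(text):
    """Lowercased unigram terms only -- no order, no n-grams, no POS tags."""
    return [tok.strip(".,!?").lower() for tok in text.split()]

def sparse_tf_vector(text):
    """Sparse feature vector: only terms that occur, with term-frequency weights."""
    return dict(Counter(bag_of_words(text)))

doc = "The senator was absent. The vote was absent too."
vec = sparse_tf_vector(doc)
# Terms that never occur in the document are simply omitted from the vector.
```

Here the feature for "absent" carries the weight 2, just like the `absent 2` entry in the example vector above; swapping the term-frequency weight for a binary or TF-IDF weight only changes the values, not the structure.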
Bag of words vs vector space model?
Is it that with Bag of Words you assign word frequencies to the elements of the document-term matrix, while in the Vector Space Model the document-term matrix elements are quite general, as long as operations (such as the dot product) in the vector space make sense (tf-idf weights, for example)?
Is a predictor with greater variance "better"?
A few quick points:

- Variance can be arbitrarily increased or decreased by adopting a different scale for your variable. Multiplying a scale by a constant greater than one would increase the variance, but not change the predictive power of the variable.
- You may be confusing variance with reliability. All else being equal (and assuming that there is at least some true score prediction), increasing the reliability with which you measure a construct should increase its predictive power. Check out this discussion of correction for attenuation.
- Assuming that both scales were made up of twenty 5-point items, and thus had total scores that ranged from 20 to 100, then the version with the greater variance would also be more reliable (at least in terms of internal consistency).
- Internal consistency reliability is not the only standard by which to judge a psychological test, and it is not the only factor that distinguishes the predictive power of one scale versus another for a given construct.
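The correction-for-attenuation idea in the second point can be sketched as follows (the observed correlation and the reliability values are made-up numbers):

```python
import math

def disattenuated_r(r_xy, rel_x, rel_y):
    """Estimated true-score correlation from an observed correlation,
    given the reliabilities of the two measures:
    r_true = r_xy / sqrt(rel_x * rel_y)."""
    return r_xy / math.sqrt(rel_x * rel_y)

# Same observed r = 0.30; the less reliable predictor attenuates it more,
# so its implied true-score correlation is larger.
r_low_rel  = disattenuated_r(0.30, rel_x=0.60, rel_y=0.90)
r_high_rel = disattenuated_r(0.30, rel_x=0.85, rel_y=0.90)
```

Read the other way around: for a fixed true-score correlation, the more reliable instrument yields the larger *observed* correlation, which is the sense in which reliability (not raw variance) drives predictive power.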
Is a predictor with greater variance "better"?
A simple example helps us identify what is essential. Let $$Y = C + \gamma X_1 + \varepsilon$$ where $C$ and $\gamma$ are parameters, $X_1$ is the score on the first instrument (or independent variable), and $\varepsilon$ represents unbiased iid error. Let the score on the second instrument be related to the first one via $$X_1 = \alpha X_2 + \beta.$$ For example, scores on the second instrument might range from 25 to 75 and scores on the first from 0 to 100, with $X_1 = 2 X_2 - 50$. The variance of $X_1$ is $\alpha^2$ times the variance of $X_2$. Nevertheless, we can rewrite $$Y = C + \gamma(\alpha X_2 + \beta) + \varepsilon = (C + \beta \gamma) + (\gamma \alpha) X_2 + \varepsilon = C' + \gamma' X_2 + \varepsilon.$$ The parameters change, and the variance of the independent variable changes, yet the predictive capability of the model remains unchanged. In general the relationship between $X_1$ and $X_2$ may be nonlinear. Which is a better predictor of $Y$ will depend on which has a closer linear relationship to $Y$. Thus the issue is not one of scale (as reflected by the variance of the $X_i$) but has to be decided by the relationships between the instruments and what they are being used to predict. This idea is closely related to one explored in a recent question about selecting independent variables in regression. There can be mitigating factors. For instance, if $X_1$ and $X_2$ are discrete variables and both are equally well related to $Y$, then the one with larger variance might (if it is sufficiently uniformly spread out) allow for finer distinctions among its values and thereby afford more precision. E.g., if both instruments are questionnaires on a 1-5 Likert scale, both are equally well correlated with $Y$, and the answers to $X_1$ are all 2 and 3 and the answers to $X_2$ are spread among 1 through 5, $X_2$ might be favored on this basis.
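The invariance in this derivation is easy to check numerically; here is a sketch using the closed-form simple-regression fit, with $X_1 = 2X_2 - 50$ as in the example and made-up response values:

```python
def ols_fit(x, y):
    """Closed-form simple least squares: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx, slope

x2 = [25, 35, 45, 55, 65, 75]
x1 = [2 * v - 50 for v in x2]          # X1 = 2*X2 - 50, as in the example
y  = [3.0, 4.1, 4.9, 6.2, 7.0, 8.1]   # made-up responses

c1, g1 = ols_fit(x1, y)
c2, g2 = ols_fit(x2, y)
pred1 = [c1 + g1 * v for v in x1]
pred2 = [c2 + g2 * v for v in x2]
# The coefficients differ (g2 == 2*g1), but the fitted values are identical.
```

The slope rescales by exactly $\alpha$ and the intercept absorbs $\beta\gamma$, so the predictions (and hence the residuals and any fit statistic) are unchanged.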
Is a predictor with greater variance "better"?
Always check the assumptions for the statistical test you're using! One of the assumptions of logistic regression is independence of errors, which means that cases of data should not be related. E.g., you can't measure the same people at different points in time, which I fear you may have done with your anger management surveys. I would also be worried that with 2 anger management surveys you're basically measuring the same thing, and your analysis could suffer from multicollinearity.
Coordinate descent for the lasso or elastic net
I earlier suggested the recent paper by Friedman and coll., Regularization Paths for Generalized Linear Models via Coordinate Descent, published in the Journal of Statistical Software (2010). Here are some other references that might be useful:

- Pathwise coordinate optimization, by Friedman and coll.
- Fast Regularization Paths via Coordinate Descent, by Hastie (UseR! 2009)
- Coordinate descent algorithms for lasso penalized regression, by Wu and Lange (Ann. Appl. Stat. 2(1): 224-244, 2008; also available on arXiv.org)
- Coordinate Descent for Sparse Solutions of Underdetermined Linear Systems of Equations, by Yagle (a bit too complex for me)
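As a sketch of the core update those references describe -- cyclic coordinate descent with the soft-thresholding operator -- here is a bare-bones version (the standardization assumption on the columns of X is mine, for simplicity; real implementations cache residuals rather than recomputing them):

```python
def soft_threshold(z, gamma):
    """S(z, gamma) = sign(z) * max(|z| - gamma, 0): the univariate lasso solution."""
    if z > gamma:
        return z - gamma
    if z < -gamma:
        return z + gamma
    return 0.0

def lasso_cd(X, y, lam, n_iter=100):
    """Cyclic coordinate descent for (1/2n)||y - Xb||^2 + lam*||b||_1,
    assuming each column of X is standardized so that (1/n) * sum_i x_ij^2 = 1."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j, then a univariate lasso step
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            zj = sum(X[i][j] * r[i] for i in range(n)) / n
            b[j] = soft_threshold(zj, lam)
    return b

# Tiny example: two orthogonal standardized columns; y ~ 2*x1 + 0.1*x2
X = [[1, 1], [1, -1], [-1, 1], [-1, -1]]
y = [2.1, 1.9, -1.9, -2.1]
b = lasso_cd(X, y, lam=0.5)
# The small coefficient (0.1 < lam) is thresholded exactly to zero.
```

With orthogonal columns each coordinate update is exact, so the sparsity of the lasso solution appears after a single sweep.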
Coordinate descent for the lasso or elastic net
I've just come across this lecture by Hastie and thought that others might find it interesting.
How to make age pyramid like plot in R?
You can do this with the pyramid.plot() function from the plotrix package. Here's an example:

library(plotrix)
xy.pop <- c(3.2,3.5,3.6,3.6,3.5,3.5,3.9,3.7,3.9,3.5,3.2,2.8,2.2,1.8,
            1.5,1.3,0.7,0.4)
xx.pop <- c(3.2,3.4,3.5,3.5,3.5,3.7,4,3.8,3.9,3.6,3.2,2.5,2,1.7,1.5,
            1.3,1,0.8)
agelabels <- c("0-4","5-9","10-14","15-19","20-24","25-29","30-34",
               "35-39","40-44","45-49","50-54","55-59","60-64","65-69","70-74",
               "75-79","80-84","85+")
mcol <- color.gradient(c(0,0,0.5,1),c(0,0,0.5,1),c(1,1,0.5,1),18)
fcol <- color.gradient(c(1,1,0.5,1),c(0.5,0.5,0.5,1),c(0.5,0.5,0.5,1),18)
par(mar=pyramid.plot(xy.pop,xx.pop,labels=agelabels,
    main="Australian population pyramid 2002",lxcol=mcol,rxcol=fcol,
    gap=0.5,show.values=TRUE))

The result is a back-to-back age pyramid with the two population distributions on opposite sides of a central axis of age-group labels.
Why is ROC insensitive to class distributions?
Since all points on an ROC curve condition on Y, the distribution of Y is necessarily irrelevant for the points. This also points out why ROC curves should not be used except in a retrospective case-control study where samples are taken from Y=0 and Y=1 observations. For prospectively observed data where we sample based on X or take completely random samples, it is not logical to use a representation that disrespects how the samples arose. See https://www.fharrell.com/post/addvalue/
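One way to see the conditioning-on-Y point concretely: the rank-statistic form of the AUC only compares scores *across* the two groups, so changing the prevalence of one class rescales the numerator and denominator together (the scores below are made up):

```python
def auc(pos_scores, neg_scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2 -- equivalent to the area under the ROC curve."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.55, 0.4]
neg = [0.7, 0.5, 0.3, 0.2, 0.1]

base = auc(pos, neg)
# Change the class distribution by replicating the positives tenfold:
inflated = auc(pos * 10, neg)
# Every pairwise comparison is counted 10 times in both numerator and
# denominator, so the AUC is exactly unchanged.
```

The same replication argument applies to each individual ROC point, since sensitivity and specificity are each computed within a single class.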
Why is ROC insensitive to class distributions?
What I am answering I felt your key statement was: I cannot reconcile these few concepts together, likely due to a gap in statistical rigour. so my answer is based around addressing the difference between the mathematical and statistical implications of class imbalance on AUCROC. Recap of AUCROC The AUCROC is calculated as the area under the receiver operating characteristic (ROC) curve. This curve plots sensitivity against 1-specificity for a series of thresholds (e.g. each realised score value in the dataset). Sensitivity/recall is the ratio of TP to all actual positives, $TP/(TP+FN)$ or $TP/Cases$. There is no consideration of the actual negatives in the calculation of sensitivity. Specificity is the ratio of true negatives to all actual negatives, $TN/(TN+FP)$ or $TN/Controls$. There is no consideration of the actual positive group in the calculation of specificity. Mathematical and Statistical Interpretation Since the AUCROC is directly calculated from these two metrics, and neither metric takes the other group into consideration, there is no mathematical link between group balance and the expected AUCROC. However, it is critical to note that 'expected' has a precise statistical meaning, in the form of the value you would expect the metric to converge to over a very, very (read: infinite) long-run experiment. The critical thing about statistics is that we not only consider the long-range expected value, but also the short-range variability/reliability/confidence of an actual result based on finite sampling. The confidence that we have in an actual realised result is proportional to $\pm \frac{\sigma}{\sqrt{n}} $ where $\sigma$ is the standard deviation of the data and $n$ is the total number of samples. If $n_1>>n_2$ then $\sqrt{n_1}>\sqrt{n_2}$.
The points in the ROC are displaced by errors in both specificity and sensitivity so the area under that curve is a composite of those errors and the combined impact on overall confidence is proportional to $$\pm\sqrt{ (\frac{\sigma_1}{\sqrt{n_1}})^2 + (\frac{\sigma_2}{\sqrt{n_2}})^2}$$. If $n_1 \sim n_2$ then group prevalence will be balanced and neither group will skew the confidence in the calculated result. If $n_1>>n_2$ then the confidence will be more limited by the low prevalence group. Summary The expected long range AUCROC value is not influenced by class prevalence, but statistical confidence is dragged down by low prevalence classes.
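The summary point -- same expected AUC, wider spread of the estimate under imbalance -- can be sketched by simulation (the Gaussian score model, the sample sizes, and the trial count are assumptions chosen for illustration):

```python
import random
import statistics

def auc(pos, neg):
    """Rank-statistic AUC: P(positive score > negative score), ties count 1/2."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def auc_spread(n_pos, n_neg, trials=300, seed=0):
    """Std of the estimated AUC when positives ~ N(1,1) and negatives ~ N(0,1)."""
    rng = random.Random(seed)
    aucs = [auc([rng.gauss(1, 1) for _ in range(n_pos)],
                [rng.gauss(0, 1) for _ in range(n_neg)])
            for _ in range(trials)]
    return statistics.pstdev(aucs)

balanced   = auc_spread(50, 50)  # n1 ~ n2
imbalanced = auc_spread(5, 95)   # rare positive class, same total n
# Same expected AUC in both designs, but the rare class inflates the spread
# of the estimate, exactly as the error-propagation formula above suggests.
```

The spread is dominated by the smaller group, mirroring the $1/\sqrt{n_2}$ term in the combined-error expression.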
Why is ROC insensitive to class distributions?
In classification problems the model output is a probability. Different problems have different threshold boundaries. For example, when deciding between a dog and a cat, 50% makes sense, but when we talk about the probability of having a heart attack all probabilities will be much, much lower. AUC solves this by checking the $FPR$ & $TPR$ for many (as many as possible) thresholds between 0 and 1. AUC only cares about the ranking of the model, i.e. whether the model ranks the ones higher than the zeros. Let's examine the components of AUC: $TPR = \frac{TP}{P} $ and $FPR = \frac{FP}{N} $. Let's take $TPR$ as an example (the argument for $FPR$ is similar). We calculate $TPR$ for every threshold and for each example. For each example, $TP$ is a function of $Y, \hat{Y}, threshold$ - this is not affected by the ratio of positive and negative. Now, the total number of $TP$ is affected by the total number of $P$, but the $TPR$ should remain the same, because if we have more $P$, we will also have more $TP$ at the same ratio for the given threshold. To conclude, changing the number of $P$ should not affect the $TPR$ for a given threshold. The same holds for $FPR$ and $N$, and thus the ratio between positive and negative should not change the ROC curve.
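A tiny numerical check of the $TPR$ argument (the scores are made up): replicating the positives changes the count of $TP$ but not the rate.

```python
def tpr(pos_scores, threshold):
    """TPR at a threshold: fraction of positives scored at or above it."""
    return sum(s >= threshold for s in pos_scores) / len(pos_scores)

pos = [0.9, 0.7, 0.6, 0.2]
t = 0.5

original = tpr(pos, t)      # 3 of 4 positives pass the threshold
tripled  = tpr(pos * 3, t)  # 9 of 12 -- three times the TP, same TPR
```

The same holds for $FPR$ when the negatives are replicated, so every point of the ROC curve stays put.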
Why is ROC insensitive to class distributions?
Compared to the others, my answer is focused on understanding how you use ROC and AUC in data-science cases. If you need the mathematical / statistical part, my answer won't help you. Basically, the ROC curve shows the false positive (FP) RATE and true positive (TP) RATE for each threshold of the model (the score you pick as the limit between classifying '1' and '0'). So at the start, if your threshold is 1 (the maximum possible score for your model), you classify everything as 0 and there are 0% FP and 0% TP. If the threshold is 0 (the minimum possible score for your model), everything is classified as 1 and so your TP and FP rates are 100%. Using a threshold strictly between 0 and 1, you'll have FP and TP rates between 0% and 100%. Since this curve represents the rates obtained at each possible threshold, if you print the ROC curve for your test set it's totally independent of the training set. It only shows how many FP and TP you have, compared to the maximum you could have in the set. Let's take an easy example: you have a test set with 100 '0' and 10 '1'. Having found 5 of the 10 '1', but misclassifying 30 '0' as '1' to achieve that, you obtain for your curve

x = FP rate = 30/100 = 0.3
y = TP rate = 5/10 = 0.5

Imagine now your dataset is balanced and you have 50 '0' and 50 '1'. If you still find half of the ones (25 '1') while misclassifying 30% of your zeros (15 '0'), you'll still find x = 0.3, y = 0.5 for your curve. The only thing that matters for the ROC curve is the percentage of FP compared to the percentage of TP, whether the model is balanced or not.

---

Edit after comment question: this depends how you use AUC (the area under the ROC curve, what you might call the ROC metric). AUC measures the performance of one model on one set. So if you apply it on Train, it'll measure how your model (built on Train) performs on Train (you often do this to compare AUC_Train and AUC_Test and see if you overfit). AUC has nothing to do with how your model is built; it just evaluates the result of one model applied on one certain set. Whether the set is Train or Test, when you calculate AUC it's just "the set on which you test your model's performance", so this makes no difference. Also, if you want a probabilistic way to understand AUC: if you have an AUC of 0.8, it means that if you take one random '1' row and one random '0' row and apply your trained model to them, the probability of the '1' row getting a higher score than the '0' row is 0.8. You then understand why AUC = 0.5 means the model is a random classifier.
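The arithmetic of the example above can be scripted directly (a minimal sketch; the counts are the ones from the example, not the output of any real model):

```python
def rates(tp, fn, fp, tn):
    """Return the ROC point (FPR, TPR) from confusion-matrix counts."""
    return fp / (fp + tn), tp / (tp + fn)

# Imbalanced test set: 100 zeros, 10 ones; 5 ones found, 30 zeros misclassified.
point_imbalanced = rates(tp=5, fn=5, fp=30, tn=70)

# Balanced test set: 50 zeros, 50 ones; same *rates* (25 ones found, 15 zeros misclassified).
point_balanced = rates(tp=25, fn=25, fp=15, tn=35)

assert point_imbalanced == point_balanced == (0.3, 0.5)
```

Both class mixes land on the same (0.3, 0.5) point, because the ROC curve is built from rates, not raw counts.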
Why does MLE make sense, given the probability of an individual sample is 0?
The probability of any sample, $\mathbb{P}_\theta(X=x)$, is equal to zero and yet one sample is realised by drawing from a probability distribution. Probability is therefore the wrong tool for evaluating a sample and the likelihood that it occurs. The statistical likelihood, as defined by Fisher (1912), is based on a limiting argument: the probability of observing the sample $x$ within an interval of length $\delta$, renormalised by $\delta$, as $\delta$ goes to zero (see Aldrich, 1997). The term likelihood function was only introduced in Fisher (1921) and maximum likelihood in Fisher (1922). Although he went under the denomination of "most probable value", and used a principle of inverse probability (Bayesian inference) with a flat prior, Carl Friedrich Gauß had already derived in 1809 a maximum likelihood estimator for the variance parameter of a Normal distribution. Hald (1999) mentions several other occurrences of maximum likelihood estimators before Fisher's 1912 paper, which set the general principle. A later justification of the maximum likelihood approach is that, since the renormalised log-likelihood of a sample $(x_1,\ldots,x_n)$, $$\frac{1}{n} \sum_{i=1}^n \log f_\theta(x_i),$$ converges [by the Law of Large Numbers] to $$\mathbb{E}[\log f_\theta(X)]=\int \log f_\theta(x)\,f_0(x)\,\text{d}x$$ (where $f_0$ denotes the true density of the iid sample), maximising the likelihood [as a function of $\theta$] is asymptotically equivalent to minimising [in $\theta$] the Kullback-Leibler divergence $$\int \log \dfrac{f_0(x)}{f_\theta(x)}\, f_0(x)\,\text{d}x=\underbrace{\int \log f_0(x)\,f_0(x)\,\text{d}x}_{\text{constant}\\\text{in }\theta}-\int \log f_\theta(x)\,f_0(x)\,\text{d}x$$ between the true distribution of the iid sample and the family of distributions represented by the $f_\theta$'s.
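The Law-of-Large-Numbers argument is easy to watch in a simulation (a sketch with made-up numbers: an iid $N(2,1)$ sample and a grid search over $\theta$ for the $N(\theta,1)$ family):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=50_000)   # iid sample, true mean 2

# Renormalised log-likelihood (1/n) * sum_i log f_theta(x_i) for N(theta, 1)
thetas = np.linspace(0.0, 4.0, 401)
avg_loglik = np.array([-0.5 * np.mean((x - t) ** 2) - 0.5 * np.log(2 * np.pi)
                       for t in thetas])

theta_hat = thetas[np.argmax(avg_loglik)]

# The maximiser sits at the sample mean (the analytic MLE), which in turn is
# close to the true value 2, the minimiser of the Kullback-Leibler divergence.
assert abs(theta_hat - x.mean()) < 0.01
assert abs(theta_hat - 2.0) < 0.05
```

Individual points still have probability zero; what is being maximised is the (renormalised) density, and its average converges to the expectation in the KL identity above.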
How to perform non-negative ridge regression?
The rather anti-climactic answer to "Does anyone know why this is?" is that simply nobody cares enough to implement a non-negative ridge regression routine. One of the main reasons is that people have already started implementing non-negative elastic net routines (for example here and here). The elastic net includes ridge regression as a special case (one essentially sets the LASSO part to have a zero weighting). These works are relatively new so they have not yet been incorporated in scikit-learn or a similar general-use package. You might want to inquire with the authors of these papers for code.

EDIT:

As @amoeba and I discussed in the comments, the actual implementation of this is relatively simple. Say one has the following regression problem:

$y = 2 x_1 - x_2 + \epsilon, \qquad \epsilon \sim N(0,0.2^2)$

where $x_1$ and $x_2$ are both standard normals, i.e. $x_p \sim N(0,1)$. Notice I use standardised predictor variables so I do not have to normalise afterwards. For simplicity I do not include an intercept either. We can immediately solve this regression problem using standard linear regression. So in R it should be something like this:

rm(list = ls());
library(MASS);
set.seed(123);
N = 1e6;
x1 = rnorm(N)
x2 = rnorm(N)
y = 2 * x1 - 1 * x2 + rnorm(N, sd = 0.2)

simpleLR = lm(y ~ -1 + x1 + x2)
matrixX = model.matrix(simpleLR)   # This is close to standardised
vectorY = y
all.equal(coef(simpleLR), qr.solve(matrixX, vectorY), tolerance = 1e-7)  # TRUE

Notice the last line. Almost all linear regression routines use the QR decomposition to estimate $\beta$. We would like to use the same for our ridge regression problem. At this point read this post by @whuber; we will be implementing exactly this procedure. In short, we will be augmenting our original design matrix $X$ with a $\sqrt{\lambda}I_p$ diagonal matrix and our response vector $y$ with $p$ zeros. In that way we will be able to re-express the original ridge regression problem $(X^TX + \lambda I)^{-1} X^Ty$ as $(\bar{X}^T\bar{X})^{-1} \bar{X}^T\bar{y}$, where the $\bar{}$ symbolises the augmented version. Check slides 18-19 from these notes too for completeness; I found them quite straightforward. So in R we would do something like the following:

myLambda = 100;

simpleRR = lm.ridge(y ~ -1 + x1 + x2, lambda = myLambda)
newVecY = c(vectorY, rep(0, 2))
newMatX = rbind(matrixX, sqrt(myLambda) * diag(2))
all.equal(coef(simpleRR), qr.solve(newMatX, newVecY), tolerance = 1e-7)  # TRUE

and it works. OK, so we got the ridge regression part. We could solve it in another way though: we could formulate it as an optimisation problem where the residual sum of squares is the cost function and then optimise against it, i.e. $ \displaystyle \min_{\beta} || \bar{y} - \bar{X}\beta||_2^2$. Sure enough we can do that:

myRSS <- function(X, y, b){
  return( sum( (y - X %*% b)^2 ) )
}

bfgsOptim = optim(myRSS, par = c(1,1), X = newMatX, y = newVecY,
                  method = 'L-BFGS-B')
all.equal(coef(simpleRR), bfgsOptim$par, check.attributes = FALSE,
          tolerance = 1e-7)  # TRUE

which as expected again works. So now we just want $ \displaystyle \min_{\beta} || \bar{y} - \bar{X}\beta||_2^2$ where $\beta \geq 0$, which is simply the same optimisation problem but constrained so that the solution is non-negative:

bfgsOptimConst = optim(myRSS, par = c(1,1), X = newMatX, y = newVecY,
                       method = 'L-BFGS-B', lower = c(0,0))
all(bfgsOptimConst$par >= 0)  # TRUE
(bfgsOptimConst$par)  # 2.000504 0.000000

which shows that the original non-negative ridge regression task can be solved by reformulating it as a simple constrained optimisation problem. Some caveats:

1. I used (practically) normalised predictor variables. You will need to account for the normalisation yourself.
2. The same goes for the non-normalisation of the intercept.
3. I used optim's L-BFGS-B argument. It is the most vanilla R solver that accepts bounds. I am sure you will find dozens of better solvers.
4. In general, constrained linear least-squares problems are posed as quadratic optimisation tasks. This is overkill for this post, but keep in mind that you can get better speed if needed.
5. As mentioned in the comments, you could skip the ridge-regression-as-augmented-linear-regression part and directly encode the ridge cost function as an optimisation problem. This would be a lot simpler and this post significantly smaller. For the sake of argument I append this second solution too.
6. I am not fully conversant in Python, but essentially you can replicate this work by using NumPy's linalg.solve and SciPy's optimize functions.
7. To pick the hyperparameter $\lambda$ etc. you just do the usual CV step you would do in any case; nothing changes.

Code for point 5:

myRidgeRSS <- function(X, y, b, lambda){
  return( sum( (y - X %*% b)^2 ) + lambda * sum(b^2) )
}

bfgsOptimConst2 = optim(myRidgeRSS, par = c(1,1), X = matrixX, y = vectorY,
                        method = 'L-BFGS-B', lower = c(0,0),
                        lambda = myLambda)
all(bfgsOptimConst2$par >= 0)  # TRUE
(bfgsOptimConst2$par)  # 2.000504 0.000000
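For what it's worth, the constrained step translates almost line for line to Python (a sketch on freshly simulated data of the same form; scipy.optimize.minimize with L-BFGS-B plays the role of optim, and here the ridge cost is encoded directly rather than via the augmented matrix):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(123)
n, lam = 10_000, 100.0
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = 2 * x1 - x2 + rng.normal(scale=0.2, size=n)
X = np.column_stack([x1, x2])

# Ridge cost ||y - Xb||^2 + lambda * ||b||^2, minimised subject to b >= 0
def ridge_rss(b):
    r = y - X @ b
    return r @ r + lam * b @ b

res = minimize(ridge_rss, x0=np.ones(2), method="L-BFGS-B",
               bounds=[(0, None), (0, None)])

assert res.x[1] < 1e-4            # the negative coefficient is clipped at 0
assert 1.9 < res.x[0] < 2.01      # the positive one is (slightly) shrunk
```

The same pattern extends to any differentiable penalty; only the cost function changes.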
How to perform non-negative ridge regression?
The R package glmnet, which implements the elastic net (and therefore lasso and ridge), allows this. With the parameters lower.limits and upper.limits you can set a minimum or a maximum value for each weight separately, so if you set the lower limits to 0 it will perform non-negative elastic net (lasso/ridge). There is also a Python wrapper: https://pypi.python.org/pypi/glmnet/2.0.0
How to perform non-negative ridge regression?
I know it's an oldie, but I put an example here of how to do this in Python, either by using ElasticNet (approximation) or via nnls(). Basically, for ElasticNet you can use:

from sklearn import linear_model as lm

eln = lm.ElasticNet(l1_ratio=0, fit_intercept=False)
act_alphas, coefs, dual_gaps = eln.path(X, y, alphas=alphas, positive=True)

but be forewarned that ElasticNet spews a lot of warnings for large values of alpha, as it doesn't converge. Via nnls():

from scipy.optimize import nnls

def nnls_ridge(X, y, alpha):
    """return non-negative Ridge coefficients"""
    p = X.shape[1]
    Xext = np.vstack((X, alpha * np.eye(p)))
    yext = np.hstack((y, np.zeros(p)))
    coefs, _ = nnls(Xext, yext)
    return coefs

Note (following the notations of this excellent post): while it is true that in an unconstrained setting the OLS formulation of ridge, $$(X_{*}^\prime X_{*})\beta = X_{*}^\prime y_{*},$$ is equivalent to $$(X^\prime X + \lambda I)\beta = X^\prime y,$$ in a constrained setting (e.g. bounds = (0, np.inf), or using nnls()) the two are not equivalent, as $(X^\prime X + \lambda I)$ has already been reduced to a $p \times p$ matrix by (unconstrained) LS projection. In particular, my initial attempt at nnls_ridge() is wrong:

def nnls_ridge_wrong(X, y, alpha):
    p = X.shape[1]
    Xmod = X.T @ X + alpha * np.eye(p)
    ymod = X.T @ y
    coefs, _ = nnls(Xmod, ymod)
    return coefs

Example (using the functions described in the GitHub link at the beginning of this post):

X = np.arange(6).reshape((-1, 2))
y = np.array([8, 1, 1])
a = 0.001

>>> ols_ridge(X, y, a)
array([-8.56345838,  6.81837427])
>>> nn_ridge_path_via_elasticnet(X, y, [a])[1].T
array([[0.        , 0.45708041]])
>>> nnls_ridge(X, y, a)
array([0.        , 0.45714284])
>>> nnls_ridge_wrong(X, y, a)
array([0.        , 0.37663842])

For a $X_{12,10}$ and $y_{12}$ where all components are standard normal $N(0,1)$, we see some drastic differences in $R^2$ for nnls_ridge_wrong (the bottom-left pair of plots, not reproduced here).
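As a cross-check (a sketch on random data, independent of the example above): on the same augmented system, nnls() should agree with scipy.optimize.lsq_linear, SciPy's generic bound-constrained least-squares solver.

```python
import numpy as np
from scipy.optimize import nnls, lsq_linear

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 5))
y = rng.normal(size=30)
alpha = 0.5
p = X.shape[1]

# Augmented system, as in nnls_ridge above
Xext = np.vstack([X, alpha * np.eye(p)])
yext = np.hstack([y, np.zeros(p)])

b_nnls, _ = nnls(Xext, yext)                          # active-set solver
b_lsq = lsq_linear(Xext, yext, bounds=(0, np.inf)).x  # generic bounded LS

assert np.allclose(b_nnls, b_lsq, atol=1e-5)
assert (b_nnls >= 0).all()
```

Both solve the same bound-constrained least-squares problem, so any disagreement would point at a bug in the augmentation, not the solver.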
How to perform non-negative ridge regression?
Recall we are trying to solve $$ \text{minimize}_{x}\,\,\,\,\left\Vert Ax-y\right\Vert _{2}^{2}+ \lambda \| x \|^2_2 \,\,\,\,\text{s.t. }x>0, $$ which is equivalent to $$ \text{minimize}_{x}\,\,\,\,\left\Vert Ax-y\right\Vert _{2}^{2}+ \lambda x^\top I x \,\,\,\,\text{s.t. }x>0. $$ With some more algebra: $$\text{minimize}_{x}\,\,\,\,x^{T}\left(A^{T}A+ \lambda I \right)x+\left(-2A^{T}y\right)^{T}x\,\,\,\,\text{s.t. }x>0.$$ The solution in pseudo-Python is simply:

Q = A'A + lambda*I
c = A'y
x, _ = scipy.optimize.nnls(Q, c)

See How does one do sparse non-negative least squares using $K$ regularizers of the form $x^\top R_k x$? for a slightly more general answer.
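Any candidate solver can be sanity-checked by minimising the quadratic objective directly with a bound-constrained method (a sketch on random data; note that under the $\lambda\|x\|_2^2$ penalty the equivalent augmented non-negative least-squares system uses $\sqrt{\lambda}$, not $\lambda$, on the diagonal):

```python
import numpy as np
from scipy.optimize import minimize, nnls

rng = np.random.default_rng(7)
A = rng.normal(size=(40, 6))
y = rng.normal(size=40)
lam = 2.0
p = A.shape[1]

# Direct minimisation of ||Ax - y||^2 + lam * ||x||^2  subject to x >= 0
def obj(x):
    r = A @ x - y
    return r @ r + lam * x @ x

x_qp = minimize(obj, np.zeros(p), method="L-BFGS-B",
                bounds=[(0, None)] * p,
                options={"ftol": 1e-12, "gtol": 1e-10}).x

# Equivalent augmented non-negative least squares (note the sqrt)
Aext = np.vstack([A, np.sqrt(lam) * np.eye(p)])
yext = np.hstack([y, np.zeros(p)])
x_nnls, _ = nnls(Aext, yext)

assert np.allclose(x_qp, x_nnls, atol=1e-4)
```

The two objectives differ only by the constant $\|y\|^2$ term, so their minimisers over $x \geq 0$ coincide.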
Success of Bernoulli trials with different probabilities
The distribution you are asking about is called the Poisson binomial distribution, with the rather complicated pmf (see Wikipedia for a broader description) $$ \Pr(X=x) = \sum\limits_{A\in F_x} \prod\limits_{i\in A} p_i \prod\limits_{j\in A^c} (1-p_j). $$ Generally, the problem is that using it directly for a larger number of trials would be very slow. There are also other methods of calculating the pmf, e.g. recursive formulas, but they are numerically unstable. The easiest way around those problems are approximation methods (described e.g. by Hong, 2013). If we define $$ \mu = \sum_{i=1}^n p_i, $$ $$ \sigma = \sqrt{ \sum_{i=1}^n p_i(1-p_i) }, $$ $$ \gamma = \sigma^{-3} \sum_{i=1}^n p_i (1 - p_i) (1 - 2p_i),$$ then we can approximate the pmf with a Poisson distribution via the law of small numbers or Le Cam's theorem, $$ \Pr(X = x) \approx \frac{\mu^x \exp(-\mu)}{x!}, $$ but it seems that generally the Binomial approximation behaves better (Choi and Xia, 2002), $$ \Pr(X = x) \approx \mathrm{Binom} \left( n, \frac{\mu}{n} \right);$$ you can use the Normal approximation $$ f(x) \approx \frac{1}{\sigma}\,\phi \left( \frac{x + 0.5 - \mu}{ \sigma} \right); $$ or the cdf can be approximated using the so-called refined Normal approximation (Volkova, 1996), $$ F(x) \approx \max\left(0, \ g \left( \frac{x + 0.5 - \mu}{ \sigma} \right) \right),$$ where $g(x) = \Phi(x) + \gamma(1-x^2) \frac{\phi(x)}{6}$. Another alternative is of course a Monte Carlo simulation.

A simple dpbinom R function would be

dpbinom <- function(x, prob, log = FALSE, method = c("MC", "PA", "NA", "BA"), nsim = 1e4) {
  stopifnot(all(prob >= 0 & prob <= 1))
  method <- match.arg(method)
  if (method == "PA") {          # Poisson
    dpois(x, sum(prob), log)
  } else if (method == "NA") {   # normal
    dnorm(x, sum(prob), sqrt(sum(prob * (1 - prob))), log)
  } else if (method == "BA") {   # binomial
    dbinom(x, length(prob), mean(prob), log)
  } else {                       # Monte Carlo
    tmp <- table(colSums(replicate(nsim, rbinom(length(prob), 1, prob))))
    tmp <- tmp / sum(tmp)
    p <- as.numeric(tmp[as.character(x)])
    p[is.na(p)] <- 0
    if (log) log(p) else p
  }
}

Most of the methods (and more) are also implemented in the R poibin package.

References:

Chen, L.H.Y. (1974). On the convergence of Poisson binomial to Poisson distributions. The Annals of Probability, 2(1), 178-180.
Chen, S.X. and Liu, J.S. (1997). Statistical applications of the Poisson-binomial and conditional Bernoulli distributions. Statistica Sinica, 7, 875-892.
Chen, S.X. (1993). Poisson-binomial distribution, conditional Bernoulli distribution and maximum entropy. Technical report, Department of Statistics, Harvard University.
Chen, X.H., Dempster, A.P. and Liu, J.S. (1994). Weighted finite population sampling to maximize entropy. Biometrika, 81, 457-469.
Wang, Y.H. (1993). On the number of successes in independent trials. Statistica Sinica, 3(2), 295-312.
Hong, Y. (2013). On computing the distribution function for the Poisson binomial distribution. Computational Statistics & Data Analysis, 59, 41-51.
Volkova, A.Y. (1996). A refinement of the central limit theorem for sums of independent random indicators. Theory of Probability and its Applications, 40, 791-794.
Choi, K.P. and Xia, A. (2002). Approximating the number of successes in independent trials: Binomial versus Poisson. The Annals of Applied Probability, 14(4), 1139-1148.
Le Cam, L. (1960). An approximation theorem for the Poisson binomial distribution. Pacific Journal of Mathematics, 10(4), 1181-1197.
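For moderate $n$ the exact pmf is also cheap to compute by convolving the Bernoulli distributions one at a time (a small Python sketch of that dynamic-programming pass; it avoids the explicit sum over subsets, and in double precision behaves well for moderate $n$):

```python
def poisson_binomial_pmf(probs):
    """Exact pmf of the number of successes, by sequential convolution."""
    pmf = [1.0]                         # distribution over 0 successes so far
    for p in probs:
        nxt = [0.0] * (len(pmf) + 1)
        for k, q in enumerate(pmf):
            nxt[k]     += q * (1 - p)   # this trial fails
            nxt[k + 1] += q * p         # this trial succeeds
        pmf = nxt
    return pmf

pmf = poisson_binomial_pmf([0.2, 0.5, 0.8])

assert abs(sum(pmf) - 1.0) < 1e-12
assert abs(pmf[0] - 0.8 * 0.5 * 0.2) < 1e-12          # P(X = 0)
mean = sum(k * q for k, q in enumerate(pmf))
assert abs(mean - (0.2 + 0.5 + 0.8)) < 1e-12          # mean = sum of p_i
```

This is $O(n^2)$ overall, so the approximations above only become necessary for large $n$.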
Success of Bernoulli trials with different probabilities
One approach is to use generating functions. The solution to your problem is the coefficient of $x^n$ in the polynomial $$\prod_{i=1}^{20}(p_ix + 1-p_i).$$ This is the dynamic programming equivalent (quadratic time in the number of Bernoulli variables) of doing the summation in the Poisson Binomial distribution from Tim's answer (which would be exponential time). Here's the Python code of the quadratic-time dynamic programming algorithm for a given $n$ and $p$:

    import numpy as np

    def calculated_probability(ps, n):
        total = np.zeros((ps.shape[0] + 1,))
        total[0] = 1.0
        for p in ps:
            total = p * np.roll(total, 1) + (1 - p) * total
        return total[n]

    rng = np.random.default_rng(12345)
    ps = rng.uniform(size=10000)
    print(calculated_probability(ps, 5000))  # 0.008196669065619853

Its numerical precision could be improved by implementing the Kahan summation algorithm, but there's probably very little benefit since adjacent entries in the running totals (which are the addends) are usually not that different in magnitude.
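A quick sanity check on the generating-function idea: when all $p_i$ are equal, the coefficient of $x^n$ must collapse to the ordinary binomial pmf, which can be verified against math.comb. The function name coeff_probability is just for this sketch; it runs the same recursion as the answer's code, but with an explicit zero-padded shift instead of np.roll so nothing can wrap around:

```python
import math
import numpy as np

def coeff_probability(ps, n):
    # Accumulate the coefficients of prod_i (p_i * x + 1 - p_i).
    total = np.zeros(len(ps) + 1)
    total[0] = 1.0
    for p in ps:
        shifted = np.concatenate(([0.0], total[:-1]))  # multiply by p * x
        total = p * shifted + (1 - p) * total
    return total[n]

p, trials, k = 0.3, 12, 5
dp = coeff_probability([p] * trials, k)
exact = math.comb(trials, k) * p**k * (1 - p) ** (trials - k)
print(dp, exact)
```

The two numbers agree to floating-point precision, which is a useful regression test before trusting the method on unequal probabilities.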
Product and sum of big $O_p$ random variables
If $X_n=O_p(a_n)$ and $Y_n=O_p(b_n)$, this means that we can choose $M_X$ and $M_Y$ such that $$ P(|X_n/a_n|>M_X)<\epsilon/2\\ P(|Y_n/b_n|>M_Y)<\epsilon/2 $$ Your statement is that $X_nY_n=O_p(a_nb_n)$. Consider this product and let $M_{XY}=M_XM_Y$. Then we want to show: $$ P\left(\left|\frac{X_nY_n}{a_nb_n}\right|>M_{XY}\right)=P\left(\left|\frac{X_nY_n}{a_nb_n}\right|>M_{XY}, \left|\frac{X_n}{a_n}\right|\leq M_X\right)+P\left(\left|\frac{X_nY_n}{a_nb_n}\right|>M_{XY}, \left|\frac{X_n}{a_n}\right|>M_X\right)<\epsilon $$ For the first term, $$ P\left(\left|\frac{X_nY_n}{a_nb_n}\right|>M_{XY}, \left|\frac{X_n}{a_n}\right|\leq M_X\right)\leq P\left(\left|\frac{M_Xa_nY_n}{a_nb_n}\right|>M_{XY}\right)=P\left(\left|\frac{Y_n}{b_n}\right|>\frac{M_{XY}}{M_X}\right)=P\left(\left|\frac{Y_n}{b_n}\right|>M_Y\right)<\epsilon/2. $$ For the second term, $$ P\left(\left|\frac{X_nY_n}{a_nb_n}\right|>M_{XY}, \left|\frac{X_n}{a_n}\right|>M_X\right)\leq P\left( \left|\frac{X_n}{a_n}\right|>M_X\right)<\epsilon/2. $$ So together you get that $$ P\left(\left|\frac{X_nY_n}{a_nb_n}\right|>M_{XY}\right)\leq P\left(\left|\frac{Y_n}{b_n}\right|>M_Y\right)+P\left( \left|\frac{X_n}{a_n}\right|>M_X\right)<\epsilon. $$ For addition, use the definition and go from there. First, let us assume that $a_n$ and $b_n$ are both positive. This makes it easier, but it is not particularly restrictive. If $X_n$ is $O_p(a_n)$, that is the same as saying that $X_n/a_n$ is uniformly tight. If $X_n/a_n$ is uniformly tight, then obviously $-X_n/a_n$ must also be. So the results for positive sequences can be directly translated to negative sequences (but for the $O_p$ statements, we have absolute values, so for that it would not matter). Again, we have $$ P(|X_n/a_n|>M_X)<\epsilon/2\\ P(|Y_n/b_n|>M_Y)<\epsilon/2. 
$$ We now want to show $$ P\left(\left|\frac{X_n+Y_n}{a_n+b_n}\right|>M_{XY}\right)\leq P\left(\left|\frac{X_n}{a_n+b_n}\right|>M_{XY}/2\right)+P\left(\left|\frac{Y_n}{a_n+b_n}\right|>M_{XY}/2\right)\\ =P\left(\left|\frac{X_n}{a_n}\right|\left|\frac{1}{1+\frac{b_n}{a_n}}\right|>M_{XY}/2\right)+P\left(\left|\frac{Y_n}{b_n}\right|\left|\frac{1}{1+\frac{a_n}{b_n}}\right|>M_{XY}/2\right)\\ \leq P\left(\left|\frac{X_n}{a_n}\right|>{M_{XY}}/{2}\right)+P\left(\left|\frac{Y_n}{b_n}\right|>M_{XY}/2\right)\\ <\epsilon/2+\epsilon/2=\epsilon. $$ Here, letting $M_{XY}=2\max(M_X, M_Y)$ would put you on the safe side. The more common statement of the rule is, however, $O_p(a_n)+O_p(b_n)=O_p(a_n)$ if $a_n$ is of equal or higher order than $b_n$. For example, if $a_n=n^2$ and $b_n=n$, then $O_p(n^2+n)$ is a bit redundant and $O_p(n^2)$ is enough. Formulating it in this way (i.e., when $|a_n|$ is of the same or higher order than $|b_n|$), it is easier to show: $$ P\left(\left|\frac{X_n+Y_n}{a_n}\right|>M_{XY}\right)\leq P\left(\left|\frac{X_n}{a_n}\right|>M_{XY}/2\right)+P\left(\left|\frac{Y_n}{a_n}\right|>M_{XY}/2\right)\\ \leq P\left(\left|\frac{X_n}{a_n}\right|>M_{XY}/2\right)+P\left(\left|\frac{Y_n}{b_n}\right|>M_{XY}/2\right)\\ <\epsilon/2+\epsilon/2=\epsilon. $$
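None of this replaces the proof, but the product rule is easy to see numerically. Below, $X_n$ and $Y_n$ are independent sample means of mean-zero Uniform(-1, 1) draws, so each is $O_p(n^{-1/2})$ and the product rule predicts $X_nY_n=O_p(n^{-1})$; the empirical 95% quantiles of $n\,|X_nY_n|$ should therefore stay bounded as $n$ grows. All names here are ad hoc for this illustration:

```python
import random

def sample_mean(rng, n):
    """Mean of n Uniform(-1, 1) draws; O_p(n^{-1/2}) around 0."""
    return sum(rng.uniform(-1.0, 1.0) for _ in range(n)) / n

def q95_scaled_product(n, reps=500, seed=0):
    """Empirical 95% quantile of n * |X_n * Y_n| over independent replications."""
    rng = random.Random(seed)
    vals = sorted(n * abs(sample_mean(rng, n) * sample_mean(rng, n))
                  for _ in range(reps))
    return vals[int(0.95 * reps)]

for n in (50, 200, 800):
    print(n, q95_scaled_product(n))
```

The printed quantiles hover around the same modest value for every $n$, which is exactly what uniform tightness of $X_nY_n/(a_nb_n)$ looks like in simulation.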
Product and sum of big $O_p$ random variables
I will answer your second question about addition. By the triangle inequality, $$\Vert X_n + Y_n\Vert \le \Vert X_n \Vert + \Vert Y_n \Vert.$$ Now, suppose that $a_n,b_n > 0$. We want to show that for an arbitrary $\varepsilon > 0$ there exist large enough $t,N > 0$ such that $P(\Vert X_n + Y_n \Vert > (a_n + b_n)t) < \varepsilon$ for all $n \ge N$. For $t > 0$, $$\begin{align} P(\Vert X_n + Y_n \Vert > (a_n + b_n)t) &\le P(\Vert X_n\Vert + \Vert Y_n \Vert > (a_n + b_n)t) \\ & \le P\left(\left\{\Vert X_n\Vert > a_n t \right\}\bigcup\left\{\Vert Y_n \Vert > b_n t\right\}\right) \\ &\le P\left(\Vert X_n\Vert > a_n t\right) + P\left(\Vert Y_n \Vert > b_n t\right) \end{align} $$ The second inequality (Event Subset) is the key point that drives the result. We want to show that $\Vert X_n\Vert + \Vert Y_n \Vert > (a_n + b_n)t$ implies that either $\Vert X_n \Vert > a_n t$ or $\Vert Y_n \Vert > b_nt$. This is easy to show by contradiction. Suppose that $\Vert X_n \Vert \le a_n t$ and $\Vert Y_n \Vert \le b_n t$. That would imply that $\Vert X_n\Vert + \Vert Y_n \Vert \le (a_n + b_n)t$, a contradiction. This form is convenient because it allows us to analyze each random variable separately. Our objective is to show that we can make the right-hand side arbitrarily small. I fill in some of the technical details to complete the proof below. Since $X_n = O_p(a_n)$ and $Y_n = O_p(b_n)$, there exist constants $(t_X,t_Y,N_X,N_Y)$ such that $P(\Vert X_n \Vert > a_n t_X) \le \varepsilon / 2$ for all $n \ge N_X$ and $P(\Vert Y_n \Vert > b_n t_Y) \le \varepsilon / 2$ for all $n \ge N_Y$. Choose $t^* = \max\{t_X,t_Y\}$ and $N^* = \max\{N_X,N_Y\}$. 
Then, $$P(\Vert X_n \Vert > a_n t^*) \quad \le \quad P(\Vert X_n \Vert > a_n t_X) \quad \le \varepsilon /2 \quad \forall n \ge N^* \ge N_X $$ $$P(\Vert Y_n \Vert > b_n t^*) \quad \le \quad P(\Vert Y_n \Vert > b_n t_Y) \quad \le \varepsilon /2 \quad \forall n \ge N^* \ge N_Y $$ Therefore, $$P(\Vert X_n + Y_n \Vert > (a_n + b_n)t^*) \quad \le \varepsilon/2 + \varepsilon/2 \quad = \varepsilon \quad \forall n \ge N^*$$ This shows that $X_n + Y_n$ is $O_p(a_n + b_n)$.
R package for combining p-values using Fisher's or Stouffer's method
The metap package by Michael Dewey implements many methods for combining p-values:

sumlog: Fisher's method
sumz: looks like Stouffer's method (with weights); this isn't mentioned explicitly in the function's documentation but is confirmed in the draft vignette (which is not part of the package yet)
meanp: When combining p-values, why not just averaging?
...
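For context, the two headline methods are simple enough to sketch without any package: Fisher's method combines one-sided p-values via $-2\sum_i \log p_i \sim \chi^2_{2k}$ under the null (and the chi-square survival function has a closed form for even degrees of freedom), while Stouffer's method averages normal scores. The plain functions below are an illustrative library-free sketch, not metap's API:

```python
import math
from statistics import NormalDist

def fisher(pvals):
    """Fisher's method: X = -2 * sum(log p_i) ~ chi^2 with 2k df under H0."""
    k = len(pvals)
    x = -2.0 * sum(math.log(p) for p in pvals)
    half = x / 2.0
    # Chi-square survival function at x with 2k df (closed form for even df):
    # exp(-x/2) * sum_{j=0}^{k-1} (x/2)^j / j!
    return math.exp(-half) * sum(half**j / math.factorial(j) for j in range(k))

def stouffer(pvals, weights=None):
    """Stouffer's method: Z = sum(w_i z_i) / sqrt(sum w_i^2), z_i = Phi^{-1}(1 - p_i)."""
    nd = NormalDist()
    if weights is None:
        weights = [1.0] * len(pvals)
    z = sum(w * nd.inv_cdf(1.0 - p) for w, p in zip(weights, pvals))
    z /= math.sqrt(sum(w * w for w in weights))
    return 1.0 - nd.cdf(z)

print(fisher([0.01, 0.04, 0.20]), stouffer([0.01, 0.04, 0.20]))
```

Both assume the p-values are independent and one-sided in the same direction; dedicated packages add weighting schemes, dependence corrections, and diagnostics on top of this core.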
R package for combining p-values using Fisher's or Stouffer's method
There's also the combine.test function in the survcomp package (on Bioconductor). It implements Fisher's and Stouffer's methods, as well as the logit method.
R package for combining p-values using Fisher's or Stouffer's method
There is the poolr package by Ozan Cinar and Wolfgang Viechtbauer. It includes functions for both Fisher's and Stouffer's methods.
Textbook for Bayesian econometrics
Bayesian Econometrics, by Gary Koop (2003), is a modern, rigorous coverage of the field that I recommend. It is complemented by a book of exercises: Bayesian Econometric Methods (Econometrics Exercises) by Gary Koop, Dale J. Poirier and Justin L. Tobias (2007).
Textbook for Bayesian econometrics
I suggest "Bayesian Data Analysis" by Gelman et al.: Gelman, A., Carlin, J., Stern, H., Dunson, D., Vehtari, A., Rubin, D. (2013). Bayesian Data Analysis, Third Edition. New York: Chapman and Hall/CRC.
Textbook for Bayesian econometrics
An Introduction to Bayesian Inference in Econometrics by Arnold Zellner (1971) From the cover: "This is the first book in econometrics to look at models and problems from the Bayesian point of view. [M]any comparisons of Bayesian and non-Bayesian results are presented. [...] An Introduction to Bayesian Inference in Econometrics will be of value as a guide to Bayesian Econometrics for graduate-level students and as a reference volume for researchers."
Textbook for Bayesian econometrics
While it's made for marketing, I'd suggest Bayesian Statistics and Marketing by Rossi, Allenby & McCulloch for Bayesian inference in economic models.
Textbook for Bayesian econometrics
I might consider Contemporary Bayesian Econometrics and Statistics by John Geweke. It is relatively brief. The first three chapters cover the sort of foundational stuff you find in any Bayesian analysis book. The next chapter is the linear model with a tad of non-linear regression, followed by latent variables and missing data, then time-series and closed with model comparison and evaluation. There's not very much on panel data or semi/nonparametric estimation.
How can I determine if categorical data is normally distributed?
Categorical data are not from a normal distribution. The normal distribution only makes sense if you're dealing with at least interval data, and the normal distribution is continuous and on the whole real line. If any of those aren't true you don't need to examine the data distribution to conclude that it's not consistent with normality. [Note that if it's not interval you have bigger issues than those associated with assuming a distribution shape, since even the calculation of a mean implies that you have interval scale. To say that "High" + "Very Low" = "Medium" + "Low" and "Very High" + "Medium" = "High" + "High" (i.e. exactly the sort of thing you need to hold to even begin adding values in the first place), you are forced to assume interval scale at that point.] It would be somewhat rare to have even reasonably approximate normal-looking samples with actual ratio data, since ratio data are generally non-negative and typically somewhat skew. When your measures are categorical, it's not so much that you can't "check" it as it generally makes no sense to do it - you already know it's not a sample from a normal distribution. Indeed, the idea of even trying makes no sense in the case of nominal data, since the categories don't even have an order! [The only distribution invariant to an arbitrary rearrangement of order would be a discrete uniform.] If your data are ordered categorical the intervals are arbitrary, and again, we're left with a notion we can't really do much with; even simpler notions like symmetry don't really hold up under arbitrary changes in intervals. To begin to contemplate even approximate normality means we must at least assume our categories are interval / have fixed, known "scores". But in any case, the question "is it normal?" isn't really a useful question anyway - since when are real data truly sampled from a normal distribution? 
[There can be situations in which it could be meaningful to consider whether the ordered categories have an underlying (latent) variable with (say) a normal distribution, but that's quite a different kind of consideration.] A more useful question is suggested by George Box: Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful. (I believe that's in Box and Draper, along with his more well known aphorism.) If you had discrete data that was at least interval, and had a fair number of categories, it might make sense to check that it wasn't heavily skew, say, but you wouldn't actually believe it to be drawn from a normal population - it can't be. For some inferential procedures, actual normality may not be especially important, particularly at larger sample sizes.
How can I determine if categorical data is normally distributed?
Categorical data are not from a normal distribution. The normal distribution only makes sense if you're dealing with at least interval data, and the normal distribution is continuous and on the whole
How can I determine if categorical data is normally distributed? Categorical data are not from a normal distribution. The normal distribution only makes sense if you're dealing with at least interval data, and the normal distribution is continuous and on the whole real line. If any of those aren't true you don't need to examine the data distribution to conclude that it's not consistent with normality. [Note that if it's not interval you have bigger issues than those associated assuming a distribution shape, since even the calculation of a mean implies that you have interval scale. To say that "High" + "Very Low" = "Medium" + "Low" and "Very High" + "Medium" = "High" + "High" (i.e. exactly the sort of thing you need to hold to even begin adding values in the first place), you are forced to assume interval scale at that point.] It would be somewhat rare to have even reasonably approximate normal-looking samples with actual ratio data, since ratio data are generally non-negative and typically somewhat skew. When your measures are categorical, it's not so much that you can't "check" it as it generally makes no sense to do it - you already know it's not a sample from a normal distribution. Indeed, the idea of even trying makes no sense in the case of nominal data, since the categories don't even have an order! [The only distribution invariant to an arbitrary rearrangement of order would be a discrete uniform.] If your data are ordered categorical the intervals are arbitrary, and again, we're left with a notion we can't really do much with; even simpler notions like symmetry don't really hold up under arbitrary changes in intervals. To begin to contemplate even approximate normality means we must at least assume our categories are interval / have fixed, known "scores". But in any case, the question "is it normal?" isn't really a useful question anyway - since when are real data truly sampled from a normal distribution? 
[There can be situations in which it could be meaningful to consider whether the ordered categories have an underlying (latent) variable with (say) a normal distribution, but that's quite a different kind of consideration.] A more useful question is suggested by George Box: Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful. (I believe that's in Box and Draper, along with his more well known aphorism.) If you had discrete data that was at least interval, and had a fair number of categories, it might make sense to check that it wasn't heavily skew, say, but you wouldn't actually believe it to be drawn from a normal population - it can't be. For some inferential procedures, actual normality may not be especially important, particularly at larger sample sizes.
19,985
Example of an inconsistent Maximum likelihood estimator
[I think this might be an example of the kind of situation under discussion in your question.]

There are numerous examples of inconsistent ML estimators. Inconsistency is commonly seen with a variety of slightly complicated mixture problems and censoring problems.

[Consistency of a test is basically just that the power of the test for a (fixed) false hypothesis increases to one as $n\to\infty$.]

Radford Neal gives an example in his blog entry of 2008-08-09, Inconsistent Maximum Likelihood Estimation: An “Ordinary” Example. It involves estimation of the parameter $\theta$ in:

$$X\ |\ \theta\ \ \sim\ \ (1/2) N(0,1)\ +\ (1/2) N(\theta,\exp(-1/\theta^2)^2)$$

(Neal uses $t$ where I have $\theta$) where the ML estimate of $\theta$ will tend to $0$ as $n\to\infty$ (and indeed the likelihood can be far higher in a peak near 0 than at the true value for quite modest sample sizes). It is nevertheless the case that there's a peak near the true value $\theta$; it's just smaller than the one near 0.

Imagine now two cases relating to this situation:

a) performing a likelihood ratio test of $H_0: \theta=\theta_0$ against the alternative $H_1: \theta<\theta_0$;

b) performing a likelihood ratio test of $H_0: \theta=\theta_0$ against the alternative $H_1: \theta\neq\theta_0$.

In case (a), imagine that the true $\theta<\theta_0$ (so that the alternative is true and $0$ is on the other side of the true $\theta$). Then in spite of the fact that the likelihood very close to 0 will exceed that at $\theta$, the likelihood at $\theta$ nevertheless exceeds the likelihood at $\theta_0$ even in small samples, and the ratio will continue to grow larger as $n\to\infty$, in such a way as to make the rejection probability in a likelihood ratio test go to 1.

Indeed, even in case (b), as long as $\theta_0$ is fixed and bounded away from $0$, it should also be the case that the likelihood ratio will grow in such a way as to make the rejection probability in a likelihood ratio test also approach 1.

So this would seem to be an example of inconsistent ML estimation, where the power of a LRT should nevertheless go to 1 (except when $\theta_0=0$).

[Note that there's really nothing to this that's not already in whuber's answer, which I think is an exemplar of clarity, and is far simpler for understanding the difference between test consistency and consistency of an estimator. The fact that the inconsistent estimator in the specific example wasn't ML doesn't really matter as far as understanding that difference - and bringing in an inconsistent estimator that's specifically ML, as I have tried to do here, doesn't really alter the explanation in any substantive way. The only real point of the example here is that I think it addresses your concern about using an ML estimator.]
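Neal's likelihood spike is easy to check numerically. In the sketch below (pure Python; the eight data values are made up to loosely resemble a sample from the mixture with true $\theta=1$), placing $\theta$ exactly on a small observation makes the log-likelihood exceed its value at the true parameter, because the second component's standard deviation $\exp(-1/\theta^2)$ collapses:

```python
import math

def loglik(theta, xs):
    # mixture density: 0.5*N(0,1) + 0.5*N(theta, exp(-1/theta^2)^2)
    s = math.exp(-1.0 / theta ** 2)  # sd of the spiky component
    total = 0.0
    for x in xs:
        p = 0.5 * math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
        z = (x - theta) / s
        p += 0.5 * math.exp(-0.5 * z * z) / (s * math.sqrt(2 * math.pi))
        total += math.log(p)
    return total

# made-up sample, loosely resembling draws from the mixture with theta = 1
xs = [-1.2, -0.5, 0.2, 0.3, 0.8, 0.9, 1.0, 1.1]

at_true = loglik(1.0, xs)   # log-likelihood at the true theta
at_spike = loglik(0.3, xs)  # theta placed exactly on a small observation
print(at_true, at_spike)
```

Even on eight points the spike wins by several nats, which is the mechanism driving the ML estimate toward 0 as $n$ grows.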
19,986
Example of an inconsistent Maximum likelihood estimator
Let $(X_n)$ be drawn iid from a Normal$(\mu, 1)$ distribution. Consider the estimator $$T(x_1, \ldots, x_n) = 1 + \bar{x} = 1 + \frac{1}{n}\sum_{i=1}^n x_i.$$ The distribution of $T(X_1,\ldots,X_n)=1+\bar{X}$ is Normal$(\mu+1, 1/\sqrt{n})$. It converges to $\mu+1\ne \mu$, showing it is inconsistent.

In comparing a null hypothesis $\mu=\mu_0$ to a simple alternative, say $\mu=\mu_A$, the log likelihood ratio will be exactly the same as the LLR based on $\bar{X}$ instead of $T$. (In effect, $T$ is useful for comparing the null hypothesis $\mu+1=\mu_0+1$ to the alternative hypothesis $\mu+1=\mu_A+1$.) Since the test based on the mean has power converging to $1$ for any test size $\alpha\gt 0$ and any effect size, the power of the test using $T$ itself also converges to $1$.
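Both halves of this argument can be checked by simulation (a sketch; the seed, sample sizes, and effect size are arbitrary choices for illustration):

```python
import math
import random

random.seed(1)
mu_true, mu0 = 0.5, 0.0   # true mean; null hypothesis mu = mu0

# Inconsistency: T = 1 + xbar converges to mu + 1, not mu
n = 10_000
xbar = sum(random.gauss(mu_true, 1) for _ in range(n)) / n
T = 1 + xbar

# Power: the level-0.05 test based on T (reject when
# sqrt(n)*|T - (1 + mu0)| > 1.96) is identical to the usual z-test
reps, n_test, rejections = 200, 100, 0
for _ in range(reps):
    xb = sum(random.gauss(mu_true, 1) for _ in range(n_test)) / n_test
    t = 1 + xb
    if math.sqrt(n_test) * abs(t - (1 + mu0)) > 1.96:
        rejections += 1
power = rejections / reps
print(T, power)   # T is near mu_true + 1 = 1.5; power is near 1
```

The estimator settles on the wrong value, yet the test built from it rejects the false null essentially every time.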
19,987
Determining parameters (p, d, q) for ARIMA modeling
In general, dig into an advanced time series analysis textbook (introductory books will usually direct you to just trust your software), like Time Series Analysis by Box, Jenkins & Reinsel. You may also find details on the Box-Jenkins procedure by googling. Note that there are other approaches than Box-Jenkins, e.g., AIC-based ones.

In R, you first convert your data into a ts (time series) object and tell R that the frequency is 12 (monthly data):

require(forecast)
sales <- ts(c(99, 58, 52, 83, 94, 73, 97, 83, 86, 63, 77, 70, 87, 84,
              60, 105, 87, 93, 110, 71, 158, 52, 33, 68, 82, 88, 84),
            frequency=12)

You can plot the (partial) autocorrelation functions:

acf(sales)
pacf(sales)

These don't suggest any AR or MA behavior. Then you fit a model and inspect it:

model <- auto.arima(sales)
model

See ?auto.arima for help. As we see, auto.arima chooses a simple (0,0,0) model, since it sees neither trend nor seasonality nor AR or MA in your data. Finally, you can forecast and plot the time series and forecast:

plot(forecast(model))

Look at ?forecast.Arima (note the capital A!).

This free online textbook is a great introduction to time series analysis and forecasting using R. Very much recommended.
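The ACF step is easy to reproduce without R. The plain-Python sketch below recomputes the first few sample autocorrelations of the same sales figures and checks them against the rough ±1.96/√n white-noise band (the usual large-sample approximation), which is consistent with auto.arima's choice of a (0,0,0) model:

```python
sales = [99, 58, 52, 83, 94, 73, 97, 83, 86, 63, 77, 70, 87, 84,
         60, 105, 87, 93, 110, 71, 158, 52, 33, 68, 82, 88, 84]

def acf(x, lag):
    # sample autocorrelation at the given lag
    n = len(x)
    m = sum(x) / n
    denom = sum((v - m) ** 2 for v in x)
    num = sum((x[t] - m) * (x[t + lag] - m) for t in range(n - lag))
    return num / denom

bound = 1.96 / len(sales) ** 0.5   # approx. 95% band for white noise
for k in range(1, 6):
    print(k, round(acf(sales, k), 3), abs(acf(sales, k)) < bound)
```

All of the first five autocorrelations fall well inside the band, i.e., no AR or MA structure is suggested.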
19,988
Determining parameters (p, d, q) for ARIMA modeling
Two things. First, your time series is monthly, so you need at least 4 years of data for a sensible ARIMA estimation; as reflected here, 27 points do not reveal the autocorrelation structure. This can also mean that your sales are affected by some external factors, rather than being correlated with their own past values. Try to find out what factor affects your sales and whether that factor is being measured. Then you can run a regression or a VAR (Vector Autoregression) to get forecasts. If you absolutely don't have anything else other than these values, your best bet is to use an exponential smoothing method to get a naive forecast. Exponential smoothing is available in R.

Secondly, don't view the sales of a product in isolation; the sales of two products might be correlated. For example, an increase in coffee sales can reflect a decrease in tea sales. Use the other product's information to improve your forecast. This typically happens with sales data in retail or supply chains: they don't show much of an autocorrelation structure in the series. Methods like ARIMA or GARCH, on the other hand, typically work with stock market data or economic indices, where you generally do have autocorrelation.
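A minimal simple-exponential-smoothing forecast for the series in the question can be sketched in a few lines (the smoothing constant 0.2 is an arbitrary choice for illustration; R's smoothing routines would estimate it from the data):

```python
def ses_forecast(series, alpha=0.2):
    # simple exponential smoothing: the level tracks a geometrically
    # weighted average of past observations; the forecast is the final level
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

sales = [99, 58, 52, 83, 94, 73, 97, 83, 86, 63, 77, 70, 87, 84,
         60, 105, 87, 93, 110, 71, 158, 52, 33, 68, 82, 88, 84]
print(ses_forecast(sales))   # flat one-step-ahead forecast
```

At alpha=1 the forecast collapses to the last observation (a pure naive forecast); at alpha=0 it stays at the first observation, so alpha controls how fast old data is forgotten.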
19,989
Determining parameters (p, d, q) for ARIMA modeling
This is really a comment, but it exceeds the allowable length, so I post it as a quasi-answer, as it suggests the correct way to analyze time series data.

The well-known fact, often ignored here and elsewhere, is that the theoretical ACF/PACF which is used to formulate a tentative ARIMA model presumes no pulses/level shifts/seasonal pulses/local time trends. Additionally, it presumes constant parameters and constant error variance over time. In this case the 21st observation (value = 158) is easily flagged as an outlier/pulse, and a suggested adjustment of -80 yields a modified value of 78. The resultant ACF/PACF of the modified series shows little or no evidence of stochastic (ARIMA) structure. In this case the operation was a success but the patient died.

The sample ACF is based upon the covariance/variance, and an unduly inflated/bloated variance yields a downward bias in the ACF. Prof. Keith Ord once referred to this as the "Alice in Wonderland effect".
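The variance-inflation effect is easy to demonstrate. In the sketch below (the toy series is made up for illustration), a single spike added to a strongly autocorrelated trending series drags the lag-1 sample autocorrelation toward zero, even though the underlying structure hasn't changed:

```python
def acf1(x):
    # lag-1 sample autocorrelation
    n = len(x)
    m = sum(x) / n
    denom = sum((v - m) ** 2 for v in x)
    return sum((x[t] - m) * (x[t + 1] - m) for t in range(n - 1)) / denom

clean = list(range(20))   # strong positive lag-1 autocorrelation
spiked = list(range(20))
spiked[10] = 100          # one additive outlier

print(acf1(clean), acf1(spiked))  # the outlier deflates the sample ACF
```

The outlier enters the denominator (variance) squared, but only two cross-product terms in the numerator, hence the downward bias.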
19,990
Determining parameters (p, d, q) for ARIMA modeling
As has been pointed out by Stephan Kolassa, there is not much structure in your data. The autocorrelation functions do not suggest an ARMA structure (see acf(sales), pacf(sales)) and forecast::auto.arima does not choose any AR or MA order.

require(forecast)
require(tsoutliers)
fit1 <- auto.arima(sales, d=0, D=0, ic="bic")
fit1
#ARIMA(0,0,0) with non-zero mean
#Coefficients:
#      intercept
#        81.3704
#s.e.     4.4070

Nevertheless, notice that the null of normality in the residuals is rejected at the 5% significance level.

JarqueBera.test(residuals(fit1))[[1]]
#X-squared = 12.9466, df = 2, p-value = 0.001544

Aside note: JarqueBera.test is based on the function jarque.bera.test available in package tseries.

Including the additive outlier at observation 21 that is detected with tsoutliers renders normality in the residuals. Thus, the estimate of the intercept and the forecast are not affected by the outlying observation.

res <- tsoutliers::tso(sales, types=c("AO", "TC", "LS"),
                       args.tsmethod=list(ic="bic", d=0, D=0))
res
#ARIMA(0,0,0) with non-zero mean
#Coefficients:
#      intercept     AO21
#        78.4231  79.5769
#s.e.     3.3885  17.6072
#sigma^2 estimated as 298.5: log likelihood=-115.25
#AIC=236.49  AICc=237.54  BIC=240.38
#Outliers:
#  type ind time coefhat tstat
#1   AO  21 2:09   79.58  4.52

JarqueBera.test(residuals(res$fit))[[1]]
#X-squared = 1.3555, df = 2, p-value = 0.5077
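As a very crude stand-in for the AO detection (just standardized deviations from the series mean; tso fits a proper intervention model, so this is only a rough approximation), the same observation is flagged:

```python
sales = [99, 58, 52, 83, 94, 73, 97, 83, 86, 63, 77, 70, 87, 84,
         60, 105, 87, 93, 110, 71, 158, 52, 33, 68, 82, 88, 84]

n = len(sales)
mean = sum(sales) / n
sd = (sum((x - mean) ** 2 for x in sales) / (n - 1)) ** 0.5

# flag points more than 3 standard deviations from the mean (1-based index)
flagged = [i + 1 for i, x in enumerate(sales) if abs(x - mean) / sd > 3]
print(flagged)   # observation 21 (value 158)
```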
19,991
Is there any functional difference between an odds ratio and hazard ratio?
"an odds ratio of 2 means that the event is 2 times more probable given a one-unit increase in the predictor"

It means the odds would double, which is not the same as the probability doubling.

"In Cox regression, a hazard ratio of 2 means the event will occur twice as often at each time point given a one-unit increase in the predictor."

Aside from a bit of hand-waving, yes - the rate of occurrence doubles. It's like a scaled instantaneous probability.

"Are these not practically the same thing?"

They're almost the same thing when doubling the odds of the event is almost the same as doubling the hazard of the event. They're not automatically similar, but under some (fairly common) circumstances they may correspond very closely.

You may want to consider the difference between odds and probability more carefully. See, for example, the first sentence here, which makes it clear that odds are the ratio of a probability to its complement. So, for example, increasing the odds (in favor) from 1 to 2 is the same as the probability increasing from $\frac{1}{2}$ to $\frac{2}{3}$. Odds increase faster than probability does. For very small probabilities, odds-in-favor and probability are very similar, while odds-against become increasingly similar (in the sense that the ratio will go to 1) to reciprocals of probability as probability gets small.

An odds ratio is simply the ratio of two sets of odds. Increasing the odds ratio while holding a base odds constant corresponds to increasing the other odds, but may or may not be similar to the relative change in probability.

You may also want to ponder the difference between hazard and probability (see my earlier mention of hand-waving; now we don't gloss over the difference). For example, if a probability is 0.6, you can't double it - but an instantaneous hazard of 0.6 can be doubled to 1.2. They're not the same thing, in the same way that probability density is not probability.
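The odds/probability distinction is two lines of arithmetic (a quick sketch): doubling odds of 1 takes the probability from 1/2 to 2/3, not to 1; only for rare events does doubling the odds come close to doubling the probability.

```python
def prob_to_odds(p):
    return p / (1 - p)

def odds_to_prob(o):
    return o / (1 + o)

p = 0.5
p_new = odds_to_prob(2 * prob_to_odds(p))   # odds ratio of 2 applied to odds of 1
print(p, p_new)   # 0.5 -> 2/3, not 1.0

# for rare events, doubling the odds is approximately doubling the probability
q = 0.01
q_new = odds_to_prob(2 * prob_to_odds(q))
print(q, q_new)   # close to 0.02
```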
19,992
Is there any functional difference between an odds ratio and hazard ratio?
This is a good question. But what you are really asking should not be how the statistic is interpreted but what assumptions underlie each of your respective models (hazard or logistic).

A logistic model is a static model which effectively predicts the likelihood of an event occurring at a particular time given observable information. A hazard model or Cox model, however, is a duration model which models survival rates over time. You might ask a question like "what is the likelihood of a cigarette user surviving to the age of 75 relative to that of a nonuser" with your logistic regression (given that you have information about mortality for a cohort up to 75 years of age). But if instead you want to take advantage of the fullness of the time dimension of your data, then using a hazard model will be more appropriate.

Ultimately, though, it really comes down to what you want to model. Do you believe what you are modelling is a one-time event? Use logistic. Do you believe your event has a fixed or proportional chance of occurring in each period over an observable time span? Use a hazard model.

Choosing methods should not be based on how you interpret the statistic. If this were the case, then there would be no difference between OLS, LAD, Tobit, Heckit, IV, 2SLS, or a host of other regression methods. It should instead be based on what form you believe the underlying model you are trying to estimate takes.
19,993
What algorithm should I use to cluster a huge binary dataset into few categories?
You are asking the wrong question. Instead of asking "what algorithm", you should be asking "what is a meaningful category/cluster in your application".

I'm not surprised that the above algorithms did not work - they are designed for very different use cases. k-means does not work with arbitrary other distances. Don't use it with Hamming distance. There is a reason why it is called k-means: it only makes sense to use when the arithmetic mean is meaningful (which it isn't for binary data). You may want to try k-modes instead; IIRC this is a variant that is actually meant to be used with categorical data, and binary data is somewhat categorical (but sparsity may still kill you).

But first of all, have you removed duplicates to simplify your data, and removed unique/empty columns, for example? Maybe APRIORI or similar approaches are also more meaningful for your problem.

Either way, first figure out what you need, then figure out which algorithm can solve this challenge. Work data-driven, not by trying out random algorithms.
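To make the k-modes suggestion concrete, here is a bare-bones sketch (the tiny data set and the deterministic evenly spaced initialization are made up for illustration; real implementations use better initializations such as Huang's or Cao's):

```python
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def kmodes(data, k, iters=10):
    n = len(data)
    # naive deterministic init: k evenly spaced rows as starting modes
    modes = [list(data[c * n // k]) for c in range(k)]
    assign = [0] * n
    for _ in range(iters):
        # assignment step: nearest mode by Hamming distance
        assign = [min(range(k), key=lambda c: hamming(row, modes[c]))
                  for row in data]
        # update step: per-column majority vote within each cluster
        for c in range(k):
            members = [row for row, a in zip(data, assign) if a == c]
            if members:
                modes[c] = [int(2 * sum(col) >= len(members))
                            for col in zip(*members)]
    return assign, modes

data = [[1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 0], [0, 0, 0, 0, 1, 1]]
assign, modes = kmodes(data, 2)
print(assign, modes)
```

On this toy data, the first three rows and the last three rows end up in separate clusters, with modes equal to the two "prototype" rows.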
19,994
What algorithm should I use to cluster a huge binary dataset into few categories?
A classic algorithm for binary data clustering is the Bernoulli Mixture Model. The model can be fit using Bayesian methods and can also be fit using EM (Expectation Maximization). You can find sample Python code all over GitHub; the former is more powerful but also more difficult. I have a C# implementation of the model on GitHub (it uses Infer.NET, which has a restrictive license!).

The model is fairly simple. First sample the cluster to which a data point belongs. Then independently sample from as many Bernoullis as you have dimensions in your dataset. Note that this implies conditional independence of the binary values given the cluster!

In the Bayesian setting, the prior over cluster assignments is a Dirichlet distribution. This is the place to put priors if you believe some clusters are larger than others. For each cluster you must specify a prior, a Beta distribution, for each Bernoulli distribution. Typically this prior is Beta(1,1), i.e., uniform. Finally, don't forget to randomly initialize the cluster assignments when data is given. This will break symmetry and the sampler won't get stuck.

There are several cool features of the BMM model in the Bayesian setting:

Online clustering (data can arrive as a stream)

The model can be used to infer the missing dimensions

The first is very handy when the dataset is very large and won't fit in the RAM of a machine. The second can be used in all sorts of missing data imputation tasks, e.g., imputing the missing half of a binary MNIST image.
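A compact EM sketch for the Bernoulli mixture follows (the tiny data set and the deterministic initialization are made up for illustration; a Bayesian treatment, as described above, would additionally place Dirichlet/Beta priors on the weights and means):

```python
def bernoulli_lik(row, m):
    # probability of a binary row under independent Bernoullis with means m
    p = 1.0
    for x, mj in zip(row, m):
        p *= mj if x else (1.0 - mj)
    return p

def em_bernoulli_mixture(data, k, iters=30):
    n, d = len(data), len(data[0])
    pi = [1.0 / k] * k
    # deterministic init: shrink k evenly spaced rows toward 0.5
    mu = [[0.25 + 0.5 * data[c * n // k][j] for j in range(d)]
          for c in range(k)]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each row
        resp = []
        for row in data:
            w = [pi[c] * bernoulli_lik(row, mu[c]) for c in range(k)]
            s = sum(w)
            resp.append([wi / s for wi in w])
        # M-step: update mixing weights and (clipped) Bernoulli means
        for c in range(k):
            rc = sum(r[c] for r in resp)
            pi[c] = rc / n
            for j in range(d):
                m = sum(r[c] * row[j] for r, row in zip(resp, data)) / rc
                mu[c][j] = min(max(m, 0.01), 0.99)  # keep away from 0/1
    return pi, mu, resp

data = [[1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 0], [1, 0, 1, 0, 0, 0],
        [0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 0], [0, 0, 0, 0, 1, 1]]
pi, mu, resp = em_bernoulli_mixture(data, 2)
labels = [max(range(2), key=lambda c: r[c]) for r in resp]
print(labels)
```

Unlike k-modes, the responsibilities give soft assignments, and the fitted means mu can be used to impute missing dimensions as mentioned above.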
19,995
What algorithm should I use to cluster a huge binary dataset into few categories?
You are asking the right question. And you can use kmeans! Despite what you may be told by some, you absolutely can cluster with kmeans; there is nothing about binary data that will cause kmeans to fail. However, you might want to consider the following:

1. Zero-mean your matrix by column. This means you compute the mean row vector, which is now a real-valued vector, and then subtract that vector from each of the original binary vectors. Your 0/1 binary matrix of 650K row vectors now becomes a real-valued matrix of 650K vectors. Note that this DOES NOT change the mutual distances (or similarities) between vectors; it is just a translation operation, applied identically to each vector.
2. Apply the sign function to the matrix. The sign function forces each matrix element to -1 if it is negative, or to +1 otherwise. The result of the transformations in steps 1 and 2 is that the new matrix is no longer sparse.
3. Now apply kmeans. You can use the Euclidean metric, or experiment with other metrics that your kmeans implementation supports. There is no need for a specific binary clustering algorithm; kmeans is simple, and clustering 650K vectors should be easily feasible on a decent desktop.
4. If you wish to have binary cluster vectors as the result, apply the sign function to the final k cluster centroids. You may also convert the final cluster vectors from the +1/-1 representation to the 0/1 representation (but only after applying the sign function).

Things to note: because you only have 62-dimensional vectors, the range of 'similarity' values possible between vectors in the binary representation is 62 (corresponding to a Hamming distance between 0 and 62). Since the range of distances between binary vectors is thus limited, any ranking by Hamming distance will necessarily produce numerous ties. As you try to squeeze 650K vectors into only 62 possible distance buckets, the number of vectors per bucket will depend on the number of clusters, but will generally be large, and you may need to resolve ties by going back to the original data from which you derived the initial binary matrix.
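The four steps above can be sketched in numpy as follows. This is a toy version under my own assumptions: a 2000-row matrix stands in for the 650K one, and the tiny Lloyd's loop stands in for whatever kmeans implementation you actually use:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.integers(0, 2, size=(2000, 62))   # toy stand-in for the 650K x 62 matrix

# Step 1: zero-mean by column (a translation; pairwise distances are unchanged).
Xc = B - B.mean(axis=0)

# Step 2: sign function -> dense +1/-1 matrix.
Xs = np.where(Xc >= 0, 1.0, -1.0)

# Step 3: plain Lloyd's kmeans with the Euclidean metric.
def kmeans(X, k, n_iter=20, seed=0):
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)]   # random initial centroids
    for _ in range(n_iter):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                C[j] = X[labels == j].mean(axis=0)     # recompute centroids
    return C, labels

C, labels = kmeans(Xs, k=4)

# Step 4: sign the centroids, then map +1/-1 back to 0/1 prototypes.
prototypes = (np.sign(C) + 1) // 2
```

For 650K rows you would swap the Lloyd's loop for a library implementation, but the transform in steps 1 and 2 is identical.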
19,996
What algorithm should I use to cluster a huge binary dataset into few categories?
Maybe I'm a little bit late with the answer, but it will probably be useful for somebody in the future. Adaptive Resonance Theory is a good algorithm for binary classification problems. You can read more about ART 1 (e.g. in the Neural Network Design book, chapter 19). The algorithm is quite easy to implement, and in the book you can also find step-by-step instructions on how to build the classifier.
19,997
What algorithm should I use to cluster a huge binary dataset into few categories?
You can certainly use kmeans, as mentioned by Shlomo Geva, but kmeans does not give the best answers for some datasets. I recommend you look at these publications:

- 'Powered Outer Probabilistic Clustering' (http://www.iaeng.org/publication/WCECS2017/WCECS2017_pp394-398.pdf)
- 'Clustering for Binary Featured Datasets' (https://link.springer.com/chapter/10.1007/978-981-13-2191-7_10)

You can also look at Principal Component Analysis (PCA, https://en.wikipedia.org/wiki/Principal_component_analysis), but in my experience with some examples it might not provide great answers either, similarly to kmeans.
19,998
Intuition behind the t-distributions density function
If you have a standard normal random variable, $Z$, and an independent chi-square random variable $Q$ with $\nu$ df, then $T = Z/\sqrt{Q/\nu}$ has a $t$ distribution with $\nu$ df. (I'm not sure what $Z/Q$ is distributed as, but it isn't $t$.) The actual derivation is a fairly standard result; Alecos does it a couple of ways here.

As far as intuition goes, I don't have particular intuition for the specific functional form, but some general sense of the shape can be obtained by considering that the (scaled by $\sqrt \nu$) independent chi distribution in the denominator is right skew: the mode is slightly below 1 (but gets closer to 1 as the df increase), with some chance of values substantially above and below 1. The variation in $\sqrt{Q/\nu}$ means that the variance of $t$ will be larger than that of $Z$. Values of $\sqrt{Q/\nu}$ substantially above 1 will lead to a $t$-value that's closer to 0 than $Z$ is, while ones substantially below 1 will result in a $t$-value that's further from 0 than $Z$ is.

All this means that $t$ values will be (i) more variable, (ii) relatively more peaked and (iii) heavier tailed than a normal. As the df increase, $\sqrt{Q/\nu}$ becomes concentrated around 1, and $t$ comes closer to the normal. (The 'relatively more peaked' part gives a slightly sharper peak relative to the spread, but the larger variance pulls the center down, which means the peak is slightly lower at lower df.)

So that's some intuition about why the $t$ looks as it does.
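The construction $T = Z/\sqrt{Q/\nu}$ and points (i) and (iii) are easy to check by simulation. A hedged numpy sketch; $\nu = 5$ is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 5, 1_000_000

Z = rng.standard_normal(n)        # standard normal numerator
Q = rng.chisquare(nu, size=n)     # independent chi-square with nu df
T = Z / np.sqrt(Q / nu)           # t-distributed with nu df

# (i) more variable: Var(T) = nu/(nu-2) = 5/3, larger than Var(Z) = 1
print(T.var())
# (iii) heavier tailed: P(|T| > 3) exceeds the normal tail P(|Z| > 3)
print((np.abs(T) > 3).mean(), (np.abs(Z) > 3).mean())
```

Raising `nu` in this sketch shows the concentration of $\sqrt{Q/\nu}$ around 1 and the convergence of $T$ toward the normal.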
19,999
Intuition behind the t-distributions density function
The answer by Glen is the correct one, but from a Bayesian viewpoint it is also helpful to think of the t-distribution as a continuous mixture of normal distributions with different variances. You can find the derivation here: Student t as mixture of gaussian. I feel that this approach helps your intuition because it clarifies how the t-distribution arises when you don't know the exact variability of your population.
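A quick simulation sketch of that mixture view, using the standard result that drawing a precision from Gamma(ν/2, rate ν/2) and then a normal with that random precision yields a t with ν df (numpy assumed; ν = 5 is my arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
nu, n = 5, 1_000_000

# Direct t samples for comparison.
T_direct = rng.standard_t(nu, size=n)

# Scale-mixture route: precision ~ Gamma(nu/2, rate nu/2)
# (numpy parameterizes by shape and scale, so scale = 2/nu),
# then a zero-mean normal with that precision, i.e. variance 1/lam.
lam = rng.gamma(shape=nu / 2, scale=2 / nu, size=n)
T_mix = rng.standard_normal(n) / np.sqrt(lam)
```

The two samples should agree in variance and in their quantiles, which is one way to see the "unknown variance" intuition concretely.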
20,000
Choice of neural net hidden activation function
LeCun discusses this in Efficient Backprop, Section 4.4. The motivation is similar to that for normalizing the input to zero mean (Section 4.3): the average output of the tanh activation function is more likely to be close to zero than that of the sigmoid, whose average output must be positive.
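A quick numpy check of that claim, feeding zero-mean inputs through both activations:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)      # zero-mean pre-activations

sigmoid = 1.0 / (1.0 + np.exp(-x))    # outputs in (0, 1), always positive
tanh = np.tanh(x)                     # outputs in (-1, 1), symmetric about 0

# The sigmoid's outputs average near 0.5, while tanh's average is near zero,
# so a tanh layer hands roughly zero-mean inputs to the next layer.
print(sigmoid.mean(), tanh.mean())
```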