20,301
What are the parameters of a Wishart-Wishart posterior?
The product of the two densities in $$ p(\boldsymbol{\Lambda_0 | X, \Lambda}, \upsilon, D, \boldsymbol{\Lambda_x}) \propto \mathcal{W}(\boldsymbol{\Lambda} | \upsilon, \boldsymbol{\Lambda_0}) \mathcal{W}(\boldsymbol{\Lambda_0} |D, \frac{1}{D}\boldsymbol{\Lambda_x}) $$ leads to \begin{align*} p(\boldsymbol{\Lambda_0 | X, \Lambda}, \upsilon, D, \boldsymbol{\Lambda_x}) &\propto |\boldsymbol{\Lambda_0}|^{-\upsilon/2}\,\exp\{-\text{tr}(\boldsymbol{\Lambda_0}^{-1}\boldsymbol{\Lambda})/2\}\\ &\times |\boldsymbol{\Lambda_0}|^{(D-p-1)/2}\,\exp\{-D\,\text{tr}(\boldsymbol{\Lambda_x}^{-1}\boldsymbol{\Lambda_0})/2\}\\ &\propto|\boldsymbol{\Lambda_0}|^{(D-\upsilon-p-1)/2}\,\exp\{-\text{tr}(\boldsymbol{\Lambda_0}^{-1}\boldsymbol{\Lambda}+D\,\boldsymbol{\Lambda_x}^{-1}\boldsymbol{\Lambda_0})/2\}\,, \end{align*} which does not appear to be a standard density. To keep conjugacy of sorts, the right hierarchical prior on $\boldsymbol{\Lambda_0}$ should be something like $$ \boldsymbol{\Lambda_0}\sim\mathcal{IW}(\boldsymbol{\Lambda_0} |D, \frac{1}{D}\boldsymbol{\Lambda_x})\,. $$
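To spell out the conjugacy this answer hints at (a sketch, using the inverse-Wishart density $\mathcal{IW}(\boldsymbol{\Lambda_0}\mid D,\boldsymbol{\Psi}) \propto |\boldsymbol{\Lambda_0}|^{-(D+p+1)/2}\exp\{-\text{tr}(\boldsymbol{\Psi}\boldsymbol{\Lambda_0}^{-1})/2\}$ with $\boldsymbol{\Psi}=\tfrac{1}{D}\boldsymbol{\Lambda_x}$):

```latex
\begin{align*}
p(\boldsymbol{\Lambda_0} \mid \cdot)
&\propto |\boldsymbol{\Lambda_0}|^{-\upsilon/2}\,
   \exp\{-\text{tr}(\boldsymbol{\Lambda_0}^{-1}\boldsymbol{\Lambda})/2\}
 \times |\boldsymbol{\Lambda_0}|^{-(D+p+1)/2}\,
   \exp\{-\text{tr}(\tfrac{1}{D}\boldsymbol{\Lambda_x}\boldsymbol{\Lambda_0}^{-1})/2\}\\
&= |\boldsymbol{\Lambda_0}|^{-(\upsilon+D+p+1)/2}\,
   \exp\{-\text{tr}\big((\boldsymbol{\Lambda}+\tfrac{1}{D}\boldsymbol{\Lambda_x})
   \boldsymbol{\Lambda_0}^{-1}\big)/2\}\,,
\end{align*}
```

i.e. $\boldsymbol{\Lambda_0}\mid\cdot \sim \mathcal{IW}(\upsilon+D,\ \boldsymbol{\Lambda}+\tfrac{1}{D}\boldsymbol{\Lambda_x})$, which is again inverse-Wishart.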
20,302
What are the parameters of a Wishart-Wishart posterior?
Ok, thanks to @Xi'an's answer I could make the whole derivation. I will write it for a general case: \begin{align} \mathcal{W}(\mathbf{W} | \upsilon, \mathbf{S}^{-1} ) \times \mathcal{W}(\mathbf{S} | \upsilon_0, \mathbf{S}_0) \end{align} where the $\mathbf{S}^{-1}$ is the key to conjugacy. If we want to use $\mathbf{S}$ then it should be: \begin{align} \mathcal{W}(\mathbf{W} | \upsilon, \mathbf{S} ) \times \mathcal{IW}(\mathbf{S} | \upsilon_0, \mathbf{S}_0) \end{align} I'm doing the first case (please correct me if I am wrong): \begin{align} \mathcal{W}(\mathbf{W} | \upsilon, \mathbf{S}^{-1} ) \times \mathcal{W}(\mathbf{S} | \upsilon_0, \mathbf{S}_0) &\propto |\mathbf{S}|^{\upsilon/2} \exp\{-\tfrac{1}{2} \text{tr}(\mathbf{SW}) \}\\ &\times |\mathbf{S}|^{\frac{\upsilon_0 - D -1 }{2}} \exp\{-\tfrac{1}{2} \text{tr} (\mathbf{S}_0^{-1} \mathbf{S})\}\\ &\propto |\mathbf{S}|^{\frac{\upsilon + \upsilon_0 - D -1 }{2}} \exp\{-\tfrac{1}{2} \text{tr} \big( (\mathbf{W} + \mathbf{S}_0^{-1}) \mathbf{S}\big)\} \end{align} where we used the fact that $\text{tr}(\mathbf{SW}) = \text{tr}(\mathbf{WS})$. By inspection, we see that this is a Wishart distribution: \begin{align} p(\mathbf{S} | \cdot) = \mathcal{W}\big(\upsilon + \upsilon_0, (\mathbf{W}+\mathbf{S}_0^{-1})^{-1}\big) \end{align} Extension for $N$ draws $\mathbf{W}_1,\dots,\mathbf{W}_N$: when we have $N$ precision matrices the likelihood becomes a product of $N$ likelihoods and we get: \begin{align} p(\mathbf{S} | \cdot) = \mathcal{W}\Big(N \upsilon + \upsilon_0, \Big(\sum_{i=1}^N \mathbf{W}_i+\mathbf{S}_0^{-1}\Big)^{-1}\Big) \end{align}
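As a numerical sanity check (not part of the original derivation), the proportionality above can be verified pointwise with `scipy.stats.wishart`: for a fixed draw $\mathbf{W}$, the ratio of the product density to the claimed posterior density must not depend on $\mathbf{S}$. The matrices below are arbitrary illustrative choices.

```python
import numpy as np
from scipy.stats import wishart

def sym_inv(M):
    # matrix inverse, symmetrized to guard against tiny floating-point asymmetry
    Mi = np.linalg.inv(M)
    return (Mi + Mi.T) / 2

def log_product(S, W, v, v0, S0):
    # log of W(W | v, S^{-1}) * W(S | v0, S0); scipy's `scale` parameter is
    # the matrix whose inverse appears in the trace term of the density
    return (wishart.logpdf(W, df=v, scale=sym_inv(S))
            + wishart.logpdf(S, df=v0, scale=S0))

def log_posterior(S, W, v, v0, S0):
    # claimed conjugate posterior: W(v + v0, (W + S0^{-1})^{-1})
    return wishart.logpdf(S, df=v + v0, scale=sym_inv(W + sym_inv(S0)))

v, v0 = 5, 4
W  = np.array([[1.5, 0.5], [0.5, 2.0]])
S0 = np.array([[1.0, 0.2], [0.2, 1.0]])
S1 = np.array([[2.0, 0.3], [0.3, 1.0]])
S2 = np.array([[1.0, -0.2], [-0.2, 3.0]])

# the log-ratio evaluated at two different S values should agree
d1 = log_product(S1, W, v, v0, S0) - log_posterior(S1, W, v, v0, S0)
d2 = log_product(S2, W, v, v0, S0) - log_posterior(S2, W, v, v0, S0)
print(abs(d1 - d2))  # ~0 up to floating-point error
```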
20,303
Residual autocorrelation versus lagged dependent variable
There are many approaches to modeling integrated or nearly-integrated time series data. Many of the models make more specific assumptions than more general model forms, and so might be considered as special cases. de Boef and Keele (2008) do a nice job of spelling out various models and pointing out how they relate to one another. The single equation generalized error correction model (GECM; Banerjee, 1993) is a nice one because it (a) is agnostic with respect to the stationarity/non-stationarity of the independent variables, (b) can accommodate multiple dependent variables, random effects, multiple lags, etc., and (c) has more stable estimation properties than two-stage error correction models (de Boef, 2001). Of course the specifics of any given modeling choice will be particular to the researchers' needs, so your mileage may vary. Simple example of GECM: $$\Delta{y_{t}} = \beta_{0} + \beta_{\text{c}}\left(y_{t-1}-x_{t-1}\right) + \beta_{\Delta{x}}\Delta{x_{t}} + \beta_{x}x_{t-1} + \varepsilon$$ Where: $\Delta$ is the change operator; instantaneous short run effects of $x$ on $\Delta{y}$ are given by $\beta_{\Delta{x}}$; lagged short run effects of $x$ on $\Delta{y}$ are given by $\beta_{x} - \beta_{\text{c}} - \beta_{\Delta{x}}$; and long run equilibrium effects of $x$ on $\Delta{y}$ are given by $\left(\beta_{\text{c}} - \beta_{x}\right)/\beta_{\text{c}}$. References Banerjee, A., Dolado, J. J., Galbraith, J. W., and Hendry, D. F. (1993). Co-integration, error correction, and the econometric analysis of non-stationary data. Oxford University Press, USA. De Boef, S. (2001). Modeling equilibrium relationships: Error correction models with strongly autoregressive data. Political Analysis, 9(1):78–94. De Boef, S. and Keele, L. (2008). Taking time seriously. American Journal of Political Science, 52(1):184–200.
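Since the GECM above is a single linear equation, it can be estimated by OLS once the transformed variables are built. Here is a minimal sketch on simulated data (the data-generating process and all variable names are illustrative, not from the cited references):

```python
import numpy as np

rng = np.random.default_rng(0)

# toy pair: x is a random walk, y error-corrects toward x
T = 500
x = np.cumsum(rng.normal(size=T))
y = np.empty(T)
y[0] = x[0]
for t in range(1, T):
    y[t] = (y[t-1] + 0.5 * (x[t-1] - y[t-1])
            + 0.8 * (x[t] - x[t-1]) + rng.normal(scale=0.1))

dy   = np.diff(y)       # Delta y_t
dx   = np.diff(x)       # Delta x_t
ec   = (y - x)[:-1]     # error-correction term y_{t-1} - x_{t-1}
xlag = x[:-1]           # x_{t-1}

# OLS: dy = b0 + bc*ec + bdx*dx + bx*xlag + e
X = np.column_stack([np.ones(T - 1), ec, dx, xlag])
b0, bc, bdx, bx = np.linalg.lstsq(X, dy, rcond=None)[0]
print(bc, bdx)  # approximately -0.5 and 0.8 under this toy DGP
```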
20,304
Residual autocorrelation versus lagged dependent variable
This boils down to maximum likelihood vs. method of moments, and finite sample efficiency vs. computational expediency. Using a 'proper' AR(1) process and estimating the parameter $\rho$ (and unknown variance $\sigma^2$) via maximum likelihood (ML) gives the most efficient (lowest variance) estimates for a given amount of data. The regression approach amounts to the Yule-Walker estimation method, which is the method of moments. For a finite sample it isn't as efficient as ML, but for this case (i.e. an AR model) it has an asymptotic relative efficiency of 1.0 (i.e. with enough data it should give answers nearly as good as ML). Plus, as a linear method it is computationally efficient and avoids any convergence issues of ML. I gleaned most of this from dim memories of a time series class and Peter Bartlett's lecture notes for Introduction to Time Series, lecture 12 in particular. Note that the above wisdom relates to traditional time series models, i.e. where there are no other variables under consideration. For time series regression models, where there are various independent (i.e. explanatory) variables, see these other references: Achen, C. H. (2001). Why lagged dependent variables can suppress the explanatory power of other independent variables. Annual Meeting of the Political Methodology Section of the American Political Science Association, 1–42. PDF Nelson, C. R., & Kang, H. (1984). Pitfalls in the Use of Time as an Explanatory Variable in Regression. Journal of Business & Economic Statistics, 2(1), 73–82. doi:10.2307/1391356 Keele, L., & Kelly, N. J. (2006). Dynamic models for dynamic theories: The ins and outs of lagged dependent variables. Political Analysis, 14(2), 186-205. PDF (Thanks to Jake Westfall for the last one). The general takeaway seems to be "it depends".
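The regression (Yule-Walker-style) estimator mentioned above is one line of algebra: regress $y_t$ on $y_{t-1}$. A small sketch on simulated data (the parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# simulate an AR(1): y_t = rho * y_{t-1} + e_t
rho, T = 0.7, 5000
y = np.zeros(T)
for t in range(1, T):
    y[t] = rho * y[t-1] + rng.normal()

# regression / method-of-moments estimate: slope of y_t on y_{t-1}
rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
print(rho_hat)  # close to 0.7 for large T
```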
20,305
Residual autocorrelation versus lagged dependent variable
A good presentation of a Transfer Function (TF) is here Transfer function in forecasting models - interpretation and alternatively here http://en.wikipedia.org/wiki/Distributed_lag. Since we both have a $Y$ and one $X$ for simplicity's sake, I believe that one can form a TF with appropriate assumed lags and appropriate assumed differences of these two series that would match the assumed ECM, illustrating that the ECM is a particular constrained subset of a TF model. Perhaps some other readers (heavy econometricians) have already gone through the proof/algebra, but I will consider your positive suggestion in helping other readers. After a brief search on the web, http://springschool.politics.ox.ac.uk/archive/2008/OxfordECM.pdf discussed how an ECM is a particular case of an ADL (Autoregressive Distributed Lag model, also known as a PDL). An ADL/PDL model is a particular case of a Transfer Function. This material from the above reference shows the equivalence of an ADL and ECM. Note that Transfer Functions are more general than ADL models as they allow explicit decay structure. My point is that the powerful model identification features available with Transfer Functions should be used rather than assuming a model because it fits the desire to have simple explanations such as Short Run/Long Run etc. The Transfer Function model/approach enables robustification by allowing the identification of an arbitrary ARIMA component and the detection of Gaussian violations such as Pulses/Level Shifts/Seasonal Pulses (Seasonal Dummies) and Local Time Trends, along with variance/parameter change augmentations. I would be interested in seeing examples of an ECM that were not functionally equivalent to an ADL model and couldn't be recast as a Transfer Function. [Excerpt from De Boef and Keele, slide 89, showing the ADL/ECM equivalence]
20,306
Why do we calculate Information value?
Generally speaking, Information Value provides a measure of how well a variable $X$ is able to distinguish between a binary response (e.g. "good" versus "bad") in some target variable $Y$. The idea is that if a variable $X$ has a low Information Value, it may not do a sufficient job of classifying the target variable, and hence is removed as an explanatory variable. To see how this works, let $X$ be grouped into $n$ bins. Each $x \in X$ corresponds to a $y \in Y$ that may take one of two values, say 0 or 1. Then for bins $X_i$, $1 \leq i \leq n$, $$ IV= \sum_{i=1}^n (g_i-b_i)\,\ln(g_i/b_i) $$ where $b_i = (\#\text{ of }0\text{'s in }X_i)/(\#\text{ of }0\text{'s in }X)$ is the proportion of $0$'s in bin $i$ versus all bins, $g_i = (\#\text{ of }1\text{'s in }X_i)/(\#\text{ of }1\text{'s in }X)$ is the proportion of $1$'s in bin $i$ versus all bins, and $\ln(g_i/b_i)$ is also known as the Weight of Evidence (for bin $X_i$). Cutoff values may vary and the selection is subjective. I often use $IV < 0.3$ (as does [1] below). In the context of credit scoring, these two resources should help: [1] http://www.mwsug.org/proceedings/2013/AA/MWSUG-2013-AA14.pdf [2] http://support.sas.com/resources/papers/proceedings12/141-2012.pdf
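A minimal sketch of the computation (the bin labels and data are illustrative; note that $\ln(g_i/b_i)$ is undefined for a bin containing only one class, which production scorecard code typically handles with smoothing):

```python
import math
from collections import defaultdict

def information_value(bins, y):
    """IV = sum over bins of (g_i - b_i) * ln(g_i / b_i),
    where g_i and b_i are the shares of all 1's and all 0's in bin i."""
    ones, zeros = defaultdict(int), defaultdict(int)
    for b, target in zip(bins, y):
        if target == 1:
            ones[b] += 1
        else:
            zeros[b] += 1
    n1, n0 = sum(ones.values()), sum(zeros.values())
    iv = 0.0
    for b in set(bins):
        g = ones[b] / n1
        bad = zeros[b] / n0
        iv += (g - bad) * math.log(g / bad)  # (g_i - b_i) * WoE_i
    return iv

# bin A holds mostly 1's, bin B mostly 0's -> an informative variable
bins = ["A"] * 4 + ["B"] * 4
y    = [1, 1, 1, 0, 0, 0, 0, 1]
print(information_value(bins, y))  # ln(3) ~ 1.0986
```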
20,307
Shape of confidence and prediction intervals for nonlinear regression
Confidence and prediction bands should be expected to typically get wider near the ends - and for the same reason that they always do so in ordinary regression: generally the parameter uncertainty leads to wider intervals near the ends than in the middle. You can see this by simulation easily enough, either by simulating data from a given model, or by simulating from the sampling distribution of the parameter vector. The usual (approximately correct) calculations done for nonlinear regression involve taking a local linear approximation (this is given in Harvey's answer), but even without those we can get some notion of what's going on. However, doing the actual calculations is nontrivial, and programs may take a shortcut in calculation which ignores that effect. It's also possible that for some data and some models the effect is relatively small and hard to see. Indeed with prediction intervals, especially with large variance but lots of data, it can sometimes be hard to see the curve in ordinary linear regression - they can look almost straight, and it's not easy to discern the deviation from straightness. Here's an example of how hard it can be to see just with a confidence interval for the mean (prediction intervals can be far harder to see because their relative variation is so much less). Here's some data and a nonlinear least squares fit, with a confidence interval for the population mean (in this case generated from the sampling distribution since I know the true model, but something very similar could be done by asymptotic approximation or by bootstrapping): [figure: data, fitted curve, and confidence bounds] The purple bounds look almost parallel to the blue predictions... but they aren't. Here's the standard error of the sampling distribution of those mean predictions: [figure: standard error of the fitted mean versus $x$] which clearly isn't constant. Edit: Those "sp" expressions you just posted come straight from the prediction interval for linear regression!
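The simulation idea above can be sketched as follows (the model, parameter values, and noise level are all illustrative): refit a nonlinear model to many datasets drawn from a known truth and look at the spread of the fitted mean across $x$.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

def model(x, a, b):
    return a * np.exp(-b * x)

x = np.linspace(0, 5, 30)
true = (10.0, 0.7)
grid = np.linspace(0, 5, 11)

# refit to many simulated datasets, recording the fitted mean on a grid
fits = []
for _ in range(300):
    y = model(x, *true) + rng.normal(scale=1.0, size=x.size)
    popt, _ = curve_fit(model, x, y, p0=true)
    fits.append(model(grid, *popt))

se = np.std(fits, axis=0)  # sampling s.d. of the fitted mean at each grid point
print(se.round(3))         # clearly not constant across the grid
```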
20,308
Shape of confidence and prediction intervals for nonlinear regression
The mathematics of computing confidence and prediction bands of curves fit by nonlinear regression are explained in this Cross-Validated page. It shows that the bands are not always/usually symmetrical. And here is an explanation with more words and less math: First, let's define G|x, which is the gradient of Y with respect to the parameters at a particular value of X, using all the best-fit values of the parameters. The result is a vector, with one element per parameter. For each parameter, it is defined as dY/dP, where Y is the Y value of the curve given the particular value of X and all the best-fit parameter values, and P is one of the parameters. G'|x is that gradient vector transposed, so it is a column rather than a row of values. Cov is the covariance matrix (the inverse Hessian from the last iteration). It is a square matrix with the number of rows and columns equal to the number of parameters. Each item in the matrix is the covariance between two parameters. We use Cov to refer to the normalized covariance matrix, where each value is between -1 and 1. Now compute c = G'|x * Cov * G|x. The result is a single number for any value of X. The confidence and prediction bands are centered on the best fit curve, and extend above and below the curve an equal amount. The confidence bands extend above and below the curve by: = sqrt(c)*sqrt(SS/DF)*CriticalT(Confidence%, DF) The prediction bands extend a further distance above and below the curve, equal to: = sqrt(c+1)*sqrt(SS/DF)*CriticalT(Confidence%, DF) In both these equations, the value of c (defined above) depends on the value of X, so the confidence and prediction bands are not a constant distance from the curve. The value of SS is the sum-of-squares for the fit, and DF is the number of degrees of freedom (number of data points minus number of parameters). CriticalT is a constant from the t distribution based on the confidence level you want (traditionally 95%) and the number of degrees of freedom. For 95% limits, and a fairly large DF, this value is close to 1.96. If DF is small, this value is higher.
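A sketch of those formulas in Python. One caveat on the translation: scipy's `curve_fit` returns a parameter covariance matrix that is already scaled by SS/DF, so the sqrt(SS/DF) factor is folded into sqrt(c) below; the model and data are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import t as t_dist

def model(x, a, b):
    return a * np.exp(-b * x)

rng = np.random.default_rng(3)
x = np.linspace(0, 5, 25)
y = model(x, 10.0, 0.7) + rng.normal(scale=0.8, size=x.size)

popt, pcov = curve_fit(model, x, y, p0=(10.0, 0.7))
dof = x.size - len(popt)                 # DF = points minus parameters
s2 = np.sum((y - model(x, *popt))**2) / dof   # SS/DF
tcrit = t_dist.ppf(0.975, dof)           # CriticalT for 95% bands

def gradient(xv, eps=1e-6):
    # numerical dY/dP at the best-fit parameters (G|x in the text)
    g = np.empty(len(popt))
    for i in range(len(popt)):
        p_hi = popt.copy(); p_hi[i] += eps
        p_lo = popt.copy(); p_lo[i] -= eps
        g[i] = (model(xv, *p_hi) - model(xv, *p_lo)) / (2 * eps)
    return g

xs = np.linspace(0, 5, 50)
# g' pcov g equals c * SS/DF because pcov is pre-scaled by SS/DF
c_scaled = np.array([gradient(xv) @ pcov @ gradient(xv) for xv in xs])
conf_half = tcrit * np.sqrt(c_scaled)        # confidence band half-width
pred_half = tcrit * np.sqrt(c_scaled + s2)   # prediction band half-width
```

Plotting `model(xs, *popt) ± conf_half` reproduces the X-dependent band widths described above.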
20,309
Calculate uncertainty of linear regression slope based on data uncertainty
Responding to "I'm trying to find how much $k$ and $n$ in $y = k x + n$ can change but still fit the data if we know uncertainty in $y$ values." If the true relation is linear and the errors in $y$ are independent normal random variables with zero means and known standard deviations then the $100(1-\alpha)$% confidence region for $(k,n)$ is the ellipse for which $\sum (k x_i + n - y_i)^2/\sigma_i^2 < \chi_{d,\alpha}^2$, where $\sigma_i$ is the standard deviation of the error in $y_i$, $d$ is the number of $(x,y)$ pairs, and $\chi_{d,\alpha}^2$ is the upper $\alpha$ fractile of the chi-square distribution with $d$ degrees of freedom. EDIT - Taking the standard error of each $y_i$ to be 3 -- i.e., taking the error bars in the plot to represent approximate 95% confidence intervals for each $y_i$ separately -- the equation for the boundary of the 95% confidence region for $(k,n)$ is $204 (k-2)^2 + 72n(k-2) + 9n^2 = 152.271$.
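A small sketch of testing whether a candidate $(k, n)$ lies in that region (the data and the common $\sigma_i = 3$ are illustrative; fixed "errors" are used so the example is deterministic):

```python
import numpy as np
from scipy.stats import chi2

# illustrative data: y = 2x + 1 plus fixed perturbations, with each y_i
# assumed to have known error standard deviation sigma = 3
x = np.arange(9.0)
e = np.array([1.2, -0.8, 0.5, 2.0, -1.5, 0.3, -2.1, 1.0, -0.4])
sigma = 3.0
y = 2 * x + 1 + e

def in_region(k, n, alpha=0.05):
    # (k, n) lies inside the 100(1-alpha)% region iff the weighted sum of
    # squares is below the chi-square critical value (df = number of points)
    ss = np.sum((k * x + n - y) ** 2) / sigma**2
    return bool(ss < chi2.ppf(1 - alpha, df=x.size))

print(in_region(2.0, 1.0), in_region(5.0, 1.0))  # True False
```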
Calculate uncertainty of linear regression slope based on data uncertainty
I did a naive direct sampling with this simple code in Python:

import random
import numpy as np
import matplotlib.pyplot as plt

def uncreg(x, y, xu, yu, N=100000):
    # Repeatedly perturb each x and y value by uniform noise within its
    # uncertainty, refit the line, and collect the (slope, intercept) pairs.
    out = np.zeros((N, 2))
    for n in range(N):
        tx = [s + random.uniform(-xu, xu) for s in x]
        ty = [s + random.uniform(-yu, yu) for s in y]
        a, b = np.linalg.lstsq(np.vstack([tx, np.ones(len(x))]).T, ty, rcond=None)[0]
        out[n, 0:2] = [a, b]
    return out

if __name__ == "__main__":
    P = uncreg(np.arange(0, 8.01), np.arange(0, 16.01, 2), 0.1, 6.)
    H, xedges, yedges = np.histogram2d(P[:, 0], P[:, 1], bins=(50, 50))
    plt.imshow(H, interpolation='nearest', origin='lower', aspect='auto',
               extent=[xedges[0], xedges[-1], yedges[0], yedges[-1]])

and got a 2-D histogram of the sampled (slope, intercept) pairs. Of course you can mine P for the data you want, or change the uncertainty distributions.
Calculate uncertainty of linear regression slope based on data uncertainty
I was on the same hunt before, and I think this may be a useful place to start. The Excel macro function gives linear fit terms and their uncertainties based on tabular points and an uncertainty for each point in both coordinates. Maybe look up the paper it is based on to decide if you want to implement it in a different environment, modify it, etc. (There is some legwork done for Mathematica.) It seems to have good walk-through documentation on the surface, but I haven't opened up the macro to see how well annotated it is.
Flexible and inflexible models in machine learning
In these two situations, the comparative performance of a flexible vs. an inflexible model also depends on: whether the true relation y = f(x) is close to linear or very non-linear; and whether you tune/constrain the degree of flexibility of the "flexible" model when fitting it. If the relation is close to linear and you don't constrain flexibility, then the linear model should give better test error in both cases, because the flexible model is likely to overfit in both cases. You can look at it this way: in both cases the data doesn't contain enough information about the true relation (in the first case the relation is high-dimensional and you don't have enough data; in the second case it is corrupted by noise), but the linear model brings some external prior information about the true relation (it constrains the class of fitted relations to linear ones), and that prior information turns out to be right (the true relation is close to linear). The flexible model, by contrast, carries no prior information (it can fit anything), so it fits the noise. If, however, the true relation is very non-linear, it's hard to say who will win (both will lose :)). If you tune/constrain the degree of flexibility and do it in the right way (say, by cross-validation), then the flexible model should win in all cases.
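The overfitting case can be demonstrated with a small simulation (the degrees, noise level, and sample sizes below are arbitrary illustrative choices): a linear truth plus heavy noise, fit by a linear model and by a degree-9 polynomial that interpolates the ten noisy training points exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(coeffs, x, y):
    """Mean squared error of a fitted polynomial on (x, y)."""
    return float(np.mean((np.polyval(coeffs, x) - y) ** 2))

# True relation is linear, y = 2x + 1, corrupted by heavy noise.
x_train = np.linspace(0, 1, 10)
y_train = 2 * x_train + 1 + rng.normal(0, 1.0, x_train.size)
x_test = np.linspace(0, 1, 200)
y_test = 2 * x_test + 1 + rng.normal(0, 1.0, x_test.size)

inflexible = np.polyfit(x_train, y_train, deg=1)  # linear model
flexible = np.polyfit(x_train, y_train, deg=9)    # interpolates all 10 noisy points

train_flex = mse(flexible, x_train, y_train)      # essentially zero: fits the noise
test_flex = mse(flexible, x_test, y_test)
test_lin = mse(inflexible, x_test, y_test)
```

The flexible fit achieves near-zero training error by memorizing the noise and pays for it in test error; constraining its degree (e.g. by cross-validation) would close the gap.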
Flexible and inflexible models in machine learning
Of course it depends on the underlying data, which you should always explore to find out some of its characteristics before trying to fit a model, but what I've learnt as general rules of thumb are: A flexible model allows you to take full advantage of a large sample size (large n). A flexible model will be necessary to find the nonlinear effect. A flexible model will cause you to fit too much of the noise in the problem (when the variance of the error terms is high).
Flexible and inflexible models in machine learning
Well, for the second part, I think a more flexible model will try to fit the data hard, and since the training data contains high noise, the flexible model will also try to learn that noise, which will result in more test error. I know the source of this question, as I'm also reading the same book :)
Flexible and inflexible models in machine learning
For the first part, I would expect the inflexible model to perform better with a limited number of observations. When n is very small, neither model (flexible or inflexible) will yield good enough predictions. However, the flexible model would tend to overfit the data and would perform more poorly on a new test set. Ideally, I would collect more observations to improve the fit, but if that is not possible, then I would use the inflexible model, trying to minimize the test error on a new test set.
Flexible and inflexible models in machine learning
For each of parts (a) through (d), indicate whether i. or ii. is correct, and explain your answer. In general, do we expect the performance of a flexible statistical learning method to perform better or worse than an inflexible method when:

(a) The sample size n is extremely large, and the number of predictors p is small? Better. A flexible method will fit the data closer and, with the large sample size, would perform better than an inflexible approach.

(b) The number of predictors p is extremely large, and the number of observations n is small? Worse. A flexible method would overfit the small number of observations.

(c) The relationship between the predictors and response is highly non-linear? Better. With more degrees of freedom, a flexible method would fit better than an inflexible one.

(d) The variance of the error terms, i.e. σ2=Var(ε), is extremely high? Worse. A flexible method would fit to the noise in the error terms and increase variance.

Taken from here.
Flexible and inflexible models in machine learning
If n is small and p is very large, we have a small observation set in which the flexible model might find non-existent relationships due to the high number of predictors. If the variance of the error terms is very high, the flexible models will go ahead and try to fit the unexplained error terms, so we should use a rather inflexible method.
Flexible and inflexible models in machine learning
Part a: Since the sample (training data) is small, both models will not capture the true underlying relationship as well as they would with a large sample, since a large sample means the training data closely resembles the underlying population data. So the test data is likely to be very different from the sample data in this case. On test data (the data we really care about), flexible models will underperform, as they are fitted to a small training dataset. With a large number of predictors, the overfitting will again be very high (much higher in flexible models than in inflexible models), and a change in the input data can give very unreliable and inaccurate results. Again, this will ensure flexible models underperform inflexible models.

Part b: If the variance of the error terms is very high, the flexible models will try to fit the irreducible error (noise) in the model. This would be the case with the inflexible models as well, but the results will be far more drastic in the case of flexible models. So we should use an inflexible method in such a case.
Flexible and inflexible models in machine learning
For the second question I believe the answer is that both of them will perform equally (assuming that those errors are irreducible, i.e., this error). More information is provided in An Introduction to Statistical Learning on page 18 (topic: Why estimate $f$?), where the author explains: The accuracy of $\hat Y$ as a prediction for $Y$ depends on two quantities, which we will call the reducible error and the irreducible error. In general, $\hat f$ will not be a perfect estimate for $f$, and this inaccuracy will introduce some error. This error is reducible because we can potentially improve the accuracy of $\hat f$ by using the most appropriate statistical learning technique to estimate $f$. However, even if it were possible to form a perfect estimate for $f$, so that our estimated response took the form $\hat Y = f(X)$, our prediction would still have some error in it! This is because $Y$ is also a function of $\epsilon$, which, by definition, cannot be predicted using $X$. Therefore, variability associated with $\epsilon$ also affects the accuracy of our predictions. This is known as the irreducible error, because no matter how well we estimate $f$, we cannot reduce the error introduced by $\epsilon$.
Maximum Likelihood estimator - confidence interval
Use the fact that for an i.i.d. sample of size $n$, given some regularity conditions, the MLE $\hat{\theta}$ is a consistent estimator of the true parameter $\theta_0$, & its distribution is asymptotically Normal, with variance determined by the reciprocal of the Fisher information: $$\sqrt{n}\left(\hat{\theta}-\theta_0\right) \rightarrow \mathcal{N}\left(0,\frac{1}{\mathcal{I}_1(\theta_0)}\right)$$ where $\mathcal{I}_1(\theta_0)$ is the Fisher information from a single observation. The observed information at the MLE, $I(\hat{\theta})$, tends to the expected information asymptotically, so you can calculate (say 95%) confidence intervals with $$\hat{\theta} \pm \frac{1.96}{\sqrt{nI_1\left(\hat\theta\right)}}$$ For example, if $X$ is a zero-truncated Poisson variate, you can get a formula for the observed information in terms of the MLE (which you have to calculate numerically): $$\newcommand{\e}{\mathrm{e}}\newcommand{\d}{\operatorname{d}}f(x) = \frac{\e^{-\theta}\theta^x}{x!(1-\e^{-\theta})}$$ $$\ell(\theta)=-\theta+ x\log\theta -\log(1-\e^{-\theta})$$ $$\frac{\d\ell(\theta)}{\d\theta} = -1 +\frac{x}{\theta} - \frac{\e^{-\theta}}{1-\e^{-\theta}}$$ $$I_1\left(\hat{\theta}\right)=-\frac{\d^2\ell\left(\hat\theta\right)}{\left(\d\hat\theta\right)^2} = \frac{x}{\hat\theta^2} - \frac{\e^{-\hat\theta}}{\left(1-\e^{-\hat{\theta}}\right)^2}$$ Notable cases excluded by the regularity conditions include those where: the parameter $\theta$ determines the support of the data, e.g. sampling from a uniform distribution between nought and $\theta$; or the number of nuisance parameters increases with sample size.
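As a sketch of the numerical calculation (pure Python; the bisection solver and the sample values are illustrative assumptions): for an i.i.d. zero-truncated Poisson sample, setting the score to zero reduces to solving x̄ = θ/(1 - e^{-θ}), after which the Wald interval follows from the observed information per observation.

```python
import math

def ztp_mle(xbar, lo=1e-8, hi=50.0, tol=1e-10):
    """Solve xbar = theta / (1 - exp(-theta)) by bisection.
    The right-hand side is increasing in theta, so this is well-posed for xbar > 1."""
    f = lambda t: t / (1.0 - math.exp(-t)) - xbar
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

def ztp_wald_ci(xbar, n, z=1.96):
    """95% Wald interval theta_hat +/- z / sqrt(n * I_1(theta_hat))."""
    theta = ztp_mle(xbar)
    e = math.exp(-theta)
    info1 = xbar / theta**2 - e / (1.0 - e) ** 2  # observed information per observation
    half = z / math.sqrt(n * info1)
    return theta - half, theta + half

# If theta = 2, then E[X] = 2 / (1 - e^{-2}); the solver should recover theta = 2.
xbar = 2.0 / (1.0 - math.exp(-2.0))
lo_ci, hi_ci = ztp_wald_ci(xbar, n=100)
```

With n = 100 observations the resulting interval brackets the true value of 2 and its width shrinks like 1/sqrt(n), as the asymptotics above predict.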
Why do we use k-means instead of other algorithms?
Other clustering algorithms with better features tend to be more expensive. In this case, k-means becomes a great solution for pre-clustering, reducing the space into disjoint smaller sub-spaces where other clustering algorithms can be applied.
Why do we use k-means instead of other algorithms?
K-means is the simplest. To implement and to run. All you need to do is choose "k" and run it a number of times. Most of the cleverer algorithms (in particular the good ones) are much harder to implement efficiently (you'll see factors of 100x in runtime differences) and have many more parameters to set. Plus, most people don't need quality clusters. They are actually happy with anything remotely working for them. Plus, they don't really know what to do when they get more complex clusters. K-means, which models clusters using the simplest model ever - a centroid - is exactly what they need: massive data reduction to centroids.
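That simplicity is easy to demonstrate: a bare-bones Lloyd's-algorithm implementation with random restarts fits in a couple of dozen lines of numpy (the two-blob toy data below is made up for illustration):

```python
import numpy as np

def kmeans(X, k, n_init=5, iters=100, seed=0):
    """Plain Lloyd's algorithm; keeps the best of n_init random restarts."""
    rng = np.random.default_rng(seed)
    best_inertia, best_centers = np.inf, None
    for _ in range(n_init):
        centers = X[rng.choice(len(X), size=k, replace=False)]
        for _ in range(iters):
            # assign each point to its nearest centroid
            d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
            labels = d.argmin(axis=1)
            # move each centroid to the mean of its assigned points
            new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                    else centers[j] for j in range(k)])
            if np.allclose(new_centers, centers):
                break
            centers = new_centers
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        inertia = float((d.min(axis=1) ** 2).sum())  # within-cluster sum of squares
        if inertia < best_inertia:
            best_inertia, best_centers = inertia, centers
    return best_centers, best_inertia

rng = np.random.default_rng(42)
blob1 = rng.normal([0, 0], 0.5, size=(100, 2))
blob2 = rng.normal([10, 10], 0.5, size=(100, 2))
X = np.vstack([blob1, blob2])
centers, inertia = kmeans(X, k=2)
```

The only choices the user makes are k, the number of restarts, and the iteration cap, which is exactly the "choose k and run it a number of times" workflow described above.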
Why do we use k-means instead of other algorithms?
K-means is like the Exchange Sort algorithm. Easy to understand, helps one get into the topic, but should never be used for anything real, ever. In the case of Exchange Sort, even Bubble Sort is better because it can stop early if the array is partially sorted. In the case of K-means, the EM algorithm is essentially the same algorithm but assumes Gaussian distributions for clusters instead of the uniform distribution assumption of K-means. K-means is an edge case of EM in which all clusters share identical spherical covariance matrices whose variance shrinks to zero. The Gaussian structure means that the clusters shrink-wrap themselves to the data in a very nice way. This gets around the serious objections you correctly raise in the question. And EM is not much more expensive than K-means, really. (I can implement both in an Excel spreadsheet.) But for serious clustering applications, one should really look at the hierarchical spectrum from single-link to complete-link clustering.
Simulating data for logistic regression with a categorical variable
The model

Let $x_B = 1$ if one has category "B", and $x_B = 0$ otherwise. Define $x_C$, $x_D$, and $x_E$ similarly. If $x_B = x_C = x_D = x_E = 0$, then we have category "A" (i.e., "A" is the reference level). Your model can then be written as
$$ \textrm{logit}(\pi) = \beta_0 + \beta_B x_B + \beta_C x_C + \beta_D x_D + \beta_E x_E $$
with $\beta_0$ an intercept.

Data generation in R

(a)

x <- sample(x=c("A","B", "C", "D", "E"), size=n, replace=TRUE, prob=rep(1/5, 5))

The x vector has n components (one for each individual). Each component is either "A", "B", "C", "D", or "E", and each of these is equally likely.

(b)

library(dummies)
dummy(x)

dummy(x) is a matrix with n rows (one for each individual) and 5 columns corresponding to $x_A$, $x_B$, $x_C$, $x_D$, and $x_E$. The linear predictors (one for each individual) can then be written as

linpred <- cbind(1, dummy(x)[, -1]) %*% c(beta0, betaB, betaC, betaD, betaE)

(c)

The probabilities of success follow from the logistic model:

pi <- exp(linpred) / (1 + exp(linpred))

(d)

Now we can generate the binary response variable. 
The $i$th response comes from a binomial random variable $\textrm{Bin}(n, p)$ with $n = 1$ and $p =$ pi[i]: y <- rbinom(n=n, size=1, prob=pi) Some quick simulations to check this is OK > #------ parameters ------ > n <- 1000 > beta0 <- 0.07 > betaB <- 0.1 > betaC <- -0.15 > betaD <- -0.03 > betaE <- 0.9 > #------------------------ > > #------ initialisation ------ > beta0Hat <- rep(NA, 1000) > betaBHat <- rep(NA, 1000) > betaCHat <- rep(NA, 1000) > betaDHat <- rep(NA, 1000) > betaEHat <- rep(NA, 1000) > #---------------------------- > > #------ simulations ------ > for(i in 1:1000) + { + #data generation + x <- sample(x=c("A","B", "C", "D", "E"), + size=n, replace=TRUE, prob=rep(1/5, 5)) #(a) + linpred <- cbind(1, dummy(x)[, -1]) %*% c(beta0, betaB, betaC, betaD, betaE) #(b) + pi <- exp(linpred) / (1 + exp(linpred)) #(c) + y <- rbinom(n=n, size=1, prob=pi) #(d) + data <- data.frame(x=x, y=y) + + #fit the logistic model + mod <- glm(y ~ x, family="binomial", data=data) + + #save the estimates + beta0Hat[i] <- mod$coef[1] + betaBHat[i] <- mod$coef[2] + betaCHat[i] <- mod$coef[3] + betaDHat[i] <- mod$coef[4] + betaEHat[i] <- mod$coef[5] + } > #------------------------- > > #------ results ------ > round(c(beta0=mean(beta0Hat), + betaB=mean(betaBHat), + betaC=mean(betaCHat), + betaD=mean(betaDHat), + betaE=mean(betaEHat)), 3) beta0 betaB betaC betaD betaE 0.066 0.100 -0.152 -0.026 0.908 > #---------------------
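The same recipe can also be sketched in plain Python (standard library only, no dummies package). This is an illustrative re-expression of the steps (a)-(d) above, not part of the original answer; the coefficient values mirror the simulation.

```python
import math
import random

random.seed(42)

n = 1000
# intercept, then category effects for B..E relative to the reference "A"
beta0 = 0.07
beta = {"A": 0.0, "B": 0.1, "C": -0.15, "D": -0.03, "E": 0.9}

# (a) sample a category for each individual, all equally likely
x = [random.choice("ABCDE") for _ in range(n)]

# (b) linear predictor for each individual
linpred = [beta0 + beta[cat] for cat in x]

# (c) success probability from the logistic model
pi = [1 / (1 + math.exp(-lp)) for lp in linpred]

# (d) Bernoulli draw for each individual
y = [1 if random.random() < p else 0 for p in pi]
```

Fitting the model back (the glm step in the R transcript) would require a regression library; the sketch only covers data generation.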
20,325
Is this method of resampling time-series known in the literature? Does it have a name?
If you include the last bin of size $2^N$, the random permutation is uniformly chosen from the iterated wreath product of groups of order $2$, denoted $C_2 \wr C_2 \wr ... \wr C_2$. (If you leave out the last possible reversal, then you get a uniform sample from an index $2$ subgroup, the product of two iterated wreath products with $N-1$ factors.) This is also the Sylow $2$-subgroup of the symmetric group on $2^N$ elements (a largest subgroup of order a power of $2$ -- all such subgroups are conjugate). It is also the group of symmetries of a perfect binary tree with $2^N$ leaves all at level $N$ (counting the root as level $0$). A lot of work has been done on groups like this on the mathematical side, but much of it may be irrelevant to you. I took the above image from a recent MO question on the maximal subgroups of the iterated wreath product.
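One way to realize this sampling scheme in code (an illustrative sketch, not from the answer): treat the series as the leaves of a perfect binary tree and, at every internal node, independently swap the two subtrees with probability 1/2. Since each of the $2^N - 1$ internal nodes gets an independent fair coin, this draws uniformly from the iterated wreath product described above.

```python
import random

def random_tree_symmetry(seq, rng=random):
    """Uniformly sample a symmetry of the perfect binary tree whose
    leaves are the entries of seq (len(seq) must be a power of 2):
    at every internal node, swap the two subtrees with probability 1/2."""
    if len(seq) <= 1:
        return list(seq)
    half = len(seq) // 2
    left = random_tree_symmetry(seq[:half], rng)
    right = random_tree_symmetry(seq[half:], rng)
    if rng.random() < 0.5:  # the root-level swap is the "last bin" of size 2^N
        left, right = right, left
    return left + right

random.seed(0)
shuffled = random_tree_symmetry(list(range(8)))
```

Skipping the root-level coin flip gives the index-2 subgroup mentioned in the parenthetical remark.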
20,326
Can logistic regression's predicted probability be interpreted as the confidence in the classification
As other answers correctly state, the reported probabilities from models such as logistic regression and naive Bayes are estimates of the class probability. If the model were true, the probability would indeed be the probability of a correct classification. However, it is quite important to understand that this could be misleading because the model is estimated and thus not a correct model. There are at least three issues. Uncertainty of estimates. Model misspecification. Bias. The uncertainty is just the ever-present fact that the probability is only an estimate. A confidence interval of the estimated class probability could provide some idea about the uncertainty (of the class probability, not the classification). If the model is wrong (and face it, it is), the class probabilities can be quite misleading even if the class predictions are good. Logistic regression can get the class probabilities wrong for two fairly well separated classes if some data points are a little extreme. It might still do a fine job in terms of classification. If the estimation procedure (intentionally) provides a biased estimate, the class probabilities are wrong. This is something I see with regularization methods like lasso and ridge for logistic regression. While a cross-validated choice of the regularization leads to a model with good performance in terms of classification, the resulting class probabilities are clearly underestimated (too close to 0.5) on test cases. This is not necessarily bad, but important to be aware of.
20,327
Can logistic regression's predicted probability be interpreted as the confidence in the classification
For a test case (a particular input), the predictive probability of a class (say label 1 for a binary output) is the chance that the test example belongs to that class. Over many such test cases, the proportion that belong to class 1 will tend to the predictive probability. Confidence has connotations of confidence intervals, which are something quite different.
20,328
Can logistic regression's predicted probability be interpreted as the confidence in the classification
Given a classifier with two classes (e.g. a two-class linear discriminant or logistic regression classifier), the discriminant values for both classes can be passed through a softmax function to yield an estimate of the posterior probability of each class: P1 = exp(d1)/(exp(d1) + exp(d2)) where P1 is the posterior probability estimate for class 1, and d1 and d2 are the discriminant values for classes 1 and 2 respectively. In this case the estimated posterior probability for a given class can be taken as a degree of confidence in that class for a given case, since P1 will equal 1 - P2.
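The formula can be checked with a tiny sketch (the function name is illustrative; the maximum discriminant is subtracted first for numerical stability, which does not change the result):

```python
import math

def softmax2(d1, d2):
    """Posterior estimates from two discriminant values via softmax."""
    m = max(d1, d2)  # shift for numerical stability
    e1, e2 = math.exp(d1 - m), math.exp(d2 - m)
    p1 = e1 / (e1 + e2)
    return p1, 1 - p1

p1, p2 = softmax2(2.0, 0.5)
```

For two classes this reduces to the logistic function of the discriminant difference, p1 = 1/(1 + exp(d2 - d1)).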
20,329
Can logistic regression's predicted probability be interpreted as the confidence in the classification
If a classifier predicts a certain class with a probability, that number can be used as a proxy for the degree of confidence in that classification. Not to be confused with confidence intervals. For example, if classifier P predicts two cases as +1 and -1 with probabilities 80% and 60%, then it is correct to say that it is more sure of the +1 classification than of the -1 classification. The variance as measured by p(1-p) is also a good measure of uncertainty. Note that the baseline confidence is 50%, not 0.
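A minimal sketch of the p(1-p) uncertainty measure applied to the two example predictions (80% and 60%); this is just an illustration of the formula in the answer:

```python
def uncertainty(p):
    """Variance of a Bernoulli with success probability p, used here as a
    simple uncertainty measure: largest at p = 0.5, zero at p = 0 or 1."""
    return p * (1 - p)

# the 80% prediction is less uncertain than the 60% one
u_80, u_60 = uncertainty(0.8), uncertainty(0.6)
```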
20,330
Hidden Markov model thresholding
This is somewhat common in the field of gesture recognition. The answer is to create a threshold model as described in the paper by Lee and Kim (1999). It plays the same role as a filler or garbage model, but it doesn't need to be trained separately as you say. You can create a threshold model by connecting all the self-transition states from your other models, initializing the transitions with uniform probabilities and fully connecting those states. Please take a look at the paper to see how it can actually be done. Even if your library does not support ergodic models, it shouldn't prevent you from manually creating a model of the required size and setting the states accordingly. If you really want a library for that, implementations of hidden Markov model classifiers, including support for threshold models, are available in the Accord.NET Framework, for example. Disclaimer: I am the author of this library.
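A rough sketch of the transition-matrix construction (illustrative pseudo-implementation, simplified relative to Lee and Kim's paper, and not the Accord.NET API): gather the self-transition probability of every state from the trained models, then spread each state's remaining probability mass uniformly over all other states, giving one fully connected (ergodic) threshold model.

```python
def build_threshold_transitions(self_probs):
    """self_probs: self-transition probabilities of all states collected
    from the trained models. Returns a fully connected transition matrix
    where each state keeps its self-loop and distributes the leftover
    probability uniformly over every other state."""
    n = len(self_probs)
    T = []
    for i, a_ii in enumerate(self_probs):
        rest = (1.0 - a_ii) / (n - 1)
        T.append([a_ii if j == i else rest for j in range(n)])
    return T

# hypothetical self-loop probabilities gathered from three trained HMMs
T = build_threshold_transitions([0.7, 0.5, 0.9, 0.6])
```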
20,331
Hidden Markov model thresholding
Very good question! As you mention, the only way to get the HMM to give you an "I don't know" (let's call it OOV) answer is to give it a special state because it always outputs the states with the highest likelihood under your model. So you have to ensure that OOV has higher likelihood under every input that is not speech, watertap or knocking. The short answer is that this is not possible. Because an HMM is not an absolute pattern recognizer. It only compares the likelihood of the outputs under your model, and in the context it was trained. Think about an input that would be speech and knocking at the same time. Most likely the HMM will "hesitate" between these two states because this input has features of each. In the end it would output one of those, but it is quite unlikely that it would output OOV. In the case of keyword spotting, my guess is that you could find clever inputs that would consistently fool their HMM. However, the authors probably know what input to expect and they have chosen their finite list of unknown words so that these poisonous inputs are uncommon. I advise that you do the same. Think about the situations that you will use the HMM and train an OOV state on the most common inputs you wish to eliminate. You can even think of having several OOV states.
20,332
Hidden Markov model thresholding
So what I have done is: I created my simplified version of a filler model. Each HMM representing the watertap, knocking, and speech sounds is a separate 6-state HMM trained on training sets of 30, 50, and 90 sounds respectively, of various lengths from 0.3 seconds to 10 seconds. Then I created a filler model, which is a 1-state HMM trained on all the training-set sounds for knocking, watertap, and speech together. So if the HMM model score is greater for a given sound than the filler's score, the sound is recognized; otherwise it is an unknown sound. I don't really have large data, but I have performed the following test for true-positive and false-positive rejection on unseen sounds. true positives rejection knocking 1/11 = 90% accuracy watertap 1/9 = 89% accuracy speech 0/14 = 100% accuracy false positives rejection Tested 7 unknown sounds 6/7 = 86% accuracy So from this quick test I can conclude that this approach gives reasonable results, although I have a strange feeling it may not be enough.
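The decision rule described here can be sketched as follows (the class names and scores are hypothetical; in practice the scores would be forward or Viterbi log-likelihoods from the trained HMMs and the filler model):

```python
def classify_with_filler(scores, filler_score):
    """scores: dict mapping sound class -> HMM log-likelihood score.
    Accept the best-scoring class only if it beats the filler model,
    otherwise report the sound as unknown."""
    best = max(scores, key=scores.get)
    return best if scores[best] > filler_score else "unknown"

label = classify_with_filler(
    {"knocking": -120.0, "watertap": -140.0, "speech": -150.0},
    filler_score=-130.0)
```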
20,333
What is the correct procedure to choose the lag when performing Johansen cointegration test?
You are correct. The weakness of the Johansen approach is that it is sensitive to the lag length, so the lag length should be determined in a systematic manner. Following is the normal process used in the literature. a. Choose a maximum lag length "m" for the VAR model. Usually, for annual data this is set to 1, for quarterly data to 4, and for monthly data to 12. b. Run the VAR model in levels. For example, if the data is monthly, run the VAR model for lag lengths 1, 2, 3, ..., 12. c. Find the AIC (Akaike information criterion) and SIC (Schwarz information criterion) for the VAR model at each lag length (there are also other criteria, such as HQ (Hannan-Quinn information criterion) and FPE (final prediction error criterion), but AIC and SIC are most commonly used). Choose the lag length that minimizes AIC and SIC for the VAR model. Note that SIC and AIC may give conflicting results. d. Finally, you MUST confirm that, for the lag length selected in step c, the residuals of the VAR model are not correlated [use Portmanteau tests for autocorrelation]. You may have to modify the lag length if there is autocorrelation. Usually, beginners in time series econometrics tend to skip step d. e. For the cointegration test, the lag length is the lag length chosen in step d minus one (since we are now running the model in first differences, unlike in levels when we used the VAR to decide the lag length).
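Step c can be illustrated with a small univariate sketch (pure Python, illustrative only; for real VAR work one would use a dedicated package such as vars in R or statsmodels in Python): fit AR(p) models by least squares on a common sample and pick the lag minimizing a standard AIC formula.

```python
import math
import random

def ar_design(y, p, max_lag):
    """Regress y[t] on (1, y[t-1], ..., y[t-p]) for t >= max_lag,
    so every lag order is compared on the same sample."""
    X = [[1.0] + [y[t - k] for k in range(1, p + 1)]
         for t in range(max_lag, len(y))]
    target = [y[t] for t in range(max_lag, len(y))]
    return X, target

def solve(A, b):
    """Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def ols_rss(X, y):
    """Residual sum of squares of an OLS fit via the normal equations."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    beta = solve(XtX, Xty)
    return sum((yi - sum(b * v for b, v in zip(beta, row))) ** 2
               for row, yi in zip(X, y))

# simulate an AR(1) series, then score lags 1..4 by AIC
random.seed(1)
y = [0.0]
for _ in range(400):
    y.append(0.6 * y[-1] + random.gauss(0, 1))

max_lag, aic, rss = 4, {}, {}
for p in range(1, max_lag + 1):
    X, target = ar_design(y, p, max_lag)
    rss[p] = ols_rss(X, target)
    n_eff = len(target)
    aic[p] = n_eff * math.log(rss[p] / n_eff) + 2 * (p + 1)
best_lag = min(aic, key=aic.get)
```

Step d (checking the residuals of the chosen fit for autocorrelation) would follow before settling on best_lag.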
20,334
What is the correct procedure to choose the lag when performing Johansen cointegration test?
AIC or SBC could be used to help you decide which lag to use. The urca package in R recommends selecting the lag having minimum AIC or SBC.
20,335
How does one interpret a Bland-Altman plot?
The Bland-Altman plot is more widely known as the Tukey mean-difference plot (one of many charts devised by John Tukey http://en.wikipedia.org/wiki/John_Tukey). The idea is that the x-axis is the mean of your two measurements, which is your best guess as to the "correct" result, and the y-axis is the difference between the two measurements. The chart can then highlight certain types of anomalies in the measurements. For example, if one method always gives too high a result, then you'll get all of your points above or all below the zero line. It can also reveal, for example, that one method over-estimates high values and under-estimates low values. If you see the points on the Bland-Altman plot scattered all over the place, above and below zero, then this suggests that there is no consistent bias of one approach versus the other (of course, there could be hidden biases that this plot does not show up). Essentially, it is a good first step for exploring the data. Other techniques can be used to dig into more particular sorts of behaviour of the measurements.
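A minimal numeric sketch of the quantities behind the plot, with made-up data: the x-axis values are the pairwise means, the y-axis values are the differences, and the usual ±1.96 SD limits of agreement bracket the bias.

```python
import statistics

# illustrative data: the same four samples measured by two methods
method1 = [10.0, 12.1, 9.8, 11.5]
method2 = [10.1, 12.0, 10.0, 11.3]

means = [(a + b) / 2 for a, b in zip(method1, method2)]  # x-axis
diffs = [a - b for a, b in zip(method1, method2)]        # y-axis

bias = statistics.mean(diffs)   # systematic difference between methods
sd = statistics.pstdev(diffs)   # spread of the differences (population SD here)
upper = bias + 1.96 * sd        # limits of agreement
lower = bias - 1.96 * sd
```

If the methods agree, most points (means[i], diffs[i]) should fall between lower and upper, scattered around the bias line.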
20,336
How does one interpret a Bland-Altman plot?
In addition to difference-versus-average plots, Bland-Altman plots can also be ratio-versus-average plots. For example, a new weighing machine gives the following data when people of weights 60, 70 and 80 kg step on it: 66 kg 77 kg 88 kg In such a scenario, the weighing machine overestimates the weight by 10% every time, so a ratio-versus-average plot will give a better visualization of the data in this case.
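The example can be checked directly (a tiny illustrative sketch): the raw differences grow with the weight while the ratios stay constant at 1.1, which is exactly the kind of proportional bias a ratio-versus-average plot reveals cleanly.

```python
weights_true = [60, 70, 80]
weights_measured = [66, 77, 88]

# differences grow with the weight: 6, 7, 8 kg
diffs = [m - t for m, t in zip(weights_measured, weights_true)]

# ratios are constant: 1.1 every time (a 10% proportional bias)
ratios = [m / t for m, t in zip(weights_measured, weights_true)]
```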
20,337
How does one interpret a Bland-Altman plot?
This is the Wikipedia definition of a Bland-Altman plot: A Bland–Altman plot (difference plot) in analytical chemistry or biomedicine is a method of data plotting used in analyzing the agreement between two different assays. It is identical to a Tukey mean-difference plot, the name by which it is known in other fields, but was popularised in medical statistics by J. Martin Bland and Douglas G. Altman. If you would like to do this in Python you can use this code:

import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
plt.style.use('ggplot')

I just added the last line because I like the ggplot style.

def plotblandaltman(x, y, title, sd_limit):
    if len(x) != len(y):
        raise ValueError('x does not have the same length as y')
    plt.figure(figsize=(20, 8))
    plt.suptitle(title, fontsize=20)
    a = (np.asarray(x) + np.asarray(y)) / 2  # mean of the two measurements (x-axis)
    b = np.asarray(x) - np.asarray(y)        # difference between the measurements (y-axis)
    mean_diff = np.mean(b)
    std_diff = np.std(b, axis=0)
    limit_of_agreement = sd_limit * std_diff
    lower = mean_diff - limit_of_agreement
    upper = mean_diff + limit_of_agreement
    difference = upper - lower
    lowerplot = lower - difference * 0.5
    upperplot = upper + difference * 0.5
    plt.axhline(y=mean_diff, linestyle="--", color="red", label="mean diff")
    plt.axhline(y=lower, linestyle="--", color="grey", label="-1.96 SD")
    plt.axhline(y=upper, linestyle="--", color="grey", label="1.96 SD")
    plt.text(a.max() * 0.85, upper * 1.1, " 1.96 SD", color="grey", fontsize=14)
    plt.text(a.max() * 0.85, lower * 0.9, "-1.96 SD", color="grey", fontsize=14)
    plt.text(a.max() * 0.85, mean_diff * 0.85, "Mean", color="red", fontsize=14)
    plt.ylim(lowerplot, upperplot)
    plt.scatter(x=a, y=b)

And finally I just make some random values and compare them in this function:

x = np.random.rand(100)
y = np.random.rand(100)
plotblandaltman(x, y, "Bland-Altman plot", 1.96)

With some minor modification, you can easily add a for-loop and make several plots.
20,338
Why is multicollinearity different than correlation?
According to the Wikipedia encyclopedia: In statistics, multicollinearity (also collinearity) is a phenomenon in which one predictor variable in a multiple regression model can be linearly predicted from the others with a substantial degree of accuracy. So multicollinearity is a special case of correlation. It is more specific in two ways: It relates specifically to the predictor variables in a regression model. It relates to correlation with a linear combination of multiple variables combined. Correlation doesn't need to involve a linear combination of multiple variables (correlation is about two variables), and correlation describes a much wider phenomenon than just this case with predictor variables. There is a distinction between perfect multicollinearity (an exact linear relation between variables/predictors) and just multicollinearity (not an exact linear relationship, but strong correlation of at least one of the predictors with a linear combination of the other predictors). Is checking for multicollinearity the same as checking for correlation? No, not exactly. What is the difference between multicollinearity and correlation? How do I check multicollinearity mathematically? What kind of math is behind this? Multicollinearity may occur even when there is little correlation present between individual pairs of predictors. The issue of multicollinearity can occur when there is correlation between one predictor and a linear combination of the other predictors. Imagine for instance that there are six predictors and a seventh is added as the sum $X_7 = X_1 + X_2 + X_3 + X_4 + X_5 + X_6$. The correlation between $X_7$ and each of the other predictors will only be relatively small. But there will be perfect multicollinearity. In that example of a perfect linear relationship, you can check perfect multicollinearity by computing the rank of the design matrix, and this should equal the number of columns for perfect multicollinearity to be absent.
But when $X_7$ has just a tiny bit of difference with the sum $X_1 + X_2 + X_3 + X_4 + X_5 + X_6$, then there won't be perfect multicollinearity. Yet the predictor $X_7$ still depends a lot on the other six (and causes trouble). In this case you cannot easily verify this just from the correlations, because these are not very large. One common method in this case is to compute the variance inflation factor (VIF). This VIF expresses how much of the variation/variance/error in the estimate of a coefficient (computed as $s^2 (X^TX)^{-1}$, where $X$ is the design matrix) is due to the interactions with the other variables.

Computational example

Below we created the seven variables as explained above. The seventh is the mean of the previous six with a bit of noise added. The correlation table (of the design matrix $X$) looks like:

 1.00 -0.30 -0.25 -0.32  0.11 -0.24  0.01
-0.30  1.00  0.29  0.29 -0.12  0.10  0.64
-0.25  0.29  1.00  0.11 -0.54  0.34  0.41
-0.32  0.29  0.11  1.00  0.03 -0.17  0.46
 0.11 -0.12 -0.54  0.03  1.00 -0.44  0.10
-0.24  0.10  0.34 -0.17 -0.44  1.00  0.28
 0.01  0.64  0.41  0.46  0.10  0.28  1.00

Not very remarkable. But the inverse of the covariance table $(X^TX)^{-1}$ has a large value, 6.42, as the last entry on the diagonal. This relates to the error (variance) of the 7th coefficient, which is almost 36 times larger/inflated.
 0.15  0.17  0.15  0.15  0.17  0.13 -0.93
 0.17  0.24  0.18  0.18  0.23  0.16 -1.15
 0.15  0.18  0.22  0.16  0.22  0.14 -1.07
 0.15  0.18  0.16  0.21  0.20  0.15 -1.06
 0.17  0.23  0.22  0.20  0.29  0.19 -1.29
 0.13  0.16  0.14  0.15  0.19  0.16 -0.93
-0.93 -1.15 -1.07 -1.06 -1.29 -0.93  6.42

Below is the table of the design matrix:

x_1 x_2 x_3 x_4 x_5 x_6 x_7
4 7 6 7 4 5 5.67
4 3 5 4 5 4 4.17
5 5 6 4 4 4 4.67
6 3 5 4 4 8 4.83
3 4 5 5 6 5 4.83
6 4 6 4 3 3 4.33
7 1 2 4 5 3 3.67
5 4 4 6 3 4 4.17
5 6 5 2 3 7 4.50
2 4 5 6 3 5 4.17
3 4 4 4 3 7 4.17
3 5 6 6 2 5 4.33
5 4 4 4 5 4 4.17
4 3 3 4 6 4 4.00
6 6 2 4 6 3 4.67
4 5 3 6 6 1 4.17
5 6 4 6 4 5 5.00
8 3 5 4 4 3 4.50
4 5 5 6 6 4 5.17
6 4 4 7 5 5 5.17

The example is created with this R-code:

set.seed(1)
# generate 6 random variables
X_1_6 = matrix(rbinom(120, 9, 0.5), ncol = 6)
# generate a 7-th variable with a bit of noise
x7 = (rowSums(X_1_6) + rbinom(20, 2, 0.5) - 1)/6
# make the design matrix from all 7 variables
X = cbind(X_1_6, x7)
### the correlations
round(cor(X), 2)
### the inverse of the covariance table
round(solve(t(X) %*% X), 2)

Geometric view

I'm trying to 'see' the difference mathematically. From a geometric point of view you can see regression as a projection of the $n$ observations $y$ onto the surface spanned by the $m$ predictor vectors. These observations can be seen as a point in $n$-dimensional space. The $m$ predictor vectors form a sub-space ($m$-dimensional; in 2 dimensions you can see it as a surface) within this $n$-dimensional space. The fitted solution of the regression is the point within this sub-space that is closest to the observation. The image below from this question might help to see this. We could look perpendicular to the surface spanned by the vectors $x_1$ and $x_2$ above. The coefficients $\beta_1$ and $\beta_2$ of the solution can be seen as coordinates on this surface, telling how much of vector $x_1$ and $x_2$ you need to add to get to the solution/prediction $\hat{y}$.
(See the question Intuition behind $(X^TX)^{-1}$ in closed form of w in Linear Regression for an explanation of the difference between the $\alpha$ and $\beta$ coordinates in this image.) The axes of this coordinate space are however not perpendicular to each other. The following image illustrates that this creates large changes in coordinates for small changes of a point in space. The image contains a sample of 100 randomly distributed observations/experiments as the projections/solutions of the fitting/regression. The distribution of this is a circular cloud in the case of Gaussian distributed errors. The axes $x_1$ and $x_2$ are not perpendicular to each other but are diagonal instead. You can see that this sort of places the lines of the coordinates closer to each other (and these coordinates correspond to the coefficients that are the output of the regression). This means that the variation in this coordinate/coefficient will be larger. The case of this graphical example is in two dimensions, but you could imagine it extended to multiple dimensions. Multicollinearity means that at least one of the axes (corresponding to a predictor vector) has a small angle with the combination of the other axes (with the space spanned by those other axes). That means it is not necessarily one single axis and another single axis having a small angle relative to each other (in the 2D case it is), but one axis having a small angle with the subspace created by the $m-1$ other axes. In 3 dimensions you can view it as below. Say the axis $x_3$ is sheared and at an angle with the others. You can have this $x_3$ at a very small angle with the bottom plane of the cube (or even inside of it) without the individual angles with $x_1$ and $x_2$ being very small. See for a related question (with many more links to other related questions): why does the same variable have a different slope when incorporated into a linear model with multiple x variables
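The two diagnostics described in this answer (the rank check for perfect multicollinearity, and the inflated diagonal of $(X^TX)^{-1}$) can be sketched in a few lines of Python. The toy data here is made up for illustration, not the table above:

```python
import numpy as np

# three predictors; the third is *almost* the sum of the first two (made-up data)
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
x2 = np.array([2.0, 1.0, 4.0, 3.0, 6.0, 5.0])
x3 = x1 + x2 + np.array([0.01, -0.01, 0.02, -0.02, 0.01, -0.01])
X = np.column_stack([x1, x2, x3])

# perfect multicollinearity would make the rank drop below the column count
rank = np.linalg.matrix_rank(X)  # full rank here, so no *perfect* collinearity

# near multicollinearity: the diagonal of (X'X)^{-1}, proportional to the
# coefficient variances, explodes once the nearly redundant column is included
d_with = np.diag(np.linalg.inv(X.T @ X))
d_without = np.diag(np.linalg.inv(X[:, :2].T @ X[:, :2]))
# d_with[0] is orders of magnitude larger than d_without[0]
```

The same idea underlies the VIF: the $j$th diagonal entry of $(X^TX)^{-1}$ equals the reciprocal of the residual sum of squares from regressing column $j$ on the remaining columns, so a near-linear relation makes that entry blow up.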
20,339
Where is the bomb: How to estimate the probability, given row and column totals?
The solution space (valid bomb configurations) can be viewed as the set of bipartite graphs with given degree sequence. (The grid is the biadjacency matrix.) Generating a uniform distribution on that space can be approached using Markov Chain Monte Carlo (MCMC) methods: every solution can be obtained from any other using a sequence of "switches," which in your puzzle formulation look like: $$ \begin{pmatrix} x & - \\ - & x \end{pmatrix} \to \begin{pmatrix} - & x \\ x & - \end{pmatrix} $$ It's been proven that this has a fast mixing property. So, starting with any valid configuration and setting a MCMC running for a while, you should end up with an approximation of the uniform distribution on solutions, which you can average pointwise for the probabilities you're looking for. I'm only vaguely familiar with these approaches and their computational aspects, but at least this way you avoid enumerating any of the non-solutions. A start to the literature on the topic: https://faculty.math.illinois.edu/~mlavrov/seminar/2018-erdos.pdf https://arxiv.org/pdf/1701.07101.pdf https://www.tandfonline.com/doi/abs/10.1198/016214504000001303
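A sketch of the switch move in Python (the 3×3 grid and the number of steps are made up, and this ignores the details needed to guarantee exact uniformity, such as the proposal and acceptance schemes analysed in the cited papers):

```python
import random
import numpy as np

def switch_step(grid, rng):
    """Try one 'switch': pick two rows and two columns; if the 2x2 submatrix
    is a checkerboard, flip it. Row and column sums are always preserved."""
    n, m = grid.shape
    i, j = rng.sample(range(n), 2)
    a, b = rng.sample(range(m), 2)
    sub = grid[np.ix_([i, j], [a, b])]
    if sub[0, 0] == sub[1, 1] and sub[0, 1] == sub[1, 0] and sub[0, 0] != sub[0, 1]:
        grid[np.ix_([i, j], [a, b])] = 1 - sub
        return True
    return False

rng = random.Random(0)
grid = np.eye(3, dtype=int)                       # a valid starting configuration
rows, cols = grid.sum(axis=1), grid.sum(axis=0)   # the fixed margins
accepted = sum(switch_step(grid, rng) for _ in range(1000))
# after any number of steps the margins are unchanged; averaging many sampled
# grids pointwise estimates the per-cell bomb probabilities
```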
20,340
Where is the bomb: How to estimate the probability, given row and column totals?
There's no unique solution. I don't think that the true discrete probability distribution can be recovered, unless you make some additional assumptions. Your situation is basically a problem of recovering the joint distribution from marginals. It is sometimes solved by using copulas in the industry, for example in financial risk management, but usually for continuous distributions.

Presence, Independent, AS 205

In the presence problem no more than one bomb is allowed in a cell. Again, for the special case of independence, there is a relatively efficient computational solution. If you know FORTRAN, you can use this code that implements the AS 205 algorithm: Ian Saunders, Algorithm AS 205: Enumeration of R x C Tables with Repeated Row Totals, Applied Statistics, Volume 33, Number 3, 1984, pages 340-352. It's related to Patefield's algorithm that @Glen_B referred to. This algorithm enumerates all presence tables, i.e. goes through all possible tables where at most one bomb is in a cell. It also calculates the multiplicity, i.e. multiple tables that look the same, and calculates some probabilities (not those you're interested in). With this algorithm you may be able to run the complete enumeration faster than you did before.

Presence, not independent

The AS 205 algorithm can be applied to a case where the rows and columns are not independent. In this case you'd have to apply different weights to each table generated by the enumeration logic. The weight will depend on the process of placement of bombs.

Counts, independence

The count problem allows more than one bomb placed in a cell, of course. The special case of independent rows and columns of the count problem is easy: $P_i^j=P_i\times P^j$, where $P_i$ and $P^j$ are the marginals of rows and columns. For instance, row $P_6=3/15=0.2$ and column $P^3=3/15=0.2$, hence the probability that a bomb is in row 6 and column 3 is $P_6^3=0.04$. You actually produced this distribution in your first table.
Counts, Not independent, Discrete Copulas

In order to solve the counts problem where rows and columns are not independent, we could apply discrete copulas. They have issues: they're not unique. That doesn't make them useless though. So, I'd try applying discrete copulas. You can find a good overview of them in Genest, C. and J. Nešlehová (2007). A primer on copulas for count data. Astin Bull. 37(2), 475–515. Copulas can be especially useful, as they usually allow you to explicitly induce dependence, or to estimate it from data when the data is available. I mean the dependence of rows and columns when placing bombs. For instance, it could be the case that if the bomb is in the first row, then it is more likely that it will be in the first column too.

Example

Let's apply the Kimeldorf and Sampson copula to your data, assuming again that more than one bomb can be placed in a cell. The copula for a dependency parameter $\theta$ is defined as: $$C(u,v)=(u^{-\theta}+v^{-\theta}-1)^{-1/\theta}$$ You can think of $\theta$ as an analog of the correlation coefficient.

Independent

Let's start with the case of weak dependence, $\theta=0.000001$, where we have the following probabilities (PMF); the marginal PDFs are shown too, on the panels on the right and at the bottom: You can see how in column 5 the second row has twice the probability of the first row. This is not wrong, contrary to what you seemed to imply in your question. All probabilities do add up to 100%, of course, and the marginals on the panels match the frequencies. For instance, column 5 in the lower panel shows 1/3, which corresponds to the stated 5 bombs out of 15 total, as expected.
Also, you can see how the dependency impacts the PMF's shape. For positive dependency (correlation) you get the highest PMF concentrated on the diagonal, while for negative dependency it is off-diagonal.
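As a quick numeric check of two facts used above (Python; the value $0.2 = 3/15$ just reuses the marginal quoted in the independence example): the product rule under independence, and the fact that the Kimeldorf-Sampson (Clayton) copula approaches the independence copula $C(u,v)=uv$ as $\theta \to 0$:

```python
# marginal quoted in the text: 3 of 15 bombs in the row and in the column
p_row = 3 / 15
p_col = 3 / 15
p_indep = p_row * p_col            # 0.04, the independence product

def clayton_copula(u, v, theta):
    """C(u, v) = (u^-theta + v^-theta - 1)^(-1/theta) for theta != 0."""
    return (u ** -theta + v ** -theta - 1) ** (-1 / theta)

# for theta near 0 the copula is essentially the independence copula u*v
near_indep = clayton_copula(p_row, p_col, 1e-6)
```

Note that the copula takes marginal CDF values as arguments in general; this snippet only illustrates the limiting behaviour of the formula, not a full discrete-copula construction of the joint PMF.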
Where is the bomb: How to estimate the probability, given row and column totals?
There's no unique solution I don't think that true discrete probability distribution can be recovered, unless you make some additional assumptions. Your situation is basically a problem of recovering
Where is the bomb: How to estimate the probability, given row and column totals? There's no unique solution I don't think that true discrete probability distribution can be recovered, unless you make some additional assumptions. Your situation is basically a problem of recovering the joint distribution from marginals. It is sometimes solved by using copulas in the industry, for example financial risk management, but usually for continuous distributions. Presence, Independent, AS 205 In the presence problem no more than one bomb is allowed in a cell. Again, for the special case of independence, there is relatively efficient computational solution. If you know FORTRAN, you can use this code that implements AS 205 Algorithm: Ian Saunders, Algorithm AS 205: Enumeration of R x C Tables with Repeated Row Totals, Applied Statistics, Volume 33, Number 3, 1984, pages 340-352. It's related to Panefield's algo that @Glen_B referred to. This algo enumerates all presence tables, i.e. goes through all possible tables where only one bomb is in a field. It also calculates the multiplicity, i.e. multiple tables that look the same, and calculates some probabilities (not those you're interested in). With this algorithm you may be able to run the complete enumeration faster than you did before. Presence, not independent The AS 205 algorithm can be applied to a case where the rows and columns are not independent. In this case you'd have to apply different weights to each table generated by the enumeration logic. The weight will depend on the process of placement of bombs. Counts, independence The count problem allows more than one bomb placed in a cell, of course. The special case of independent rows and columns of count problem is easy: $P_i^j=P_i\times P^j$ where $P_i$ and $P^j$ are marginals of rows and columns. For instance, row $P_6=3/15=0.2$ and column $P^3=3/15=0.2$, hence the probability that a bomb is in row 6 and column 3 is $P_6^3=0.04$. 
You actually produced this distribution in your first table. Counts, Not independent, Discrete Copulas In order to solve the counts problem where rows and columns are not independent, we could apply discrete copulas. They have issues: they're not unique. It doesn't make them useless though. So, I'd try applying discrete copulas. You can find a good overview of them in Genest, C. and J. Nešlehová (2007). A primer on copulas for count data. Astin Bull. 37(2), 475–515. Copulas can be especially useful, as they usually allow to explicitly induce dependence, or to estimate it from data when the data is available. I mean the dependence of row and columns when placing bombs. For instance, it could be the case when if the bomb is one the first row, then it is more likely that it will be one the first column too. Example Let's apply Kimeldorf and Sampson copula to your data, assuming again that more than one bomb can be placed in a cell. The copula for a dependency parameter $\theta$ is defined as: $$C(u,v)=(u^{-\theta}+u^{-\theta}-1)^{-1/\theta}$$ You can think of $\theta$ as an analog of the correlation coefficient. Independent Let's start with the case of weak dependence, $\theta=0.000001$, where we have the following probabilities (PMF) and marginal PDFs are shown too on the panels on the right and at the bottom: You can see how in the column 5 the second row probability does have twice higher probability than the first row. This is not wrong contrary to what you seemed to imply in your question. All probability do add up to 100%, of course, as do the marginals on the panels match the frequencies. For instance, the column 5 in the lower panel shows 1/3 which corresponds to stated 5 bombs out of total 15 as expected. 
Positive Correlation For stronger dependency (positive correlation) with $\theta=10$ we have the following: Negative Correlation The same for stronger but negative correlation (dependency) $\theta=-0.2$: You can see that all probabilities add up to 100%, of course. Also, you can see how dependency impacts the PMF's shape. For positive dependency (correlation) you get the highest PMF concentrated on the diagonal, while for the negative dependency it is off-diagonal
Where is the bomb: How to estimate the probability, given row and column totals?
Your question does not make this clear, but I'm going to assume that the bombs are initially distributed via simple-random-sampling without replacement over the cells (so a cell cannot contain more than one bomb). The question you have raised is essentially asking for the development of an estimation method for a probability distribution that can be computed exactly (in theory), but which becomes computationally infeasible to compute for large parameter values.

The exact solution exists, but it is computationally intensive

As you point out in your question, it is possible for you to perform a computational search over all possible allocations, to identify the allocations that match the row and column totals. We can proceed formally as follows. Suppose we are dealing with an $n \times m$ grid and we allocate $b$ bombs via simple random sampling without replacement (so each cell cannot contain more than one bomb). Let $\mathbf{x} = (x_1,...,x_{nm})$ be a vector of indicator variables indicating whether or not a bomb is present in each cell, and let $\mathbf{s} = (r_1, ..., r_n, c_1, ..., c_m)$ denote the corresponding vector of row and column sums. Define the function $S: \mathbf{x} \mapsto \mathbf{s}$, which maps from the allocation vector to the row and column sums. The goal is to determine the probability of each allocation vector conditional on knowledge of the row and column sums. 
Under simple-random-sampling we have $\mathbb{P}(\mathbf{x}) \propto 1$, so the conditional probability of interest is: $$\begin{equation} \begin{aligned} \mathbb{P}(\mathbf{x} | \mathbf{s}) = \frac{\mathbb{P}(\mathbf{x}, \mathbf{s})}{\mathbb{P}(\mathbf{s})} &= \frac{\mathbb{P}(\mathbf{x}) \cdot \mathbb{I}(S(\mathbf{x}) = \mathbf{s})}{\sum_\mathbf{x} \mathbb{P}(\mathbf{x}) \cdot \mathbb{I}(S(\mathbf{x}) = \mathbf{s})} \\[6pt] &= \frac{\mathbb{I}(S(\mathbf{x}) = \mathbf{s})}{\sum_\mathbf{x} \mathbb{I}(S(\mathbf{x}) = \mathbf{s})} \\[6pt] &= \frac{1}{|\mathscr{X}_\mathbf{s}|} \cdot \mathbb{I}(S(\mathbf{x}) = \mathbf{s}) \\[6pt] &= \text{U}(\mathbf{x} | \mathscr{X}_\mathbf{s}), \\[6pt] \end{aligned} \end{equation}$$ where $\mathscr{X}_\mathbf{s} \equiv \{ \mathbf{x} \in \{ 0, 1\}^{nm} | S(\mathbf{x}) = \mathbf{s} \}$ is the set of all allocation vectors compatible with the vector $\mathbf{s}$. This shows that (under simple-random-sampling of the bombs) we have $\mathbf{x} | \mathbf{s} \sim \text{U}(\mathscr{X}_\mathbf{s})$. That is, the conditional distribution of the allocation vector for the bombs is uniform over the set of all allocation vectors compatible with the observed row and column totals. The marginal probability of a bomb in a given cell can then be obtained by marginalising over this joint distribution: $$\begin{equation} \begin{aligned} \mathbb{P}(x_{ij} = 1 | \mathbf{s}) = \sum_{\mathbf{x}: x_{ij} = 1} \text{U}(\mathbf{x} | \mathscr{X}_\mathbf{s}) = \frac{|\mathscr{X}_{ij} \cap \mathscr{X}_\mathbf{s}|}{|\mathscr{X}_\mathbf{s}|}. \end{aligned} \end{equation}$$ where $\mathscr{X}_{ij} \equiv \{ \mathbf{x} \in \{ 0, 1\}^{nm} | x_{ij} = 1 \}$ is the set of all allocation vectors with a bomb in the cell in the $i$th row and $j$th column. 
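This exact computation can be sketched directly in code (the helper name `exact_marginals` is mine; it is feasible only for small grids, since it enumerates all $\binom{nm}{b}$ allocations, keeps the compatible set $\mathscr{X}_\mathbf{s}$, and averages the cell indicators over it):

```python
from itertools import combinations

import numpy as np

def exact_marginals(row_sums, col_sums):
    """Enumerate every allocation of b bombs (at most one per cell),
    keep those matching the row/column totals, and average the cell
    indicators over that compatible set X_s."""
    n, m = len(row_sums), len(col_sums)
    b = sum(row_sums)
    counts = np.zeros((n, m))
    n_compatible = 0
    for cells in combinations(range(n * m), b):
        x = np.zeros((n, m))
        for c in cells:
            x[divmod(c, m)] = 1.0
        if (x.sum(axis=1) == row_sums).all() and (x.sum(axis=0) == col_sums).all():
            counts += x
            n_compatible += 1
    return counts / n_compatible, n_compatible
```

For example, a 2×2 grid with one bomb in each row and each column has exactly two compatible allocations (the two diagonals), so every cell has marginal probability 1/2.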
Now, in your particular problem you computed the set $\mathscr{X}_\mathbf{s}$ and found that $|\mathscr{X}_\mathbf{s}| = 276$, so the conditional probability of the allocation vectors is uniform over the set of allocations you computed (assuming you did this correctly). This is an exact solution to the problem. However, it is computationally intensive to compute the set $\mathscr{X}_\mathbf{s}$, and so the computation of this solution may become infeasible when $n$, $m$ or $b$ become larger.

Searching for good estimation methods

In the case where it is infeasible to compute the set $\mathscr{X}_\mathbf{s}$, you want to be able to estimate the marginal probabilities of a bomb being in any particular cell. I am not aware of any existing research that gives estimation methods for this problem, so this is going to require you to develop some plausible estimators and then test their performance against the exact solution using computer simulations for parameter values that are sufficiently low for this to be feasible.

The naive empirical estimator: The estimator you have proposed and used in your green table is: $$\hat{\mathbb{P}}(x_{ij} = 1 | \mathbf{s}) = \frac{r_i}{b} \cdot \frac{c_j}{b} \cdot b = \frac{r_i \cdot c_j}{b}.$$ This estimation method treats the rows and columns as independent, and estimates the probability of a bomb in a particular row/column by the relative frequencies in the row and column sums. It is simple to establish that this estimator sums to $b$ over all the cells, as you would want. Unfortunately, it has the major drawback that it can yield an estimated probability above one in some cases. That is a bad property for an estimator.
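Both properties of the naive estimator are easy to demonstrate in a short sketch (the function name is mine): it reproduces the total $b$, yet sufficiently concentrated totals push a cell's "probability" above one.

```python
def naive_estimate(row_sums, col_sums):
    """Naive independence estimator: P(bomb in cell i,j) ~ r_i * c_j / b."""
    b = sum(row_sums)
    return [[r * c / b for c in col_sums] for r in row_sums]

# concentrated totals break the estimator: with both bombs in row 0 and
# column 0 of a 2x2 grid, cell (0, 0) gets "probability" 2 * 2 / 2 = 2 > 1
est = naive_estimate([2, 0], [2, 0])
total = sum(sum(row) for row in est)   # still sums to b = 2, as desired
```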
Why is binary cross entropy (or log loss) used in autoencoders for non-binary data
Your question inspired me to have a look at the loss function from the point of view of mathematical analysis. This is a disclaimer - my background is in physics, not in statistics. Let's rewrite $\rm-loss$ as a function of the NN output $x$ and find its derivative: \begin{align} f(x) &= a \ln x + (1-a) \ln (1-x)\\ f^\prime(x) &= \frac{a-x}{x(1-x)} \end{align} where $a$ is the target value. Now we put $x = a + \delta$ and, assuming that $\delta$ is small, we can neglect terms with $\delta^2$ for clarity: $$ f^\prime(\delta) = \frac{\delta}{a(a-1) + \delta(2a-1)} $$ This equation lets us get some intuition about how the loss behaves. When the target value $a$ is (close to) zero or one, the derivative is a constant $-1$ or $+1$. For $a$ around 0.5 the derivative is linear in $\delta$. In other words, during backpropagation this loss cares more about very bright and very dark pixels, but puts less effort into optimizing middle tones. Regarding asymmetry - when the NN is far from the optimum, it probably does not matter, as you will just converge faster or slower. When the NN is close to the optimum ($\delta$ is small), the asymmetry disappears.
Why is binary cross entropy (or log loss) used in autoencoders for non-binary data
If you think the loss from 0.1 and 0.3 should be equal when the true value is 0.2, there is no reason to use the cross entropy. The loss function should reflect your or your field's common sense. However, if the true value $p$ corresponds to a Bernoulli distribution with mean $p$, then the cross-entropy loss between $p$ and $q$ equals the KL divergence between $\operatorname{Ber}(p)$ and $\operatorname{Ber}(q)$ up to an additive constant (the entropy of $\operatorname{Ber}(p)$, which does not depend on $q$), and the KL divergence is one of the most natural and, in some senses, optimal losses. In general, every strongly convex loss behaves similarly to the $l_2$ loss around the true value, so the sensitivity to the choice of loss will vanish as your prediction becomes accurate, whatever loss you use.
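The relationship between the two quantities can be verified numerically; in this sketch (function names are mine) the cross-entropy between Bernoulli parameters $p$ and $q$ decomposes as the entropy of $\operatorname{Ber}(p)$ plus $\operatorname{KL}(\operatorname{Ber}(p)\,\|\,\operatorname{Ber}(q))$, so minimizing either quantity in $q$ is equivalent:

```python
import math

def cross_entropy(p, q):
    """Cross-entropy between Bernoulli(p) and Bernoulli(q)."""
    return -(p * math.log(q) + (1 - p) * math.log(1 - q))

def kl_bernoulli(p, q):
    """KL divergence KL(Ber(p) || Ber(q))."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

p, q = 0.2, 0.3
entropy = cross_entropy(p, p)   # H(Ber(p)) is the cross-entropy of p with itself
# identity: CE(p, q) = H(Ber(p)) + KL(Ber(p) || Ber(q))
identity_gap = cross_entropy(p, q) - (entropy + kl_bernoulli(p, q))
```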
Why is binary cross entropy (or log loss) used in autoencoders for non-binary data
A change of 0.1 in either direction introduces a symmetric additive effect, but an asymmetric multiplicative effect. This means that while both A and B are the same shift from the true mean, the true value is twice A but 2/3 of B. Inversely, A is half the true value and B is 1.5 times it, i.e., their multiplicative distances are different. One would use a symmetric function when evaluating something expected to be symmetric, and an asymmetric one for asymmetric situations. Note that logs are used because they allow us to handle multiplicative processes in a more additive way.
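A tiny numeric illustration (plain Python; the variable names are mine): A = 0.1 and B = 0.3 are equidistant from the true value 0.2 additively, but not on the log (multiplicative) scale:

```python
import math

true_value, a, b = 0.2, 0.1, 0.3
# additive distances are symmetric: both are 0.1 away
additive = (abs(a - true_value), abs(b - true_value))
# multiplicative distances, made additive by the log, are not:
# |ln(0.5)| ~ 0.693 for A versus |ln(1.5)| ~ 0.405 for B
multiplicative = (abs(math.log(a / true_value)),
                  abs(math.log(b / true_value)))
```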
Why is binary cross entropy (or log loss) used in autoencoders for non-binary data
Under the Bernoulli distribution parameterized by say $p = 0.3$ by the output of the autoencoder, the probability of drawing $x = 0.2$ is zero (and is zero for all $0 < x < 1$). This indeed makes the Bernoulli distribution a bad choice for non-binary data. However, a slightly different view of the input can resurrect the Bernoulli distribution. Let's assume instead that $x = 0.2$ is a sample from some measuring device, and this $x = 0.2$ reading might be best described as itself being a parameter of a probability distribution, such as a normal or Bernoulli distribution. Let's go with the latter and say that $x = 0.2$ represents a Bernoulli process with parameter $p' = x = 0.2$. Thus, there is some underlying binary sensor or event which is $0$ with probability $0.2$ and $1$ with probability $0.8$. The output of our autoencoder is a Bernoulli distribution with say $p = 0.3$. It does make sense to ask: what is the expected result of drawing $0$ or $1$ readings from the real Bernoulli process (with parameter $p'=0.2$) and then calculating its likelihood value according to the autoencoder's Bernoulli distribution (with parameter $p = 0.3$). This expected likelihood is $p'p + (1-p')(1-p) = (0.2)(0.3) + (0.8)(0.7)$. We can also ask what the expected log-likelihood is, and that is $p'\log(p) + (1-p')\log(1-p)$. When we replace the symbol $p'$ with the usual symbol, $y$, we get the usual expression $y\log(p) + (1-y)\log(1-p)$. By interpreting the input differently (as a distribution parameter), the cross-entropy loss does make sense as the negative of the expected log-likelihood, where the expectation is over the "input" distribution, and likelihood is calculated against our "output" distribution.
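This expectation is easy to verify by simulation; the sketch below (the parameter values are just for illustration) draws binary readings from the "input" Bernoulli process and scores them under the "output" distribution, recovering the closed form from the text:

```python
import math
import random

random.seed(0)
p_input, p_output = 0.2, 0.3   # sensor's Bernoulli parameter, autoencoder's output

# Monte Carlo estimate of the expected log-likelihood of Ber(p_input)
# draws, evaluated under Ber(p_output)
n = 200_000
total = 0.0
for _ in range(n):
    d = 1 if random.random() < p_input else 0
    total += math.log(p_output) if d else math.log(1.0 - p_output)
mc_estimate = total / n

# closed form from the text: y*log(p) + (1 - y)*log(1 - p), with y = p_input
exact = p_input * math.log(p_output) + (1 - p_input) * math.log(1 - p_output)
```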
Intuition of the Bhattacharya Coefficient and the Bhattacharya distance?
The Bhattacharyya coefficient is $$ BC(h,g)= \int \sqrt{h(x) g(x)}\; dx $$ in the continuous case. There is a good wikipedia article https://en.wikipedia.org/wiki/Bhattacharyya_distance. How can we understand this (and the related distance)? Let us start with the multivariate normal case, which is instructive and can be found at the link above. When the two multivariate normal distributions have the same covariance matrix, the Bhattacharyya distance is proportional to the squared Mahalanobis distance, while in the case of two different covariance matrices it has a second term, and so generalizes the Mahalanobis distance. This may underlie claims that in some cases the Bhattacharyya distance works better than the Mahalanobis. The Bhattacharyya distance is also closely related to the Hellinger distance https://en.wikipedia.org/wiki/Hellinger_distance. Working with the formula above, we can find a stochastic interpretation. Write $$ \DeclareMathOperator{\E}{\mathbb{E}} BC(h,g) = \int \sqrt{h(x) g(x)}\; dx = \\ \int h(x) \cdot \sqrt{\frac{g(x)}{h(x)}}\; dx = \E_h \sqrt{\frac{g(X)}{h(X)}} $$ so it is the expected value of the square root of the likelihood ratio statistic, calculated under the distribution $h$ (the null distribution of $X$). That invites comparison with Intuition on the Kullback-Leibler (KL) Divergence, which interprets the Kullback-Leibler divergence as the expectation of the log-likelihood ratio statistic (but calculated under the alternative $g$). Such a viewpoint might be interesting in some applications. Still another viewpoint: compare with the general family of f-divergences (see also Rényi entropy), defined as $$ D_f(h,g) = \int h(x) f\left( \frac{g(x)}{h(x)}\right)\; dx $$ If we choose $f(t)= 4\left( \frac{1+t}{2}-\sqrt{t} \right)$ the resulting f-divergence is the Hellinger divergence, from which we can calculate the Bhattacharyya coefficient. This can also be seen as an example of a Rényi divergence, obtained from a Rényi entropy; see the link above.
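The equal-covariance normal case is easy to check numerically. This sketch (function names are mine) integrates $\sqrt{h\,g}$ for two univariate normals with equal variance and compares against the closed form $BC = \exp\{-(\mu_1-\mu_2)^2/(8\sigma^2)\}$, i.e. a Bhattacharyya distance equal to one eighth of the squared Mahalanobis distance:

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def bc_numeric(mu1, mu2, sigma, lo=-20.0, hi=20.0, n=40_000):
    """Midpoint-rule approximation of BC(h, g) = integral of sqrt(h * g)."""
    dx = (hi - lo) / n
    return dx * sum(
        math.sqrt(normal_pdf(lo + (i + 0.5) * dx, mu1, sigma)
                  * normal_pdf(lo + (i + 0.5) * dx, mu2, sigma))
        for i in range(n))

bc = bc_numeric(0.0, 2.0, 1.0)
closed_form = math.exp(-(0.0 - 2.0) ** 2 / 8.0)   # exp(-1/2)
```

With $\mu_1=0$, $\mu_2=2$, $\sigma=1$ the squared Mahalanobis distance is 4, so the Bhattacharyya distance $-\ln BC$ comes out as $4/8 = 1/2$.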
Intuition of the Bhattacharya Coefficient and the Bhattacharya distance?
The Bhattacharyya distance is also defined, for Gaussian clusters, by the following equation, where $\mu_i$ and $\Sigma_i$ refer to the mean and covariance of the $i$th cluster: $$D_B = \frac{1}{8}(\mu_1-\mu_2)^T\Sigma^{-1}(\mu_1-\mu_2) + \frac{1}{2}\ln\left(\frac{\det\Sigma}{\sqrt{\det\Sigma_1\,\det\Sigma_2}}\right), \qquad \Sigma=\frac{\Sigma_1+\Sigma_2}{2}.$$
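A sketch of the Gaussian-cluster formula (the function name is mine, using NumPy):

```python
import numpy as np

def bhattacharyya_gaussian(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between N(mu1, cov1) and N(mu2, cov2)."""
    mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
    cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
    cov = (cov1 + cov2) / 2.0          # pooled covariance
    diff = mu1 - mu2
    term_mean = diff @ np.linalg.solve(cov, diff) / 8.0
    term_cov = 0.5 * np.log(np.linalg.det(cov)
                            / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term_mean + term_cov

# with equal covariances the second term vanishes and the distance is
# one eighth of the squared Mahalanobis distance: 2^2 / 8 = 0.5 here
d = bhattacharyya_gaussian([0.0, 0.0], np.eye(2), [2.0, 0.0], np.eye(2))
```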
What does it mean for a probability distribution to not have a density function?
I would suggest that the OP read Koopmans, L. H. "Teaching singular distributions to undergraduates." The American Statistician 37, no. 4a (1983): 313-316. My general impression is that "CDFs without densities" arise in a way that can be "real-world intuitive" in multivariate settings rather than univariate. As he writes (p. 314):

Finally, whereas singular distributions are difficult (if not impossible) to visualize in one dimension, being distributions concentrated on uncountable zero-dimensional sets (so to speak), in two dimensions they are, or can be constructed to be, concentrated on rather familiar one-dimensional figures such as lines and circles.

He also provides a real-world example (Example 1, p. 314). Elaborating in a naive way on the bivariate CDF with which he starts, he considers $$F(x,y) = \frac{x+y}{2},\;\;\;\; 0\leq x\leq 1,\;\; 0\leq y\leq 1.$$ Don't be misled by the notation that "points" towards "continuous distributions". After all, this is exactly the issue here. 
One can verify that the cross-partial of $F(x,y)$ is zero, $$\frac{\partial^2 F(x,y)}{\partial x \partial y} =0.$$ Strictly speaking, the mathematical operation of computing the derivative appears legal, given the information we have, but by giving us the constant zero-function, it leaves us scratching our heads (Koopmans writes "At this point in an exam, it is not unreasonable to expect the normal undergraduate to panic.") An indication that something peculiar may be going on here comes from deriving the marginal CDF, say, for $X$, $$\Pr (X\leq x) = G(x) = \lim_{y\to 1}F(x,y) = \frac{x}{2} + \frac 12.$$ Since $\Pr(X<0)=0$, this means that $\Pr (X=0) = \frac 12$, implying that $X$ is a random variable of "mixed" type (not mixture), and that its marginal CDF $G(x)$ has one continuous and a non-zero discrete component: $$G(x) = \begin{cases} 1/2\qquad\;\;\;\;\;\;\; x=0 \\ 1/2 + x/2 \;\;\;\; 0 < x \leq 1.\end{cases}$$ Koopmans shows that such a mixed distribution leads to a non-zero "singular" component in the Lebesgue decomposition of the joint distribution into three parts, one continuous, one discrete and one singular.
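A small sketch (in Python; the function definitions are mine) of Koopmans' example makes the discrete component visible: the marginal CDF of $X$ jumps by $1/2$ at zero, which is precisely the point mass $\Pr(X=0)$:

```python
def F(x, y):
    """Koopmans' bivariate CDF: F(x, y) = (x + y)/2 on the unit square,
    extended off the square in the usual CDF fashion (0 below, capped at 1)."""
    if x < 0 or y < 0:
        return 0.0
    return (min(x, 1.0) + min(y, 1.0)) / 2.0

def G(x):
    """Marginal CDF of X: G(x) = lim_{y -> 1} F(x, y) = F(x, 1)."""
    return F(x, 1.0)

# the jump of G at zero reveals the point mass P(X = 0) = 1/2
jump = G(0.0) - G(-1e-12)
```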
What does it mean for a probability distribution to not have a density function?
I would suggest to the OP to read Koopmans, L. H. "Teaching singular distributions to undergraduates." The American Statistician 37, no. 4a (1983): 313-316. My general impression is that "CDF without
What does it mean for a probability distribution to not have a density function? I would suggest to the OP to read Koopmans, L. H. "Teaching singular distributions to undergraduates." The American Statistician 37, no. 4a (1983): 313-316. My general impression is that "CDF without densities" arise in a way that can be "real-world intuitive" in multivariate settings rather than univariate. As he writes (p. 314) Finally, whereas singular distributions are difficult (if not impossible) to visualize in one dimension, being distributions concentrated on uncountable zero- dimensional sets (so to speak), in two dimensions they are, or can be constructed to be, concentrated on rather familiar one-dimensional figures such as lines and circles. He also provides a real-world example indeed (Example 1, p. 314). Elaborating in a naive way on the bivariate CDF with which he starts, he considers $$F(x,y) = \frac{x+y}{2},\;\;\;\; 0\leq x\leq 1,\;\; 0\leq y\leq 1.$$ Don't be misled by the notation that "points" towards "continuous distributions". After all this is exactly the issue here. 
One can verify that the cross-partial of $F(x,y)$ is zero, $$\frac{\partial^2 F(x,y)}{\partial x \partial y} =0.$$ Strictly speaking the mathematical operation of computing the derivative appears legal, given the information we have, but by giving us the constant zero-function, it leaves us scratching our heads (Koopmans writes "At this point in an exam, it is not unreasonable to expect the normal undergraduate to panic.") An indication that something peculiar may be going on here is by deriving the marginal CDF, say, for $X$, $$\Pr (X\leq x) = G(x) = \lim_{y\to 1}F(x,y) = \frac{x}{2} + \frac 12.$$ But this means that $\Pr (X=0) = \frac 12$, implying that $X$ is a random variable of "mixed" type (not mixture), and that its marginal CDF $G(x)$ has one continuous and a non-zero discrete component: $$G(x) = \begin{cases} 1/2\qquad\;\;\;\;\;\;\; x=0 \\ 1/2 + x/2 \;\;\;\; 0 < x \leq 1.\end{cases}$$ Koopmans shows that such a mixed distribution leads to a non-zero "singular" component, in the Lebesgue decomposition of a joint density in three parts, one continuous, one discrete and one singular.
What does it mean for a probability distribution to not have a density function?
As an alternative to Alecos' excellent answer, I will try to give a more intuitive explanation, with the help of two examples. If the distribution of a random quantity $X$ has a density $\varphi$, numeric values for the probability that $X$ takes values in a given set $A$ can be obtained by integrating $\varphi$ over $A$: $$ P(X\in A) = \int_A \varphi(x) \, dx. $$ If $X$ takes real numbers as values, a consequence of this property is that the probability of $X$ taking any specific value equals zero: $$ P(X = a) = \int_{\{a\}} \varphi(x) \, dx = \int_a^a \varphi(x) \, dx = 0. $$ The following example shows a random quantity which is continuous but has $P(X=0) > 0$ and therefore cannot have a density. Example 1. Let $X$ be the amount of rainfall in millimetres on this day next year. Clearly the amount of rainfall is a continuous quantity. But where I live there are only 120 days of rain per year, so it seems reasonable to assume $P(X=0) = (365-120)/365 \approx 2/3 > 0$. If we try to find a density for $X$ we may come up with something like the following picture: The problem is what we should do for $x=0$, where all the days with no rain are represented by a single point on the $x$-axis. To get a density, we would need a "peak" with width 0 and a height such that the integral over the peak equals $P(X=0)$. One might be tempted to set $\varphi(0) = \infty$, but even if we accept such a $\varphi$ as a valid density, it seems impossible to somehow encode the value $P(X=0) = 2/3$ in the function $\varphi$. Because of these complications, the distribution of $X$ cannot be represented by a density. The distribution can instead be described using a probability measure or the cumulative distribution function. The same problem also appears in higher dimensions. If $X$ is a random vector with a density, then we still have $P(X=a) = 0$. Similarly, if $A$ is a set with area/volume zero, then the integral discussed above is zero and thus we also must have $P(X\in A) = 0$. Example 2.
Assume that $X = (X_1, X_2)$ is uniformly distributed on the unit circle in the Euclidean plane: Clearly this is a continuous distribution (there are uncountably many points in the unit circle), but since the circle line has area $0$ it does not contribute to the integral, and any function $\varphi$ with $\varphi(x) = 0$ outside the circle will have $$ \int_{-\infty}^\infty \int_{-\infty}^\infty \varphi(x_1, x_2) \, dx_2 \, dx_1 = 0. $$ Thus there cannot be a density function (integrating to 1) which only takes values on the circle and the distribution of $X$ cannot be described by a density on the Euclidean plane. It can instead be described by a probability measure or by using polar coordinates and using a density just for the angle.
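A small simulation (a sketch in plain Python) makes Example 2 tangible: sampling a uniform angle and mapping it onto the circle, every single draw lands exactly on a set of zero planar area, so no bivariate density can put its mass there.

```python
import math
import random

rng = random.Random(0)
pts = [(math.cos(t), math.sin(t))
       for t in (rng.uniform(0.0, 2.0 * math.pi) for _ in range(10_000))]

# every sample lies exactly on the circle, a set with zero planar area
max_dev = max(abs(x * x + y * y - 1.0) for x, y in pts)
mean_x = sum(x for x, _ in pts) / len(pts)   # by symmetry, E[X_1] = 0
```

The angle, by contrast, does have a (one-dimensional) density, which is exactly the polar-coordinate description suggested at the end of the answer.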
Is random state a parameter to tune?
No, you should not. Hyperparameters are variables which control some high-level aspect of an algorithm's behavior. As opposed to regular parameters, hyperparameters cannot be automatically learned from training data by the algorithm itself. For this reason, an experienced user will select an appropriate value based on their intuition, domain knowledge and the semantic meaning of the hyperparameter (if any). Alternatively, one might use a validation set to perform hyperparameter selection. Here, we try to find an optimal hyperparameter value for the entire population of data by testing different candidate values on a sample of the population (the validation set). Regarding the random state, it is used in many randomized algorithms in sklearn only to determine the seed passed to the pseudo-random number generator. It therefore does not govern any substantive aspect of the algorithm's behavior. As a consequence, random state values which performed well in the validation set do not correspond to those which would perform well in a new, unseen test set. Indeed, depending on the algorithm, you might see completely different results just by changing the ordering of the training samples. I suggest you select a random state value at random and use it for all your experiments. Alternatively, you could average the accuracy of your models over a random set of random states. In any case, do not try to optimize random states: this will almost certainly produce optimistically biased performance measures.
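The optimistic bias from "tuning" the random state is easy to demonstrate with a toy simulation (plain Python, no sklearn; the validation accuracies are simulated rather than coming from a real model). Every candidate state below has the same true accuracy of 0.80, yet the best-scoring state typically overshoots it, while the average across states stays honest:

```python
import random

accs = []
for seed in range(50):                       # 50 candidate "random states"
    rng = random.Random(seed)
    # each state yields a model with the same true accuracy, 0.80,
    # scored on 100 held-out samples
    accs.append(sum(rng.random() < 0.80 for _ in range(100)) / 100)

mean_acc = sum(accs) / len(accs)   # honest estimate of performance
best_acc = max(accs)               # the "tuned" random state: biased upward
```

Picking `best_acc` is exactly the selection effect the answer warns against: the maximum of many noisy estimates of the same quantity is an upward-biased estimator of it.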
Is random state a parameter to tune?
What does random_state affect in your setup? The training/validation split, or something else? If it is the split, you could look for differences between the splits produced under the two random states, and this might give you some intuition about your model. I mean, you can explore why the model works when trained on some data and used to predict some validation data, but does not work when trained on some other data and validated on other data. Are they distributed differently? Such analysis may give you some intuition. And by the way, I encountered this problem too :) and just don't understand it. Maybe we can work together on investigating it. Cheers.
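As a minimal sketch of that kind of investigation (plain Python with a toy label vector; `random.Random(seed).shuffle` stands in for whatever splitter you actually use), you could compare the validation class balance produced by two random states:

```python
import random

labels = [0] * 70 + [1] * 30   # imbalanced toy labels

def valid_minority_count(seed, n_train=75):
    """Shuffle indices with the given random state and count how many
    minority-class labels end up in the validation part."""
    idx = list(range(len(labels)))
    random.Random(seed).shuffle(idx)
    return sum(labels[i] for i in idx[n_train:])

counts = {seed: valid_minority_count(seed) for seed in (1, 2)}
print(counts)
```

If the two states leave the validation set with very different class balances, that alone can explain the difference in scores.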
Best way to initialize LSTM state
Normally, you would set the initial states to zero, but the network is going to learn to adapt to that initial state. The following article suggests learning the initial hidden states or using random noise. Basically, if your data includes many short sequences, then training the initial state can accelerate learning. Alternatively, if your data includes a small number of long sequences then there may not be enough data to effectively train the initial state. In that case using a noisy initial state can accelerate learning. An idea they don't mention would be to learn the mean and std of the noise generator. The article notes that if you choose to learn the initial state, then adding noise is of little benefit.
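A framework-agnostic sketch of the three options discussed (zeros, a learned vector, random noise); the function name and signature here are my own, and in a real model the "learned" vector, and possibly the noise mean and std, would be trainable parameters updated by backpropagation rather than fixed inputs:

```python
import random

def init_state(hidden_size, mode="zeros", learned=None, noise_std=0.1, rng=None):
    """Return an (h0, c0) pair for a single LSTM layer."""
    rng = rng or random.Random(0)
    if mode == "zeros":                      # the common default
        return [0.0] * hidden_size, [0.0] * hidden_size
    if mode == "learned":                    # a trained parameter vector, passed in
        return list(learned), list(learned)
    if mode == "noise":                      # zero-mean Gaussian noise
        return ([rng.gauss(0.0, noise_std) for _ in range(hidden_size)],
                [rng.gauss(0.0, noise_std) for _ in range(hidden_size)])
    raise ValueError(mode)
```

Per the advice above, "learned" suits many short sequences, "noise" suits a few long ones, and combining the two brings little extra benefit.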
Best way to initialize LSTM state
You can use initialized parameters that are learned via transfer learning, but keep in mind that that training also began somewhere, from a non-learned initial state. Basically, you have to start from some point, usually a bunch of zeros, and then refine by training. So, if you are not using any transfer learning mechanism, you also have to start from a manually set initial state; I am sure there is literature available on choosing initial states manually. This is the simplest explanation I could give. Thank you.
How to identify outliers and do model diagnostics for an lme4 model?
(This started out as a comment but seemed to be getting too long.) This question may be getting less attention than it would otherwise deserve because it is very broad (among other things, you've asked 5 separate questions here). A few answers: Conditional and marginal residuals just mean different things, I'm not sure there is a "right answer" here -- you would just be asking about different kinds of outlierishness/leverage. In general it would seem that conditional residuals (i.e. re.form=NULL, or the default, in lme4) would make more sense. Note that many of the influence measures you get (e.g. by hatvalues.merMod(), see below) will be conditional on the estimated variance-covariance matrices of the random effects; this is different from the question of whether you're conditioning on conditional modes/BLUPs or not. If you don't want to condition on these estimates, you'll have to (1) assume multivariate Normality of the estimates of the variance-covariance parameters (ugh) or (2) do some kind of parametric bootstrapping (double-ugh). Many of the standard influence measures are more difficult for (G)LMMs if they involve inverting large matrices -- that's not always practical. The influence.ME package does a lot of its work by a semi-brute force method: the influence() function iteratively modifies the mixed effects model to neutralize the effect a grouped set of data has on the parameters, and which returns returns [sic] the fixed parameters of these iteratively modified models. Note also the difference between influential observations and influential groups, either of which might be of interest. The lme4 package does provide a hat matrix (or its diagonal) via ?hatvalues.merMod, so you could use these to compute some standard influence measures. As far as marginal Q-Q plots for the BLUPs/conditional modes go: if the BLUPs/conditional modes are multivariate Normal, then the univariate distributions will be too. 
The contrapositive holds (if the univariate distributions are bad, then the multivariate distribution is bad), but not necessarily the converse (if the univariate distributions look good, the multivariate distribution might still be bad), but IMO you'd have to work pretty hard to construct such an example. There are formal tests for the misspecification of random effects, e.g. Abad et al. Biostatistics 2010 (see full citation below). Don't know offhand where it has been implemented. Finally, it seems that a lot of what you want has already been discussed in the conference paper that you linked (ref below). Why not just draw the plots they suggest and pick a cutoff (e.g. $\pm 1.96 \sigma$) to identify outliers from them? Abad, Ariel Alonso, Saskia Litière, and Geert Molenberghs. “Testing for Misspecification in Generalized Linear Mixed Models.” Biostatistics 11, no. 4 (October 1, 2010): 771–86. doi:10.1093/biostatistics/kxq019. Julio M. Singer, Juvencio S. Nobre, and Francisco M. M. Rocha. “Diagnostic and Treatment for Linear Mixed Models,” 5486. Hong Kong, 2013. http://2013.isiproceedings.org/Files/CPS203-P28-S.pdf.
How do you interpret the condition number of a correlation matrix
The condition number of a correlation matrix is not of great interest in its own right. It comes into its own when that matrix gives the coefficients of a set of linear equations, as happens for multiple linear regression using standardized regressors. Belsley, Kuh, and Welsch--who were among the first to point out and systematically exploit the relevance of the condition number in this context--have a nice explanation, which I will broadly quote. They begin by giving a definition of the spectral norm, denoted $||A||$ and defined as $$||A|| \equiv {\sup}_{||z||=1}||Az||.$$ Geometrically, it is the maximum amount by which $A$ will rescale the unit sphere: its maximum "stretch," if you will. They point out the obvious relations that $||A||$ therefore is the largest singular value of $A$ and $||A^{-1}||$ is the reciprocal of the smallest singular value of $A$ (when $A$ is invertible). (I like to think of this as the maximum "squeezing" of $A$.) They then assert that $||A||$ actually is a norm, and add the (easily proven) facts $||Az|| \le ||A|| \cdot ||z|| \tag{4}$ $||AB|| \le ||A||\cdot ||B|| \tag{5} $ for all commensurate $A$ and $B$. These remarks are then applied: We shall now see that the spectral norm is directly relevant to an analysis of the conditioning of a linear system of equations $Az = c, A $ $n\times n$ and nonsingular with solution $z=A^{-1}c$. We can ask how much the solution vector $z$ would change $(\delta z)$ if there were small changes or perturbations in the elements of $c$ or $A$, denoted $\delta c$ and $\delta A$. 
In the event that $A$ is fixed but $c$ changes by $\delta c$, we have $\delta z = A^{-1}\delta c$, or $$||\delta z|| \le ||A ^{-1} || \cdot || \delta c ||.$$ Further, employing property $(4)$ above to the equation system, we have $$||c|| \le ||A|| \cdot ||z||;$$ and from multiplying these last two expressions we obtain $$\frac{||\delta z||}{||z||} \le ||A|| \cdot ||A^{-1}|| \cdot \frac{||\delta c || }{||c||}.$$ That is, the magnitude $||A||\cdot ||A^{-1}||$ provides a bound for the relative change in the length of the solution vector $z$ that can result from a given relative change in the length of $c$. A similar result holds for perturbations in the elements of the matrix $A$. Here it can be shown that $$\frac{||\delta z||}{||z + \delta z||} \le ||A|| \cdot ||A^{-1}|| \cdot \frac{||\delta A||}{||A||}.$$ (The key step in this demonstration, which is left as an exercise, is to observe $\delta z = -A^{-1}(\delta A)(z + \delta z)$ and apply norms to both sides.) Because of its usefulness in this context, the magnitude $||A||\cdot ||A^{-1}||$ is defined to be the condition number of the nonsingular matrix $A$ ... . (Based on the earlier characterizations, we may conceive of the condition number as being a kind of "aspect ratio" of $A$: the most it can stretch any vector times the most it can squeeze any vector. It would be directly related to the maximum eccentricity attained by any great circle on the unit sphere after being operated on by $A$.) The condition number bounds how much the solution $z$ of a system of equations $Az=c$ can change, on a relative basis, when its components $A$ and $c$ are changed. However, these inequalities are not tight: for any given $A$, the extent to which the bounds are reasonably accurate representations of actual changes depends on $A$ and the changes $\delta A$ and $\delta c$. Condition numbers are assertions about worst cases. 
Thus, a matrix with condition number $9$ can be considered to be $70/9$ times better than one with condition number $70$, but that does not necessarily mean that it will be precisely that much better (at not propagating errors) than the other. Reference Belsley, Kuh, & Welsch, Regression Diagnostics. Wiley, 1980: Section 3.2.
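A small numerical check of the bound $\|\delta z\|/\|z\| \le \|A\|\cdot\|A^{-1}\|\cdot\|\delta c\|/\|c\|$ (a NumPy sketch on a toy $2\times 2$ correlation matrix); here $\delta c$ is deliberately chosen along the most "squeezed" singular direction, so the worst-case bound is essentially attained:

```python
import numpy as np

A = np.array([[1.00, 0.99],
              [0.99, 1.00]])       # nearly collinear 2x2 correlation matrix
kappa = np.linalg.cond(A)          # sigma_max / sigma_min = 1.99 / 0.01 = 199

c = np.array([1.0, 1.0])
z = np.linalg.solve(A, c)

dc = np.array([1e-6, -1e-6])       # perturbation along the squeezed direction
dz = np.linalg.solve(A, c + dc) - z

lhs = np.linalg.norm(dz) / np.linalg.norm(z)
rhs = kappa * np.linalg.norm(dc) / np.linalg.norm(c)
# lhs <= rhs always; for this adversarial dc the bound is attained
```

For a generic $\delta c$ the left-hand side is typically far below the bound, illustrating the point that condition numbers are worst-case statements.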
How do you interpret the condition number of a correlation matrix
A super high condition number would mean that some variables are highly correlated. 70 is not that big of a condition number to me. A high or low condition number doesn't mean that one correlation matrix is "better" than another. All it means is that the variables are more or less correlated. Whether that's good or not depends on the application. UPDATE: I'm assuming you don't have a very high-dimensional case, because in that case @whuber is right, and you may end up with low correlations but a high condition number. Intuitively, it's easy to see why. Consider a matrix where all elements are equal to $\rho$, except the diagonal entries, which are ones. In this case, if you take any two columns, they'll look very similar to each other. In fact, they'll differ in exactly two rows, where one of them is 1 and the other is $\rho$. If you have a very high-dimensional matrix, then from a linear algebra point of view these are almost the same columns, i.e. the matrix will look almost rank-deficient.
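This intuition can be checked numerically (a NumPy sketch of the equicorrelation matrix described above): with a modest $\rho = 0.3$, the condition number grows roughly linearly with the dimension $p$, even though no pairwise correlation is large.

```python
import numpy as np

def equicorr(p, rho):
    """p x p correlation matrix with every off-diagonal entry equal to rho."""
    return (1.0 - rho) * np.eye(p) + rho * np.ones((p, p))

# eigenvalues: 1 + (p-1)*rho once, and 1 - rho repeated p-1 times,
# so cond = (1 + (p-1)*rho) / (1 - rho), which grows linearly in p
conds = {p: np.linalg.cond(equicorr(p, 0.3)) for p in (5, 50, 500)}
```

So at $p = 500$ the condition number is already in the hundreds, consistent with the "almost the same columns" picture.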
Bias in jury selection?
Here's how I might approach answering your question using standard statistical tools. Below are the results of a probit analysis on the probability of being rejected given the juror's group membership. First, here's what the data looks like. I have 30 observations of group and a binary rejected indicator: . tab group rejected | rejected group | 0 1 | Total -----------+----------------------+---------- A | 9 1 | 10 B | 6 4 | 10 C | 2 4 | 6 D | 3 1 | 4 -----------+----------------------+---------- Total | 20 10 | 30 Here are the individual marginal effects as well as the joint test: . qui probit rejected ib2.group . margins rb2.group Contrasts of adjusted predictions Model VCE : OIM Expression : Pr(rejected), predict() ------------------------------------------------ | df chi2 P>chi2 -------------+---------------------------------- group | (A vs B) | 1 2.73 0.0986 (C vs B) | 1 1.17 0.2804 (D vs B) | 1 0.32 0.5731 Joint | 3 8.12 0.0436 ------------------------------------------------ -------------------------------------------------------------- | Delta-method | Contrast Std. Err. [95% Conf. Interval] -------------+------------------------------------------------ group | (A vs B) | -.3 .181659 -.6560451 .0560451 (C vs B) | .2666667 .2470567 -.2175557 .750889 (D vs B) | -.15 .2662236 -.6717886 .3717886 -------------------------------------------------------------- Here we are testing the individual hypotheses that the differences in the probability of being rejected for groups A, C, and D compared to group B are zero. If everyone was as likely to be rejected as group B, these would be zero. The last piece of output tells us that group A and D jurors are less likely to be rejected, while group C jurors are more likely to be turned away. These differences are not statistically significant individually, though the signs agree with your bias conjecture. However, we can reject the joint hypothesis that the three differences are all zero at $p=0.0436$. 
Addendum: If I combine groups A and D into one since they share the victims' races, the probit results get stronger and have a nice symmetry: Contrasts of adjusted predictions Model VCE : OIM Expression : Pr(rejected), predict() ------------------------------------------------ | df chi2 P>chi2 -------------+---------------------------------- group2 | (A+D vs B) | 1 2.02 0.1553 (C vs B) | 1 1.17 0.2804 Joint | 2 6.79 0.0336 ------------------------------------------------ -------------------------------------------------------------- | Delta-method | Contrast Std. Err. [95% Conf. Interval] -------------+------------------------------------------------ group2 | (A+D vs B) | -.2571429 .1809595 -.611817 .0975313 (C vs B) | .2666667 .2470568 -.2175557 .750889 -------------------------------------------------------------- This also allows Fisher's exact to give congruent results (though still not at 5%): RECODE of | rejected group | 0 1 | Total -----------+----------------------+---------- A+D | 12 2 | 14 B | 6 4 | 10 C | 2 4 | 6 -----------+----------------------+---------- Total | 20 10 | 30 Pearson chi2(2) = 5.4857 Pr = 0.064 Fisher's exact = 0.060
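Since the probit here is saturated (one parameter per group), the fitted probabilities are just the raw rejection proportions, so the contrasts in the margins output can be reproduced by hand. A quick check in Python (this reproduces the point estimates only, not the delta-method standard errors):

```python
# Rejected/total counts per group, read off the tabulation above
counts = {"A": (1, 10), "B": (4, 10), "C": (4, 6), "D": (1, 4)}
prob = {g: r / n for g, (r, n) in counts.items()}

# Contrasts versus group B, matching the margins output:
# A vs B = -0.30, C vs B about 0.2667, D vs B = -0.15
contrasts = {g: prob[g] - prob["B"] for g in ("A", "C", "D")}
print(contrasts)
```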
20,358
Bias in jury selection?
I would think that introducing an ad hoc statistical method is going to be a no-go with the court. It is better to use methods that are "standard practice". Otherwise, you would probably have to prove your qualifications to develop new methods. To be more explicit, I do not think that your method would meet the Daubert standard. I also very much doubt that your method has any academic reference in and of itself. You would probably have to go the route of hiring a statistical expert witness to introduce it. It would be easily countered, I would think. The basic question here is likely: "Was jury challenge independent of racial grouping?" These are small numbers to which to apply asymptotically based statistical methods. However, the "standard" for testing association in this setting is the $\chi^2$ test: > M <- as.table(cbind(c(9, 6, 2, 3), c(1, 4, 4, 1))) > dimnames(M) <- list(Group=c("A", "B", "C", "D"), Challenged=c("No", "Yes")) > M Challenged Group No Yes A 9 1 B 6 4 C 2 4 D 3 1 > chisq.test(M) Pearson's Chi-squared test data: M X-squared = 5.775, df = 3, p-value = 0.1231 Warning message: In chisq.test(M) : Chi-squared approximation may be incorrect Using the Fisher exact test gives similar results: > fisher.test(M) Fisher's Exact Test for Count Data data: M p-value = 0.1167 alternative hypothesis: two.sided The note about the hypothesis being two-sided applies to the case of a $2\times2$ table. My interpretation is that there is not much evidence to argue racial bias.
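For what it's worth, the same table gives identical numbers in Python (assuming scipy is installed; scipy skips the Yates correction here because the table has more than 1 degree of freedom):

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows are groups A-D; columns are Challenged = No / Yes
M = np.array([[9, 1],
              [6, 4],
              [2, 4],
              [3, 1]])

chi2, p, dof, expected = chi2_contingency(M)
print(round(chi2, 3), dof, round(p, 4))  # matches R: X-squared = 5.775, df = 3, p ~ 0.1231
```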
20,359
Bias in jury selection?
I asked a similar question previously (for reference here is the particular case I discuss). The defense needs to simply show a prima facie case of discrimination in Batson challenges (assuming US criminal law) - so hypothesis tests are probably a larger burden than is needed. So for: $n = 30$ people on the venire panel $p = 6$ people of racial class C on the panel $k = 4$ jurors of racial group C eliminated on peremptory challenges $d = 10$ peremptory challenges Whuber's previous answer gives the probability of this particular outcome as dictated by the hypergeometric distribution: $$\frac{{p \choose k} {n-p \choose d-k} }{{n \choose d}}$$ Which Wolfram-Alpha says equals in this case: $$\frac{{6 \choose 4} {30-6 \choose 10-4} }{{30 \choose 10}} = \frac{76}{1131} \approx 0.07$$ Unfortunately I do not have a reference besides the links I have provided - I imagine you can dig up a suitable reference for the hypergeometric distribution from the Wikipedia page. This ignores the question about whether racial groups A and D are "under-challenged". I'm skeptical you could make a legal argument for this -- it would be a weird twist on the equal protection clause ("this particular group is too protected!") that I do not think would fly. (I'm not a lawyer though - so take with a grain of salt.) If you really want a hypothesis test I am not sure how to go about it. You can generate the $30 \choose 10$ possible selections, give each a probability under the null of racial groups being equally chosen per their proportions in the venire, and then calculate the exact distribution of your test statistic under the null. I'm not quite sure what test statistic is satisfactory though; $\chi^2$ doesn't really answer the question of interest. (Is it alright to make up your own test statistic -- I do not know?) I've updated some of my thoughts in a blog post.
My post is specific to Batson Challenges, so it is unclear if you seek another situation (your updates for 1 and 2 don't make sense in the context of Batson Challenges.) I was able to find one related article (available in full at the link): Gastwirth, J. L. (2005). Case comment: statistical tests for the analysis of data on peremptory challenges: clarifying the standard of proof needed to establish a prima facie case of discrimination in Johnson v. California. Law, Probability and Risk, 4(3), 179-185. That gave the same suggestion for using the hypergeometric distribution. In my blog post I show how if you collapse the categories into two groups it is equivalent to Fisher's Exact test. Gastwirth suggests as I did in my comment that you could consider $k$ as the test statistic, and so add the probability of $k = 5$ and $k = 6$ to that above if you prefer. Gastwirth also gives an example for calculating a test statistic based on changing numbers of $n$ in the jury pool. In my blog post I just conduct sensitivity analysis for varying levels of $n$ and $d$ (for a different case) to provide ranges of possible percentages. If someone becomes aware of case law that actually uses this (or anything besides fractions) I would be interested.
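The arithmetic is easy to verify with Python's standard library (no external packages needed):

```python
from fractions import Fraction
from math import comb

n, p, d, k = 30, 6, 10, 4  # venire size, group-C members, strikes, group-C strikes

# Hypergeometric point probability P(K = 4)
prob = Fraction(comb(p, k) * comb(n - p, d - k), comb(n, d))
print(prob, float(prob))  # 76/1131, about 0.067

# Gastwirth-style tail probability P(K >= 4), adding in k = 5 and k = 6
tail = sum(Fraction(comb(p, j) * comb(n - p, d - j), comb(n, d))
           for j in range(k, p + 1))
print(float(tail))  # about 0.076
```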
20,360
Bias in jury selection?
Let's not forget the multiple testing issue. Imagine 100 defense lawyers each looking for grounds to appeal. All of the juror rejections had been performed by flipping coins or rolling dice for each prospective juror. Therefore, none of the rejections were racially biased. Each of the 100 lawyers now does whatever statistical test all of you guys agree on. Roughly five out of that 100 will reject the null hypothesis of "unbiased" and have grounds for appeal.
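The point is easy to simulate (a numpy sketch; the number of trials is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, n_trials = 0.05, 100_000

# Under the null hypothesis, p-values are uniform on [0, 1], so about
# alpha of completely unbiased jury selections still "look" significant.
p_values = rng.uniform(0.0, 1.0, n_trials)
false_rate = np.mean(p_values < alpha)
print(false_rate)  # close to 0.05
```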
20,361
How do you report a Mann–Whitney test?
Wikipedia appears to have your answers. Here's an excerpt from the example statement of results: In reporting the results of a Mann–Whitney test, it is important to state: A measure of the central tendencies of the two groups (means or medians; since the Mann–Whitney is an ordinal test, medians are usually recommended) The value of U The sample sizes The significance level. In practice some of this information may already have been supplied and common sense should be used in deciding whether to repeat it. A typical report might run, "Median latencies in groups E and C were 153 and 247 ms; the distributions in the two groups differed significantly (Mann–Whitney U = 10.5, n1 = n2 = 8, P < 0.05 two-tailed)." The Wilcoxon signed-rank test is appropriate for paired samples, whereas the Mann–Whitney test assumes independent samples. However, according to Field (2000), the Wilcoxon $W$ in your SPSS output is "a different version of this statistic, which can be converted into a Z score and can, therefore, be compared against critical values of the normal distribution." That explains your $z$ score too then! FYI, Wikipedia adds that, for large samples, $U$ is approximately normally distributed. Given all these values, one can also calculate the effect size $η^2$, which in the case of Wikipedia's example is 0.319 (a calculator is implemented in section 11 here). However, this transformation of the test statistic depends on the approximate normality of $U$, so it might be inaccurate with ns = 8 (Fritz et al., 2012). P.S. The Kruskal–Wallis test's results should not be interpreted as revealing differences between means except under special circumstances. See @Glen_b's answer to another question, "Difference Between ANOVA and Kruskal-Wallis test" for details. References Field, A. (2000). 3.1. Mann-Whitney test. Research Methods 1: SPSS for Windows part 3: Nonparametric tests. Retrieved from http://www.statisticshell.com/docs/nonparametric.pdf. Fritz, C. O., Morris, P. 
E., & Richler, J. J. (2012). Effect size estimates: current use, calculations, and interpretation. Journal of Experimental Psychology: General, 141(1), 2–18. PDF available via ResearchGate.
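If you want to produce the numbers to report programmatically, here is a minimal Python sketch with made-up latency data (assuming scipy is available; with small samples and no ties, scipy can use the exact distribution of $U$):

```python
import numpy as np
from scipy.stats import mannwhitneyu

group_e = [153, 120, 180, 160, 140, 150, 170, 155]  # hypothetical latencies (ms)
group_c = [247, 230, 260, 250, 300, 240, 245, 270]

u, p = mannwhitneyu(group_e, group_c, alternative="two-sided")
print(f"Median latencies: {np.median(group_e)} vs {np.median(group_c)} ms; "
      f"Mann-Whitney U = {u}, n1 = n2 = 8, p = {p:.4f}")
```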
20,362
Which is more accurate glm or glmnet?
Glmnet is for elastic net regression. This penalises the size of the estimated coefficients (via a mix of L1 and L2 penalties). It tries to explain as much variance in the data through the model as possible while keeping the model coefficients small. I found these slides helpful to understand it. Glm doesn't use a penalty term. The effect, as I understand it, is that with elastic net you may be accepting some bias in return for a reduction in the variance of the estimator. So which is best must depend on how you define 'best' in terms of bias and variance. (E.g. I know glmnet has advantages when you have many features compared to observations)
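To see the shrinkage idea in the simplest possible setting, here is the L2 (ridge) part of the penalty in closed form with numpy on simulated data. This is only an illustration, not what glmnet actually computes (glmnet also has the L1 term, standardization, and a coordinate-descent solver):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 50, 5
X = rng.normal(size=(n, k))
beta = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = X @ beta + rng.normal(size=n)

# Unpenalized least squares (the glm analogue): minimize ||y - Xb||^2
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Ridge: minimize ||y - Xb||^2 + lam * ||b||^2, which has a closed-form solution
lam = 10.0
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(k), X.T @ y)

# The penalized coefficients are shrunk toward zero: some bias, less variance.
print(np.linalg.norm(b_ols), np.linalg.norm(b_ridge))
```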
20,363
Is the Mundlak fixed effects procedure applicable for logistic regression with dummies?
First differencing or within transformations like demeaning are not available in models like logit because in the case of nonlinear models such tricks do not remove the unobserved fixed effects. Even if you had a smaller data set in which it was feasible to include N-1 individual dummies to estimate the fixed effects directly, this would lead to biased estimates unless the time dimension of your data is large. Elimination of the fixed effects in panel logit therefore follows neither differencing nor demeaning and is only possible due to the logit functional form. If you are interested in the details you could have a look at these notes by Söderbom on PDF page 30 (explanation for why demeaning/first differencing in logit/probit doesn't help) and page 42 (introduction of the panel logit estimator). Another problem is that xtlogit and panel logit models in general do not estimate the fixed effects directly, which are needed to calculate marginal effects. Without those it will be very awkward to interpret your coefficients, which might be disappointing after having run the model for hours and hours. With such a large data set and the previously mentioned conceptual difficulties of FE panel logit I would stick with the linear probability model. I hope this answer does not disappoint you, but there are many good reasons for giving such advice: the LPM is much faster, the coefficients can be interpreted straight away (this holds in particular if you have interaction effects in your model, because the interpretation of their coefficients in non-linear models changes!), the fixed effects are easily controlled for, and you can adjust the standard errors for autocorrelation and clusters without estimation times increasing beyond reason. I hope this helps.
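To see why the linear case is so convenient, here is a numpy sketch on a simulated toy panel of the fact that, in a linear model, the within (demeaning) transformation recovers exactly the same slope as estimating all the individual dummies. That equivalence is the trick that has no analogue in logit:

```python
import numpy as np

rng = np.random.default_rng(2)
n_id, t = 20, 5
ids = np.repeat(np.arange(n_id), t)
alpha = np.repeat(rng.normal(size=n_id), t)        # individual fixed effects
x = rng.normal(size=n_id * t)
y = alpha + 0.7 * x + rng.normal(scale=0.1, size=n_id * t)

# (1) Dummy-variable estimator: regress y on x plus one dummy per individual
D = (ids[:, None] == np.arange(n_id)).astype(float)
b_dummy = np.linalg.lstsq(np.column_stack([x, D]), y, rcond=None)[0][0]

# (2) Within estimator: demean x and y by individual, then regress
def demean(v):
    group_means = np.bincount(ids, weights=v) / t
    return v - group_means[ids]

b_within = np.linalg.lstsq(demean(x)[:, None], demean(y), rcond=None)[0][0]

print(b_dummy, b_within)  # identical up to floating point
```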
20,364
Is the Mundlak fixed effects procedure applicable for logistic regression with dummies?
I believe conditional logit ("clogit" in Stata) is an alternative fixed-effects logit panel estimator. http://www3.nd.edu/~rwilliam/stats3/Panel03-FixedEffects.pdf
20,365
Is the Mundlak fixed effects procedure applicable for logistic regression with dummies?
Allison discusses this problem in Allison (2009), "Fixed Effects Regression Models", p. 32f. Allison argues that it is not possible to estimate an unconditional model with maximum likelihood, because the model becomes biased due to the incidental parameters problem. Instead, he recommends using a conditional logit model (Chamberlain, 1980). This is accomplished by conditioning the likelihood function on the number of events observed for each individual.
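A sketch of the conditioning trick for the simplest case, T = 2 periods with exactly one event: the individual fixed effect cancels out of the conditional probability, which is why the conditional likelihood is free of the incidental parameters (illustrative numbers only):

```python
import math

def p_event_in_period2(x1, x2, beta):
    """P(y2 = 1 | y1 + y2 = 1) under a logit with individual effect a:

    exp(a + x2*beta) / (exp(a + x1*beta) + exp(a + x2*beta))
      = 1 / (1 + exp(-(x2 - x1) * beta))

    The individual effect a cancels from the ratio.
    """
    return 1.0 / (1.0 + math.exp(-(x2 - x1) * beta))

print(p_event_in_period2(1.0, 3.0, 0.0))  # beta = 0: covariate uninformative, 0.5
print(p_event_in_period2(1.0, 3.0, 0.8))  # covariate rose, beta > 0: above 0.5
```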
20,366
Variable importance randomForest negative values
Variable importance in random forests is calculated as follows: Initially, the MSE of the model is calculated with the original variables. Then, the values of a single column are permuted and the MSE is calculated again. For example, if a column (Col1) takes the values 1,2,3,4, a random permutation of the values might result in 4,3,1,2. This results in an MSE1. An increase in the MSE, i.e., MSE1 - MSE, signifies the importance of the variable. We expect the difference to be positive, but in the case of a negative number, it denotes that the random permutation worked better. It can be inferred that the variable does not have a role in the prediction, i.e., it is not important. Hope this helps! Please refer to the following link for an elaborated explanation: https://stackoverflow.com/questions/27918320/what-does-negative-incmse-in-randomforest-package-mean
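The permutation logic described above can be sketched directly in numpy. This is a didactic stand-in, not the randomForest package: the "model" here is an OLS fit on toy data, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on column 0 and not at all on column 1.
X = rng.normal(size=(500, 2))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=500)

# Stand-in "fitted model": ordinary least squares instead of a forest.
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
predict = lambda X: X @ coef

mse = lambda y_true, y_pred: np.mean((y_true - y_pred) ** 2)
baseline = mse(y, predict(X))  # MSE with the original variables

importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # break the link between column j and y
    importances.append(mse(y, predict(Xp)) - baseline)  # MSE1 - MSE
```

Column 0 shows a large positive increase, while column 1 hovers near zero and can come out slightly negative purely by chance, which is exactly the situation the question asks about.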
20,367
Variable importance randomForest negative values
This may be just a random fluctuation (for instance if you have a small ntree). If not, it may show that you have a serious amount of paradoxes in your data, i.e. pairs of objects with almost identical predictors and very different outcomes. In this case, I would check twice whether the model actually makes any sense and start thinking about how I could get more attributes to resolve them.
20,368
Methods to predict multiple dependent variables
You need to check for correlations amongst your dependent variables (edit: @BilalBarakat's answer is right, the residuals are what's important here). If all or some are independent, you can run separate analyses on each. If they are not independent, or whichever ones aren't, you could run a multivariate analysis. This will maximize your power while holding the type I error rate at your alpha level. You should know, however, that this will not make your analysis more accurate/robust. This is a different issue than simply whether or not your model predicts the data better than the null model. In fact, with so much going on, unless you have a lot of data, it is likely that you could get very different parameter estimates with a new sample. It is even possible that the sign on a beta will flip. Much depends on the size of p and q and the nature of their correlation matrices, but the volume of data required for robustness can be massive. Remember that, although many people use 'significant' and 'reliable' as synonyms, they actually aren't. It is one thing to know that a variable is not independent of another variable, but another thing entirely to specify the nature of that relationship in your sample as it is in the population. It can be easy to run a study twice and find a predictor significant both times, but with the parameter estimate sufficiently different to be theoretically meaningful. Furthermore, unless you are doing structural equation modeling, you can't very well incorporate your theoretical knowledge regarding the variables. That is, techniques like MANOVA tend to be rawly empirical. Another approach is to utilize what you know about the issue at hand. For example, if you have several different measures of the same construct (you could check this with a factor analysis), you can combine them. This can be done by turning them into z scores and averaging them. Knowledge of other sources of correlation (e.g., common cause or mediation) could also be utilized. 
Some people are uncomfortable with putting so much weight on domain knowledge, and I acknowledge that this is a philosophical issue, but I think it can be a mistake to require the analyses to do all of the work and assume that this is the best answer. As for a reference, any good multivariate textbook should discuss these issues. Tabachnick and Fidell is well regarded as a simple and applied treatment of this topic.
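The z-score averaging mentioned above is a one-liner in practice. A minimal sketch with simulated data (three made-up measures of one underlying construct; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

# Three noisy measures of the same underlying construct.
construct = rng.normal(size=300)
Y = np.column_stack(
    [construct + rng.normal(scale=0.5, size=300) for _ in range(3)]
)

# Standardize each measure, then average into a single composite score.
Z = (Y - Y.mean(axis=0)) / Y.std(axis=0)
composite = Z.mean(axis=1)
```

Averaging the standardized measures cancels part of the measurement noise, so the composite tracks the underlying construct more closely than any single measure does.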
20,369
Methods to predict multiple dependent variables
To contradict @gung's first paragraph (sorry!), you should actually check for correlations among the residuals in your multiple models, rather than for correlations among the dependent variables as such. The fact that the latter are correlated by itself tells you nothing about whether your estimates will improve by modelling them jointly.
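A minimal numpy sketch of that check, with simulated data (illustrative names): the two responses are correlated through a shared predictor and through shared noise, but only the shared noise survives into the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
x = rng.normal(size=n)
shared_noise = rng.normal(size=n)

# Both responses load on x (inducing correlation between y1 and y2) and on
# a shared noise term (inducing correlation between the residuals).
y1 = 2.0 * x + shared_noise + rng.normal(size=n)
y2 = -1.0 * x + shared_noise + rng.normal(size=n)

def ols_residuals(x, y):
    X = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

resid_corr = np.corrcoef(ols_residuals(x, y1), ols_residuals(x, y2))[0, 1]
# resid_corr is around 0.5 here, so joint modelling would pay off;
# were it near zero, separate models would lose nothing.
```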
20,370
Methods to predict multiple dependent variables
A reasonable possibility is to make a Principal Component Analysis (PCA) of the $q$ dependent variables $Y_i$ and construct another $q$ independent variables as linear combinations: $$\tilde{Y}_i = \lambda_{i,1}Y_1+\dots \lambda_{i,q}Y_q$$ Then, try to correlate each $\tilde{Y}_i$ with the $p$ $X_i$. Thus, you can select the significant coefficients, eliminating non-significant effects. Finally you have: $$Y_i = \mu_{i,1}\tilde{Y}_1+\dots + \mu_{i,q}\tilde{Y}_q $$ where: $$\begin{bmatrix} \mu_{1,1} & \dots & \mu_{1,q}\\ \dots & \dots & \dots \\ \mu_{q,1} & \dots & \mu_{q,q}\end{bmatrix} = \begin{bmatrix} \lambda_{1,1} & \dots & \lambda_{1,q}\\ \dots & \dots & \dots \\ \lambda_{q,1} & \dots & \lambda_{q,q}\end{bmatrix}^{-1}$$ Depending on the nature of the data, you should use Independent Component Analysis (ICA) instead of PCA.
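A sketch of the decomposition step in numpy, on simulated responses. The back-transformation exploits the fact that the eigenvector matrix is orthogonal, so the matrix inverse in the formula above is just a transpose:

```python
import numpy as np

rng = np.random.default_rng(2)
n, q = 200, 3
Y = rng.normal(size=(n, q)) @ rng.normal(size=(q, q))  # correlated responses

Yc = Y - Y.mean(axis=0)
# Columns of W are the principal axes (eigenvectors of the covariance matrix).
eigvals, W = np.linalg.eigh(np.cov(Yc, rowvar=False))
Y_tilde = Yc @ W          # uncorrelated responses: regress each on the X's

# After selecting significant effects component-wise, map back; since W is
# orthogonal, the inverse of the loading matrix is its transpose.
Y_back = Y_tilde @ W.T    # recovers the centred Y exactly
```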
20,371
How to scale violin plots for comparisons?
Box plots are used for schematic summaries of a distribution. The violin plots are just box plots in which the Q1, Q2, and Q3 boxes are replaced by a wide range of quantiles. For that reason, I think the accepted practice is to use uniform width across groups. However, you bring up a good point: how should densities across groups be compared? The answer depends on whether you are looking at each group as its own population or as subpopulations. I think that a useful DEFAULT behavior is to think of the full data as being the density we want to estimate. The groups are subpopulations such that the full density is a MIXTURE of the sub-densities. That suggests that each sub-density should be weighted by the number of observations. The areas (integrals of the densities) of the $k$ groups should be $P_i$, where $\Sigma_i P_i = 1$. This says that "Weighted Areas" is a good approach.
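A sketch of the "weighted areas" idea with scipy's Gaussian KDE: each group's density is scaled by its share of observations, so the areas integrate to the mixture weights $P_i$ and sum to one (the data here are simulated for illustration).

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
groups = [rng.normal(0, 1, size=400), rng.normal(2, 1, size=100)]  # unequal n_i
N = sum(len(g) for g in groups)
grid = np.linspace(-6, 8, 2000)

areas = []
for g in groups:
    dens = gaussian_kde(g)(grid) * len(g) / N  # weight density by n_i / N
    areas.append(dens.sum() * (grid[1] - grid[0]))  # Riemann approximation
# areas come out near [0.8, 0.2]: the P_i of the mixture view.
```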
20,372
How to scale violin plots for comparisons?
Honestly, I think you're approaching it from the wrong direction. All three plots clearly tell you information with value - otherwise, you wouldn't be considering which plot to use. Exploratory data analysis is about understanding your data. Where it conforms to expectation. Where it doesn't. How is it shaped over multiple variables. The whole point of doing EDA is evaluating whether our defaults, be they distribution or colinearity assumptions, the statistical model that was going to be used, etc. are well justified. As such, the concept of a "default" EDA is somewhat flawed. Look at all of them - or at least all of the plots that relate to the question you intend to ask. There's no reason to hamstring yourself into "What's interesting" and "What am I going to ignore" at the EDA stage. And if we're just feeding the data through defaults, it's not really EDA in the first place.
20,373
How to scale violin plots for comparisons?
And what about the bandwidth? Did you think about that? If you use the default settings of your software to get the pdf, you're most likely using the rule of thumb for the optimal bandwidth of a Gaussian kernel. This 'optimal bandwidth' might then differ for each subset. Now ask yourself, are the shapes still comparable? One risks measuring the same quantity (a kernel density estimate) with double standards. For kernel density estimation, clear rules have been developed to get the right bandwidth (some sort of cross-validation), but for violin plots they are mostly ignored. This might be important when the sample sizes differ a lot. I am having this problem right now. What do you think about it? How do you solve it? Any comments are greatly appreciated.
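A concrete illustration with scipy on simulated data: the default rule-of-thumb bandwidth factor depends on the subset's sample size, so two groups of different size get different smoothing unless you impose a common bandwidth.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(4)
big, small = rng.normal(size=2000), rng.normal(size=50)

# Scott's rule: factor = n ** (-1/5) in one dimension, so it differs per subset.
kde_big, kde_small = gaussian_kde(big), gaussian_kde(small)

# One way to compare shapes on equal terms: force a common bandwidth factor
# (a scalar bw_method is used directly as the factor).
kde_small_fixed = gaussian_kde(small, bw_method=kde_big.factor)
```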
20,374
LARS vs coordinate descent for the lasso
In scikit-learn the implementation of Lasso with coordinate descent tends to be faster than our implementation of LARS although for small p (such as in your case) they are roughly equivalent (LARS might even be a bit faster with the latest optimizations available in the master repo). Furthermore coordinate descent allows for efficient implementation of elastic net regularized problems. This is not the case for LARS (that solves only Lasso, aka L1 penalized problems). Elastic Net penalization tends to yield a better generalization than Lasso (closer to the solution of ridge regression) while keeping the nice sparsity inducing features of Lasso (supervised feature selection). For large N (and large p, sparse or not) you might also give a stochastic gradient descent (with L1 or elastic net penalty) a try (also implemented in scikit-learn). Edit: here are some benchmarks comparing LassoLARS and the coordinate descent implementation in scikit-learn
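To make the contrast concrete, here is a minimal cyclic coordinate descent for the lasso in plain numpy. This is a didactic sketch, not scikit-learn's optimized implementation; the objective is $(1/(2n))\lVert y - Xb\rVert^2 + \lambda \lVert b\rVert_1$, and the data are simulated.

```python
import numpy as np

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for (1/(2n))||y - Xb||^2 + lam * ||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X**2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r_j = y - X @ b + X[:, j] * b[j]     # partial residual for coord j
            rho = X[:, j] @ r_j / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=100)

b = lasso_cd(X, y, lam=0.1)
# Coefficient 0 stays large; the irrelevant ones are shrunk to (near) zero.
```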
20,375
Parametric sample size calculation and non-parametric analysis
Some people seem to use a concept of Pitman Asymptotic Relative Efficiency (ARE) to inflate the sample size obtained by using a sample size formula for a parametric test. Ironically, in order to compute it, one has to assume a distribution again... see e.g. Sample size for the Mann-Whitney U test There are some links in the end of the article that provide pointers for further reading.
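A sketch of the inflation arithmetic, assuming normal data (where the Mann-Whitney test's ARE relative to the t-test is $3/\pi \approx 0.955$); the effect size and error rates are illustrative choices:

```python
import math
from scipy.stats import norm

alpha, power, d = 0.05, 0.80, 0.5        # two-sided alpha, target power, Cohen's d

z_a = norm.ppf(1 - alpha / 2)
z_b = norm.ppf(power)
n_t = 2 * (z_a + z_b) ** 2 / d**2        # per-group n for the two-sample t-test
ARE = 3 / math.pi                        # Mann-Whitney vs t under normality
n_mw = n_t / ARE                         # inflated n for the Mann-Whitney test

print(math.ceil(n_t), math.ceil(n_mw))   # 63 66
```

The irony the answer points out is visible here: the ARE value itself comes from assuming a normal distribution.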
20,376
Parametric sample size calculation and non-parametric analysis
It sounds dodgy to me. Nonparametric methods almost always involve more degrees of freedom than parametric methods and so need more data. In your particular example, the Mann-Whitney test has lower power than the t-test and so more data are required for the same specified power and size. A simple way to do sample size calculation for any method (non-parametric or otherwise) is to use a bootstrap approach.
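A sketch of that idea: estimate power by simulation at a candidate sample size. Here the draws come from an assumed normal model for simplicity; with pilot data you would resample it instead (all settings are illustrative).

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(6)

def simulated_power(n, shift, n_sim=500, alpha=0.05):
    """Fraction of simulated datasets where the Mann-Whitney test rejects."""
    hits = 0
    for _ in range(n_sim):
        a = rng.normal(size=n)
        b = rng.normal(loc=shift, size=n)
        if mannwhitneyu(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sim

# Power rises with the assumed effect; under no effect it sits near alpha.
p_effect = simulated_power(30, shift=1.0)
p_null = simulated_power(30, shift=0.0)
```

Increase `n` until `p_effect` clears the target power.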
20,377
Multiple Chi-Squared Tests
You should look into "partitioning chi-squared". This is similar in logic to performing post-hoc tests in ANOVA. It will allow you to determine whether your significant overall test is primarily attributable to differences in particular categories or groups of categories. A quick google turned up this presentation, which at the end discusses methods for partitioning chi-squared. http://www.ed.uiuc.edu/courses/EdPsy490AT/lectures/2way_chi-ha-online.pdf
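A sketch of partitioning with scipy: a 2×3 table is split into two 1-df components, one comparing the first two columns and one comparing their pool against the third. The counts are made up; the component statistics add up to the overall statistic only approximately for Pearson's chi-squared (the identity is exact for the likelihood-ratio version).

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10, 25],
                  [20, 25, 24]])   # 2 x 3 contingency table (made-up counts)

chi2_all, p_all, *_ = chi2_contingency(table, correction=False)

# Partition: (col 1 vs col 2), then (cols 1+2 pooled vs col 3).
sub1 = table[:, :2]
sub2 = np.column_stack([table[:, :2].sum(axis=1), table[:, 2]])
chi2_1, p_1, *_ = chi2_contingency(sub1, correction=False)
chi2_2, p_2, *_ = chi2_contingency(sub2, correction=False)
# Here the first component carries nearly all of the overall statistic, so the
# significant omnibus test is attributable to the col-1-vs-col-2 contrast.
```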
20,378
Multiple Chi-Squared Tests
The unprincipled approach is to discard the disproportionate data, refit the model, and see whether the logit/conditional odds ratios for the response and A are very different (controlling for B). This might tell you if there's cause for concern. Pooling the levels of B is another approach. On more principled lines, if you're worried about relative proportions inducing Simpson's paradox, you can look into the conditional and marginal odds ratios for the response and A and see if they reverse. For avoiding multiple comparisons in particular, the only thing that occurs to me is to use a hierarchical model which accounts for random effects across levels.
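The conditional-versus-marginal check is easy to script. A sketch using the classic kidney-stone numbers (rows = treatments A and B, columns = success/failure), where the odds ratio reverses once the strata are pooled:

```python
import numpy as np

def odds_ratio(t):
    return (t[0, 0] * t[1, 1]) / (t[0, 1] * t[1, 0])

# Stratified 2x2 tables: small stones and large stones.
small = np.array([[81, 6], [234, 36]])
large = np.array([[192, 71], [55, 25]])
marginal = small + large

# Conditional ORs both favour treatment A (> 1), yet the marginal OR
# favours B (< 1): a textbook Simpson reversal, so check both before
# pooling levels of the stratifying variable.
or_small, or_large, or_all = map(odds_ratio, (small, large, marginal))
```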
20,379
Multiple Chi-Squared Tests
A post hoc test may fit your problem. The chisqPostHoc() function in R tests for significant differences among all pairs of populations after a chi-square test. I haven't used it myself, but this link may be useful: https://www.rforge.net/doc/packages/NCStats/chisqPostHoc.html Another alternative may be the chisq.desc() function from the EnQuireR package.
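The same idea can be scripted in Python if R is not at hand: pairwise chi-squared tests over the populations (columns) with a Bonferroni adjustment. This is a sketch with made-up counts, not a port of chisqPostHoc():

```python
from itertools import combinations

import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[30, 10, 25],
                  [20, 25, 24]])   # rows = outcome, cols = 3 populations

pairs = list(combinations(range(table.shape[1]), 2))
alpha_adj = 0.05 / len(pairs)      # Bonferroni-adjusted threshold

results = {}
for i, j in pairs:
    chi2, p, *_ = chi2_contingency(table[:, [i, j]], correction=False)
    results[(i, j)] = (p, p < alpha_adj)   # p-value and adjusted decision
```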
20,380
Simulated Annealing vs. Basin-hopping algorithm
The reason for Simulated Annealing being deprecated is not that Basin-hopping outperforms it theoretically. It is because the specific implementation of Simulated Annealing in the library is a special case of the second. If you want to use a Simulated Annealing algorithm, I recommend you use scipy.optimize.dual_annealing instead, but with $'visit'=q_v=1, \, 'accept'=q_a=1$ (this recovers Classical Simulated Annealing, i.e. the temperature decreases logarithmically). Other parameter choices lead to more sophisticated annealing processes, such as $'visit'=q_v=2, \, 'accept'=q_a=1$ (which recovers Fast Simulated Annealing, i.e. the temperature decreases as the inverse of time). Observation: as @JamesBowery points out in his comment, you should turn off the local optimizer.
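A minimal working call is sketched below. Note that, as I understand current scipy, `visit` is restricted to the interval (1, 3] and `accept` to negative values, so the exact classical settings above can only be approximated; this example therefore just uses the defaults (pass `no_local_search=True` to disable the local optimizer, per the observation).

```python
import numpy as np
from scipy.optimize import dual_annealing

def rastrigin(x):
    """Multimodal test function; the global minimum is 0 at the origin."""
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 2
res = dual_annealing(rastrigin, bounds, maxiter=1000)
# res.x lands near the origin; res.fun near 0 thanks to local-search polishing.
```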
20,381
Distribution of sum of exponentials
Conditioning Approach Condition on the value of $X_1$. Start with the cumulative distribution function (CDF) for $S_2$. $\begin{align} F_{S_2}(x) &= P(S_2\le x) \\ &= P(X_1 + X_2 \le x) \\ &= \int_0^\infty P(X_1+X_2\le x|X_1=x_1)f_{X_1}(x_1)dx_1 \\ &= \int_0^x P(X_1+X_2\le x|X_1=x_1)\lambda \text{e}^{-\lambda x_1}dx_1 \\ &= \int_0^x P(X_2 \le x - x_1)\lambda \text{e}^{-\lambda x_1}dx_1 \\ &= \int_0^x\left(1-\text{e}^{-\lambda(x-x_1)}\right)\lambda \text{e}^{-\lambda x_1}dx_1\\ &=(1-e^{-\lambda x}) - \lambda x e^{-\lambda x}\end{align} $ This is the CDF of the distribution. To get the PDF, differentiate with respect to $x$ (see here). $$f_{S_2}(x) = \lambda^2 x \text{e}^{-\lambda x} \quad\square$$ This is an Erlang$(2,\lambda)$ distribution (see here). General Approach Direct integration relying on the independence of $X_1$ & $X_2$. Again, start with the cumulative distribution function (CDF) for $S_2$. $\begin{align} F_{S_2}(x) &= P(S_2\le x) \\ &= P(X_1 + X_2 \le x) \\ &= P\left( (X_1,X_2)\in A \right) \quad \quad \text{(See figure below)}\\ &= \int\int_{(x_1,x_2)\in A} f_{X_1,X_2}(x_1,x_2)dx_1 dx_2 \\ &(\text{Joint distribution is the product of marginals by independence}) \\ &= \int_0^{x} \int_0^{x-x_{2}} f_{X_1}(x_1)f_{X_2}(x_2)dx_1 dx_2\\ &= \int_0^{x} \int_0^{x-x_{2}} \lambda \text{e}^{-\lambda x_1}\lambda \text{e}^{-\lambda x_2}dx_1 dx_2\\ \end{align}$ Since this is the CDF, differentiation gives the PDF, $f_{S_2}(x) = \lambda^2 x \text{e}^{-\lambda x} \quad\square$ MGF Approach This approach uses the moment generating function (MGF). 
$\begin{align} M_{S_2}(t) &= \text{E}\left[\text{e}^{t S_2}\right] \\ &= \text{E}\left[\text{e}^{t(X_1 + X_2)}\right] \\ &= \text{E}\left[\text{e}^{t X_1 + t X_2}\right] \\ &= \text{E}\left[\text{e}^{t X_1} \text{e}^{t X_2}\right] \\ &= \text{E}\left[\text{e}^{t X_1}\right]\text{E}\left[\text{e}^{t X_2}\right] \quad \text{(by independence)} \\ &= M_{X_1}(t)M_{X_2}(t) \\ &= \left(\frac{\lambda}{\lambda-t}\right)\left(\frac{\lambda}{\lambda-t}\right) \quad \quad t<\lambda\\ &= \frac{\lambda^2}{(\lambda-t)^2} \quad \quad t<\lambda \end{align}$ While this may not yield the PDF directly, once the MGF matches that of a known distribution (here, that of an Erlang$(2,\lambda)$ distribution), the PDF is also known.
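As a quick numerical sanity check (not part of the derivation), a simulation can confirm that the sum of two independent Exponential$(\lambda)$ variables matches the Erlang$(2,\lambda)$ CDF derived above; the values $\lambda = 1.5$ and $x = 2$ are arbitrary choices.

```python
import numpy as np

# Monte Carlo check: the sum of two independent Exp(lambda) variables
# should follow an Erlang(2, lambda) distribution, whose CDF is
# 1 - e^{-lambda x} - lambda x e^{-lambda x}.
rng = np.random.default_rng(0)
lam = 1.5
s2 = rng.exponential(1 / lam, 100_000) + rng.exponential(1 / lam, 100_000)

x = 2.0
empirical = np.mean(s2 <= x)
theoretical = 1 - np.exp(-lam * x) - lam * x * np.exp(-lam * x)
print(empirical, theoretical)  # should agree to roughly two decimal places
```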
20,382
Difference between a single unit LSTM and 3-unit LSTM neural network
In Keras LSTM(n) means "create an LSTM layer consisting of n LSTM units." The following picture demonstrates what a layer and a unit (or neuron) are, and the rightmost image shows the internal structure of a single LSTM unit. The following picture shows how the whole LSTM layer operates. As we know, an LSTM layer processes a sequence, i.e., $\mathbb{x}_1, \dots, \mathbb{x}_N$. At each step $t$ the layer (each neuron) takes the input $\mathbb{x_t}$, the output from the previous step $\mathbb{h_{t-1}}$, and a bias $b$, and outputs a vector $\mathbb{h_t}$. The coordinates of $\mathbb{h_t}$ are the outputs of the neurons/units, and hence the size of the vector $\mathbb{h_t}$ is equal to the number of units/neurons. This process continues until $\mathbb{x}_N$. Now let's compute the number of parameters for LSTM(1) and LSTM(3) and compare it with what Keras shows when we call model.summary(). Let $inp$ be the size of the vector $\mathbb{x_t}$ and $out$ be the size of the vector $\mathbb{h_t}$ (this is also the number of neurons/units). Each neuron/unit takes the input vector, the output from the previous step, and a bias, which makes $inp + out + 1$ parameters (weights). But we have $out$ neurons, and so we have $out\times(inp + out + 1)$ parameters. Finally, each unit has 4 such weight sets, one per gate (see the rightmost image, yellow boxes), and we have the following formula for the number of parameters: $$4out(inp + out + 1)$$ Let's compare with what Keras outputs. Example 1.
t1 = Input(shape=(1, 1))
t2 = LSTM(1)(t1)
model = Model(inputs=t1, outputs=t2)
print(model.summary())

Layer (type)                 Output Shape              Param #
=================================================================
input_2 (InputLayer)         (None, 1, 1)              0
_________________________________________________________________
lstm_2 (LSTM)                (None, 1)                 12
=================================================================
Total params: 12
Trainable params: 12
Non-trainable params: 0
_________________________________________________________________

Number of units is 1, the size of the input vector is 1, so $4\times 1 \times (1 + 1 + 1) = 12$.

Example 2.

input_t = Input((4, 2))
output_t = LSTM(3)(input_t)
model = Model(inputs=input_t, outputs=output_t)
print(model.summary())

_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_6 (InputLayer)         (None, 4, 2)              0
_________________________________________________________________
lstm_6 (LSTM)                (None, 3)                 72
=================================================================
Total params: 72
Trainable params: 72
Non-trainable params: 0

Number of units is 3, the size of the input vector is 2, so $4\times 3 \times (2 + 3 + 1) = 72$.
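The parameter-count formula is easy to check without Keras at all; this small helper (a hypothetical name, not a Keras function) reproduces both totals from the summaries above.

```python
def lstm_param_count(inp: int, out: int) -> int:
    # 4 gates, each with weights for the input (inp values), the
    # recurrent state (out values), and a bias, replicated across
    # the `out` units of the layer.
    return 4 * out * (inp + out + 1)

print(lstm_param_count(1, 1))  # 12, matching LSTM(1) on 1-dim input
print(lstm_param_count(2, 3))  # 72, matching LSTM(3) on 2-dim input
```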
20,383
Difference between a single unit LSTM and 3-unit LSTM neural network
I usually work with TensorFlow, but as far as I can see in the documentation it is similar in Keras. Basically, when you call LSTM(3) you are NOT stacking LSTMs on top of each other as in image 1; that is a completely different architecture. Rather, when you create LSTM(3) you are making an LSTM with 3 hidden units or hidden cells. In your code, 3 will be the dimension of the inner cells in the LSTM. What does this mean? It means that the dimensionality of the hidden state and the dimensionality of the output state will equal your number of hidden units. Instead of imagining an LSTM as something that takes a sequence of scalars and outputs a scalar, imagine this: you have a sequence of length T with 512 values at each step, so the shape is [batch_size, T, 512]. At the first timestep t=1, you feed the LSTM all 512 values at once, and this is possible thanks to the hidden units. I attach some references and links in case my explanation is not very clear. Q Reference, S Reference.
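To make the shape claim concrete, here is one LSTM step written in plain numpy (an illustrative sketch, not Keras or TensorFlow internals): whatever the input dimension, the hidden/output state has as many coordinates as there are units.

```python
import numpy as np

# One LSTM step in plain numpy, showing that the hidden/output state
# dimension equals the number of units (here 3), whatever the input size.
rng = np.random.default_rng(0)
units, input_dim = 3, 512

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix and bias per gate: forget, input, candidate, output
W = rng.normal(size=(4, units, input_dim + units)) * 0.01
b = np.zeros((4, units))

x_t = rng.normal(size=input_dim)           # all 512 values fed at once
h_prev, c_prev = np.zeros(units), np.zeros(units)

z = np.concatenate([x_t, h_prev])          # gate input: [x_t, h_{t-1}]
f = sigmoid(W[0] @ z + b[0])               # forget gate
i = sigmoid(W[1] @ z + b[1])               # input gate
g = np.tanh(W[2] @ z + b[2])               # candidate cell state
o = sigmoid(W[3] @ z + b[3])               # output gate
c_t = f * c_prev + i * g                   # new cell state
h_t = o * np.tanh(c_t)                     # new hidden state
print(h_t.shape)  # (3,)
```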
20,384
Difference between a single unit LSTM and 3-unit LSTM neural network
I made a picture (see the sources for the original pictures) showing cells as classically represented in tutorials (Source 1: Colah's blog) and an equivalent cell with 2 units (Source 2: Raimi Karim's post). I hope it clarifies the confusion between cells/units and what the network architecture really is.
20,385
Is the Error rate a Convex function of the Regularization parameter lambda?
The original question asked whether the error function needs to be convex. No, it does not. The analysis presented below is intended to provide some insight and intuition about this and the modified question, which asks whether the error function could have multiple local minima. Intuitively, there doesn't have to be any mathematically necessary relationship between the training data and the test set. We should be able to find test data for which the model initially is poor, gets better with some regularization, and then gets worse again. The error curve cannot be convex in that case--at least not if we make the regularization parameter vary from $0$ to $\infty$. Note that convex is not equivalent to having a unique minimum! However, similar ideas suggest multiple local minima are possible: during regularization, first the fitted model might get better for some test data while not appreciably changing for other test data, and then later it will get better for the other test data, etc. A suitable mix of such test data ought to produce multiple local minima. To keep the analysis simple I won't attempt to show that. Edit (to respond to the changed question) I was so confident in the analysis presented below and the intuition behind it that I set about finding an example in the crudest possible way: I generated small random datasets, ran a Lasso on them, computed the total squared error for a small test set, and plotted its error curve. A few attempts produced one with two minima, which I will describe. The vectors are in the form $(x_1,x_2,y)$ for features $x_1$ and $x_2$ and response $y$. Training data $$(1,1,-0.1),\ (2,1,0.8),\ (1,2,1.2),\ (2,2,0.9)$$ Test data $$(1,1,0.2),\ (1,2,0.4)$$ The Lasso was run using glmnet::glmnet in R, with all arguments left at their defaults. The values of $\lambda$ on the x axis are the reciprocals of the values reported by that software (because it parameterizes its penalty with $1/\lambda$).
[Figure: an error curve with multiple local minima.] Analysis Let's consider any regularization method of fitting parameters $\beta=(\beta_1, \ldots, \beta_p)$ to data $x_i$ and corresponding responses $y_i$ that has these properties common to Ridge Regression and Lasso: (Parameterization) The method is parameterized by real numbers $\lambda \in [0, \infty)$, with the unregularized model corresponding to $\lambda=0$. (Continuity) The parameter estimate $\hat\beta$ depends continuously on $\lambda$ and the predicted values for any features vary continuously with $\hat\beta$. (Shrinkage) As $\lambda\to\infty$, $\hat\beta\to 0$. (Finiteness) For any feature vector $x$, as $\hat\beta\to 0$, the prediction $\hat y(x) = f(x, \hat\beta) \to 0$. (Monotonic error) The error function comparing any value $y$ to a predicted value $\hat y$, $\mathcal{L}(y, \hat y)$, increases with the discrepancy $|\hat y - y|$ so that, with some abuse of notation, we may express it as $\mathcal{L}(|\hat y - y|)$. (Zero in $(4)$ could be replaced by any constant.) Suppose the data are such that the initial (unregularized) parameter estimate $\hat\beta(0)$ is not zero. Let's construct a test data set consisting of one observation $(x_0, y_0)$ for which $f(x_0, \hat\beta(0))\ne 0$. (If it's not possible to find such an $x_0$, then the initial model won't be very interesting!) Set $y_0=f(x_0, \hat\beta(0))/2$. The assumptions imply the error curve $e: \lambda \to \mathcal{L}(y_0, f(x_0, \hat\beta(\lambda)))$ has these properties: $e(0) = \mathcal{L}(y_0, f(x_0, \hat\beta(0))) = \mathcal{L}(y_0, 2y_0) = \mathcal{L}(|y_0|)$ (because of the choice of $y_0$). $\lim_{\lambda\to\infty}e(\lambda) = \mathcal{L}(y_0, 0) = \mathcal{L}(|y_0|)$ (because as $\lambda\to\infty$, $\hat\beta(\lambda)\to 0$, whence $\hat{y}(x_0)\to 0$). Thus, its graph continuously connects two equally high (and finite) endpoints. Qualitatively, there are three possibilities: The prediction for the test set never changes.
This is unlikely--just about any example you choose will not have this property. Some intermediate predictions for $0\lt \lambda \lt \infty$ are worse than at the start $\lambda=0$ or in the limit $\lambda\to\infty$. This function cannot be convex. All intermediate predictions lie between $0$ and $2y_0$. The continuity implies there will be at least one minimum of $e$, near which $e$ must be convex. But since $e(\lambda)$ approaches a finite constant asymptotically, it cannot be convex for large enough $\lambda$. The vertical dashed line in the figure shows where the plot changes from convex (at its left) to non-convex (to the right). (There is also a region of non-convexity near $\lambda\approx 0$ in this figure, but this won't necessarily be the case in general.)
20,386
Is the Error rate a Convex function of the Regularization parameter lambda?
$\newcommand{\dbeta}{\frac{\partial}{\partial \lambda} \hat\beta_\lambda}$ $\newcommand{\ddbeta}{\frac{\partial^2}{{\partial \lambda}^2} \hat\beta_\lambda}$ This answer specifically concerns the lasso (and does not hold for ridge regression). Setup Suppose that we have $p$ covariates that we're using to model a response. Suppose that we have $n$ training data points and $m$ validation data points. Let the training input be $X_{(1)} \in \mathbb{R}^{n \times p}$ and response be $y_{(1)} \in \mathbb{R}^n$. We will use the lasso on this training data. That is, put $$\hat\beta_\lambda = \arg\min_{\beta \in \mathbb{R}^p} \|y_{(1)} - X_{(1)} \beta\|_2^2 + \lambda \|\beta\|_1, \tag{1}$$ a family of coefficients estimated from the training data. We will choose which $\hat\beta_\lambda$ to use as our estimator based on its error on a validation set, with input $X_{(2)} \in \mathbb{R}^{m \times p}$ and response $y_{(2)} \in \mathbb{R}^m$. With $$\hat\lambda = \arg\min_{\lambda \in \mathbb{R}_+} \|y_{(2)} - X_{(2)} \hat\beta_\lambda\|_2^2, \tag{2}$$ we are interested in studying the error function $e(\lambda) = \|y_{(2)} - X_{(2)} \hat\beta_\lambda\|_2^2$ which gives rise to our data-driven estimator $\hat\beta_{\hat\lambda}$. Calculation Now, we will calculate the second derivative of the objective in equation $(2)$, without making any distributional assumptions on the $X$'s or $y$'s. Using differentiation and some reorganization, we (formally) compute that \begin{align*} \frac{\partial^2}{{\partial \lambda}^2} \|y_{(2)} - X_{(2)} \hat\beta_\lambda\|_2^2 & = \frac{\partial}{\partial \lambda} \left\{ -2 y_{(2)}^T X_{(2)} \dbeta + 2 \hat\beta_\lambda^T X_{(2)}^T X_{(2)} \dbeta \right\} \\ & = -2 y_{(2)}^T X_{(2)} \ddbeta + 2 \left( \hat\beta_\lambda \right)^T X_{(2)}^T X_{(2)} \ddbeta + 2 \dbeta^T X_{(2)}^T X_{(2)} \dbeta \\ & = -2 \left\{ \left( y_{(2)} - X_{(2)} \hat\beta_\lambda \right)^T X_{(2)} \ddbeta - \|X_{(2)} \dbeta\|_2^2 \right\}.
\end{align*} Since $\hat\beta_\lambda$ is piecewise linear for $\lambda \not\in K$ (for $K$ being the finite set of knots in the lasso solution path), the derivative $\dbeta$ is piecewise constant and $\ddbeta$ is zero for all $\lambda \not\in K$. Therefore, $$\frac{\partial^2}{{\partial \lambda}^2} \|y_{(2)} - X_{(2)} \hat\beta_\lambda\|_2^2 = 2 \|X_{(2)} \dbeta\|_2^2,$$ a non-negative function of $\lambda$. Conclusion If we assume further that $X_{(2)}$ is drawn from some continuous distribution independent of $\{X_{(1)}, y_{(1)} \}$, the vector $X_{(2)} \dbeta \neq 0$ almost surely for $\lambda < \lambda_\max$. Therefore, the error function $e(\lambda)$ has a second derivative on $\mathbb{R} \setminus K$ which is (almost surely) strictly positive. However, knowing that $\hat\beta_\lambda$ is continuous, we know that the validation error $e(\lambda)$ is continuous. Finally, from the lasso dual, we know that $\|X_{(1)} \hat\beta_\lambda\|_2^2$ decreases monotonically as $\lambda$ increases. If we can establish that $\|X_{(2)} \hat\beta_\lambda\|_2^2$ is also monotonic, then the strong convexity of $e(\lambda)$ follows. However, this holds with probability approaching one if $\mathcal{L} \left( X_{(1)} \right) = \mathcal{L} \left( X_{(2)} \right)$. (I'll fill in details here soon.)
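The key structural fact used above, that between knots of the lasso path the coefficients are linear in $\lambda$ and so the validation error is convex on each inter-knot segment, can be checked numerically with scikit-learn's lars_path. This is a sketch on synthetic data, not part of the original answer.

```python
import numpy as np
from sklearn.linear_model import lars_path

rng = np.random.default_rng(1)
n, m, p = 20, 10, 5
X1 = rng.normal(size=(n, p))   # training inputs
X2 = rng.normal(size=(m, p))   # validation inputs
y1 = X1 @ rng.normal(size=p) + rng.normal(size=n)
y2 = X2 @ rng.normal(size=p) + rng.normal(size=m)

# Knots of the lasso solution path; between consecutive knots the
# coefficients are linear in lambda, so on each segment the validation
# error is a quadratic with leading coefficient ||X2 dbeta||^2 >= 0.
alphas, _, coefs = lars_path(X1, y1, method="lasso")

def val_err(beta):
    return float(np.sum((y2 - X2 @ beta) ** 2))

# Midpoint-convexity check on every inter-knot segment: the error at
# the segment's midpoint never exceeds the average of the endpoints.
segment_convex = all(
    val_err((coefs[:, i] + coefs[:, i + 1]) / 2)
    <= (val_err(coefs[:, i]) + val_err(coefs[:, i + 1])) / 2 + 1e-9
    for i in range(coefs.shape[1] - 1)
)
print(segment_convex)  # True
```

Convexity on each segment does not make $e(\lambda)$ convex globally; the kinks at the knots are exactly where the answer's argument needs the additional monotonicity condition.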
20,387
Why do lm and biglm in R give different p-values for the same data?
To see which p-values are correct (if either), let's repeat the calculation for simulated data in which the null hypothesis is true. In the present setting, the calculation is a least-squares fit to (x,y) data and the null hypothesis is that the slope is zero. In the question there are four x values 1,2,3,4 and the estimated error is around 0.7, so let's incorporate that in the simulation. Here is the setup, written to be understandable to everyone, even those unfamiliar with R.

beta <- c(intercept=0, slope=0)
sigma <- 0.7
x <- 1:4
y.expected <- beta["intercept"] + beta["slope"] * x

The simulation generates independent errors, adds them to y.expected, invokes lm to make the fit, and summary to compute the p-values. Although this is inefficient, it is testing the actual code that was used. We can still do thousands of iterations in a second:

n.sim <- 1e3
set.seed(17)
data.simulated <- matrix(rnorm(n.sim*length(y.expected), y.expected, sigma), ncol=n.sim)
slope.p.value <- function(e) coef(summary(lm(y.expected + e ~ x)))["x", "Pr(>|t|)"]
p.values <- apply(data.simulated, 2, slope.p.value)

Correctly computed p-values will act like uniform random numbers between $0$ and $1$ when the null hypothesis is true. A histogram of these p-values will allow us to check this visually--does it look roughly horizontal--and a chi-squared test of uniformity will permit a more formal evaluation. Here's the histogram:

h <- hist(p.values, breaks=seq(0, 1, length.out=20))

and, for those who might imagine this isn't sufficiently uniform, here's the chi-squared test:

chisq.test(h$counts)

X-squared = 13.042, df = 18, p-value = 0.7891

The large p-value in this test shows these results are consistent with the expected uniformity. In other words, lm is correct. Where, then, do the differences in p-values come from? Let's check the likely formulas that might be invoked to compute a p-value.
In any case the test statistic will be $$|t| = \left| \frac{\hat\beta - 0}{\operatorname{se}(\hat \beta)}\right|,$$ equal to the discrepancy between the estimated coefficient $\hat \beta$ and the hypothesized (and correct value) $\beta = 0$, expressed as a multiple of the standard error of the coefficient estimate. In the question these values are $$|t| = \left|\frac{3.05}{0.87378 }\right| = 3.491$$ for the intercept estimate and $$|t| = \left|\frac{-1.38 }{ 0.31906 }\right| = 4.321$$ for the slope estimate. Ordinarily these would be compared to the Student $t$ distribution whose degrees of freedom parameter is $4$ (the amount of data) minus $2$ (the number of estimated coefficients). Let's calculate it for the intercept: pt(-abs(3.05/0.87378), 4-2) * 2 [1] 0.0732 (This calculation multiplies the left-tailed Student $t$ probability by $2$ because this is a test of $H_0:\beta=0$ against the two-sided alternative $H_A:\beta \ne 0$.) It agrees with the lm output. An alternative calculation would use the standard Normal distribution to approximate the Student $t$ distribution. Let's see what it produces: pnorm(-abs(3.05/0.87378)) * 2 [1] 0.000482 Sure enough: biglm assumes the null distribution of the $t$ statistic is standard Normal. How much of an error is this? Re-running the preceding simulation using biglm in place of lm gives this histogram of p-values: Almost 18% of these p-values are less than $0.05$, a standard threshold of "significance." That's an enormous error. Some lessons we can learn from this little investigation are: Do not use approximations derived from asymptotic analyses (like the standard Normal distribution) with small datasets. Know your software.
20,388
Random variables for which Markov, Chebyshev inequalities are tight
The class of distributions for which the limiting case of the Chebyshev bound holds is well known (and not that hard to simply guess). Normalized to mean $0$ and variance $1$, it is $$Z=\begin{cases}-k,&{\text{with probability }}{\:\frac {1}{2k^2}}\\\:0,&{\text{with probability }}1-\frac {1}{k^2}\\\:k,&{\text{with probability }}{\:\frac {1}{2k^2}}\end{cases}$$ This is (up to scale) the solution given at the Wikipedia page for the Chebyshev inequality. [You can write a sequence of distributions (by placing $\epsilon>0$ more probability at the center with the same removed evenly from the endpoints) that strictly satisfy the inequality and approach that limiting case as closely as desired.] Any other solution can be obtained by location and scale shifts of this: Let $X=\mu+\sigma Z$. For the Markov inequality, let $Y=|Z|$ so you have probability $1-1/k^2$ at 0 and $1/k^2$ at $k$. (One can introduce a scale parameter here but not a location-shift parameter) Moment inequalities - and indeed many other similar inequalities - tend to have discrete distributions as their limiting cases.
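The tightness is easy to check numerically. Below is a short Python sketch (illustrative, not part of the original answer; the helper name `three_point_moments` is made up) confirming that the three-point distribution above has mean $0$, variance $1$, and $P(|Z|\ge k)=1/k^2$ exactly:

```python
# Numeric check: the three-point distribution Z in {-k, 0, k} with
# probabilities 1/(2k^2), 1 - 1/k^2, 1/(2k^2) has mean 0, variance 1,
# and attains Chebyshev's bound P(|Z| >= k) = 1/k^2 with equality.
def three_point_moments(k):
    support = [-k, 0.0, k]
    probs = [1/(2*k**2), 1 - 1/k**2, 1/(2*k**2)]
    mean = sum(p*z for p, z in zip(probs, support))
    var = sum(p*(z - mean)**2 for p, z in zip(probs, support))
    tail = sum(p for p, z in zip(probs, support) if abs(z) >= k)
    return mean, var, tail

for k in (2.0, 3.0, 10.0):
    mean, var, tail = three_point_moments(k)
    assert abs(mean) < 1e-12 and abs(var - 1) < 1e-12
    assert abs(tail - 1/k**2) < 1e-12   # exactly Chebyshev's bound
```

The same check with mass $\epsilon$ moved from the endpoints to the center shows the inequality becoming strict, as described above.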
20,389
Random variables for which Markov, Chebyshev inequalities are tight
I believe that getting a continuous distribution over the whole real axis that follows Chebyshev's bound exactly may be impossible. Assume that a continuous distribution's mean and standard deviation are 0 and 1, or make it so via rescaling. Then require $P(\mid X\mid >x)=1/x^2$. For simplicity consider $x>0$; the negative values will be defined symmetrically. Then the CDF of the distribution is $1-1/x^2$. And so the pdf, the derivative of the cdf, is $2/x^3$. Obviously this must be defined only for $x>0$ because of the discontinuity. In fact, this can't even be true everywhere, or the integral of the pdf is not finite. Instead, the pdf must be piecewise - for example, 0 for $\mid x \mid <\alpha$ and equal to $2/\mid x \mid^3$ for $\mid x\mid \geq \alpha$. However, this distribution fails the hypothesis - it does not have finite variance. To get a continuous distribution over the real axis with a finite variance, the expected values of $x$ and $x^2$ must be finite. Examining inverse polynomials, tails that go like $x^{-3}$ lead to a finite $E[x]$, but an undefined $E[x^2]$, because this involves an integral with asymptotically logarithmic behavior. So Chebyshev's bound can't be satisfied exactly. You can require $P(\mid X \mid > x ) = x^{-(2+\epsilon)}$ for arbitrarily small $\epsilon$, however. The tail of the pdf then goes like $x^{-(3+\epsilon)}$ and has a defined variance on the order of $1/\epsilon$. If you're willing to let the distribution live on only part of the real line, but still be continuous, then defining $\text{pdf}(x) = 2/\mid x \mid^3$ for $\epsilon < \mid x \mid < \Lambda$ works for $$ \epsilon = \sqrt{2 \left( 1 - \frac{1}{\sqrt{e}} \right) } $$ and $$ \Lambda = \sqrt{2 \left( \sqrt{e} - 1 \right) } $$ or any linear scaling thereof -- but this is basically $0.887 < |x| < 1.14$, which isn't much of a range. And it's doubtful whether this restriction is still in line with the original motivation.
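As a sanity check on the endpoint values (illustrative Python, not from the original answer), one can verify in closed form that the truncated density integrates to $1$ and has unit variance with the stated $\epsilon$ and $\Lambda$:

```python
import math

# For pdf(x) = 2/|x|^3 on eps < |x| < Lambda (symmetric about 0), verify
# total mass 1 and variance 1.  Antiderivatives: integral of 2/x^3 is
# -1/x^2, and integral of x^2 * (2/x^3) is 2*ln(x).
eps = math.sqrt(2 * (1 - 1/math.sqrt(math.e)))
Lam = math.sqrt(2 * (math.sqrt(math.e) - 1))

mass = 2 * (1/eps**2 - 1/Lam**2)        # both tails together
variance = 4 * math.log(Lam / eps)      # E[X^2]; the mean is 0 by symmetry

assert abs(mass - 1) < 1e-12
assert abs(variance - 1) < 1e-12
print(round(eps, 3), round(Lam, 3))     # roughly 0.887 and 1.139
```

This is where the narrow support interval quoted above comes from.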
20,390
How much data for deep learning?
This is a great question and there's actually been some research tackling the capacity/depth issues you mentioned. There's been a lot of evidence that depth in convolutional neural networks has led to learning richer and more diverse feature hierarchies. Empirically we see the best performing nets tend to be "deep": the Oxford VGG-Net had 19 layers, the Google Inception architecture is deep, the Microsoft Deep Residual Network has a reported 152 layers, and these all are obtaining very impressive ImageNet benchmark results. On the surface, it's a fact that higher capacity models have a tendency to overfit unless you use some sort of regularizer. One way very deep networks overfitting can hurt performance is that they will rapidly approach very low training error in a small number of training epochs, i.e. we cannot train the network for a large number of passes through the dataset. A technique like Dropout, a stochastic regularization technique, allows us to train very deep nets for longer periods of time. This in effect allows us to learn better features and improve our classification accuracy because we get more passes through the training data. With regards to your first question: Why can you not just reduce the number of layers / nodes per layer in a deep neural network, and make it work with a smaller amount of data? If we reduce the training set size, how does that affect the generalization performance? If we use a smaller training set size, this may result in learning a smaller distributed feature representation, and this may hurt our generalization ability. Ultimately, we want to be able to generalize well. Having a larger training set allows us to learn a more diverse distributed feature hierarchy. With regards to your second question: Is there a fundamental "minimum number of parameters" that a neural network requires until it "kicks in"? Below a certain number of layers, neural networks do not seem to perform as well as hand-coded features. 
Now let's add some nuance to the above discussion about the depth issue. It appears, given where we are at right now with current state of the art, to train a high performance conv net from scratch, some sort of deep architecture is used. But there's been a string of results that are focused on model compression. So this isn't a direct answer to your question, but it's related. Model compression is interested in the following question: Given a high performance model (in our case let's say a deep conv net), can we compress the model, reducing its depth or even parameter count, and retain the same performance? We can view the high performance, high capacity conv net as the teacher. Can we use the teacher to train a more compact student model? Surprisingly the answer is: yes. There's been a series of results; a good treatment from the conv net perspective is the article by Rich Caruana and Jimmy Ba, Do Deep Nets Really Need to be Deep?. They are able to train a shallow model to mimic the deeper model, with very little loss in performance. There's been some more work as well on this topic, for example: FitNets: Hints for Thin Deep Nets, and Distilling the Knowledge in a Neural Network, among other works. I'm sure I'm missing some other good articles. To me these sorts of results question how much capacity these shallow models really have. In the Caruana, Ba article, they state the following possibility: "The results suggest that the strength of deep learning may arise in part from a good match between deep architectures and current training procedures, and that it may be possible to devise better learning algorithms to train more accurate shallow feed-forward nets. For a given number of parameters, depth may make learning easier, but may not always be essential" It's important to be clear: in the Caruana, Ba article, they are not training a shallow model from scratch, i.e. training from just the class labels, to obtain state of the art performance. 
Rather, they train a high performance deep model, and from this model they extract log probabilities for each datapoint. They then train a shallow model to predict these log probabilities. So the shallow model is not trained on the class labels, but rather on these log probabilities. Nonetheless, it's still quite an interesting result. While this doesn't provide a direct answer to your question, there are some interesting ideas here that are very relevant. Fundamentally: it's always important to remember that there is a difference between the theoretical "capacity" of a model and finding a good configuration of your model. The latter depends on your optimization methods.
20,391
Why is Bayes Classifier the ideal classifier?
Why is that with Bayes classifier we achieve the best performance that can be achieved? What is the formal proof/explanation for this?

Usually, a dataset $D$ is considered to consist of $n$ i.i.d. samples $x_i$ of a distribution that generates your data. Then, you build a predictive model from the given data: given a sample $x_i$, you predict the class $\hat{f}(x_i)$, whereas the real class of the sample is $f(x_i)$. However, in theory, you could decide not to choose one particular model $\hat{f}_\text{chosen}$, but rather consider all possible models $\hat{f}$ at once and combine them somehow into one big model $\hat F$. Of course, given the data, many of the smaller models could be quite improbable or inappropriate (for example, models that predict only one value of the target, even though there are multiple values of the target in your dataset $D$).

In any case, you want to predict the target value of new samples, which are drawn from the same distribution as the $x_i$s. A good measure $e$ of the performance of your model would be $$e(\text{model}) = P[f(X) = \text{model}(X)]\text{,}$$ i.e., the probability that you predict the true target value for a randomly sampled $X$.

Using Bayes formula, you can compute the probability that a new sample $x$ has target value $v$, given the data $D$: $$P(v\mid D) = \sum_{\hat{f}} P(v\mid \hat{f}) P(\hat{f}\mid D)\text{.}$$ One should stress that

- usually $P(v\mid \hat{f})$ is either $0$ or $1$, since $\hat{f}$ is a deterministic function of $x$,
- not just usually, but almost all the time, it is impossible to estimate $P(\hat{f}\mid D)$ (except for trivial cases), and
- not just usually, but almost all the time, the number of possible models $\hat{f}$ is too big for the above sum to be evaluated.

Hence, it is very hard to obtain/estimate $P(v\mid D)$ in most of the cases.

Now, we proceed to the Optimal Bayes classifier. For a given $x$, it predicts the value $$\hat{v} = \text{argmax}_v \sum_{\hat{f}} P(v\mid \hat{f}) P(\hat{f}\mid D)\text{.}$$ Since this is the most probable value among all possible target values $v$, the Optimal Bayes classifier maximizes the performance measure $e(\hat{f})$.

As for "we always use Bayes classifier as a benchmark to compare the performance of all other classifiers": probably, you use the naive version of the Bayes classifier. It is easy to implement, works reasonably well most of the time, but computes only a naive estimate of $P(v\mid D)$.
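To make the argmax formula concrete, here is a hypothetical toy example in Python (the hypothesis class, noise model, and all names are my own assumptions, not from the original answer): with a finite class of four deterministic maps $\{0,1\}\to\{0,1\}$ and a known label-flip rate, $P(\hat f \mid D)$ and the sum over models can be computed exactly.

```python
from itertools import product

# Toy Optimal Bayes classifier over a tiny finite hypothesis class.
# Hypotheses: all deterministic maps {0,1} -> {0,1}; observed labels are
# flipped with known probability ETA, so P(D | f) is a product of terms.
ETA = 0.1
HYPOTHESES = [dict(zip((0, 1), vals)) for vals in product((0, 1), repeat=2)]

def posterior(data):
    """P(f | D) with a uniform prior over the four hypotheses."""
    weights = []
    for f in HYPOTHESES:
        lik = 1.0
        for x, y in data:
            lik *= (1 - ETA) if f[x] == y else ETA
        weights.append(lik)
    z = sum(weights)
    return [w / z for w in weights]

def bayes_optimal_predict(data, x):
    """argmax_v sum_f P(v | f) P(f | D); here P(v | f) is 0 or 1."""
    post = posterior(data)
    score = {v: sum(p for f, p in zip(HYPOTHESES, post) if f[x] == v)
             for v in (0, 1)}
    return max(score, key=score.get)

data = [(0, 0), (0, 0), (1, 1), (1, 1), (1, 0)]  # mostly consistent with f(x)=x
print(bayes_optimal_predict(data, 1))  # → 1
```

The single noisy observation (1, 0) lowers but does not overturn the posterior mass on the identity map, so the averaged prediction at $x=1$ is still $1$.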
20,392
Why is Bayes Classifier the ideal classifier?
The performance in terms of success rate of a classifier relates to the probability that a true class $C_T$ equals the predicted class $C_P$. You could express this probability as the integral over all possible situations of the feature vector $X$ (or sum when $X$ is discrete) and the conditional probability of classifying correctly for those $x$: $$P(C_T=C_P) = \int_{\text{all possible $X$}} f(x)P(C_T=C_P|x) \text{d}x$$ where $f(x)$ is the probability density for the feature vector $X$. If, for some possible set of features $x$, a classifier does not select the most probable class for that set of features, then it can be improved upon. The Bayes classifier always selects the most probable class for each set of features $x$ (the term $P(C_T=C_P|x)$ is maximum), thus cannot be improved upon, at least not based on the features $x$.
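A numeric illustration (Python; the two-Gaussian setup is my own assumption, not part of the original answer): for two equally likely classes with unit-variance Gaussian features centered at $\pm 1$, the integral above, with $P(C_T=C_P|x)=\max_c P(c|x)$, can be evaluated by quadrature and matches the closed-form Bayes accuracy $\Phi(1)$.

```python
import math

# Two classes with equal priors: X | C=0 ~ N(-1, 1), X | C=1 ~ N(+1, 1).
# The integrand f(x) * max_c P(c | x) simplifies to max_c pi_c f_c(x).
def phi(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

def bayes_accuracy(mu=1.0, lo=-10.0, hi=10.0, n=40001):
    """Trapezoid-rule estimate of P(C_T = C_P) for the Bayes classifier."""
    h = (hi - lo) / (n - 1)
    total = 0.0
    for i in range(n):
        x = lo + i * h
        f0, f1 = 0.5 * phi(x, -mu), 0.5 * phi(x, mu)
        w = h if 0 < i < n - 1 else h / 2   # trapezoid end-point weights
        total += w * max(f0, f1)            # f(x) * max_c P(c | x)
    return total

acc = bayes_accuracy(1.0)
closed_form = 0.5 * (1 + math.erf(1 / math.sqrt(2)))  # Phi(1) ≈ 0.8413
assert abs(acc - closed_form) < 1e-6
```

Any rule that deviates from the posterior argmax on a set of positive probability replaces `max(f0, f1)` by the smaller density there, so its accuracy integral can only be lower.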
20,393
Ill-conditioned covariance matrix in GP regression for Bayesian optimization
Another option is to essentially average the points causing problems - for example if you have 1000 points and 50 cause issues, you could take the optimal low rank approximation using the first 950 eigenvalues / vectors. However, this isn't far off removing the datapoints close together, which you said you would rather not do. Please bear in mind though that as you add jitter you reduce the degrees of freedom - ie each point influences your prediction less, so this could be worse than using fewer points.

Another option (which I personally think is neat) is to combine the two points in a slightly smarter way. You could for instance take 2 points and combine them into one, but also use them to determine an approximation for the gradient too. To include gradient information all you need from your kernel is to find $\frac{d}{dx}k(x,x')$ and $\frac{d^2}{dx\,dx'}k(x,x')$. Derivatives usually have no correlation with their observation so you don't run into conditioning issues and retain local information.

Edit: Based on the comments I thought I would elaborate what I meant by including derivative observations. If we use a Gaussian kernel (as an example), $k_{x,x'} = k(x, x') = \sigma\exp(-\frac{(x-x')^2}{l^2})$, its derivatives are $k_{dx,x'} =\frac{dk(x, x')}{dx} = - \frac{2(x-x')}{l^2} \sigma\exp(-\frac{(x-x')^2}{l^2})$ and $k_{dx,dx'} =\frac{d^2k(x, x')}{dx\,dx'} = 2 \frac{l^2 - 2(x-x')^2}{l^4} \sigma\exp(-\frac{(x-x')^2}{l^2})$. Now, let us assume we have some data points $\{x_i, y_i ; i = 1,...,n \}$ and a derivative at $x_1$ which I'll call $m_1$. Let $Y = [m_1, y_1, \dots, y_n]$, then we use a single standard GP with covariance matrix $K = \left( \begin{array}{cccc} k_{dx_1,dx_1} & k_{dx_1,x_1} & \dots & k_{dx_1,x_n} \\ k_{dx_1,x_1} & k_{x_1,x_1} & \dots & k_{x_1,x_n} \\ \vdots & \vdots & \ddots & \vdots \\ k_{dx_1,x_n} & k_{x_1,x_n} & \dots & k_{x_n,x_n} \end{array} \right)$. The rest of the GP is the same as usual.
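A rough numpy sketch of this construction (hypothetical helper names; a sketch under the stated Gaussian kernel, not a definitive implementation) builds the derivative-augmented covariance and also illustrates the conditioning benefit of replacing two nearly coincident points with one (value, derivative) pair:

```python
import numpy as np

# Gaussian kernel k(a, b) = sigma * exp(-(a - b)^2 / l^2) and its derivatives,
# matching the formulas in the answer above.
def k(a, b, sigma=1.0, l=1.0):
    return sigma * np.exp(-(a - b) ** 2 / l ** 2)

def k_dx(a, b, sigma=1.0, l=1.0):        # d k / d a
    return -2 * (a - b) / l ** 2 * k(a, b, sigma, l)

def k_dxdx(a, b, sigma=1.0, l=1.0):      # d^2 k / (d a d b)
    return 2 * (l ** 2 - 2 * (a - b) ** 2) / l ** 4 * k(a, b, sigma, l)

def build_K(x, sigma=1.0, l=1.0):
    """(n+1) x (n+1) covariance: row/col 0 is the derivative at x[0]."""
    n = len(x)
    K = np.empty((n + 1, n + 1))
    K[0, 0] = k_dxdx(x[0], x[0], sigma, l)
    for j in range(n):
        K[0, j + 1] = K[j + 1, 0] = k_dx(x[0], x[j], sigma, l)
    for i in range(n):
        for j in range(n):
            K[i + 1, j + 1] = k(x[i], x[j], sigma, l)
    return K

K = build_K(np.array([0.0, 0.5, 1.3]))
assert np.allclose(K, K.T)                       # symmetric
assert np.all(np.linalg.eigvalsh(K) > -1e-10)    # positive semi-definite

# Two nearly duplicate inputs make the plain Gram matrix nearly singular,
# while a (derivative, value) pair at one point stays well conditioned:
close = np.array([[k(0, 0), k(0, 1e-6)], [k(1e-6, 0), k(1e-6, 1e-6)]])
combined = np.array([[k_dxdx(0, 0), k_dx(0, 0)], [k_dx(0, 0), k(0, 0)]])
assert np.linalg.cond(close) > 1e9
assert np.linalg.cond(combined) < 10
```

The last comparison is the point of the trick: the joint covariance of a value and its derivative at the same location is (block-)diagonal for this kernel, so it stays far from singular where two near-duplicate values would not.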
20,394
Ill-conditioned covariance matrix in GP regression for Bayesian optimization
One solution that we've kicked around at the office is to just alter the troublesome points. This can take the form of straight-up deletion or something more sophisticated. Essentially, the observation is that close-by points are highly redundant: in fact, so redundant that they reduce the rank of the covariance matrix. By the same token, one point is contributing little information to the problem at hand anyway, so removing one or the other (or doing something else, like averaging them or "bouncing" one point away from the other to some minimal acceptable distance) will not really change your solution all that much. I'm not sure how to judge at what point the two points become "too close." Perhaps this could be a tuning option left to the user. (Oops! After I posted this, I found your question here which advances this answer to a much more elaborate solution. I hope that by linking to it from my answer, I'll be helping with SEO...)
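The "averaging" variant above can be sketched as a single greedy pass that absorbs near-duplicates into a running mean. The function, the distance threshold, and the toy data are illustrative, not from any library:

```python
import numpy as np

def merge_close_points(X, y, min_dist=1e-3):
    """Average observations that lie closer than min_dist to a kept point.

    Each surviving point becomes the mean of the cluster of
    near-duplicates it absorbed; the threshold is a tuning choice
    left to the user, as noted above.
    """
    X, y = np.asarray(X, float), np.asarray(y, float)
    keep_X, keep_y, counts = [], [], []
    for xi, yi in zip(X, y):
        for j, xj in enumerate(keep_X):
            if np.linalg.norm(xi - xj) < min_dist:
                # fold the new point into the existing cluster's running mean
                c = counts[j]
                keep_X[j] = (c * keep_X[j] + xi) / (c + 1)
                keep_y[j] = (c * keep_y[j] + yi) / (c + 1)
                counts[j] = c + 1
                break
        else:
            keep_X.append(xi); keep_y.append(yi); counts.append(1)
    return np.array(keep_X), np.array(keep_y)

# two of these four 1-D inputs are nearly identical and get merged
X_merged, y_merged = merge_close_points(
    [[0.0], [1.0], [1.0005], [2.0]], [0.0, 1.0, 3.0, 2.0], min_dist=0.01)
```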
20,395
Using the median for calculating Variance
Mean minimizes the squared error (or L2 norm, see here or here), so the natural choice for measuring variability around the mean is the squared distance (see here on why we square it). On the other hand, the median minimizes the absolute error (L1 norm), i.e. it is a value that is in the "middle" of your data, so the absolute distance from the median (the so-called Median Absolute Deviation, or MAD) seems to be a better measure of the degree of variability around the median. You can read more about these relations in this thread.

Saying it short, variance differs from MAD in how they define the central point of your data, and this influences the way we measure variation of datapoints around it. Squaring the values makes outliers have greater influence on the central point (mean), while in the case of the median all the points have the same impact on it, so the absolute distance seems more appropriate.

This can be shown also by simple simulation. If you compare the squared distances from the mean and from the median, then the total squared distance is always at least as small from the mean as from the median. On the other hand, the total absolute distance is smaller from the median than from the mean. The R code for conducting the simulation is posted below.

sqtest <- function(x) sum((x - mean(x))^2) < sum((x - median(x))^2)
abstest <- function(x) sum(abs(x - mean(x))) > sum(abs(x - median(x)))

mean(replicate(1000, sqtest(rnorm(1000))))
mean(replicate(1000, abstest(rnorm(1000))))
mean(replicate(1000, sqtest(rexp(1000))))
mean(replicate(1000, abstest(rexp(1000))))
mean(replicate(1000, sqtest(runif(1000))))
mean(replicate(1000, abstest(runif(1000))))

In the case of using the median instead of the mean in estimating such "variance", this would lead to higher estimates than when using the mean, as is done traditionally. By the way, the relations of the L1 and L2 norms can be considered also in the Bayesian context, as in this thread.
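For readers outside R, the same check can be sketched in Python with NumPy. Because the mean is the exact minimizer of total squared distance and the median of total absolute distance, the inequalities hold on every draw, not just on average:

```python
import numpy as np

rng = np.random.default_rng(0)

for draw in (rng.normal(size=1000),
             rng.exponential(size=1000),
             rng.uniform(size=1000)):
    m, med = draw.mean(), np.median(draw)
    # the mean minimizes the total squared distance ...
    assert ((draw - m) ** 2).sum() <= ((draw - med) ** 2).sum()
    # ... while the median minimizes the total absolute distance
    assert np.abs(draw - med).sum() <= np.abs(draw - m).sum()
```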
20,396
Why should one do a WOE transformation of categorical predictors in logistic regression?
In the example you link to, the categorical predictor is represented by a single continuous variable taking a value for each level equal to the observed log odds of the response in that level (plus a constant): $$\log \frac{y_j} {n_j-y_j} + \log \frac{\sum_j^k (n_j-y_j)}{\sum_j^k {y_j}}$$ This obfuscation doesn't serve any purpose at all that I can think of: you'll get the same predicted response as if you'd used the usual dummy coding; but the degrees of freedom are wrong, invalidating several useful forms of inference about the model. In multiple regression, with several categorical predictors to transform, I suppose you'd calculate WOEs for each using marginal log odds. That will change the predicted responses; but as confounding isn't taken into account—the conditional log odds aren't a linear function of the marginal log odds—I can't see any reason to suppose it an improvement, & the inferential problems remain.
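For concreteness, the per-level value can be sketched with pandas (the function name and toy data are mine); it implements exactly the expression above, the level's log odds plus the constant log of the overall non-event/event ratio:

```python
import numpy as np
import pandas as pd

def woe_encode(levels, y):
    """WOE per level: log(y_j / (n_j - y_j)) + log(sum(n_j - y_j) / sum(y_j)).

    No smoothing is applied, so a level with zero events or zero
    non-events yields an infinite WOE; real scoring code adds a
    small adjustment to avoid that.
    """
    df = pd.DataFrame({"level": levels, "y": y})
    g = df.groupby("level")["y"].agg(events="sum", n="count")
    non_events = g["n"] - g["events"]
    const = np.log(non_events.sum() / g["events"].sum())
    return (np.log(g["events"] / non_events) + const).to_dict()

# toy data: level "a" has 3 events out of 4, level "b" has 1 out of 4
woe = woe_encode(["a"] * 4 + ["b"] * 4, [1, 1, 1, 0, 1, 0, 0, 0])
```

Replacing each level with its WOE value and fitting a single slope then reproduces the fitted probabilities of the dummy-coded model, which is precisely why the transformation adds nothing here.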
20,397
Why should one do a WOE transformation of categorical predictors in logistic regression?
Coarse classing using the measure of Weight of Evidence (WoE) has the following advantage: WoE displays a linear relationship with the natural logarithm of the odds, which is the dependent variable in logistic regression. Therefore, the question of model misspecification does not arise in logistic regression when we use WoE instead of the actual values of the variable. $\ln\left(\frac{p}{1-p}\right) = \alpha + \beta \, WoE(Var_1) + \gamma \, WoE(Var_2) + \eta \, WoE(Var_3)$ Source: one of the PPTs my trainer showed me during the company training.
20,398
Why should one do a WOE transformation of categorical predictors in logistic regression?
WOE transformations help when you have both numeric and categorical data that you need to combine, and missing values throughout that you would like to extract information from. Converting everything to WOE helps "standardize" many different types of data (even missing data) onto the same log-odds scale. This blog post explains things reasonably well: http://multithreaded.stitchfix.com/blog/2015/08/13/weight-of-evidence/ The short of the story is that logistic regression with WOE should just be (and is) called a Semi-Naive Bayesian Classifier (SNBC). If you are trying to understand the algorithm, the name SNBC is, to me, far more informative.
20,399
TfidfVectorizer: should it be used on train only or train+test
Using TF-IDF vectors that have been calculated on the entire corpus (training and test subsets combined) while training the model might introduce some data leakage and hence yield too-optimistic performance measures. This is because the IDF part of the training set's TF-IDF features will then already include information from the test set. Calculating them completely separately for the training and test sets is not a good idea either, because besides testing the quality of your model you will then also be testing the quality of your IDF estimation. And because the test set is usually small, this will be a poor estimate and will worsen your performance measures. Therefore I would suggest (analogously to the common mean imputation of missing values) performing TF-IDF normalization on the training set separately and then using the IDF vector from the training set to calculate the TF-IDF vectors of the test set.
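In scikit-learn terms this is simply `fit_transform` on the training documents and a plain `transform` on the test documents, so the IDF weights come from the training set alone (the toy documents are illustrative):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

train_docs = ["the cat sat", "the dog barked", "cats and dogs"]
test_docs = ["the cat barked"]

vec = TfidfVectorizer()
X_train = vec.fit_transform(train_docs)  # vocabulary and IDF learned from train only
X_test = vec.transform(test_docs)        # train IDF reused; test-only words are
                                         # simply ignored, so no leakage
```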
20,400
TfidfVectorizer: should it be used on train only or train+test
Usually, as this site's name suggests, you'd want to separate your train, cross-validation and test datasets. As @Alexey Grigorev mentioned, the main concern is having some certainty that your model can generalize to some unseen dataset.

In a more intuitive way, you'd want your model to be able to grasp the relations between each row's features and each row's prediction, and to apply them later on a different, unseen row (or rows). These relations are at the row level, but they are learned by looking at the entire training data. The challenge of generalizing is, then, making sure the model is grasping a formula, not depending (over-fitting) on the specific set of training values.

I'd thus discern between two TF-IDF scenarios, regarding how you consider your corpus:

1. The corpus is at the row level

We have one or more text features that we'd like to TF-IDF in order to discern some term frequencies for this row. Usually it'd be a large text field, important by "itself", like an additional document describing a house-buying contract in a house-sale dataset. In this case the text features should be processed at the row level, like all the other features.

2. The corpus is at the dataset level

In addition to having a row context, there is meaning to the text feature of each row in the context of the entire dataset. Usually a smaller text field (like a sentence). The TF-IDF idea here might be calculating some "rareness" of words, but in a larger context. The larger context might be the entire text column from the train and even the test datasets, since the more corpus knowledge we have, the better we'd be able to ascertain the rareness. And I'd even say you could use the text from the unseen dataset, or even an outer corpus. The TF-IDF here helps you feature-engineer at the row level, from outside (larger, lookup-table-like) knowledge.

Take a look at HashingVectorizer, a "stateless" vectorizer, suitable for a mutable corpus.
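The "stateless" point can be sketched briefly: HashingVectorizer holds no vocabulary, so the same transform applies identically to any corpus, and IDF weighting (if still wanted) can be fit on the training counts only afterwards (toy documents are illustrative):

```python
from sklearn.feature_extraction.text import HashingVectorizer, TfidfTransformer

# no fit step: tokens are hashed straight into a fixed number of columns
hasher = HashingVectorizer(n_features=2**10, alternate_sign=False)
X_train = hasher.transform(["the cat sat", "the dog barked"])
X_new = hasher.transform(["an entirely unseen sentence"])  # same columns, no refit

# if IDF weighting is still wanted, fit it on the training counts only
tfidf = TfidfTransformer().fit(X_train)
X_train_tfidf = tfidf.transform(X_train)
```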