Observed information matrix is a consistent estimator of the expected information matrix?
The answer above using stochastic equicontinuity works very well, but here I am answering my own question by using a uniform law of large numbers to show that the observed information matrix is a strongly consistent estimator of the expected information matrix, i.e. $N^{-1}J_{N}(\hat{\theta}_{N}(Y))\overset{a.s.}{\longrightarrow}I(\theta_{0})$ if we plug in a strongly consistent sequence of estimators. I hope it is correct in all details.
We will use $I_{N}=\{1,2,...,N\}$ as an index set, and let us temporarily adopt the notation $J(\tilde{Y},\theta):=J(\theta)$ in order to be explicit about the dependence of $J(\theta)$ on the random vector $\tilde{Y}$. We shall also work elementwise with $(J(\tilde{Y},\theta))_{rs}$ and $(J_{N}(\theta))_{rs}=\sum\nolimits_{i=1}^{N}(J(Y_{i},\theta))_{rs}$, $r,s=1,...,k$, for this discussion. The function $(J(\cdot,\theta))_{rs}$ is real-valued on the set $\mathbb{R}^{n}\times\Theta^{\circ}$, and we will suppose that it is Lebesgue measurable for every $\theta\in\Theta^{\circ}$. A uniform (strong) law of large numbers gives a set of conditions under which
$\underset{\theta\in\Theta^{\circ}}{\text{sup}}\left|N^{-1}(J_{N}(\theta))_{rs}-E_{\theta}\left[(J(Y_{1},\theta))_{rs}\right]\right|=\underset{\theta\in\Theta^{\circ}}{\text{sup}}\left|N^{-1}\sum\nolimits_{i=1}^{N}(J(Y_{i},\theta))_{rs}-(I(\theta))_{rs}\right|\overset{a.s.}{\longrightarrow}0.\qquad(1)$
The conditions that must be satisfied in order that (1) holds are: (a) $\Theta^{\circ}$ is a compact set; (b) $(J(\tilde{Y},\theta))_{rs}$ is a continuous function on $\Theta^{\circ}$ with probability 1; (c) for each $\theta\in \Theta^{\circ}$, $(J(\tilde{Y},\theta))_{rs}$ is dominated by a function $h(\tilde{Y})$, i.e. $|(J(\tilde{Y},\theta))_{rs}|<h(\tilde{Y})$; and (d) for each $\theta\in \Theta^{\circ}$, $E_{\theta}[h(\tilde{Y})]<\infty$. These conditions come from Jennrich (1969, Theorem 2).
Now for any $y_{i}\in\mathbb{R}^{n}$, $i\in I_{N}$ and $\theta'\in S\subseteq\Theta^{\circ}$, the following inequality obviously holds
$\left|N^{-1}\sum\nolimits_{i=1}^{N}(J(y_{i},\theta'))_{rs}-(I(\theta'))_{rs}\right|\leq\underset{\theta\in S}{\text{sup}}\left|N^{-1}\sum\nolimits_{i=1}^{N}(J(y_{i},\theta))_{rs}-(I(\theta))_{rs}\right|.\qquad(2)$
Suppose that $\{\hat{\theta}_{N}(Y)\}$ is a strongly consistent sequence of estimators for $\theta_{0}$, and let $\Theta_{N_{1}}=B_{\delta_{N_{1}}}(\theta_{0})\subseteq K\subseteq \Theta^{\circ}$ be an open ball in $\mathbb{R}^{k}$ with radius $\delta_{N_{1}}\rightarrow 0$ as $N_{1}\rightarrow\infty$, and suppose $K$ is compact. Then, since $\hat{\theta}_{N}(Y)\in \Theta_{N_{1}}$ for all sufficiently large $N$, we have $P[\underset{N}{\text{lim}}\{\hat{\theta}_{N}(Y)\in\Theta_{N_{1}}\}]=1$. Together with (2) this implies
$P\left[\underset{N\rightarrow\infty}{\text{lim}}\left\{\left|N^{-1}\sum\nolimits_{i=1}^{N}(J(Y_{i},\hat{\theta}_{N}(Y)))_{rs}-(I(\hat{\theta}_{N}(Y)))_{rs}\right|\leq\underset{\theta\in\Theta_{N_{1}}}{\text{sup}}\left|N^{-1}\sum\nolimits_{i=1}^{N}(J(Y_{i},\theta))_{rs}-(I(\theta))_{rs}\right|\right\}\right]=1.\qquad(3)$
Now $\Theta_{N_{1}}\subseteq\Theta^{\circ}$ implies conditions (a)-(d) of Jennrich (1969, Theorem 2) apply to $\Theta_{N_{1}}$. Thus (1) and (3) imply
$P\left[\underset{N\rightarrow\infty}{\text{lim}}\left\{\left|N^{-1}\sum\nolimits_{i=1}^{N}(J(Y_{i},\hat{\theta}_{N}(Y)))_{rs}-(I(\hat{\theta}_{N}(Y)))_{rs}\right|=0\right\}\right]=1.\qquad(4)$
Since $(I(\hat{\theta}_{N}(Y)))_{rs}\overset{a.s.}{\longrightarrow}(I(\theta_{0}))_{rs}$ by the continuity of $I(\theta)$ and the strong consistency of $\hat{\theta}_{N}(Y)$, (4) implies that $N^{-1}(J_{N}(\hat{\theta}_{N}(Y)))_{rs}\overset{a.s.}{\longrightarrow}(I(\theta_{0}))_{rs}$. Note that (3) holds however small $\Theta_{N_{1}}$ is, and so the result in (4) is independent of the choice of $N_{1}$, other than that $N_{1}$ must be chosen so that $\Theta_{N_{1}}\subseteq \Theta^{\circ}$. This result holds for all $r,s=1,...,k$, and so in terms of matrices we have $N^{-1}J_{N}(\hat{\theta}_{N}(Y))\overset{a.s.}{\longrightarrow}I(\theta_{0})$.
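As a small numerical sanity check of the final claim (my own illustration, not part of the proof), consider the Poisson($\theta$) model, where the per-observation observed information is $J(y,\theta)=y/\theta^{2}$, the Fisher information is $I(\theta)=1/\theta$, and the MLE $\hat{\theta}_{N}=\bar{y}$ is strongly consistent:

```python
import numpy as np

# Monte Carlo sketch: Poisson(theta) model, per-observation observed
# information J(y, theta) = y / theta^2, Fisher information I(theta) = 1/theta,
# MLE theta_hat = mean(y).
rng = np.random.default_rng(0)
theta0 = 3.0
for N in (100, 10_000, 1_000_000):
    y = rng.poisson(theta0, size=N)
    theta_hat = y.mean()                         # strongly consistent estimator
    avg_obs_info = np.sum(y / theta_hat**2) / N  # N^{-1} J_N(theta_hat)
    print(N, avg_obs_info, 1 / theta0)           # middle column approaches I(theta0) = 1/3
```

As $N$ grows, $N^{-1}J_{N}(\hat{\theta}_{N})$ settles on $I(\theta_{0})=1/3$, as the result above predicts.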
How to predict when the next event occurs, based on times of previous events?
Hidden Markov models would apply if the data were random emissions from some underlying unobserved Markov model; I wouldn't rule that out, but it doesn't seem a very natural model.
I would think about point processes, which match your particular data well. There is a great deal of work on predicting earthquakes (though I don't know much about it) and even crime.
If there are many different people printing, and you're just seeing the times but not the individual identities, a Poisson process might work well (the superposition of multiple independent point processes is approximately Poisson), though it would have to be inhomogeneous (the chance of a point varies over time): people are less likely to be printing at 3am than at 3pm.
For the inhomogeneous Poisson process model, the key would be getting a good estimate of the chance of a print job at a particular time on a particular day.
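As a rough sketch of that estimation step (my own illustration with simulated timestamps, not real print logs; the hourly binning and 20-day span are assumptions), one can bin historical event times by hour of day to get a time-of-day intensity:

```python
import numpy as np

# Simulated stand-in for historical event times (hours of day, 0-24),
# busiest around 3 pm.
rng = np.random.default_rng(1)
event_hours = rng.normal(loc=15, scale=2, size=500) % 24

days_observed = 20                       # assumed span of the historical log
counts, edges = np.histogram(event_hours, bins=24, range=(0, 24))
rate_per_hour = counts / days_observed   # estimate of lambda(t), events/hour per bin

print(rate_per_hour[15])   # high: mid-afternoon
print(rate_per_hour[3])    # near zero: 3 am
```

With the estimated intensity in hand, the expected number of events in any future window is just the integral (here, the sum) of the rate over that window.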
If these print times are for students in a classroom, though, it could be quite tricky, as they're not likely to be independent and so the Poisson process wouldn't work well.
Here's a link to a paper on the crime application.
How to predict when the next event occurs, based on times of previous events?
For predicting the likely time of the next event, the multivariate Bayesian scan statistic (MBSS) could be of assistance. MBSS has the advantage of improving the timeliness and accuracy of event detection.
Analysis of time series with many zero values
To restate your question: "How does the analyst deal with long periods of no demand that follow no specific pattern?"
The answer to your question is Intermittent Demand Analysis or Sparse Data Analysis. This arises normally when you have "lots of zeros" relative to the number of non-zeros. The issue is that there are two random variables: the time between events and the expected size of the event. As you said, the autocorrelation (acf) of the complete set of readings is meaningless due to the sequence of zeroes falsely enhancing the acf. You can pursue threads like "Croston's method", which is a model-based procedure rather than a data-based procedure. Croston's method is vulnerable to outliers and changes/trends/level shifts in the rate of demand, i.e. the demand divided by the number of periods since the last demand. A much more rigorous approach might be to pursue "Sparse Data - Unequally Spaced Data" or searches like that. A rather ingenious solution was suggested to me by Prof. Ramesh Sharda of OSU, and I have been using it for a number of years in my consulting practice.
If a series has time points where sales arise and long periods of time where no sales arise, it is possible to convert sales to sales per period by dividing the observed sales by the number of periods of no sales, thus obtaining a rate. It is then possible to identify a model between the rate and the interval between sales, culminating in a forecasted rate and a forecasted interval. You can find out more about this at autobox.com and by googling "intermittent demand".
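To make the two-random-variables point concrete, here is a minimal sketch of Croston's method (my own illustration; the single smoothing constant and the toy series are assumptions). It smooths the nonzero demand sizes and the inter-demand intervals separately, and forecasts their ratio:

```python
def croston(demand, alpha=0.1):
    """Basic Croston's method sketch: exponential smoothing of nonzero
    demand sizes (z) and inter-demand intervals (p); forecast = z / p."""
    z = p = None
    q = 1  # periods since the last nonzero demand
    for d in demand:
        if d > 0:
            if z is None:          # initialize on the first demand
                z, p = d, q
            else:
                z = z + alpha * (d - z)
                p = p + alpha * (q - p)
            q = 1
        else:
            q += 1
    return z / p if z is not None else 0.0

# Intermittent series: occasional demands separated by runs of zeros.
series = [0, 0, 3, 0, 0, 0, 2, 0, 4, 0, 0, 3, 0, 0]
print(croston(series))   # smoothed demand rate per period, about 1 unit/period here
```

Note that the zeros never enter the smoothing directly; they only lengthen the interval estimate, which is exactly how the method avoids the zero-inflation problem described above.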
What does the term "sparse prior" refer to (FBProphet Paper)?
Sparse data is data with many zeros. Here the authors seem to call the prior sparse because it favors zeros. This is pretty self-explanatory if you look at the shape of the Laplace (aka double exponential) distribution, which is peaked around zero.
(image source: Tibshirani, 1996)
This effect holds for any value of $\tau$ (the distribution is always peaked at its location parameter, here equal to zero), although the smaller the value of the parameter, the stronger the regularizing effect.
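The peak at zero is easy to see numerically (a sketch of mine, not from the original figure; I match the variances so the comparison is fair, since a Laplace with scale $\tau$ has variance $2\tau^{2}$):

```python
import numpy as np

# Laplace density f(x) = exp(-|x - mu| / tau) / (2 * tau), normal for comparison.
def laplace_pdf(x, mu=0.0, tau=1.0):
    return np.exp(-np.abs(x - mu) / tau) / (2 * tau)

def normal_pdf(x, mu=0.0, sigma=1.0):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Match variances: Laplace variance = 2 * tau^2, so tau = 1/sqrt(2) matches sigma = 1.
tau = 1 / np.sqrt(2)
print(laplace_pdf(0.0, tau=tau))   # ~0.707: sharp peak at zero
print(normal_pdf(0.0))             # ~0.399: flatter at zero

# Smaller tau concentrates the peak further, i.e. stronger regularization:
print(laplace_pdf(0.0, tau=0.1))   # 5.0
```

Even at equal variance, the Laplace prior puts almost twice as much density right at zero, which is the source of its shrinkage-toward-zero behavior.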
For this reason the Laplace prior is often used as a robust prior with a regularizing effect. That said, while the Laplace prior is a popular choice, if you want really sparse solutions there may be better choices, as described by Van Erp et al (2019).
Van Erp, S., Oberski, D. L., & Mulder, J. (2019). Shrinkage Priors for Bayesian Penalized Regression. Journal of Mathematical Psychology, 89, 31-50. doi:10.1016/j.jmp.2018.12.004
Minimizing bias in explanatory modeling, why? (Galit Shmueli's "To Explain or to Predict")
This is indeed a great question, which requires a tour into the world of the use of statistical models in econometric and social science research (from what I have seen, applied statisticians and data miners who do descriptive or predictive work typically don't deal with bias of this form). The term "bias" that I used in the article is what econometricians and social scientists treat as a serious danger to inferring causality from empirical studies. It refers to the difference between your statistical model and the causal theoretical model that underlies it. A related term is "model specification", a topic taught heavily in econometrics due to the importance of "correctly specifying your regression model" (with respect to the theory) when your goal is causal explanation. See the Wikipedia article on Specification for a brief description. A major misspecification issue is under-specification, called "Omitted Variable Bias" (OVB), where you omit an explanatory variable from the regression that should have been there (according to theory) - this is a variable that correlates with the dependent variable and with at least one of the explanatory variables. See this neat description that explains the implications of this type of bias. From a theory point of view, OVB harms your ability to infer causality from the model.
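A small simulation makes OVB tangible (my own illustration, not from the article; the coefficients and correlation structure are made-up assumptions): omitting a regressor that correlates with both the outcome and an included regressor biases the included coefficient.

```python
import numpy as np

# True model: y = 1*x + 1*z + noise, with z correlated with x.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)
x = 0.8 * z + rng.normal(size=n)        # x correlates with the soon-to-be-omitted z
y = 1.0 * x + 1.0 * z + rng.normal(size=n)

# Correctly specified regression of y on [x, z]:
X_full = np.column_stack([x, z])
beta_full = np.linalg.lstsq(X_full, y, rcond=None)[0]

# Underspecified regression omitting z: biased coefficient on x.
beta_omit = np.linalg.lstsq(x[:, None], y, rcond=None)[0]

print(beta_full)   # close to [1.0, 1.0]
print(beta_omit)   # ~1.49: inflated by Cov(x, z) / Var(x) = 0.8 / 1.64
```

The omitted-variable estimate is still fine for predicting $y$ from $x$ alone, but it badly misstates the causal effect of $x$, which is exactly the distinction drawn in the paragraph above.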
In the appendix of my paper To Explain or To Predict? there's an example showing how an underspecified ("wrong") model can sometimes have higher predictive power. But now hopefully you can see why that conflicts with the goal of a "good causal explanatory model".
Minimizing bias in explanatory modeling, why? (Galit Shmueli's "To Explain or to Predict")
In what sense does minimizing the bias in estimates give the most
accurate representation of the underlying theory?
In the usual sense intended in econometrics. In typical economic models some parameters are involved, and the original role of econometrics was to quantify them. So in economics/econometrics models the parameters are the core of the theory. They carry the causal meaning that economists are looking for (or it should be so).
Exactly for this reason, econometrics manuals are mostly focused on concepts like endogeneity and, therefore, bias. For the same reason, at least until a few years ago, estimators like LASSO and ridge (which induce bias) were not considered at all in several econometrics books.
In prediction the theory is not the core, and neither are causal questions. Only the reliability of predicted values is the core, and overfitting is the main related problem. Therefore the focus is not on the parameters, and thus not on bias/endogeneity.
Unfortunately, in past years econometricians created some confusion about the key role of causality. This seems to me related to the problem of conflating causation and prediction.
The article To Explain or to Predict? underscores that a wrong (biased) model can remain useful for prediction. In some cases it can even be better than the right (correctly specified) one. This fact was remarked on in the professor's own reply. In my view the main contribution of the article is that it highlights the fact that, if we understand the difference and avoid the conflation between causation and prediction, we can also understand that some concepts and tools are useful for one purpose but not much for the other.
In several generalist econometrics manuals that also address forecasting problems, the role of overfitting, in terms of in- vs out-of-sample performance, is not discussed at all or, at best, not adequately. Overfitting does not have the same respectability as endogeneity in these texts, while it should if we understand that overfitting concerns prediction and endogeneity concerns causation. I searched a lot for this distinction and it is far from clear in several econometrics books. Some obscurities about causality are related. Only recently have things started to improve … but not enough yet.
I wrote something about these problems on this site. For example:
Endogeneity in forecasting
Regression and causality in econometrics
Are inconsistent estimators ever preferable?
endogenous regressor and correlation
I hope they can help someone.
Moreover
If the theory has many parameters, and we have scant data to estimate
them, the estimation error will be dominated by variance. Why would it
be inappropriate to use a biased estimation procedure like ridge
regression (resulting in biased estimates of lower variance) in this
situation?
Interesting point. Parsimony is good for both prediction and causality. In the basic linear model it can even seem more important for prediction than for causality. The professor's reply (see the appendix in the article) seems to go in this direction: underspecification can be good for prediction. This discussion is strongly related (Paradox in model selection (AIC, BIC, to explain or to predict?)). However, I suggest considering the example in the article as very relevant but, at the same time, as a didactic example; its technical implications should not be exaggerated … econometrics/statistics modeling is a wide and complex area.
In my opinion, whether a good theory can imply a model with many parameters is debatable; parsimony is good in causal models too, in some cases more for causation than for prediction. As a relevant example, so-called big data gives us possibilities that seem to me more relevant for prediction than for causality. In fact big data, with many predictors, is good if we can skip any theoretical scrutiny of them and only correlations matter. This position is fine for pure prediction but hardly justifiable in causal models. The tools that you mention (ridge, LASSO, etc.) are good for big data, and therefore for prediction more than for causation.
Warning 1: here the differences between causation and prediction are taken to the extreme; several overlaps can be invoked. The article itself warns about this fact.
Warning 2: the many-parameters case opens the door to non-parametric models. This is not the standard in economic theory, or at least not yet. Maybe in this area the overlap between prediction and causation is more difficult to disentangle. I have to study more about that.
|
Minimizing bias in explanatory modeling, why? (Galit Shmueli's "To Explain or to Predict")
|
In what sense does minimizing the bias in estimates give the most
accurate representation of the underlying theory?
In the usual sense intended in econometrics. In typical economic models some parame
|
Minimizing bias in explanatory modeling, why? (Galit Shmueli's "To Explain or to Predict")
In what sense does minimizing the bias in estimates give the most
accurate representation of the underlying theory?
In the usual sense intended in econometrics. In typical economic models some parameters are involved, the original role of econometrics was to quantify them. So in economics/econometrics models the parameters are the core of the theory. Them carried out the causal meaning that economists looking for (or it should be so).
Exactly for this reason econometrics manuals are mostly focused on concept like endogeneity and, then, bias. Even for this reason, at least until a few year ago, estimator like LASSO and RIDGE (that induce bias) was not considered at all in several econometrics books.
In prediction the theory is not the core, then nor causal questions are. Only the reliability of predicted values is the core and overfitting is the main related problem. Therefore the focus is not on the parameters, then not on bias/endogeneity.
Unfortunately in past years econometricians made some confusion about the key role of causality. This fact seems me related to the problem of conflation between causation and prediction.
In the article To explain or to predict? is underscored that the wrong model (biased) can remain useful for prediction. In some cases it can be also better than the right one (correctly specified). This fact was remarked in the reply of the Prof herself. In my view the main contribution of the article is that it put light on the fact that, if we understand the difference and avoid the conflation between causation and prediction, we can also understand that some concept and tools are useful for one scope but not much for the other.
In several generalistic econometric manuals that address also forecasting problems, the role of overfitting, in terms of in vs out of sample performance, is not discussed at all or, at best, not adequately. Overfitting do not have the same respectability of endogeneity in these texts, while it should be if we understand that overfitting deal with prediction and endogeneity deal with causation. I checked al lot for this distinction and it is far from clear in several econometrics books. Some obscurities about causality are related. Only recently something start to go better … but not enough yet.
I wrote something about these problem in this site. For example:
Endogeneity in forecasting
Regression and causality in econometrics
Are inconsistent estimators ever preferable?
endogenous regressor and correlation
I hope that them can help someone
Moreover
If the theory has many parameters, and we have scant data to estimate
them, the estimation error will be dominated by variance. Why would it
be inappropriate to use a biased estimation procedure like ridge
regression (resulting in biased estimates of lower variance) in this
situation?
Interesting point. Parsimony is good for both prediction and causality. In the basic linear model it can even seem more important for prediction than for causality. The Professor's reply (see the appendix in the article) seems to go in this direction: underspecification can be good for prediction. This discussion is strongly related (Paradox in model selection (AIC, BIC, to explain or to predict?)). However, I suggest considering the example in the article as very relevant but, at the same time, as a didactic example; its technical implications should not be exaggerated. Econometric/statistical modeling is a wide and complex area.
In my opinion, the opportunity to have a good theory that implies a model with many parameters is debatable; parsimony is good in causal models also, in some cases more for causation than for prediction. As a relevant example, so-called big data gives us possibilities that seem to me more relevant for prediction than for causality. In fact big data, with many predictors, are good if we can skip any theoretical scrutiny about them and only correlations matter. This position is fine for pure prediction but is hardly justifiable in causal models. The tools that you mention (ridge, LASSO, etc.) are good for big data, and therefore for prediction more than for causation.
Warning 1: here the differences between causation and prediction are pushed to the extreme; several overlaps can be invoked. The article itself warns about this fact.
Warning 2: the many-parameters case opens the door to non-parametric models. This is not the standard in economic theory, or at least not yet. Maybe in this area the overlap between prediction and causation is more difficult to disentangle. I have to study more about that.
|
14,208
|
What is the significance of the number of convolution filters in a convolutional network?
|
What does the number of filters in a convolution layer convey?
- I usually like to think of filters as feature detectors. Although it depends on the problem domain, the significance of the # of filters is, intuitively, the number of features (like edges, lines, object parts, etc.) that the network can potentially learn. Also note that each filter generates a feature map. Feature maps let you learn the explanatory factors within the image, so more filters means the network learns more (not necessarily good all the time - saturation and convergence matter the most).
How does this number affect the performance or quality of the architecture?
- I don't think you will find a good answer for these types of questions, since we are still trying to formalize what is going on inside the DL black box. Intuitively, once again, you will learn a more robust non-linear function the more filter banks you have; however, the performance is going to depend on the type of task and the data characteristics. You typically want to know what kind of data you are dealing with to determine the # of parameters in your architecture (including the filters). "How many filters do I need?" is more like asking how complex (spatially) the images in my dataset are. There isn't any formal notion that relates the # of filters to performance. It's all experimental and iterative. Lots of trial and error, for sure.
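To make the filter → feature-map correspondence concrete, here is a minimal NumPy sketch (plain valid cross-correlation; the 8×8 input, 3×3 kernels and filter count are all made up for illustration, and no framework API is implied). Each of the 4 filters slides over the input and produces its own 6×6 feature map:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))               # single-channel input
n_filters = 4
filters = rng.standard_normal((n_filters, 3, 3))  # one 3x3 kernel per filter

# Valid cross-correlation: every filter yields its own feature map,
# so the output depth equals the number of filters.
out = 8 - 3 + 1
feature_maps = np.empty((n_filters, out, out))
for f in range(n_filters):
    for i in range(out):
        for j in range(out):
            feature_maps[f, i, j] = np.sum(image[i:i+3, j:j+3] * filters[f])

print(feature_maps.shape)  # (4, 6, 6): one 6x6 map per filter
```

Doubling `n_filters` doubles the output depth (and the layer's parameter count), which is exactly the capacity knob described above.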
|
14,209
|
How to apply Bayes' theorem to the search for a fisherman lost at sea
|
Assuming independence between the grid cells, then yes it appears Bayes' Theorem has been properly applied.
The denominator can be expanded, e.g.
$$P(X) = P(X|A)P(A) + P(X|A^c)P(A^c)$$
using the law of total probability, where $A^c$ is the complement of $A$, i.e. the person is not there. Likely you would assume $P(X|A^c)=1$.
I'm not really sure what "normalized in the normal Bayesian fashion" means, since I didn't write the manual. But they are certainly talking about the fact that the following three equations are sufficient to find $P(A|X)$: $$P(A|X) \propto P(X|A)P(A), \quad P(A^c|X) \propto P(X|A^c)P(A^c), \mbox{ and } P(A|X)+P(A^c|X) = 1.$$ So you never have to calculate $P(X)$, i.e. the normalizing constant. Whether they used this to update the probability for a single grid cell or for the entire map, I don't know (probably both).
Let's expand the notation to have grid cell $i$ and $A_i$ be the event the individual is in grid cell $i$ and $X_i$ be the event that grid cell $i$ was searched and nobody was found. With the new notation, $X$ is going to be the collection of searches that failed. We assume the following:
$\sum_i P(A_i|X)=1$, i.e. after performing searches, the sum over all cells of the probability that the individual is in that cell is 1. This is the law of total probability again.
If we assume searching in one cell does not tell us anything about any other cell, then for cells that were searched $P(A_i|X) = P(A_i|X_i)\propto P(X_i|A_i)P(A_i)$ and for cells that were not searched $P(A_i|X) \propto P(A_i)$. If we don't assume independence, the formulas will be more complicated but the intuition will be similar, i.e. calculating $P(A_i|X)$ up to a proportionality constant.
We can use these two assumptions to calculate $P(A_i|X)$ and update the map accordingly.
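A short NumPy sketch of this update under the independence assumption (the 3×3 prior and the 0.2 miss probability are made-up numbers, not from any real search): searched-but-empty cells get their prior weight multiplied by $P(X_i|A_i)$, unsearched cells keep their prior weight, and everything is renormalized to sum to 1.

```python
import numpy as np

# Hypothetical 3x3 prior over grid cells (sums to 1).
prior = np.array([[0.10, 0.10, 0.05],
                  [0.15, 0.20, 0.10],
                  [0.10, 0.15, 0.05]])

p_miss = 0.2                       # P(no detection | target in searched cell)
searched = np.zeros_like(prior, dtype=bool)
searched[1, 1] = True              # search the centre cell, find nothing

# Unnormalized posterior: searched cells are down-weighted by p_miss.
unnorm = np.where(searched, p_miss * prior, prior)
posterior = unnorm / unnorm.sum()  # renormalize so the cells sum to 1
```

The searched cell's probability drops (0.20 → ~0.048 here) while every unsearched cell's probability rises proportionally.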
|
14,210
|
How to apply Bayes' theorem to the search for a fisherman lost at sea
|
I was pointed to a book that has a whole chapter dedicated to my question - Naval Operations Analysis - by a former professor who used to be a helicopter pilot and has actually performed search and rescue missions, no less!
In chapter 8 an example is provided something like this (I customized it a bit):
To start with, there's a gridded prior distribution for the location of the missing person(s), boat, etc.
Prior distribution:
A search is performed on part of the grid and the probabilities are updated with a normalized posterior distribution by applying the Bayes' equation in the same way I mentioned in my questions:
$$
P(\text{target in (i,j)}\mid\text{no detection}) =
\frac{P(\text{no detection}\mid\text{target in (i,j)})
\times
P(\text{target in (i,j)})}{P(\text{no detection})}
$$
where (i,j) = (lat,long)
In this case, I decided to search column 3 because that column had the largest total prior probability.
Normalized posterior distribution after searching the third column w/ pFail = 0.2:
My question was mainly about how the posterior was normalized. Here's how it was done in the book - simply divide each individual posterior probability by the total sum, S:
I chose a 0.2 probability of a failed search because my professor had this to say, "We only search to 80% probability of detection because that is typically the best tradeoff between timeliness and accuracy."
Just for kicks, I ran another example with a pFail of 0.5. Whereas in the first example (pFail = 0.2), the next best search route (given the normalized posterior and assuming straight-line searches, no diagonal or zig-zag) would be to fly over column 2, in the second example (pFail = 0.5) the next best route is over row 2.
Normalized posterior distribution after searching the third column w/ pFail = 0.5:
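The column-search update and the divide-by-the-total-sum normalization can be sketched in a few lines of NumPy (the 3×4 prior below is invented for illustration; the book's actual grid values aren't reproduced here):

```python
import numpy as np

# Hypothetical 3x4 gridded prior (rows = lat, cols = long); sums to 1.
prior = np.array([[0.04, 0.06, 0.12, 0.05],
                  [0.05, 0.10, 0.15, 0.08],
                  [0.04, 0.08, 0.13, 0.10]])
p_fail = 0.2

col = prior.sum(axis=0).argmax()   # search the column with the most prior mass
unnorm = prior.copy()
unnorm[:, col] *= p_fail           # flew the whole column, found nothing
S = unnorm.sum()                   # the total sum S from the book's recipe
posterior = unnorm / S             # divide each cell by S to renormalize
```

Re-running with `p_fail = 0.5` leaves more probability in the searched column, which is why the next-best route can change, as in the example above.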
He also added this, "Aircraft carry a small checklist with them to help determine best altitude and airspeed. Working this in a flying helicopter is like sitting atop a washing machine, reading a book that is duct taped to a different washing machine."
|
14,211
|
Why not always use bootstrap CIs?
|
It is beneficial to look at the motivation for the BCa interval and its mechanisms (i.e. the so-called "correction factors"). The BCa intervals are one of the most important aspects of the bootstrap because they are the more general case of the Bootstrap Percentile Intervals (i.e. the confidence interval based solely upon the bootstrap distribution itself).
In particular, look at the relationship between BCa intervals and the Bootstrap Percentile Intervals: when the adjustment for acceleration (the first "correction factor") and skewness (the second "correction factor") are both zero, then the BCa intervals revert back to the typical Bootstrap Percentile Interval.
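That relationship can be made concrete with a small sketch (the data are simulated; estimating the bias-correction $z_0$ from the bootstrap distribution and the acceleration $a$ from the jackknife is one standard recipe, used here for illustration). When $z_0 = 0$ and $a = 0$, the adjusted level $\Phi\!\big(z_0 + (z_0+z_\alpha)/(1 - a(z_0+z_\alpha))\big)$ reduces to $\Phi(z_\alpha) = \alpha$, i.e. the plain percentile interval:

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=50)    # a skewed sample
theta_hat = x.mean()
boot = rng.choice(x, size=(4000, x.size)).mean(axis=1)

nd, alpha = NormalDist(), 0.05
# Bias-correction z0 from the bootstrap distribution; acceleration a
# from the jackknife (the two "correction factors" of the BCa interval).
z0 = nd.inv_cdf((boot < theta_hat).mean())
jack = np.array([np.delete(x, i).mean() for i in range(x.size)])
d = jack.mean() - jack
a = (d**3).sum() / (6 * (d**2).sum()**1.5)

def bca_level(z_alpha):
    return nd.cdf(z0 + (z0 + z_alpha) / (1 - a * (z0 + z_alpha)))

z_lo, z_hi = nd.inv_cdf(alpha / 2), nd.inv_cdf(1 - alpha / 2)
bca_ci = np.quantile(boot, [bca_level(z_lo), bca_level(z_hi)])
pct_ci = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
# Setting z0 = a = 0 makes bca_level(z) = Phi(z), so bca_ci would
# collapse to pct_ci -- the percentile interval is the special case.
```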
I do not think that it would be a good idea to ALWAYS use bootstrapping. Bootstrapping is a robust technique with a variety of mechanisms (ex: confidence intervals, and there are different variations of the bootstrap for different types of problems, such as the wild bootstrap when there is heteroscedasticity) for adjusting for different problems (ex: non-normality), but it relies upon one crucial assumption: the data accurately represent the true population.
This assumption, although simple in nature, can be difficult to verify, especially in the context of small sample sizes (although a small sample could be an accurate reflection of the true population!). If the original sample on which the bootstrap distribution (and hence all of the results that follow from it) is based is not adequately representative, then your results (and hence any decisions based upon those results) will be flawed.
CONCLUSION: There is a lot of ambiguity with the bootstrap and you should exercise caution before applying it.
|
14,212
|
Why not always use bootstrap CIs?
|
This is a situation like a lot of situations that arise when comparing fully nonparametric methods with parametric methods that rely on broad assumptions (e.g., distribution with finite variance leading to the CLT). Assuming that both methods are constructed appropriately, we usually find three things: (1) the parametric method usually works better than the nonparametric method on small sample problems where the underlying assumptions hold; (2) the nonparametric method usually works better than the parametric method on small sample problems where the underlying assumptions for the parametric method are substantially violated; and (3) when the sample size gets large, both methods work about equally well.
Under this circumstance, some practitioners do indeed prefer to always use nonparametric methods. Such practitioners typically prefer to use methods that have minimal assumptions and they are suspicious of making statistical assumptions to facilitate analysis in cases where the dataset is small. That is a perfectly reasonable position to take, so if you prefer to always use bootstrap CIs, my view is that that is a defensible position. Having said that, you should be careful not to exaggerate the assumptions made by other methods. The standard CIs for means, variances, etc., do not require us to assume that the underlying data is normal --- rather, we assume that the underlying data is such that we can apply the CLT so that important sample quantities (e.g., sample mean) are roughly normally distributed over samples that are not too small.
I understand the motivation for not using non-parametric tests systematically, since they have less power, but my simulations tell me this is not the case for bootstrap CIs. They are even smaller.
Smaller is not necessarily good on its own. Smaller CIs are good if the coverage level is accurate. If the CI is too small, such that the actual coverage level is less than the confidence level, that is bad.
If you would like to do a detailed simulation analysis comparing the bootstrap CI to other CIs, I recommend you look at the width of the CIs but also the proportion of the time the true parameter value falls within the CI in the simulations. Ideally, you want the coverage proportion to be the same as the confidence level, but if the confidence level is an underestimate of the true coverage probability, that is not a catastrophic problem. If you do a large simulation study over appropriate cases, you should be able to determine whether the competing methods produce CIs with accurate confidence levels, and whether the intervals produced are more/less accurate (i.e., narrower/wider) under different methods.
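The simulation described above can be sketched in a few lines (a made-up setting: percentile bootstrap CIs for the mean of an exponential(1) population with $n=30$; the counts and seed are arbitrary). For each simulated dataset we record the CI's width and whether it covers the true mean, then compare the coverage proportion with the nominal 95% level:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims, n_boot, alpha = 30, 500, 999, 0.05
true_mean = 1.0                 # exponential(scale=1) has mean 1
covered, widths = 0, []

for _ in range(n_sims):
    x = rng.exponential(scale=1.0, size=n)
    boot_means = rng.choice(x, size=(n_boot, n)).mean(axis=1)
    lo, hi = np.quantile(boot_means, [alpha / 2, 1 - alpha / 2])
    covered += lo <= true_mean <= hi     # does this CI cover the truth?
    widths.append(hi - lo)

print(f"coverage {covered / n_sims:.3f}, mean width {np.mean(widths):.3f}")
```

A coverage proportion noticeably below 0.95 paired with narrow intervals is exactly the "too small" failure mode described above.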
A similar question that bugs me is why not always use the median as the measure of central tendency. People often recommend to use it to characterize non normally-distributed data, but since the median is the same as the mean for normally-distributed data, why make a distinction?
Again, you seem to be proceeding under the view that standard CIs assume normality of the data, when actually what they assume is much weaker than this --- the standard CI for a population mean only assumes that the underlying distribution of the data has finite variance, such that we can apply the CLT to ensure that the sample mean is roughly normally distributed. In samples that are not too small, the sample mean should be roughly normally distributed, but the underlying data usually is not. Consequently, the sample mean will not generally correspond to the sample median in such cases.
Here it is worth noting that the use of the sample mean or median really depends on what you want to make an inference about. If you want a CI for the population mean then the sample mean is a natural consistent estimator; if you want a CI for the population median then the sample median is a natural consistent estimator. In both cases there are applicable CLT results that say that these quantities are roughly normally distributed under weak assumptions for samples that are not too small. Nevertheless, other than for underlying symmetric distributions, these two things do not usually correspond.
|
14,213
|
Why not always use bootstrap CIs?
|
The other day, I came across a situational constraint where bootstrap analysis would not work on my presumed normally distributed sample.
I was at the park with my four-year old daughter who started gathering acorns like they were treasures. Her hands were quickly full, so I gestured that it would be okay to deposit the acorns in my pocket. Well she turned my pants into a sumo suit, and we ended up going home with about 300 acorns.
When we got home, I started wondering what we could learn about the population of fallen acorns from the sample. I weighed the whole sample on a kitchen scale, which came out to be about 1150 g. Then I started thinking about what this sample might be able to say about the tree's population of acorns. The first thing that came to mind was doing a bootstrap analysis.
However, the equipment I have on hand has an accuracy of 1 g. I couldn't just weigh one acorn at a time, since an acorn weighing in at 4 g might actually weigh 3 g, 4 g or 5 g. In order to reduce the amount of scale error, I figured the per-acorn scale accuracy would only be off by a tiny amount if I weighed each sample group together. But this group-weighing constraint meant it was not possible to introduce sampling with replacement, since I could only weigh the acorns as a group. Apart from investing in a better scale, it seemed the best option might be weighing 20 random acorns, say, 25 times, then using the average of these sample means to approximate a population mean and make some inferences. Sampling without replacement seems like the only option under these conditions.
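The group-weighing plan can be simulated to sanity-check it (all numbers below are invented: the per-acorn weights are drawn from an assumed normal distribution around 1150 g / 300 ≈ 3.8 g, and the scale is modeled as rounding each group weight to the nearest gram):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical acorn weights: ~300 acorns averaging about 1150/300 g each.
acorns = rng.normal(loc=1150 / 300, scale=0.8, size=300)

group_means = []
for _ in range(25):
    group = rng.choice(acorns, size=20, replace=False)  # weigh 20 together
    group_weight = round(float(group.sum()))  # scale reads to the nearest gram
    group_means.append(group_weight / 20)     # per-acorn error at most 0.025 g

estimate = np.mean(group_means)  # average of the 25 group means
```

Rounding a ~76 g group to 1 g perturbs its per-acorn mean by at most 0.025 g, which is why group weighing tames the scale error even though it rules out resampling individual acorns with replacement.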
|
14,214
|
Why not always use bootstrap CIs?
|
OP:
This makes me wonder whether there is any good reason not to always use bootstrapping. Given the difficulty of assessing whether a distribution is normal...
Traditional parametric methods rely on the CLT. The data don't have to be Normal, but the sampling distribution should be (asymptotically) Normal.
Alas, bootstrap methods typically also have similar assumptions. The data don't have to be Normal, but the sampling distribution of $\sqrt{n}(\hat\theta-\theta)$ has to be well-defined and well-behaved in order for us to guarantee that the bootstrap works (asymptotically).
See Larry Wasserman's notes:
The bootstrap does not always work. It can fail for a variety of reasons such as when the dimension is high or when [the estimator] is poorly behaved.
So we can't guarantee that the bootstrap will save us from needing a CLT. On the other hand, if a CLT is appropriate, some bootstrap methods can be "second-order accurate." In other words, if your sampling distribution of interest has an asymptotic approximation and you'd rely on the CLT anyway, some kinds of bootstrapping might get you there slightly "faster" (better approximation at lower $n$) and/or with CI coverage closer to nominal. See Section 3 of Davison, Hinkley, Young (2003), "Recent Developments in Bootstrap Methodology".
OP:
A similar question that bugs me is why not always use the median as the measure of central tendency.
The median is one of those estimators that aren't always well-behaved -- whether you're using bootstrap or other methods.
I once was working with a fairly large survey dataset, where one of the questions was about income. We tried to do exactly what you suggest: focus on the median rather than the mean, and use a bootstrap approach to get the CI (though I don't think it was BCa).
It turned out that many respondents had rounded their income to exactly \$50,000. So many, in fact, that in EVERY bootstrap sample the median was also \$50,000! Our bootstrap SE was 0 and our bootstrap CI was (\$50k, \$50k).
We ended up using one of the much older nonparametric CIs for a median, based simply on order statistics of the sample, which (thankfully) gave a reasonable result.
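That failure mode is easy to reproduce in a short simulation; the numbers below are hypothetical, not the original survey data:

```python
import numpy as np

# Hypothetical incomes: a heavy spike at exactly $50,000 plus some spread.
rng = np.random.default_rng(42)
incomes = np.concatenate([
    np.full(600, 50_000.0),                  # many respondents round to $50k
    rng.normal(50_000, 15_000, size=400),
])

boot_medians = np.array([
    np.median(rng.choice(incomes, size=incomes.size, replace=True))
    for _ in range(1000)
])

# With 60% of the data at one value, every resample's median is that value,
# so the bootstrap SE is 0 and the percentile CI collapses to ($50k, $50k).
print(boot_medians.std())   # -> 0.0
```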
|
14,215
|
What do normal residuals mean and what does this tell me about my data?
|
Linear regression in fact models the conditional expected values of your outcome. That means: if you knew the true values of the regression parameters (say $\beta_0$ and $\beta_1$), given a value of your predictor X, filling that out in the equation
$$
E[Y|X] = \beta_0 + \beta_1 X
$$
yields the expected value of $Y$ over all (possible) observations that share this given value of $X$.
However: you don't really expect any single $Y$ value for that given $X$ value to be exactly equal to the (conditional) mean. Not because your model is wrong, but because there are effects you have not accounted for (e.g. measurement error). So the $Y$ values for a given $X$ value will fluctuate around the mean value (i.e. geometrically: around the point on the regression line for that $X$).
The normality assumption says that the difference between the $Y$s and their matching $E[Y|X]$ follows a normal distribution with mean zero. This means that, given an $X$ value, you can sample a $Y$ value by first calculating $\beta_0 + \beta_1 X$ (i.e. $E[Y|X]$, the point on the regression line), next sampling $\epsilon$ from that normal distribution, and adding them:
$$
Y'=E[Y|X] + \epsilon
$$
In short: this normal distribution represents the variability in your outcome on top of the variability explained by the model.
Note: in most datasets, you don't have multiple $Y$ values for any given $X$ (unless your predictor set is categorical), but this normality goes for the whole population, not just the observations in your dataset.
Note: I've done the reasoning for linear regression with one predictor, but the same goes for more: just replace "line" with "hyperplane" in the above.
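The sampling recipe above can be sketched in a few lines (made-up parameter values):

```python
import numpy as np

# The generative story: Y' = E[Y|X] + eps, with hypothetical true parameters.
rng = np.random.default_rng(0)
beta0, beta1, sigma = 1.0, 2.0, 0.5

x = rng.uniform(0, 10, size=5_000)
cond_mean = beta0 + beta1 * x                 # E[Y|X]: the regression line
eps = rng.normal(0.0, sigma, size=x.size)     # the normal error term
y = cond_mean + eps                           # Y' = E[Y|X] + eps

residuals = y - cond_mean                     # fluctuations around the line
```

The residuals recover the error term: they are normal with mean near zero and standard deviation near `sigma`, which is exactly the "variability on top of the model" described above.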
|
14,216
|
What do normal residuals mean and what does this tell me about my data?
|
Normality of the residuals is an assumption underlying inference from a linear model. So, if your residuals are normal, that assumption is satisfied and model inference (confidence intervals, model predictions) should also be valid. It's that simple!
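As a hedged illustration (simulated data, arbitrary parameter values), one common way to check this assumption is a normality test on the fitted residuals:

```python
import numpy as np
from scipy import stats

# Simulated data that satisfy the assumption; in practice, use the
# residuals from your own fitted model instead.
rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3.0 + 1.5 * x + rng.normal(0, 1, 200)

slope, intercept = np.polyfit(x, y, deg=1)    # OLS fit of a line
residuals = y - (intercept + slope * x)

stat, p = stats.shapiro(residuals)            # large p: no evidence against normality
```

A large Shapiro-Wilk p-value here is consistent with (though does not prove) normal residuals; a QQ-plot of `residuals` is a useful visual companion.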
|
14,217
|
What do normal residuals mean and what does this tell me about my data?
|
It could mean a lot or it could mean nothing. If you fit a model to get the highest R-squared, it could mean that you have been foolish. If you fit a parsimonious model in which the variables are necessary and needed, and you take care to identify outliers, then you have done a good job. Take a look here for more on this http://www.autobox.com/cms/index.php?option=com_content&view=article&id=175
|
14,218
|
What do normal residuals mean and what does this tell me about my data?
|
In some cases, the assumption that the data are approximately linear allows us to use OLS, which minimizes the sum of squared deviations of the observations from a straight line.
The residual is then the difference between the observed value and the fitted value, and we hope these differences are centered around zero.
But in many real-life cases the data are not linear, so we may need transformations or more robust methods of estimation.
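As a sketch (simulated data, made-up numbers): with an intercept in the model, OLS residuals average to zero by construction, even though no single residual is zero.

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(0, 5, 300)
y = 1.0 + 2.0 * x + rng.normal(0, 0.3, 300)

slope, intercept = np.polyfit(x, y, 1)        # OLS line fit
residuals = y - (intercept + slope * x)

# residuals sum to (numerically) zero because the intercept absorbs the mean
print(abs(residuals.mean()) < 1e-8)   # -> True
```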
|
14,219
|
Elementary statistics for jurors
|
I very much enjoyed reading Gerd Gigerenzer's book "Das Einmaleins der Skepsis" - I believe there are two English versions, Reckoning with Risk and Calculated Risks.
I think that could be a good brush-up in basic statistics which I'd recommend to everyone. What may be even more important in the context of a jury is that he gives examples of how to talk about statistical topics in a way that can be understood by (statistical) lay persons. And how to translate certain kinds of statements into something that can be understood by humans.
(I could point you to some other nice and relevant popular statistics books, but they are available in German only)
|
14,220
|
Elementary statistics for jurors
|
I don't think you should study anything, unless your goal is to be kicked off during the Voir Dire. Personally, telling lawyers that I am a psychometrician has gotten me removed from a few juries.
|
14,221
|
Elementary statistics for jurors
|
I am not sure that specific statistical knowledge is crucial for jurors. Jurors need to understand the strength of evidence and decide what preponderance of the evidence and beyond a reasonable doubt mean. These are subjective notions. It is up to the prosecution and the defense to present evidence and explain any statistical issues that affect the interpretation of the evidence.
|
14,222
|
Interpreting proportions that sum to one as independent variables in linear regression
|
As follow-up and what I think is the correct answer (seems reasonable to me): I posted this question on to the ASA Connect listserv, and got the following response from Thomas Sexton at Stony Brook:
"Your estimated linear regression model looks like:
ln(Radon) = (a linear expression in other variables) + 0.43M + 0.92I
where M and I represent the percentages of metamorphic and igneous rocks, respectively, in the ZIP code. You are constrained by:
M + I + S = 100
where S represents the percentages of sedimentary rock in the ZIP code.
The interpretation of the 0.43 is that a one percentage point increase in M is associated with an increase of 0.43 in ln(Radon) holding all other variables in the model fixed. Thus, the value of I cannot change, and the only way to have a one percentage point increase in M while satisfying the constraint is to have a one percentage point decrease in S, the omitted category.
Of course, this change cannot occur in ZIP codes in which S = 0, but a decrease in M and a corresponding increase in S would be possible in such ZIP codes."
Here is the link to the thread ASA: http://community.amstat.org/communities/community-home/digestviewer/viewthread?GroupId=2653&MID=29924&tab=digestviewer&UserKey=5adc7e8b-ae4f-43f9-b561-4427476d3ddf&sKey=bf9cef9062314b07a5f2#bm13
I'm posting this as the accepted correct answer, but am still open to further discussion if anyone has something to add.
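The constrained interpretation above can be checked with a small simulation (made-up coefficients and Dirichlet-generated percentages, not the actual radon data):

```python
import numpy as np

# Rock percentages M + I + S = 100 by construction; S is the omitted category.
# True effects (relative to S) are set to the 0.43 and 0.92 from the answer.
rng = np.random.default_rng(2)
n = 2_000
comp = rng.dirichlet([2.0, 2.0, 2.0], size=n) * 100   # rows sum to 100
M, I, S = comp[:, 0], comp[:, 1], comp[:, 2]

log_radon = 1.0 + 0.43 * M + 0.92 * I + rng.normal(scale=1.0, size=n)

X = np.column_stack([np.ones(n), M, I])                # S omitted from the design
coef, *_ = np.linalg.lstsq(X, log_radon, rcond=None)
# coef[1] ~ 0.43: a one-point increase in M traded against S, holding I fixed
```

Despite the exact linear constraint among M, I, and S, the regression with one category omitted is perfectly well identified, and the recovered coefficients carry exactly the "traded against the omitted category" interpretation described above.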
|
14,223
|
Online estimation of quartiles without storing observations
|
The median is the point at which 1/2 the observations fall below and 1/2 above. Similarly, the 25th percentile is the median of the data between the min and the median, and the 75th percentile is the median of the data between the median and the max, so yes, I think you're on solid ground applying whatever median algorithm you use first on the entire data set to partition it, and then on the two resulting pieces.
Update:
This question on stackoverflow leads to this paper: Raj Jain, Imrich Chlamtac: The P² Algorithm for Dynamic Calculation of Quantiles and Histograms Without Storing Observations. Commun. ACM 28(10): 1076-1085 (1985), whose abstract indicates it's probably of great interest to you:
A heuristic algorithm is proposed for dynamic calculation of the median and other quantiles. The estimates are produced dynamically as the observations are generated. The observations are not stored; therefore, the algorithm has a very small and fixed storage requirement regardless of the number of observations. This makes it ideal for implementing in a quantile chip that can be used in industrial controllers and recorders. The algorithm is further extended to histogram plotting. The accuracy of the algorithm is analyzed.
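The partitioning claim in the first paragraph is easy to sanity-check numerically (a quick sketch; the sample and tolerance are arbitrary):

```python
import numpy as np

# Check: the 25th percentile is (approximately) the median of the lower half.
rng = np.random.default_rng(5)
x = rng.normal(size=10_001)

med = np.median(x)
q1_via_partition = np.median(x[x <= med])   # median of {min .. median}
q1_direct = np.percentile(x, 25)

print(abs(q1_via_partition - q1_direct) < 0.01)   # -> True
```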
|
14,224
|
Online estimation of quartiles without storing observations
|
A very slight change to the method you posted and you can compute any arbitrary percentile, without having to compute all of the quantiles. Here's the Python code:
class RunningPercentile:
    def __init__(self, percentile=0.5, step=0.1):
        self.step = step
        self.step_up = 1.0 - percentile
        self.step_down = percentile
        self.x = None

    def push(self, observation):
        if self.x is None:
            self.x = observation
            return
        if self.x > observation:
            self.x -= self.step * self.step_up
        elif self.x < observation:
            self.x += self.step * self.step_down
        if abs(observation - self.x) < self.step:
            self.step /= 2.0
and an example:
import numpy as np
import matplotlib.pyplot as plt

distribution = np.random.normal
running_percentile = RunningPercentile(0.841)
observations = []
for _ in range(1000000):
    observation = distribution()
    running_percentile.push(observation)
    observations.append(observation)

plt.figure(figsize=(10, 3))
plt.hist(observations, bins=100)
plt.axvline(running_percentile.x, c='k')
plt.show()
|
14,225
|
Is there any statistical test that is parametric and non-parametric?
|
It is fundamentally difficult to tell exactly what is meant by a "parametric test" and a "non-parametric test", though there are many concrete examples where most will agree on whether a test is parametric or non-parametric (but never both). A quick search gave this table, which I imagine represents a common practical distinction in some areas between parametric and non-parametric tests.
Just above the table referred to there is a remark:
"... parametric data has an underlying normal distribution .... Anything else is non-parametric."
It may be an accepted criterion in some areas that either we assume normality and use ANOVA, which is parametric, or we don't assume normality and use non-parametric alternatives.
It's perhaps not a very good definition, and it's not really correct in my opinion, but it may be a practical rule of thumb. Mostly because the end goal in the social sciences, say, is to analyze data, and what good is it to be able to formulate a parametric model based on a non-normal distribution and then not be able to analyze the data?
An alternative definition is to define "non-parametric tests" as tests that do not rely on distributional assumptions, and parametric tests as anything else.
Both definitions presented define one class of tests and then define the other class as the complement (anything else). By definition, this rules out that a test can be parametric as well as non-parametric.
The truth is that also the latter definition is problematic. What if there are certain natural "non-parametric" assumptions, such as symmetry, that can be imposed? Will that turn a test statistic that does otherwise not rely on any distributional assumptions into a parametric test? Most would say no!
Hence there are tests in the class of non-parametric tests that are allowed to make some distributional assumptions $-$ as long as they are not "too parametric". The borderline between the "parametric" and the "non-parametric" tests has become blurred, but I believe that most will uphold that either a test is parametric or it is non-parametric, perhaps it can be neither but saying that it is both makes little sense.
Taking a different point of view, many parametric tests are (equivalent to) likelihood ratio tests. This makes a general theory possible, and we have a unified understanding of the distributional properties of likelihood ratio tests under suitable regularity conditions. Non-parametric tests are, on the contrary, not equivalent to likelihood ratio tests per se $-$ there is no likelihood $-$ and without the unifying methodology based on the likelihood we have to derive distributional results on a case-by-case basis. The theory of empirical likelihood developed mainly by Art Owen at Stanford is, however, a very interesting compromise. It offers a likelihood based approach to statistics (an important point to me, as I regard the likelihood as a more important object than a $p$-value, say) without the need of typical parametric distributional assumptions. The fundamental idea is a clever use of the multinomial distribution on the empirical data, the methods are very "parametric" yet valid without restrictive parametric assumptions.
Tests based on empirical likelihood have, IMHO, the virtues of parametric tests and the generality of non-parametric tests, hence among the tests I can think of, they come closest to qualify for being parametric as well as non-parametric, though I would not use this terminology.
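To make the contrast concrete, here is a hedged sketch running a classic parametric test and a common non-parametric counterpart on the same simulated sample (scipy's implementations; the shift of 0.5 is made up):

```python
import numpy as np
from scipy import stats

# Hypothetical sample with a true location shift of 0.5 away from the null.
rng = np.random.default_rng(7)
sample = rng.normal(loc=0.5, scale=1.0, size=60)

# Parametric: one-sample t-test (normal theory for the sampling distribution).
t_stat, t_p = stats.ttest_1samp(sample, popmean=0.0)

# Non-parametric: Wilcoxon signed-rank test (assumes only symmetry about the null).
w_stat, w_p = stats.wilcoxon(sample)
```

Each test sits firmly in one class under the complement-style definitions above; nothing here is "both".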
|
Is there any statistical test that is parametric and non-parametric?
|
It is fundamentally difficult to tell exactly what is meant by a "parametric test" and a "non-parametric test", though there are many concrete examples where most will agree on whether a test is param
|
Is there any statistical test that is parametric and non-parametric?
It is fundamentally difficult to tell exactly what is meant by a "parametric test" and a "non-parametric test", though there are many concrete examples where most will agree on whether a test is parametric or non-parametric (but never both). A quick search gave this table, which I imagine represents a common practical distinction in some areas between parametric and non-parametric tests.
Just above the table referred to there is a remark:
"... parametric data has an underlying normal distribution .... Anything else is non-parametric."
It may be a accepted criterion in some areas that either we assume normality and use ANOVA, and this is parametric, or we don't assume normality and use non-parametric alternatives.
It's perhaps not a very good definition, and it's not really correct in my opinion, but it may be a practical rule of thumb. Mostly because the end goal in the social sciences, say, is to analyze data, and what good is it to be able to formulate a parametric model based on a non-normal distribution and then not be able to analyze the data?
An alternative definition, is to define "non-parametric tests" as tests that do not rely on distributional assumptions and parametric tests as anything else.
The former as well as the latter definition presented defines one class of tests and then defines the other class as the complement (anything else). By definition, this rules out that a test can be parametric as well as non-parametric.
The truth is that also the latter definition is problematic. What if there are certain natural "non-parametric" assumptions, such as symmetry, that can be imposed? Will that turn a test statistic that does otherwise not rely on any distributional assumptions into a parametric test? Most would say no!
Hence there are tests in the class of non-parametric tests that are allowed to make some distributional assumptions $-$ as long as they are not "too parametric". The borderline between the "parametric" and the "non-parametric" tests has become blurred, but I believe that most will uphold that either a test is parametric or it is non-parametric, perhaps it can be neither but saying that it is both makes little sense.
Taking a different point of view, many parametric tests are (equivalent to) likelihood ratio tests. This makes a general theory possible, and we have a unified understanding of the distributional properties of likelihood ratio tests under suitable regularity conditions. Non-parametric tests are, on the contrary, not equivalent to likelihood ratio tests per se $-$ there is no likelihood $-$ and without the unifying methodology based on the likelihood we have to derive distributional results on a case-by-case basis. The theory of empirical likelihood developed mainly by Art Owen at Stanford is, however, a very interesting compromise. It offers a likelihood based approach to statistics (an important point to me, as I regard the likelihood as a more important object than a $p$-value, say) without the need of typical parametric distributional assumptions. The fundamental idea is a clever use of the multinomial distribution on the empirical data, the methods are very "parametric" yet valid without restricting parametric assumptions.
Tests based on empirical likelihood have, IMHO, the virtues of parametric tests and the generality of non-parametric tests, hence among the tests I can think of, they come closest to qualify for being parametric as well as non-parametric, though I would not use this terminology.
Is there any statistical test that is parametric and non-parametric?
Parametric is used in (at least) two meanings: A- To declare you are assuming the family of the noise distribution up to its parameters. B- To declare you are assuming the specific functional relationship between the explanatory variables and the outcome.
Some examples:
A quantile regression with a linear link would qualify as B-parametric and A-non-parametric.
Spline smoothing of a time series with Gaussian noise can qualify as A-non-parametric and B-parametric.
The term "semi-parametric" usually refers to case B and means you are not assuming the whole functional relation, but rather you have milder assumptions such as "additive in some smooth transformation of the predictors".
You could also have milder assumptions on the distribution of the noise, such as "all moments are finite", without specifying the shape of the distribution. To the best of my knowledge, there is no term for this type of assumption.
Note that the answer relates to the underlying assumptions behind the data generating process. When saying "a-parametric test", one usually refers to non-parametric in sense A. If this is what you meant, then I would answer "no": it would be impossible to be parametric and non-parametric in the same sense at the same time.
Is there any statistical test that is parametric and non-parametric?
I suppose that depends on what they mean by "parametric and non-parametric"? At the same time exactly both, or a blend of the two?
Many consider the Cox proportional hazards model to be semi-parametric, as it doesn't parametrically estimate the baseline hazard.
Or you might choose to view many non-parametric statistics as actually massively parametric.
Is there any statistical test that is parametric and non-parametric?
Bradley, in his classic Distribution-Free Statistical Tests (1968, p. 15–16 - see this question for a quote), clarifies the difference between distribution-free and nonparametric tests, which he says are often conflated with each other, and gives the Sign test for the median as an example of a parametric distribution-free test. This test makes no assumption about the underlying distribution of the sampled population of variate values, so it is distribution-free. However, if the hypothesized median is correct, values fall above and below it with equal probability, so the count of sampled values above the median estimate follows a binomial distribution with $p=0.5$ $-$ making the test simultaneously parametric.
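Bradley's point can be illustrated numerically (a hypothetical sketch of mine, not from the book): no matter how skewed the parent distribution is, under the null the count of observations above the true median is Binomial$(n, 0.5)$, which is all the sign test uses.

```python
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
# Sample from a heavily skewed distribution -- the test does not care.
x = rng.exponential(scale=2.0, size=60)

median_0 = np.log(2) * 2.0           # true median of Exponential(scale=2)
n_above = int(np.sum(x > median_0))  # ties (x == median_0) would be dropped

# Under H0, the count above the median is Binomial(n, 0.5):
result = binomtest(n_above, n=len(x), p=0.5)
print(result.pvalue)
```

Nothing about the exponential shape enters the calculation; only the binomial null distribution does.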
Update
Based on the discussion in the comments (thank you, whuber), it seems as if Bradley is in the minority, and what Bradley calls distribution-free, most others call parametric. And while nothing can really be $(A \cap \neg A)$ simultaneously, the answer to the question may well depend on how you define the term, whether you make Bradley's distinction or call both elements of Bradley "parametric".
Why does Q-learning overestimate action values?
$$Q(s, a) = r + \gamma \text{max}_{a'}[Q(s', a')]$$
Since Q values are very noisy, when you take the max over all actions, you're probably getting an overestimated value. Think of it like this: the expected value of a dice roll is 3.5, but if you throw the dice 100 times and take the max over all throws, you're very likely getting a value that is greater than 3.5 (think of every possible action value at state $s$ as a dice roll).
If all values were equally overestimated this would be no problem, since what matters is the difference between the Q values. But if the overestimations are not uniform, this might slow down learning (because you will spend time exploring states that you think are good but aren't).
The proposed solution (Double Q-learning) is to use two different function approximators trained on different samples: one for selecting the best action and the other for calculating the value of that action. Since the two function approximators have seen different samples, it is unlikely that they overestimate the same action.
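The dice intuition can be checked with a quick simulation (a hypothetical sketch of my own, using standard-normal noise rather than dice): every action's true value is zero, yet the max over noisy estimates is biased upward.

```python
import numpy as np

rng = np.random.default_rng(42)
n_actions, n_trials = 10, 100_000

# True Q(s, a) = 0 for every action; estimates carry zero-mean noise.
noisy_q = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n_actions))

max_estimate = noisy_q.max(axis=1).mean()  # E[max_a Qhat(s, a)]
true_max = 0.0                             # max_a Q(s, a)

print(max_estimate)  # roughly 1.54 for 10 standard-normal draws
```

The bias here is exactly Jensen's gap between "the max of expectations" (0) and "the expectation of the max" (about 1.54 for ten standard normals).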
Why does Q-learning overestimate action values?
I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper says
These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value.
Together, these seem to be saying that when the $Q$ function is in reality stochastic, observed rewards $\hat{r}$ resulting from a state-action pair $(s,a)$ will have some (0-mean) noise associated with them, e.g. $\hat{r}=r+\epsilon$. Then, because $Q$ is updated based on $\max_aQ_\text{old}$, the maximum value will tend to be a combination of high reward $r$ and/or large positive noise realizations $\epsilon$. By assuming $r_\max\approx\hat{r}_\max$ and ignoring $\epsilon$, the value of $Q$ will tend to be an over-estimate.
(As noted I am unfamiliar with this area, and only glanced at Wikipedia and the above abstract, so this interpretation could be wrong.)
|
Why does Q-learning overestimate action values?
|
I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper s
|
Why does Q-learning overestimate action values?
I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper says
These overestimations result from a positive bias that is introduced because Q-learning uses the maximum action value as an approximation for the maximum expected action value.
Together, these seem to be saying that when the $Q$ function is in reality stochastic, observed rewards $\hat{r}$ resulting from a state-action pair $(s,a)$ will have some (0-mean) noise associated with them, e.g. $\hat{r}=r+\epsilon$. Then, because $Q$ is updated based on $\max_aQ_\text{old}$, the maximum value will tend to be a combination of high reward $r$ and/or large positive noise realizations $\epsilon$. By assuming $r_\max\approx\hat{r}_\max$ and ignoring $\epsilon$, the value of $Q$ will tend to be an over-estimate.
(As noted I am unfamiliar with this area, and only glanced at Wikipedia and the above abstract, so this interpretation could be wrong.)
|
Why does Q-learning overestimate action values?
I am not very familiar with reinforcement learning, but the very next line in the Wikipedia article you cite (currently) refers to the paper Double Q-learning (NIPS 2010). The abstract to that paper s
|
14,231
|
Why does Q-learning overestimate action values?
First, I want to quote from the Sutton and Barto book:
... In these algorithms, a maximum over estimated values is used implicitly as an estimate of the maximum value, which can lead to a significant positive bias. To see why, consider a single state s where there are many actions a whose true values, q(s, a), are all zero but whose estimated values, Q(s, a), are uncertain and thus distributed some above and some below zero.
It's a little bit vague, so here is a simple example where the true values Q1(s, X) = Q2(s, X) = 0 for every action X, but in practice the estimated values may be uncertain:
Q1(s,A) = 0.1, Q1(s,B) = 0, Q1(s,C) = -0.1
Q2(s,A) = -0.1, Q2(s,B) = 0.1, Q2(s,C) = 0
If you only update Q1 by itself, it always tends to select A at s to update. But if you select max_a Q2(s,a) to update Q1, then Q2 can compensate for the situation. You also have to use Q1 to train Q2 in the other direction. The noise in Q2 is independent of that in Q1, since Q1 and Q2 are trained separately on different datasets.
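The decoupling described above can be checked with a small numerical sketch (my own illustration, not from the original answer): all true action values are zero, Q1 and Q2 carry independent zero-mean noise, and evaluating Q1's argmax with Q2 removes the upward bias.

```python
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_trials = 5, 200_000

# Two independent noisy estimates of the same true values (all zero).
q1 = rng.normal(size=(n_trials, n_actions))
q2 = rng.normal(size=(n_trials, n_actions))

single = q1.max(axis=1).mean()                   # biased upward
best_a = q1.argmax(axis=1)                       # select with Q1 ...
double = q2[np.arange(n_trials), best_a].mean()  # ... evaluate with Q2

print(single, double)  # single is clearly positive, double is near 0
```

Because Q2's noise is independent of which action Q1 happened to favor, the cross-evaluated estimate is unbiased here, which is the core idea behind double Q-learning.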
Why does Q-learning overestimate action values?
It is based on the Optimizer's Curse (OC from now on). (And a lot of other math, which connects the OC to Q-learning. Here is an article written by the original author of the DDQN algorithm covering this connection.)
Normal Explanation:
Essentially, the OC states that if we constantly choose the maximum of our estimates of an outcome, then on average that estimate will lie above the true maximum we are trying to predict. I.e., we overestimate.
This relates to the Q-value in the following way: the Q-value of the state-action pair (s,a) is an estimate of the maximum expected future reward gained by following the optimal policy $\pi$.
The way we approximate this optimal policy $-$ and therefore the Q-value of (s,a) $-$ is with the following well-known equation:
$Q(s_t,a_t) \leftarrow R_{t+1} + \gamma \max_{a_{t+1}} Q(s_{t+1},a_{t+1})$
Concluding: we try to approximate the optimal value of the current state, which is itself an expectation of future rewards, by constantly taking the maximum of the estimated values at the next state.
This falls under the OC, and therefore we overestimate in our max term.
Very short explanation:
We have a "curse," which tells us, that constantly taking the maximum of our expectations/estimates will give us, on average, an estimate that is higher than the thing we're trying to estimate. I.e we overestimate.
As Q-learning is the act of estimating the maximum future rewards, with its accompanying approximating and well-known equation, it too falls under the curse thanks to the max-term in this equation.
Diagnostic plot for assessing homogeneity of variance-covariance matrices
An article Visualizing Tests for Equality of Covariance Matrices, by Michael Friendly and Matthew Sigal, has just appeared in print in The American Statistician (Volume 74, 2020 - Issue 2, pp 144-155). It suggests several graphical procedures to compare covariance matrices.
The authors' R package heplot supports these procedures. The illustrations in this post are modifications of those in the article based on the supplemental code maintained at https://github.com/mattsigal/eqcov_supp/blob/master/iris-ex.R. (I have removed some distracting graphical elements.)
Let's go there step by step, using the well-known Iris dataset, which will require us to compare three covariance matrices of $d=4$ variables.
Here is a scatterplot of two of its four variables with symbol size and color distinguishing the three species of Iris.
As usual, the first two bivariate moments of any group can be depicted using a covariance ellipse. It is a contour of the Mahalanobis distance centered at the point of means. The software shows two such contours, presumably estimating 68% and 95% tolerance ellipses (for bivariate Normal distributions). (The contour levels are found, as usual, by referring to quantiles of a suitable chi-squared distribution.)
Provided the data don't have outliers and strong nonlinearities, these provide a nice visual summary, as we can see simply by erasing the data:
The first innovation is to plot a pooled covariance ellipse. This is obtained by first recovering the sums of squares and products matrices upon multiplication of each covariance matrix by the degrees of freedom in its estimation. Those SSP matrices are then summed (componentwise, of course) and the result is divided by the total degrees of freedom. We may distinguish the pooled covariance ellipse by shading it:
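As an aside, the pooling just described is easy to compute directly (a sketch in Python/NumPy for illustration; the article's own code is in R): multiply each covariance matrix by its degrees of freedom, sum, and divide by the total degrees of freedom.

```python
import numpy as np

def pooled_covariance(groups):
    """Degrees-of-freedom-weighted pooling of per-group covariance matrices."""
    # Recover each SSP matrix as (n_i - 1) * S_i, sum them componentwise,
    # then divide by the total degrees of freedom.
    ssp_total = sum((len(g) - 1) * np.cov(g, rowvar=False) for g in groups)
    df_total = sum(len(g) - 1 for g in groups)
    return ssp_total / df_total

rng = np.random.default_rng(1)
groups = [rng.normal(size=(50, 4)) for _ in range(3)]  # three groups, d = 4
S_pooled = pooled_covariance(groups)
print(S_pooled.shape)  # (4, 4)
```

With equal group sizes this reduces to a simple average of the three covariance matrices; with unequal sizes the larger groups carry more weight.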
The second innovation translates all ellipses to a common center:
For example, the Virginica covariance is similar to the Versicolor covariance but tends to be larger. The Setosa covariance is smaller and oriented differently, clearly distinguishing the Setosa sepal width-length relationship from that of the other two species.
(Note that because the contour level (such as 68% or 95%) merely rescales all ellipses equally, the choice of which level to use for this plot is no longer material.)
The final innovation emulates the scatterplot matrix: with $d \gt 2$ variables, create a $d\times d$ array doubly indexed by those variables and, in the cell for variables "X" and "Y," draw all the covariance ellipses for those two variables, including the pooled ellipse. Distinguish the covariances graphically using the line style for the contours and/or a fill style for the polygons they bound. Choose a relatively prominent style for the pooled ellipse: here, it is the only one that is filled and it has the darkest boundary.
A pattern emerges in which the Setosa covariance matrix departs from those for the other two species and that of Virginica (still shown in red) tends to exhibit larger values overall.
Although this "bivariate slicing" approach doesn't allow us to see everything that's going on in these covariance matrices, the visualization is a pretty good start at making a reasoned comparison of covariance matrices. Further simplification of the graphical representation is possible (using design principles inspired by, say, Tufte or Bertin) and, I think, likely to make this approach even more effective.
When $d$ grows large (in my experience, greater than $8$ becomes unwieldy unless you're willing to produce output on a high-resolution large-format printer, but even then $40$ is around an upper limit), some kind of variance reduction technique is called for. Friendly and Sigal explore PCA solutions. Of interest are the applications that focus on the principal components with smallest eigenvalues.
How could one develop a stopping rule in a power analysis of two independent proportions?
This is an interesting problem, and the associated techniques have lots of applications. They are often called "interim monitoring" strategies or "sequential experimental design" (the wikipedia article, which you linked to, is unfortunately a little sparse), but there are several ways to go about this. I think @user27564 is mistaken in saying that these analyses must necessarily be Bayesian--there are certainly frequentist approaches to interim monitoring too.
Your first approach resembles one of the original approaches to interim monitoring, called 'curtailment.' The idea is very simple: you should stop collecting data once the experiment's outcome is inevitable. Suppose you've got a collection of 100 $As$ and/or $Bs$ and you want to know whether it was generated by a process that selects an $A$ or $B$ at random each time (i.e., $P(A)=P(B)=0.5$). In this case, you should stop as soon as you count at least 58 items of the same kind; counting the remaining items won't change the significance after that point. The number $58$ comes from finding $x \textrm{ such that } 1-F(x;100;0.5) \lt \alpha$, where $F$ is the cumulative binomial distribution.
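The boundary of 58 in the example above comes straight from the stated criterion $1-F(x;100,0.5) \lt \alpha$; here is a sketch (assuming $\alpha = 0.05$ and SciPy) that reproduces it.

```python
from scipy.stats import binom

def curtailment_boundary(n, p, alpha):
    """Smallest x with 1 - F(x; n, p) < alpha, as in the text."""
    for x in range(n + 1):
        if binom.sf(x, n, p) < alpha:  # sf(x) = 1 - cdf(x) = P(X > x)
            return x
    return None

print(curtailment_boundary(100, 0.5, 0.05))  # 58
```

The same function gives the stopping count for other sample sizes or significance levels; just swap in the appropriate `n` and `alpha`.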
Similar logic lets you find the "inevitability points" for other tests where:
The total sample size* is fixed, and
Each observation contributes a bounded amount to the test statistic.
This would probably be easy for you to implement--calculate the stopping criteria offline and then just plug it into your site's code--but you can often do even better if you're willing to terminate the experiment not only when the outcome is inevitable, but when it is also very unlikely to change.
This is called stochastic curtailment. For example, suppose, in the example above, that we've seen 57 $A$s and 2 $B$s. We might feel reasonably confident, if not absolutely certain, that there is at least one more $A$ in the box of 100, and so we could stop. This review by Christopher Jennison and Bruce Turnbull works through Stochastic Curtailment in Section 4. They also have a longer book; you can peek at Chapter 10 via Google Books. In addition to the derivation, the book has some formulae where you can more or less plug in the results of your interim tests.
There are a number of other approaches too. Group sequential methods are designed for situations where you may not be able to obtain a set number of subjects and the subjects trickle in at variable rates. Depending on your site's traffic, you might or might not want to look into this.
There are a fair number of R packages floating around CRAN, if that's what you're using for your analysis. A good place to start might actually be the Clinical Trials Task View, since a lot of this work came out of that field.
[*] Just some friendly advice: be careful when looking at significance values calculated from very large numbers of data points. As you collect more and more data, you will eventually find a significant result, but the effect might be trivially small. For instance, if you asked the whole planet whether they prefer A or B, it's very unlikely that you would see an exact 50:50 split, but it's probably not worth retooling your product if the split is 50.001:49.999. Keep checking the effect size (i.e., difference in conversion rates) too!
|
14,235
|
How could one develop a stopping rule in a power analysis of two independent proportions?
|
You can stop early, but if you do, your p-values aren't easily interpreted. If you don't care about the interpretation of your p-value, then the way in which the answers to your first two questions are 'no' doesn't matter (too much). Your client seems pragmatic, so the true interpretation of a p-value is probably not a fine point you care about.
I can't speak to the second approach you propose.
However, the first approach is not on solid ground. Normal approximations of binomial distributions aren't valid for proportions that low (this is the method power.prop.test uses, and also the method used by Cohen in his classic book on power). Moreover, as far as I am aware, there is no closed-form power analysis solution for two-sample proportion tests (cf. How can one perform a two-group binomial power analysis without using normal approximations?). There are, however, better methods of approximating the confidence intervals of proportions (cf. the package binom). You can use non-overlapping confidence intervals as a partial solution... but this is not the same as estimating a p-value and thus doesn't provide a route to power directly. I hope somebody has a nice closed-form solution they will share with the rest of us. If I stumble on one, I'll update the above-referenced question. Good luck.
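As a sketch of the non-overlapping-interval idea, here is the Wilson score interval (one of the better proportion approximations of the kind the binom package implements); the conversion counts are made up for illustration:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (approx. 95% for z = 1.96)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# Hypothetical conversion counts for two site variants
lo_a, hi_a = wilson_interval(30, 1000)   # variant A: 3.0% observed conversion
lo_b, hi_b = wilson_interval(60, 1000)   # variant B: 6.0% observed conversion
print("A:", (round(lo_a, 4), round(hi_a, 4)))
print("B:", (round(lo_b, 4), round(hi_b, 4)))
print("non-overlapping:", hi_a < lo_b)
```

Non-overlap of the two intervals is a conservative (and, as noted above, only partial) criterion; it is not equivalent to a hypothesis test on the difference.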
Edit: While I am thinking about it, let me be totally pragmatic here for a moment. Your client wants this experiment to end when they are certain that the experimental site is working better than the control site. After you get a decent sample, if you aren't ready to make a decision, just start adjusting the ratio of your random assignment towards whichever side is 'winning'. If it was just a blip, regression towards the mean will slip in, you'll become less certain, and you can ease off the ratio. When you are reasonably certain, call it quits and declare a winner. The optimal approach would probably involve Bayesian updating, but I don't know enough about that topic off the top of my head to direct you. However, I can assure you that while it may seem counterintuitive at times, the math itself isn't all that hard.
|
14,236
|
How could one develop a stopping rule in a power analysis of two independent proportions?
|
Some established group-sequential methods could be used here, such as:
Pocock
O'Brien and Fleming
Peto
These adjust the p-value cutoff based on interim results and will help you stop collecting data early, economizing resources and time.
Other approaches could be added here as well.
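As a rough sketch of how the Pocock and O'Brien–Fleming boundaries differ in shape: the constants below are illustrative approximations for 5 equally spaced looks at two-sided $\alpha = 0.05$; in practice you would take exact values from published tables or a package such as R's gsDesign:

```python
from math import sqrt

K = 5  # number of equally spaced interim looks

# Illustrative boundary constants for K = 5, two-sided alpha = 0.05
# (approximate values from standard tables; check before real use)
POCOCK_C = 2.413
OBF_C = 2.04

for k in range(1, K + 1):
    pocock_z = POCOCK_C              # flat critical value at every look
    obf_z = OBF_C * sqrt(K / k)      # very strict early, near-nominal at the final look
    print(f"look {k}: Pocock z = {pocock_z:.3f}, O'Brien-Fleming z = {obf_z:.3f}")
```

Pocock spends alpha evenly across looks, while O'Brien–Fleming makes early stopping hard and keeps the final test close to the fixed-sample criterion.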
|
14,237
|
How could one develop a stopping rule in a power analysis of two independent proportions?
|
The questions you have are typical questions that emerge in statistical tests. There are two 'flavours' of statistics out there, the frequentist and the Bayesian. The frequentist answer to both of your questions is easy:
NO
No, you can't stop early
No, you can't measure just longer
Once you have defined your setup, you are not allowed to even look at the data (blind analysis). From the frequentist point of view, there is no way around it: no cheating, no tricks!
(EDIT: Of course, there are attempts to do so, and they will also work if used correctly, but most of them are known to introduce biases. )
But there is the Bayesian point of view, which is quite different. In contrast to the frequentist approach, the Bayesian approach needs an additional input: the a-priori probability distribution. We can also call it previous knowledge, or prejudice. Having this, we can use the data/measurement to update our knowledge to the a-posteriori probability. The point is, we can use the data, and even more, we can use the data at every intermediate point of the measurement. In each update, the last posterior is our new prior and we can update it with a new measurement to our up-to-date knowledge. No early stopping problem at all!
I found a talk discussing problems quite similar to the ones you have, and to what I described above:
http://biostat.mc.vanderbilt.edu/wiki/pub/Main/JoAnnAlvarez/BayesianAdaptivePres.pdf
But beside this, are you really sure you need this at all? It seems that you have some system running that decides where to link a request. For this you don't need to prove that your decisions are correct in a statistical sense with a hypothesis test. Have you ever bought a Coke because you could exclude, with 95% probability, that Pepsi is 'right'? It's sufficient to take the one which is just better, without excluding a hypothesis. That would be a trivial algorithm: calculate the uncertainty of rate A, calculate the uncertainty of rate B. Take the difference of both rates and divide it by the uncertainty of the difference. The result is something like the significance of the difference in sigma. Then just take all the links where there is more than a two or three sigma difference. The drawback is that you will never know whether a single decision was statistically correct with some evidence, but on average you will have higher conversion rates.
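A minimal sketch of that trivial algorithm, using binomial standard errors for each rate (the counts are made up and the function name is mine):

```python
from math import sqrt

def rate_difference_sigma(conv_a, n_a, conv_b, n_b):
    """Difference of two conversion rates divided by the uncertainty of the difference."""
    pa, pb = conv_a / n_a, conv_b / n_b
    se = sqrt(pa * (1 - pa) / n_a + pb * (1 - pb) / n_b)
    return (pb - pa) / se

# Hypothetical interim counts: 120/4000 conversions for A, 160/4000 for B
sigma = rate_difference_sigma(conv_a=120, n_a=4000, conv_b=160, n_b=4000)
print(f"difference = {sigma:.2f} sigma")  # switch to B once this exceeds 2 or 3
```

With these made-up counts the difference is still below three sigma, so the system would keep splitting traffic.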
|
14,238
|
How should I mentally deal with Borel's paradox?
|
As a Bayesian, I would say Borel's paradox has nothing (or very little) to do with Bayesian statistics, except that Bayesian statistics uses conditional distributions, of course. The reason there is no paradox in defining a posterior distribution as conditional on a set of measure zero $\{X=x\}$ is that $x$ is not chosen in advance, but as the result of the observation. Thus, if we want to use exotic definitions for the conditional distributions on sets of measure zero, there is zero chance that those sets will contain the $x$ that we will observe in the end. The conditional distribution is defined uniquely almost everywhere and hence almost surely wrt our observation. This is also the meaning of the (great) quote of A. Kolmogorov in the Wikipedia entry.
A spot in Bayesian analysis where measure-theoretic subtleties may turn into a paradox is the Savage-Dickey representation of the Bayes factor, since it depends on a specific version of the prior density (as discussed in our paper on the topic...)
|
14,239
|
How should I mentally deal with Borel's paradox?
|
I'm not sure we ever do condition on events of probability zero in real life. Suppose I measure a person's mass as 123.45678 kg. Going forwards, I'm not conditioning on their mass being exactly 123.45678 kg. I'm conditioning on myself having measured their mass as 123.45678 kg, something which is consistent with their mass being anywhere in the range [123.456775 kg, 123.456785 kg] - i.e. it's an event of nonzero probability. So I don't see how the paradox would ever arise.
|
14,240
|
How to define number of clusters in K-means clustering?
|
The method I use is CCC (the Cubic Clustering Criterion). I look for CCC to increase to a maximum as I increment the number of clusters by 1, and then observe when the CCC starts to decrease. At that point I take the number of clusters at the (local) maximum. This would be similar to using a scree plot to pick the number of principal components.
SAS Technical Report A-108 Cubic Clustering Criterion (pdf)
$n$ = number of observations
$n_k$ = number in cluster $k$
$p$ = number of variables
$q$ = number of clusters
$X$ = $n\times p$ data matrix
$M$ = $q\times p$ matrix of cluster means
$Z$ = cluster indicator ($z_{ik}=1$ if obs. $i$ in cluster $k$, 0 otherwise)
Assume each variable has mean 0:
$Z'Z = \text{diag}(n_1, \cdots, n_q)$, $M = (Z'Z)^{-1}Z'X$
$SS$(total) matrix = $T = X'X$
$SS$(between clusters) matrix = $B = M'Z'ZM$
$SS$(within clusters) matrix = $W = T-B$
$R^2 = 1 - \frac{\text{trace}(W)}{\text{trace}(T)}$
(trace = sum of diagonal elements)
Stack columns of $X$ into one long column.
Regress on Kronecker product of $Z$ with $p\times p$ identity matrix
Compute $R^2$ for this regression – same $R^2$
The CCC idea is to compare the $R^2$ you get for a given set of clusters with the $R^2$ you would get by clustering a uniformly distributed set of points in $p$ dimensional space.
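A small sketch of the $R^2$ part of the computation above, using the identities trace$(T)$ = total sum of squares of the centered data and trace$(W)$ = within-cluster sum of squares (pure Python; the data, labels, and function name are made up for illustration — the full CCC additionally compares this $R^2$ to the uniform reference, which is not done here):

```python
def cluster_r2(X, labels):
    """R^2 = 1 - trace(W)/trace(T) for a given clustering of the rows of X."""
    n, p = len(X), len(X[0])
    # Center each variable (column) at mean 0, as assumed in the derivation
    col_means = [sum(row[j] for row in X) / n for j in range(p)]
    Xc = [[row[j] - col_means[j] for j in range(p)] for row in X]

    # trace(T) = trace(X'X) = total sum of squared (centered) entries
    sst = sum(v * v for row in Xc for v in row)

    # trace(W) = sum of squared distances of points to their cluster mean
    ssw = 0.0
    for k in set(labels):
        rows = [Xc[i] for i in range(n) if labels[i] == k]
        m = [sum(r[j] for r in rows) / len(rows) for j in range(p)]
        ssw += sum((r[j] - m[j]) ** 2 for r in rows for j in range(p))
    return 1 - ssw / sst

X = [[0, 0], [0, 1], [10, 10], [10, 11]]
print(cluster_r2(X, [0, 0, 1, 1]))  # close to 1: the clusters explain almost all variance
```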
|
14,241
|
What is bits per dimension (bits/dim) exactly (in pixel CNN papers)?
|
It is explained on page 12 here in great detail.
and is also discussed
here although in not as much detail.
Compute the negative log likelihood in base e, apply change of base
for converting log base e to log base 2, then divide by the number of
pixels (e.g. 3072 pixels for a 32x32 rgb image).
To change base for the log, just divide the log base e value by log(2)
-- e.g. in python it's like: (nll_val / num_pixels) / numpy.log(2)
and
As noted by DWF, the continuous log-likelihood is not directly
comparable to discrete log-likelihood. Values in the PixelRNN paper
for NICE's bits/pixel were computed after correctly accounting for the
discrete nature of pixel values in the relevant datasets. In the case
of the number in the NICE paper, you'd have to subtract log(128) from
the log-likelihood of each pixel (this is to account for data
scaling).
I.e. -((5371.78 / 3072.) - 4.852) / np.log(2.) = 4.477
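The quoted arithmetic can be reproduced directly; here is a small sketch (the function name is mine) converting a per-example log-likelihood in nats to bits per dimension, with the log(128) data-scaling correction applied:

```python
from math import log

def bits_per_dim(log_likelihood_nats, num_dims, log_rescale=0.0):
    """Convert a per-example log-likelihood in nats into bits per dimension."""
    return -(log_likelihood_nats / num_dims - log_rescale) / log(2)

# The NICE number from the quote: log-likelihood 5371.78 nats over 3072 dims,
# minus log(128) per pixel to account for data scaling
print(round(bits_per_dim(5371.78, 3072, log_rescale=log(128)), 3))  # → 4.477
```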
|
14,242
|
What is bits per dimension (bits/dim) exactly (in pixel CNN papers)?
|
To add to the answer above, the log-likelihood is your reconstruction loss. In the case of a 256-way softmax it is the categorical cross-entropy.
If you are using tensorflow eg: tf.nn.sparse_softmax_cross_entropy_with_logits the log-likelihood is in natural log so you need to divide by np.log(2.)
If your reconstruction loss is reported as the mean, e.g. tf.reduce_mean you don't need to divide with the image dimensions and/or batch size. On the other hand if it is tf.reduce_sum you will need to divide with the batch size and dimensions of the image.
In case your model outputs continuous values (e.g. an L2 loss) for reconstruction, you are directly modeling a Gaussian distribution. For that you need to do a transformation, which I am not 100% sure works, but which is reported in Masked Autoregressive Flow for Density Estimation.
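A tiny sketch of the mean-vs-sum normalization point, with plain Python standing in for the TensorFlow reductions (the `nll` values are made-up per-pixel negative log-likelihoods in nats):

```python
from math import log

# Hypothetical per-pixel NLLs (nats) for a batch of 2 "images" of 4 pixels each
nll = [[2.0, 1.5, 1.8, 2.2],
       [1.9, 2.1, 1.6, 2.0]]
batch, dims = len(nll), len(nll[0])

total = sum(v for row in nll for v in row)        # like tf.reduce_sum
mean = total / (batch * dims)                     # like tf.reduce_mean over all elements

bpd_from_sum = (total / (batch * dims)) / log(2)  # must divide by batch size and dims
bpd_from_mean = mean / log(2)                     # already per element: no extra division
assert abs(bpd_from_sum - bpd_from_mean) < 1e-12
print(bpd_from_mean)
```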
|
14,243
|
Stan $\hat{R}$ versus Gelman-Rubin $\hat{R}$ definition
|
I followed the specific link given for Gelman & Rubin (1992) and it has
$$
\hat{\sigma} = \frac{n-1}{n}W+ \frac{1}{n}B
$$
as in the later versions, although $\hat{\sigma}$ replaced with $\hat{\sigma}_+$ in Brooks & Gelman (1998) and with $\widehat{\rm var}^+$ in BDA2 (Gelman et al, 2003) and BDA3 (Gelman et al, 2013).
BDA2 and BDA3 (couldn't check now BDA1) have an exercise with hints to show that $\widehat{\rm var}^+$ is unbiased estimate of the desired quantity.
Brooks & Gelman (1998) has equation 1.1
$$
\hat{R} = \frac{m+1}{m}\frac{\hat{\sigma}_+}{W} - \frac{n-1}{mn},
$$
which can be rearranged as
$$
\hat{R} = \frac{\hat{\sigma}_+}{W} + \frac{\hat{\sigma}_+}{Wm}- \frac{n-1}{mn}.
$$
We can see that the effect of second and third term are negligible for decision making when $n$ is large. See also the discussion in the paragraph before Section 3.1 in Brooks & Gelman (1998).
Gelman & Rubin (1992) also had the term with df as df/(df-2). Brooks & Gelman (1998) have a section describing why this df correction is incorrect, and define (df+3)/(df+1) instead. The paragraph before Section 3.1 in Brooks & Gelman (1998) explains why (df+3)/(df+1) can be dropped.
It seems your source for the equations was something post Brooks & Gelman (1998), as you had (df+3)/(df+1) there, while Gelman & Rubin (1992) had df/(df-2). Otherwise Gelman & Rubin (1992) and Brooks & Gelman (1998) have equivalent equations (with slightly different notation, and some terms are arranged differently). BDA2 (Gelman et al., 2003) no longer has the terms $\frac{\hat{\sigma}_+}{Wm}- \frac{n-1}{mn}$. BDA3 (Gelman et al., 2013) and Stan introduced the split-chains version.
My interpretation of the papers and experiences using different versions of $\hat{R}$ is that the terms which have been eventually dropped can be ignored when $n$ is large, even when $m$ is not. I also vaguely remember discussing this with Andrew Gelman years ago, but if you want to be certain of the history, you should ask him.
Usually M is not too large, and can often be as low as 2
I really do hope that this is not often the case. In cases where you want to use the split-$\hat{R}$ convergence diagnostic, you should use at least 4 chains, split, and thus have M=8. You may use fewer chains if you already know that in your specific case the convergence and mixing are fast.
Additional reference:
Brooks, S. P., and Gelman, A. (1998). General methods for monitoring convergence of iterative simulations. Journal of Computational and Graphical Statistics, 7(4), 434-455.
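A minimal sketch of the basic (non-split) $\hat{R}$ from the formulas above, $\widehat{\rm var}^+ = \frac{n-1}{n}W + \frac{1}{n}B$ and $\hat{R} = \sqrt{\widehat{\rm var}^+/W}$; for the split version you would first cut each chain in half and treat the halves as separate chains (the function name and example draws are mine):

```python
from math import sqrt

def rhat(chains):
    """Basic potential scale reduction factor for m chains of length n."""
    m, n = len(chains), len(chains[0])
    means = [sum(c) / n for c in chains]
    grand = sum(means) / m
    # B = n * (variance of the chain means, denominator m - 1)
    B = n * sum((mu - grand) ** 2 for mu in means) / (m - 1)
    # W = mean of the within-chain sample variances (denominator n - 1)
    W = sum(sum((x - mu) ** 2 for x in c) / (n - 1)
            for c, mu in zip(chains, means)) / m
    var_plus = (n - 1) / n * W + B / n
    return sqrt(var_plus / W)

print(rhat([[1.0, 2.0, 3.0, 4.0], [1.1, 2.1, 2.9, 4.2]]))  # near 1: chains overlap well
```

For chains that have not mixed (chain means far apart relative to within-chain spread), B dominates and $\hat{R}$ becomes much larger than 1.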
|
14,244
|
Differences between logistic regression and perceptrons
You have already mentioned the important differences, so the results should not differ that much.
Differences between logistic regression and perceptrons
There is actually a substantial difference, which is related to the technical differences that you mentioned. Logistic regression models a function of the mean of a Bernoulli distribution as a linear equation (the mean being equal to the probability p of a Bernoulli event). By using the logit link as a function of the mean (p), the logarithm of the odds (log-odds) can be derived analytically and used as the response of a so-called generalised linear model. Parameter estimation for this GLM is then a statistical process which yields p-values and confidence intervals for the model parameters. On top of prediction, this allows you to interpret the model for causal inference. This is something that you cannot achieve with a linear Perceptron.
The Perceptron is a reverse-engineered form of logistic regression: instead of taking the logit of y, it takes the inverse logit (logistic) function of wx, and it does not use probabilistic assumptions for either the model or its parameter estimation. Online training will give you exactly the same estimates for the model weights/parameters, but you won't be able to interpret them for causal inference due to the lack of p-values, confidence intervals, and, well, an underlying probability model.
Long story short, logistic regression is a GLM which can perform prediction and inference, whereas the linear Perceptron can only achieve prediction (in which case it will perform the same as logistic regression). The difference between the two is also the fundamental difference between statistical modelling and machine learning.
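The contrast in training can be made concrete with a small sketch (Python/NumPy, purely illustrative; the data, learning rate, and epoch count are made up). The perceptron rule and online logistic regression use the same update, differing only in whether a hard threshold or the logistic function appears in the error term:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X @ np.array([1.5, -2.0]) + 0.5 > 0).astype(float)  # separable toy labels

def step(z):
    return 1.0 if z > 0 else 0.0          # perceptron activation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))       # logistic activation

def train(activation, lr=0.1, epochs=50):
    w, b = np.zeros(2), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            err = yi - activation(xi @ w + b)   # the only line that differs
            w += lr * err * xi
            b += lr * err
    return w, b

w_perc, b_perc = train(step)       # hard-threshold updates (perceptron rule)
w_logit, b_logit = train(sigmoid)  # graded updates (online logistic regression)
```

Both fits separate this toy data well; the difference lies in what you can do with the fitted weights afterwards, as described above.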
Differences between logistic regression and perceptrons
I believe one difference you're missing is the fact that logistic regression returns a principled classification probability, whereas perceptrons classify with a hard boundary.
This is mentioned in the Wikipedia article on multinomial logistic regression.
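As a toy illustration (Python; the weights here are arbitrary, not fitted to anything): both models compute the same linear score, but logistic regression maps it through the sigmoid to a graded probability while the perceptron only thresholds it.

```python
import numpy as np

w, b = np.array([2.0, -1.0]), 0.5     # arbitrary illustrative weights
x = np.array([0.3, 0.8])

z = float(x @ w + b)                  # linear score, shared by both models
p = 1.0 / (1.0 + np.exp(-z))          # logistic regression: probability in (0, 1)
label = int(z > 0)                    # perceptron: hard 0/1 decision only
```

A score near zero yields a probability near 0.5, conveying uncertainty that the hard label throws away.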
Basic questions about discrete time survival analysis
Assume $K$ is the largest value of $k$ (i.e. the largest month/period observed in your data).

1. Here is the hazard function with a fully discrete parametrization of time, with a vector of parameters $\mathbf{B}$ and a vector of conditioning variables $\mathbf{X}$: $h_{j,k} = \frac{e^{\alpha_{k} + \mathbf{BX}}}{1 + e^{\alpha_{k} + \mathbf{BX}}}$. The hazard function may also be built around alternative parameterizations of time (e.g. include $k$ or functions of it as a variable in the model), or around a hybrid of both.
2. The baseline logit hazard function describes the probability of event occurrence in time $k$, conditional upon having survived to time $k$. Adding predictors ($\mathbf{X}$) to the model further constrains this conditionality.
3. No, the logistic regression estimates (e.g. $\hat{\alpha}_{1}$, $\dots$, $\hat{\alpha}_{K}$, $\mathbf{\hat{B}}$) are not the hazard functions themselves. The logistic regression models logit$(h_{j,k}) = \alpha_{k} + \mathbf{BX}$, and you need to perform the anti-logit transform in (1) above to get the hazard estimates.
4. Yes, although I would notate it $\hat{S}_{j,q} = \prod_{i=1}^{q}{(1-h_{j,i})}$. The survival function is the probability of not experiencing the event by time $k$, and of course may also be conditioned on $\mathbf{X}$.
5. This is a subtle question; I am not sure I have answers. I do have questions, though. :) The sample size at each time period decreases over time due to right-censoring and due to event occurrence: would you account for this in your calculation of mean survival time? How? What do you mean by "the population?" What population are the individuals recruited to your study generalizing to? Or do you mean some statistical "super-population" concept? Inference is a big challenge in these models, because we estimate $\beta$s and their standard errors, but need to do delta-method back-flips to get standard errors for $\hat{h}_{j,k}$, and (from my own work) deriving valid standard errors for $\hat{S}_{j,k}$ works only on paper (I can't get correct CI coverages for $\hat{S}_{j,k}$ in conditional models).
6. You can use Kaplan-Meier-like step-function graphs, and you can also use straight-up line graphs (i.e. connect the dots between time periods with a line). You should use the latter only when the concept of "discrete time" itself admits the possibility of subdivided periods. You can also plot/communicate estimates of cumulative incidence, which is $1 - S_{j,k}$ (at least epidemiologists will often define "cumulative incidence" this way; the term is used differently in competing-risks models; the term uptake may also be used here).
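To make the hazard and survival computations concrete, here is a small numeric sketch (Python; the $\alpha_k$, $\mathbf{B}$, and $\mathbf{X}$ values are invented for illustration):

```python
import numpy as np

def inv_logit(z):
    return 1.0 / (1.0 + np.exp(-z))

alpha = np.array([-3.0, -2.5, -2.2, -2.0])  # hypothetical alpha_k for K = 4 periods
beta, x = 0.8, 1.0                          # hypothetical B and X for subject j

h = inv_logit(alpha + beta * x)  # hazard h_{j,k}: P(event in k | survived to k)
S = np.cumprod(1.0 - h)          # survival: S_{j,q} = prod_{i=1}^{q} (1 - h_{j,i})
cum_inc = 1.0 - S                # cumulative incidence, 1 - S_{j,k}
```

The hazard is obtained by the anti-logit transform of the linear predictor, and the survival curve is the running product of the complements of the hazards.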
Standard error of random effects in R (lme4) vs Stata (xtmixed)
According to the [XT] manual for Stata 11:
Standard errors for BLUPs are calculated based on the iterative
technique of Bates and Pinheiro (1998, sec. 3.3) for estimating the
BLUPs themselves. If estimation is done by REML, these standard errors
account for uncertainty in the estimate of $\beta$, while for ML the
standard errors treat $\beta$ as known. As such, standard errors of
REML-based BLUPs will usually be larger.
As the Stata ML standard errors match the standard errors from R in your example, it seems R is not accounting for uncertainty in estimating $\beta$. Whether it should, I don't know.
From your question, you have tried REML in both Stata and R, and ML in Stata with REML in R. If you try ML in both, you should get the same results in both.
Which robust correlation methods are actually used?
Coming from a psychology perspective, Pearson and Spearman's correlation do appear to be the most common. However, I think a lot of researchers in psychology engage in various data manipulation procedures on constituent variables prior to performing Pearson's correlation. I imagine any examination of robustness should consider the effects of:
transformations of one or both variables in order to make variables approximate a normal distribution
adjustment or deletion of outliers based on a statistical rule or knowledge of problems with an observation
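A quick sketch of why the outlier issue matters (Python with SciPy, on simulated data): a single gross outlier can wreck Pearson's correlation while barely moving Spearman's rank-based estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=50)
y = x + rng.normal(scale=0.1, size=50)   # strong linear relationship

x_out = np.append(x, 10.0)               # add one gross discordant outlier
y_out = np.append(y, -10.0)

pearson_clean = stats.pearsonr(x, y)[0]      # near 1 on the clean data
pearson_out = stats.pearsonr(x_out, y_out)[0]  # collapses with the outlier
spearman_out = stats.spearmanr(x_out, y_out)[0]  # ranks absorb the outlier
```

This is exactly the situation that motivates either deleting/adjusting the outlier before Pearson's correlation or switching to a rank-based measure.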
Which robust correlation methods are actually used?
I would recommend this excellent article published in Science in 2011, which I previously posted here. It proposes a new robust measure, together with an exhaustive and excellent comparison against other measures. Moreover, all measures are tested for robustness. Note that this new measure is also capable of identifying more than one functional relationship in the data, as well as non-functional relationships.
Which robust correlation methods are actually used?
Kendall's tau is very widely used in copula theory, probably because it is a very natural quantity to consider for Archimedean copulas. Plots of the cumulative Kendall tau were introduced by Genest and Rivest as a way to choose a model among families of bivariate copulas.
Link to Genest Rivest (1993) paper
Which robust correlation methods are actually used?
Some robust measures of correlation are:
Spearman’s Rank Correlation Coefficient
Signum (Blomqvist) Correlation Coefficient
Kendall’s Tau
Bradley’s Absolute Correlation Coefficient
Shevlyakov Correlation Coefficient
References:
• Blomqvist, N. (1950) "On a Measure of Dependence between Two Random Variables", Annals of Mathematical Statistics, 21(4): 593-600.
• Bradley, C. (1985) “The Absolute Correlation”, The Mathematical Gazette, 69(447): 12-17.
• Shevlyakov, G.L. (1997) “On Robust Estimation of a Correlation Coefficient”, Journal of Mathematical Sciences, 83(3): 434-438.
• Spearman, C. (1904) "The Proof and Measurement of Association between Two Things", American Journal of Psychology, 15: 88-93.
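The first three of these are straightforward to compute. Here is an illustrative sketch (Python with SciPy, on simulated data; Blomqvist's coefficient is written out by hand since SciPy does not ship it):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 0.7 * x + 0.5 * rng.normal(size=200)   # noisy linear dependence

rho = stats.spearmanr(x, y)[0]    # Spearman's rank correlation coefficient
tau = stats.kendalltau(x, y)[0]   # Kendall's tau

# Blomqvist's signum (quadrant) coefficient: average sign agreement of
# deviations from the two medians
q = np.mean(np.sign(x - np.median(x)) * np.sign(y - np.median(y)))
```

All three depend on the data only through ranks or signs about the medians, which is what gives them their robustness.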
Which robust correlation methods are actually used?
Biweight midcorrelation, implemented in R (very fast) via WGCNA and in Python (not so fast) via astropy, is my go-to for network analysis.
For sparse compositional data, there are also SparCC and FastSpar.
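The estimator is simple enough to write out by hand. Below is a sketch in Python following the weighting scheme WGCNA describes (median/MAD standardization with Tukey biweights); treat it as illustrative rather than a drop-in replacement for the packaged versions:

```python
import numpy as np

def bicor(x, y, c=9.0):
    """Biweight midcorrelation: a median/MAD-based, outlier-resistant
    analogue of Pearson's correlation."""
    def weighted_dev(v):
        med = np.median(v)
        mad = np.median(np.abs(v - med))
        u = (v - med) / (c * mad)
        w = (1.0 - u**2) ** 2 * (np.abs(u) < 1)  # Tukey biweight, 0 for |u| >= 1
        return w * (v - med)
    a, b = weighted_dev(x), weighted_dev(y)
    return np.sum(a * b) / np.sqrt(np.sum(a**2) * np.sum(b**2))
```

Points further than about c MADs from the median receive zero weight, so a gross outlier is effectively dropped from both the numerator and the denominator.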
Bounding mutual information given bounds on pointwise mutual information
My contribution consists of an example. It illustrates some limits on how the mutual information can be bounded given bounds on the pointwise mutual information.
Take $X = Y = \{1,\ldots, n\}$ and $p(x) = 1/n$ for all $x \in X$. For any $m \in \{1,\ldots, n/2\}$ let $k > 0$ be the solution to the equation
$$m e^{k} + (n - m) e^{-k} = n.$$
Then we place point mass $e^k / n^2$ in $nm$ points in the product space $\{1,\ldots,n\}^2$ in such a way that there are $m$ of these points in each row and each column. (This can be done in several ways. Start, for instance, with the first $m$ points in the first row and then fill out the remaining rows by shifting the $m$ points one to the right with a cyclic boundary condition for each row). We place the point mass $e^{-k}/n^2$ in the remaining $n^2 - nm$ points. The sum of these point masses is
$$\frac{nm}{n^2} e^{k} + \frac{n^2 - nm}{n^2} e^{-k} = \frac{me^k + (n-m)e^{-k}}{n} = 1,$$
so they give a probability measure. All the marginal point probabilities are
$$\frac{m}{n^2} e^{k} + \frac{n - m}{n^2} e^{-k} = \frac{1}{n},$$
so both marginal distributions are uniform.
By the construction it is clear that $\mathrm{pmi}(x,y) \in \{-k,k\},$ for all $x,y \in \{1,\ldots,n\}$, and (after some computations)
$$I(X;Y) = k \frac{nm}{n^2} e^{k} - k \frac{n^2 - nm}{n^2} e^{-k} = k\Big(\frac{1-e^{-k}}{e^k - e^{-k}} (e^k + e^{-k}) - e^{-k}\Big),$$
with the mutual information behaving as $k^2 / 2$ for $k \to 0$ and as $k$ for $k \to \infty$.
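The construction is easy to verify numerically. The sketch below (Python; $n$ and $m$ chosen arbitrarily) solves the defining equation for $e^k$ as a quadratic, builds the joint distribution by cyclic shifts, and checks the uniform marginals and the closed-form mutual information:

```python
import numpy as np

n, m = 6, 2
# m e^k + (n-m) e^{-k} = n is quadratic in t = e^k: m t^2 - n t + (n-m) = 0;
# the larger root gives k > 0
t = (n + np.sqrt(n**2 - 4 * m * (n - m))) / (2 * m)
k = np.log(t)

P = np.full((n, n), np.exp(-k) / n**2)   # light cells: mass e^{-k} / n^2
for i in range(n):
    for j in range(m):                   # m heavy cells per row, cyclically shifted
        P[i, (i + j) % n] = np.exp(k) / n**2

pmi = np.log(P * n**2)                   # marginals are uniform, p(x)p(y) = 1/n^2
I = np.sum(P * pmi)                      # mutual information by direct summation
I_formula = k * ((1 - np.exp(-k)) / (np.exp(k) - np.exp(-k))
                 * (np.exp(k) + np.exp(-k)) - np.exp(-k))
```

The direct sum and the closed-form expression agree, and every pointwise mutual information value is $\pm k$ as claimed.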
Bounding mutual information given bounds on pointwise mutual information
I'm not sure if this is what you are looking for, as it is mostly algebraic and not really leveraging the properties of $p$ being a probability distribution, but here is something you can try.
Due to the bounds on pmi, clearly $\frac{p(x,y)}{p(x)p(y)}\leq e^k$ and thus $p(x,y)\leq p(x)p(y)\cdot e^k$. We can substitute for $p(x,y)$ in $I(X;Y)$ to get $I(X;Y)\leq \sum_{x,y}p(x)p(y)\cdot e^k\cdot \log\left(\frac{p(x)p(y)\cdot e^k}{p(x)p(y)}\right) = \sum_{x,y}p(x)p(y)\cdot e^k\cdot k = k\,e^k$, since $\sum_{x,y}p(x)p(y) = 1$.
I'm not sure if that's helpful or not.
EDIT: Upon further review I believe this is actually less useful than the original upper bound of k. I won't delete this though in case it might hint at a starting point.
What's the typical range of possible values for the shrinkage parameter in penalized regression?
You don't really need to bother. In most packages (like glmnet), if you do not specify $\lambda$, the software generates its own sequence (which is often recommended). The reason I stress this is that while running the LASSO the solver generates a sequence of $\lambda$ values, so while it may seem counterintuitive, providing a single $\lambda$ value may actually slow the solver down considerably (when you provide an exact parameter, the solver resorts to solving a semidefinite program, which can be slow even for reasonably 'simple' cases).
As for the exact value of $\lambda$, you can potentially choose whatever you want from $[0,\infty)$. Note that if your $\lambda$ value is too large, the penalty will be too large and hence none of the coefficients can be non-zero. If the penalty is too small, you will overfit the model and this will not be the best cross-validated solution.
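The two extremes are easy to demonstrate. The sketch below uses scikit-learn's Lasso in Python rather than glmnet (its `alpha` plays the role of $\lambda$, up to a difference in parameterization), with simulated data; it is illustrative only:

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 0.0]) + rng.normal(size=100)

big = Lasso(alpha=100.0).fit(X, y)   # penalty far too large: every coefficient zero
small = Lasso(alpha=0.01).fit(X, y)  # penalty tiny: essentially the OLS fit
```

With an overwhelming penalty the entire coefficient vector is shrunk to zero, while a near-zero penalty recovers roughly the unpenalized fit; cross-validation searches between these extremes.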
What's the typical range of possible values for the shrinkage parameter in penalized regression?
For those trying to figure this out:
I have found that there is a great difference between allowing glmnet to calculate $\lambda$, and for when we create a range for it to choose from (grid).
Here is an example using "applicants" in the College data set from ISLR
# Load the required packages
library(ISLR)
library(glmnet)
# Don't forget to set seed
set.seed(1)
train <- sample(1:dim(College)[1], 0.75*dim(College)[1])
# Model matrices
xmat.train <- model.matrix(Apps~.-1, data = College[train,])
xmat.test <- model.matrix(Apps~.-1, data = College[-train,])
y <- College$Apps[train]
# Create a grid of values for the scope of lambda (optional):
grid <- 10 ^ seq(10, -2, length = 100)
# Add the grid here as lambda (optional)
ridge.fit <- glmnet(xmat.train, y, alpha = 0, lambda = grid)
cv.ridge <- cv.glmnet(xmat.train, y, alpha = 0, lambda = grid)
bestlam <- cv.ridge$lambda.min
cat("\nBestlam (with grid):", bestlam)
pred <- predict(ridge.fit, s = bestlam, newx = xmat.test)
cat("\nWith Grid:", mean((College$Apps[-train] - pred)^2))
# Again, but without the grid (allowing glmnet to choose lambda)
ridge.fit <- glmnet(xmat.train, y, alpha = 0)
cv.ridge <- cv.glmnet(xmat.train, y, alpha = 0)
bestlam <- cv.ridge$lambda.min
cat("\n\nBestlam (no grid):", bestlam)
pred <- predict(ridge.fit, s = bestlam, newx = xmat.test)
cat("\nWithout Grid:", mean((College$Apps[-train] - pred)^2))
You can run this yourself, and you can change grid accordingly as well, I've seen examples ranging from grid <- 10 ^ seq(10,-2,length = 100) to grid <- 10^seq(3, -2, by = -.1).
My best guess is that $\lambda$ can be restricted to certain values, and it is up to us to figure out the optimal range.
I have also found this guide quite helpful -> https://drsimonj.svbtle.com/ridge-regression-with-glmnet
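For a rough feel of what such a grid spans, here is a Python sketch with scikit-learn's Ridge (its `alpha` is not on the same scale as glmnet's $\lambda$, and the data are synthetic, so only the qualitative picture carries over): at the top of the grid every coefficient is crushed toward zero, while at the bottom the fit is essentially unpenalized.

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)
X = rng.normal(size=(150, 4))
y = X @ np.array([3.0, -1.0, 2.0, 0.5]) + rng.normal(size=150)

grid = np.logspace(10, -2, 100)  # same shape as 10^seq(10, -2, length = 100)
norms = [np.linalg.norm(Ridge(alpha=a).fit(X, y).coef_) for a in grid]

# Top of the grid: coefficient vector shrunk to (nearly) zero.
# Bottom of the grid: close to the unpenalized least-squares norm.
print(norms[0], norms[-1])
```

This is why a grid spanning many orders of magnitude is common: it brackets everything from "no fit at all" to "no shrinkage at all", and cross-validation picks a point in between.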
|
14,258
|
Least squares logistic regression [duplicate]
|
It is a well-known fact that if the model is parametric (that is, specified completely up to a finite number of unknown parameters), and certain regularity conditions hold, then Maximum Likelihood estimation is asymptotically optimal (in the class of regular estimators). I have doubts about the UMVUE concept, since MLE rarely gives unbiased estimators.
The question of why MLE is optimal is rather tough; you can check, for example, van der Vaart's "Asymptotic Statistics", chapter 8.
Now it is known that least squares coincides with MLE if and only if the distribution of error terms in the regression is normal (you can check the OLS article on Wikipedia). Since in logistic regression the distribution is not normal, LS will be less efficient than MLE.
|
14,259
|
Least squares logistic regression [duplicate]
|
- In ordinary linear regression, maximizing the likelihood is equivalent to minimizing the sum of squared errors across the board (and consequently the estimated variance of errors).
- In logistic regression, the errors are not expected to have the same variance: we should have high variance for $p$ near .5, lower variance towards the extremes.
- This leads to the iteratively reweighted least squares (IRWLS) method, where errors are penalized more where we expect
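The points above boil down to a short iteration. Here is a minimal sketch of IRWLS (equivalently, Newton-Raphson) for logistic regression on synthetic data; it is an illustration of the textbook algorithm, not any particular package's implementation:

```python
import numpy as np

def irwls_logistic(X, y, n_iter=25):
    """Fit logistic regression by iteratively reweighted least squares."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1.0 - p)             # weights: inverse variance of the working response
        z = X @ beta + (y - p) / W    # "working response"
        # One weighted least squares step: solve (X' W X) beta = X' W z
        beta = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
    return beta

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
y = (rng.random(500) < 1 / (1 + np.exp(-(X @ np.array([-0.5, 1.5]))))).astype(float)

beta_hat = irwls_logistic(X, y)
score = X.T @ (y - 1 / (1 + np.exp(-X @ beta_hat)))
print(np.linalg.norm(score))  # essentially zero at the MLE
```

Note how the weights $W = p(1-p)$ are largest near $p = .5$, exactly where the working response has the least variance, so those observations count most in each least-squares step.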
|
14,260
|
Geometric understanding of PCA in the subject (dual) space
|
All the summaries of $\mathbf X$ displayed in the question depend only on its second moments; or, equivalently, on the matrix $\mathbf{X^\prime X}$. Because we are thinking of $\mathbf X$ as a point cloud--each point is a row of $\mathbf X$--we may ask what simple operations on these points preserve the properties of $\mathbf{X^\prime X}$.
One is to left-multiply $\mathbf X$ by an $n\times n$ matrix $\mathbf U$, which would produce another $n\times 2$ matrix $\mathbf{UX}$. For this to work, it is essential that
$$\mathbf{X^\prime X} = \mathbf{(UX)^\prime UX} = \mathbf{X^\prime (U^\prime U) X}.$$
Equality is guaranteed when $\mathbf{U^\prime U}$ is the $n\times n$ identity matrix: that is, when $\mathbf{U}$ is orthogonal.
It is well known (and easy to demonstrate) that orthogonal matrices are products of Euclidean reflections and rotations (they form a reflection group in $\mathbb{R}^n$). By choosing rotations wisely, we can dramatically simplify $\mathbf{X}$. One idea is to focus on rotations that affect only two points in the cloud at a time. These are particularly simple, because we can visualize them.
Specifically, let $(x_i, y_i)$ and $(x_j, y_j)$ be two distinct nonzero points in the cloud, constituting rows $i$ and $j$ of $\mathbf{X}$. A rotation of the column space $\mathbb{R}^n$ affecting only these two points converts them to
$$\cases{(x_i^\prime, y_i^\prime) = (\cos(\theta)x_i + \sin(\theta)x_j, \cos(\theta)y_i + \sin(\theta)y_j) \\
(x_j^\prime, y_j^\prime) = (-\sin(\theta)x_i + \cos(\theta)x_j, -\sin(\theta)y_i + \cos(\theta)y_j).}$$
What this amounts to is drawing the vectors $(x_i, x_j)$ and $(y_i, y_j)$ in the plane and rotating them by the angle $\theta$. (Notice how the coordinates get mixed up here! The $x$'s go with each other and the $y$'s go together. Thus, the effect of this rotation in $\mathbb{R}^n$ will not usually look like a rotation of the vectors $(x_i, y_i)$ and $(x_j, y_j)$ as drawn in $\mathbb{R}^2$.)
By choosing the angle just right, we can zero out any one of these new components. To be concrete, let's choose $\theta$ so that
$$\cases{\cos(\theta) = \pm \frac{x_i}{\sqrt{x_i^2 + x_j^2}} \\
\sin(\theta) = \pm \frac{x_j}{\sqrt{x_i^2 + x_j^2}}}.$$
This makes $x_j^\prime=0$. Choose the sign to make $y_j^\prime \ge 0$. Let's call this operation, which changes points $i$ and $j$ in the cloud represented by $\mathbf X$, $\gamma(i,j)$.
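To make the rotation concrete, here is a small NumPy check of $\gamma(i,j)$ applied to the $4\times 2$ example matrix used later in this answer (ignoring the sign convention for $y_j^\prime$): it zeroes the new $x_j$ while leaving $\mathbf{X^\prime X}$ unchanged.

```python
import numpy as np

def gamma(X, i, j):
    """Rotate rows i and j of X so that the new X[j, 0] is zero."""
    xi, xj = X[i, 0], X[j, 0]
    c, s = xi / np.hypot(xi, xj), xj / np.hypot(xi, xj)
    G = np.eye(len(X))                 # orthogonal: a Givens rotation
    G[i, i], G[i, j], G[j, i], G[j, j] = c, s, -s, c
    return G @ X

X = np.array([[0.09, 0.12], [-0.31, -0.63], [0.74, -0.23], [-1.8, -0.39]])
Y = gamma(X, 0, 1)
print(abs(Y[1, 0]) < 1e-12)           # True: x_j' has been zeroed
print(np.allclose(X.T @ X, Y.T @ Y))  # True: second moments are preserved
```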
Recursively applying $\gamma(1,2), \gamma(1,3), \ldots, \gamma(1,n)$ to $\mathbf{X}$ will cause the first column of $\mathbf{X}$ to be nonzero only in the first row. Geometrically, we will have moved all but one point in the cloud onto the $y$ axis. Now we may apply a single rotation, potentially involving coordinates $2, 3, \ldots, n$ in $\mathbb{R}^n$, to squeeze those $n-1$ points down to a single point. Equivalently, $X$ has been reduced to a block form
$$\mathbf{X} = \pmatrix{x_1^\prime & y_1^\prime \\ \mathbf{0} & \mathbf{z}},$$
with $\mathbf{0}$ and $\mathbf{z}$ both column vectors with $n-1$ coordinates, in such a way that
$$\mathbf{X^\prime X} = \pmatrix{\left(x_1^\prime\right)^2 & x_1^\prime y_1^\prime \\ x_1^\prime y_1^\prime & \left(y_1^\prime\right)^2 + ||\mathbf{z}||^2}.$$
This final rotation further reduces $\mathbf{X}$ to its upper triangular form
$$\mathbf{X} = \pmatrix{x_1^\prime & y_1^\prime \\ 0 & ||\mathbf{z}|| \\ 0 & 0 \\ \vdots & \vdots \\ 0 & 0}.$$
In effect, we can now understand $\mathbf{X}$ in terms of the much simpler $2\times 2$ matrix $\pmatrix{x_1^\prime & y_1^\prime \\ 0 & ||\mathbf{z}||}$ created by the last two nonzero points left standing.
To illustrate, I drew four iid points from a bivariate Normal distribution and rounded their values to
$$\mathbf{X} = \pmatrix{ 0.09 & 0.12 \\
-0.31 & -0.63 \\
0.74 & -0.23 \\
-1.8 & -0.39}$$
This initial point cloud is shown at the left of the next figure using solid black dots, with colored arrows pointing from the origin to each dot (to help us visualize them as vectors).
The sequence of operations effected on these points by $\gamma(1,2), \gamma(1,3),$ and $\gamma(1,4)$ results in the clouds shown in the middle. At the very right, the three points lying along the $y$ axis have been coalesced into a single point, leaving a representation of the reduced form of $\mathbf X$. The length of the vertical red vector is $||\mathbf{z}||$; the other (blue) vector is $(x_1^\prime, y_1^\prime)$.
Notice the faint dotted shape drawn for reference in all five panels. It represents the last remaining flexibility in representing $\mathbf X$: as we rotate the first two rows, the last two vectors trace out this ellipse. Thus, the first vector traces out the path
$$\theta\ \to\ (\cos(\theta)x_1^\prime, \cos(\theta) y_1^\prime + \sin(\theta)||\mathbf{z}||)\tag{1}$$
while the second vector traces out the same path according to
$$\theta\ \to\ (-\sin(\theta)x_1^\prime, -\sin(\theta) y_1^\prime + \cos(\theta)||\mathbf{z}||).\tag{2}$$
We may avoid tedious algebra by noting that because this curve is the image of the set of points $\{(\cos(\theta), \sin(\theta))\,:\, 0 \le \theta\lt 2\pi\}$ under the linear transformation determined by
$$(1,0)\ \to\ (x_1^\prime, 0);\quad (0,1)\ \to\ (y_1^\prime, ||\mathbf{z}||),$$
it must be an ellipse. (Question 2 has now been fully answered.) Thus there will be four critical values of $\theta$ in the parameterization $(1)$, of which two correspond to the ends of the major axis and two correspond to the ends of the minor axis; and it immediately follows that simultaneously $(2)$ gives the ends of the minor axis and major axis, respectively. If we choose such a $\theta$, the corresponding points in the point cloud will be located at the ends of the principal axes, like this:
Because these are orthogonal and are directed along the axes of the ellipse, they correctly depict the principal axes: the PCA solution. That answers Question 1.
The analysis given here complements that of my answer at Bottom to top explanation of the Mahalanobis distance. There, by examining rotations and rescalings in $\mathbb{R}^2$, I explained how any point cloud in $p=2$ dimensions geometrically determines a natural coordinate system for $\mathbb{R}^2$. Here, I have shown how it geometrically determines an ellipse which is the image of a circle under a linear transformation. This ellipse is, of course, an isocontour of constant Mahalanobis distance.
Another thing accomplished by this analysis is to display an intimate connection between QR decomposition (of a rectangular matrix) and the Singular Value Decomposition, or SVD. The $\gamma(i,j)$ are known as Givens rotations. Their composition constitutes the orthogonal, or "$Q$", part of the QR decomposition. What remained--the reduced form of $\mathbf{X}$--is the upper triangular, or "$R$" part of the QR decomposition. At the same time, the rotation and rescalings (described as relabelings of the coordinates in the other post) constitute the $\mathbf{D}\cdot \mathbf{V}^\prime$ part of the SVD, $\mathbf{X} = \mathbf{U\, D\, V^\prime}$. The rows of $\mathbf{U}$, incidentally, form the point cloud displayed in the last figure of that post.
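A quick NumPy check of this correspondence on the example matrix from above (`np.linalg.qr` returns the same kind of reduced triangular factor, up to signs):

```python
import numpy as np

X = np.array([[0.09, 0.12], [-0.31, -0.63], [0.74, -0.23], [-1.8, -0.39]])

Q, R = np.linalg.qr(X)                            # X = Q R, with R the reduced 2x2 factor
U, D, Vt = np.linalg.svd(X, full_matrices=False)  # X = U diag(D) Vt

# Both factorizations carry all the second-moment information:
print(np.allclose(Q @ R, X))                            # True
print(np.allclose(R.T @ R, Vt.T @ np.diag(D**2) @ Vt))  # True: both equal X'X
```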
Finally, the analysis presented here generalizes in obvious ways to the cases $p\ne 2$: that is, when there are just one or more than two principal components.
|
14,261
|
Least stupid way to forecast a short multivariate time series
|
I understand that this question has been sitting here for years, but still, the following ideas may be useful:
If there are links between variables (and the theoretical formula does not work so well), PCA can be used to look for (linear) dependencies in a systematic way. I will show that this works well for the given data in this question.
Given there is not much data (112 numbers in total), only a few model parameters can be estimated (e.g. fitting full seasonal effects is not an option), and trying a custom model may make sense.
Here is how I would make a forecast, following these principles:
Step 1. We can use PCA to reveal dependencies in the data. Using R, with the data stored in x:
> library(jvcoords)
> m <- PCA(x)
> m
PCA: mapping p = 4 coordinates to q = 4 coordinates
PC1 PC2 PC3 PC4
standard deviation 0.18609759 0.079351671 0.0305622047 0.0155353709
variance 0.03463231 0.006296688 0.0009340484 0.0002413477
cum. variance fraction 0.82253436 0.972083769 0.9942678731 1.0000000000
This shows that the first two principal components explain 97% of the variance, and using three components covers 99.4% of the variance. Thus, it will be enough to model the first two or three PCs. (The data approximately satisfy $W = 0.234\, wd - 1.152\, wc - 8.842 \,p$.)
Doing PCA involved finding a $4\times 4$ orthogonal matrix. The space of such matrices is 6-dimensional, so we have estimated 6 parameters. (Since we only really use PC1 below, this may be fewer "effective" parameters.)
Step 2. There is a clear trend in PC1:
> t <- 1:28
> plot(m$y[,1], type = "b", ylab = "PC1")
> trend <- lm(m$y[,1] ~ t)
> abline(trend)
I create a copy of the PC scores with this trend removed:
> y2 <- m$y
> y2[,1] <- y2[,1] - fitted(trend)
Plotting the scores of the other PCs reveals no clear trends, so I leave these unchanged.
Since the PC scores are centred, the trend goes through the centre of mass of the PC1 sample and fitting the trend only corresponds to estimating one parameter.
Step 3. A pair scatter plot shows no clear structure, so I model the
PCs as being independent:
> pairs(y2, asp = 1, oma = c(1.7, 1.7, 1.7, 1.7))
Step 4. There is a clear periodicity in PC1, with lag 13 (as suggested by the question). This can be seen in different ways. For example, the lag 13 autocorrelation shows up as being significantly different from 0 in a correlogram:
> acf(y2[,1])
(The periodicity is visually more striking when plotting the data together with a shifted copy.)
Since we want to keep the number of estimated parameters low, and since the correlogram shows lag 13 as the only lag with a significant contribution, I will model PC1 as $y^{(1)}_{t+13} = \alpha_{13} y^{(1)}_t + \sigma \varepsilon_{t+13}$, where the $\varepsilon_t$ are independent and standard normally distributed (i.e. this is an AR(13) process with most coefficients fixed to 0). An easy way to estimate $\alpha_{13}$ and $\sigma$ is using the lm() function:
> lag13 <- lm(y2[14:28,1] ~ y2[1:15,1] + 0)
> lag13
Call:
lm(formula = y2[14:28, 1] ~ y2[1:15, 1] + 0)
Coefficients:
y2[1:15, 1]
0.6479
> a13 <- coef(lag13)
> s13 <- summary(lag13)$sigma
As a plausibility test, I plot the given data (black), together with a random trajectory of my model for PC1 (blue), ranging one year into the future:
t.f <- 29:41
pc1 <- m$y[,1]
pc1.f <- (predict(trend, newdata = data.frame(t = t.f))
+ a13 * y2[16:28, 1]
+ rnorm(13, sd = s13))
plot(t, pc1, xlim = range(t, t.f), ylim = range(pc1, pc1.f),
type = "b", ylab = "PC1")
points(t.f, pc1.f, col = "blue", type = "b")
The blue, simulated piece of path looks like a reasonable continuation of the data. The correlograms for PC2 and PC3 show no significant correlations, so I model these components as white noise. PC4 does show correlations, but contributes so little to the total variance that it seems not worth modelling, and I also model this component as white noise.
Here we have fitted two more parameters. This brings us to a total of nine parameters in the model (including the PCA), which does not seem absurd when we started with data consisting of 112 numbers.
Forecast. We can get a numeric forecast by leaving out the noise (to get the mean) and reversing the PCA:
> pc1.f <- predict(trend, newdata = data.frame(t = t.f)) + a13 * y2[16:28, 1]
> y.f <- data.frame(PC1 = pc1.f, PC2 = 0, PC3 = 0, PC4 = 0)
> x.f <- fromCoords(m, y.f)
> rownames(x.f) <- t.f
> x.f
W wd wc p
29 4.456825 4.582231 3.919151 0.5616497
30 4.407551 4.563510 3.899012 0.5582053
31 4.427701 4.571166 3.907248 0.5596139
32 4.466062 4.585740 3.922927 0.5622955
33 4.327391 4.533055 3.866250 0.5526018
34 4.304330 4.524294 3.856824 0.5509898
35 4.342835 4.538923 3.872562 0.5536814
36 4.297404 4.521663 3.853993 0.5505056
37 4.281638 4.515673 3.847549 0.5494035
38 4.186515 4.479533 3.808671 0.5427540
39 4.377147 4.551959 3.886586 0.5560799
40 4.257569 4.506528 3.837712 0.5477210
41 4.289875 4.518802 3.850916 0.5499793
Uncertainty bands can be obtained either analytically or simply using Monte Carlo:
N <- 1000 # number of Monte Carlo samples
W.f <- matrix(NA, N, 13)
for (i in 1:N) {
y.f <- data.frame(PC1 = (predict(trend, newdata =
data.frame(t = t.f))
+ a13 * y2[16:28, 1]
+ rnorm(13, sd = s13)),
PC2 = rnorm(13, sd = sd(y2[,2])),
PC3 = rnorm(13, sd = sd(y2[, 3])),
PC4 = rnorm(13, sd = sd(y2[, 4])))
x.f <- fromCoords(m, y.f)
W.f[i,] <- x.f[, 1]
}
bands <- apply(W.f, 2,
function(x) quantile(x, c(0.025, 0.15, 0.5,
0.85, 0.975)))
plot(t, x$W, xlim = range(t, t.f), ylim = range(x$W, bands),
type = "b", ylab = "W")
for (b in 1:5) {
lines(c(28, t.f), c(x$W[28], bands[b,]), col = "grey")
}
The plot shows the actual data for $W$, together with 60% (inner three lines) and 95% (outer two lines) uncertainty bands for a forecast using the fitted model.
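For readers not using R: the lag-13 fit with lm(... + 0) is a one-coefficient least-squares problem with a closed form. A Python sketch on synthetic data (the coefficient 0.6, noise scale, and series length here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulate a pure lag-13 process: y[t] = a * y[t-13] + noise
a_true, n = 0.6, 260
y = np.zeros(n)
for t in range(13, n):
    y[t] = a_true * y[t - 13] + rng.normal(scale=0.5)

# One-coefficient least squares, the closed form of lm(y[t] ~ y[t-13] + 0)
a_hat = np.sum(y[13:] * y[:-13]) / np.sum(y[:-13] ** 2)

forecast = a_hat * y[-13:]  # mean forecast for the next 13 steps
print(a_hat)                # close to the true 0.6
```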
|
Least stupid way to forecast a short multivariate time series
|
I understand that this question has been sitting here for years, but still, the following ideas may be useful:
If there are links between variables (and the theoretical formula does not work so well)
|
Least stupid way to forecast a short multivariate time series
I understand that this question has been sitting here for years, but still, the following ideas may be useful:
If there are links between variables (and the theoretical formula does not work so well), PCA can be used to look for (linear) dependencies in a systematic way. I will show that this works well for the given data in this question.
Given there is not much data (112 numbers in total), only a few model parameters can be estimated (e.g. fitting full seasonal effects is not an option), and trying a custom model may make sense.
Here is how I would make a forecast, following these principles:
Step 1. We can use PCA to reveal dependencies in the data. Using R, with the data stored in x:
> library(jvcoords)
> m <- PCA(x)
> m
PCA: mapping p = 4 coordinates to q = 4 coordinates
PC1 PC2 PC3 PC4
standard deviation 0.18609759 0.079351671 0.0305622047 0.0155353709
variance 0.03463231 0.006296688 0.0009340484 0.0002413477
cum. variance fraction 0.82253436 0.972083769 0.9942678731 1.0000000000
This shows that the first two principal components explain 97% of the variance, and using three components covers 99.4% of the variance. Thus, it will be enough to make a model for first two or three PCs. (The data approximately satisfy $W = 0.234\, wd - 1.152\, wc - 8.842 \,p$.)
Doing PCA involved finding a $4\times 4$ orthogonal matrix. The space of such matrices is 6-dimensional, so we have estimated 6 parameters. (Since we only really use PC1 below, this may be fewer "effective" parameters.)
Step 2. There is a clear trend in PC1:
> t <- 1:28
> plot(m$y[,1], type = "b", ylab = "PC1")
> trend <- lm(m$y[,1] ~ t)
> abline(trend)
I create a copy of the PC scores with this trend removed:
> y2 <- m$y
> y2[,1] <- y2[,1] - fitted(trend)
Plotting the scores of the other PCs reveal no clear trends, so I leave these unchanged.
Since the PC scores are centred, the trend goes through the centre of mass of the PC1 sample and fitting the trend only corresponds to estimating one parameter.
Step 3. A pair scatter plot shows no clear structure, so I model the
PCs as being independent:
> pairs(y2, asp = 1, oma = c(1.7, 1.7, 1.7, 1.7))
Step 4. There is a clear periodicity in PC1, with lag 13 (as suggested by the question). This can be seen in different ways. For example, the lag 13 autocorrelation shows up as being significantly different from 0 in a correlogram:
> acf(y2[,1])
(The periodicity is visually more striking when plotting the data together with a shifted copy.)
Since we want to keep the number of estimated parameters low, and since the correlogram shows lag 13 as the only lag with a significant contribution, I will model PC1 as $y^{(1)}_{t+13} = \alpha_{13} y^{(1)}_t + \sigma \varepsilon_{t+13}$, where the $\varepsilon_t$ are independent and standard normally distributed (i.e. this is an AR(13) process with most coefficients fixed to 0). An easy way to estimate $\alpha_{13}$ and $\sigma$ is using the lm() function:
> lag13 <- lm(y2[14:28,1] ~ y2[1:15,1] + 0)
> lag13
Call:
lm(formula = y2[14:28, 1] ~ y2[1:15, 1] + 0)
Coefficients:
y2[1:15, 1]
0.6479
> a13 <- coef(lag13)
> s13 <- summary(lag13)$sigma
As a plausibility test, I plot the given data (black), together with a random trajectory of my model for PC1 (blue), ranging one year into the future:
t.f <- 29:41
pc1 <- m$y[,1]
pc1.f <- (predict(trend, newdata = data.frame(t = t.f))
+ a13 * y2[16:28, 1]
+ rnorm(13, sd = s13))
plot(t, pc1, xlim = range(t, t.f), ylim = range(pc1, pc1.f),
type = "b", ylab = "PC1")
points(t.f, pc1.f, col = "blue", type = "b")
The blue, simulated piece of path looks like a reasonable continuation of the data. The correlograms for PC2 and PC3 show no significant correlations, so I model these components as white noise. PC4 does show correlations, but contributes so little to the total variance that it seem not worth modelling, and I also model this component as white noise.
Here we have fitted two more parameters. This brings us to a total of nine parameters in the model (including the PCA), which does not seem absurd when we started with data consisting of 112 numbers.
Forecast. We can get a numeric forecast by leaving out the noise (to get the mean) and reversing the PCA:
> pc1.f <- predict(trend, newdata = data.frame(t = t.f)) + a13 * y2[16:28, 1]
> y.f <- data.frame(PC1 = pc1.f, PC2 = 0, PC3 = 0, PC4 = 0)
> x.f <- fromCoords(m, y.f)
> rownames(x.f) <- t.f
> x.f
W wd wc p
29 4.456825 4.582231 3.919151 0.5616497
30 4.407551 4.563510 3.899012 0.5582053
31 4.427701 4.571166 3.907248 0.5596139
32 4.466062 4.585740 3.922927 0.5622955
33 4.327391 4.533055 3.866250 0.5526018
34 4.304330 4.524294 3.856824 0.5509898
35 4.342835 4.538923 3.872562 0.5536814
36 4.297404 4.521663 3.853993 0.5505056
37 4.281638 4.515673 3.847549 0.5494035
38 4.186515 4.479533 3.808671 0.5427540
39 4.377147 4.551959 3.886586 0.5560799
40 4.257569 4.506528 3.837712 0.5477210
41 4.289875 4.518802 3.850916 0.5499793
Uncertainty bands can be obtained either analytically or simply using Monte Carlo:
N <- 1000 # number of Monte Carlo samples
W.f <- matrix(NA, N, 13)
for (i in 1:N) {
y.f <- data.frame(PC1 = (predict(trend, newdata =
data.frame(t = t.f))
+ a13 * y2[16:28, 1]
+ rnorm(13, sd = s13)),
PC2 = rnorm(13, sd = sd(y2[,2])),
PC3 = rnorm(13, sd = sd(y2[, 3])),
PC4 = rnorm(13, sd = sd(y2[, 4])))
x.f <- fromCoords(m, y.f)
W.f[i,] <- x.f[, 1]
}
bands <- apply(W.f, 2,
function(x) quantile(x, c(0.025, 0.15, 0.5,
0.85, 0.975)))
plot(t, x$W, xlim = range(t, t.f), ylim = range(x$W, bands),
type = "b", ylab = "W")
for (b in 1:5) {
lines(c(28, t.f), c(x$W[28], bands[b,]), col = "grey")
}
The plot shows the actual data for $W$, together with 70% (inner three lines, at the 15%, 50% and 85% quantiles) and 95% (outer two lines) uncertainty bands for a forecast using the fitted model.
|
Least stupid way to forecast a short multivariate time series
|
14,262
|
Estimating R-squared and statistical significance from penalized regression model
|
My first reaction to Jelle's comments given is "bias-schmias". You have to be careful about what you mean by "large amount of predictors". This could be "large" with respect to:
The number of data points ("big p small n")
The amount of time you have to investigate the variables
The computational cost of inverting a giant matrix
My reaction was based on "large" with respect to point 1. This is because in this case it is usually worth trading some bias for the reduction in variance that you get. Bias only matters "in the long run". So if you have a small sample, then who cares about "the long run"?
Having said all that, $R^2$ is probably not a particularly good quantity to calculate, especially when you have lots of variables (because that's pretty much all $R^2$ tells you: you have lots of variables). I would calculate something more like a "prediction error" using cross-validation.
Ideally this "prediction error" should be based on the context of your modeling situation. You basically want to answer the question "How well does my model reproduce the data?". The context of your situation should be able to tell you what "how well" means in the real world. You then need to translate this into some sort of mathematical equation.
However, I have no obvious context to go off from the question. So a "default" would be something like PRESS:
$$PRESS=\sum_{i=1}^{N} (Y_{i}-\hat{Y}_{i,-i})^2$$
Where $\hat{Y}_{i,-i}$ is the predicted value for $Y_{i}$ from a model fitted without the ith data point (so $Y_i$ doesn't influence the model parameters). The terms in the summation are also known as "deletion residuals". If it is too computationally expensive to do $N$ model fits (although most programs usually give you something like this in the standard output), then I would suggest grouping the data. Set the amount of time you are prepared to wait, $T$ (preferably not 0 ^_^), and divide this by the time $M$ it takes to fit your model once. This gives a total of $G=\frac{T}{M}$ re-fits, each leaving out a group of size $N_{g}=\frac{N\times M}{T}$.
$$PRESS=\sum_{g=1}^{G}\sum_{i=1}^{N_{g}} (Y_{ig}-\hat{Y}_{ig,-g})^2$$
One way to get an idea of how important each variable is: re-fit an ordinary regression (variables in the same order), then check proportionately how much each estimate has been shrunk towards zero, $\frac{\beta_{LASSO}}{\beta_{UNCONSTRAINED}}$. The lasso and other constrained regressions can be seen as "smooth variable selection": rather than adopting a binary "in-or-out" approach, each estimate is brought closer to zero depending on how important it is for the model (as measured by the errors).
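For concreteness, here is a minimal leave-one-out PRESS computation. It is an illustrative sketch in Python (the answers here otherwise use R), restricted for simplicity to a simple linear regression with one predictor:

```python
def press_simple_linear(xs, ys):
    """Leave-one-out PRESS for the simple linear regression y ~ a + b*x.

    Each observation is predicted from a model fitted WITHOUT that
    observation, so each summand is a squared "deletion residual".
    """
    n = len(xs)
    press = 0.0
    for i in range(n):
        # Fit ordinary least squares on all points except the i-th.
        x = [v for j, v in enumerate(xs) if j != i]
        y = [v for j, v in enumerate(ys) if j != i]
        mx, my = sum(x) / (n - 1), sum(y) / (n - 1)
        sxx = sum((u - mx) ** 2 for u in x)
        b = sum((u - mx) * (v - my) for u, v in zip(x, y)) / sxx
        a = my - b * mx
        # Deletion residual for the held-out point.
        press += (ys[i] - (a + b * xs[i])) ** 2
    return press
```

For OLS the same quantity can be obtained from a single fit via the hat matrix, $PRESS=\sum_i \left(e_i/(1-h_{ii})\right)^2$, which avoids the $N$ re-fits.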
|
14,263
|
Estimating R-squared and statistical significance from penalized regression model
|
The R package hdm and the Stata package lassopack support a joint significance test for the lasso. The theory allows for the number of predictors to be large relative to the number of observations. The theory behind the test and how to apply it is briefly explained in the hdm documentation. In short, it's based on a framework for theory-driven penalisation (developed by Belloni, Chernozhukov and Hansen, et al.). This paper is a good starting point if you want to know more about the underlying theory. The only downside is that the test only works for the lasso (and square-root lasso), not for other penalized regression methods.
Belloni, A. , Chen, D. , Chernozhukov, V. and Hansen, C. (2012), Sparse Models and Methods for Optimal Instruments With an Application to Eminent Domain. Econometrica, 80: 2369-2429.
|
14,264
|
Why is LASSO not finding my perfect predictor pair at high dimensionality?
|
This problem is well-known by academics and researchers. The answer, however, is not simple and pertains more—in my opinion—to optimization than it does to statistics. People have attempted to overcome these drawbacks by including an additional ridge penalty, hence the elastic net regression. This Tibshirani paper is about the $p>n$ (i.e. number of covariates larger than number of observations) problem:
The lasso is a popular tool for sparse linear regression, especially
for problems in which the number of variables exceeds the number of
observations. But when p > n, the lasso criterion is not strictly
convex, and hence it may not have a unique minimizer.
As @ben mentioned, when you have 2e16 covariates, it's not unlikely that some are quite similar to the true covariates. That is why the above point is relevant: LASSO is indifferent to picking either one.
Perhaps more relevantly and more recently (2013), there’s another Candes paper about how, even when statistical conditions are ideal (uncorrelated predictors, only a few large effects), the LASSO still produces false positives, such as what you see in your data:
In regression settings where explanatory variables have very low
correlations and there are relatively few effects, each of large
magnitude, we expect the Lasso to find the important variables with
few errors, if any. This paper shows that in a regime of linear
sparsity---meaning that the fraction of variables with a non-vanishing
effect tends to a constant, however small---this cannot really be the
case, even when the design variables are stochastically independent.
|
14,265
|
Can regularization be helpful if we are interested only in modeling, not in forecasting?
|
Yes, when we want biased, low-variance estimates. I particularly like gung's post here What problem do shrinkage methods solve? Please allow me to paste gung's figure here...
If you check the plot gung made, you will see clearly why we need regularization / shrinkage. At first, it felt strange to me that we would want biased estimates; but looking at that figure, I realized that a low-variance model has a lot of advantages: for example, it is more "stable" in production use.
|
14,266
|
Can regularization be helpful if we are interested only in modeling, not in forecasting?
|
Can cross-validation be helpful if we are interested only in modeling (i.e. estimating parameters), not in forecasting?
Yes, it can.
For instance, the other day I was using parameter importance estimation through Decision Trees. Every time I build a tree, I check the cross-validation error. I try to decrease the error as much as I can, then I will go to the next step of estimating the parameters' importance. It is possible that if the first tree that you build is very bad and you don't check the error, you will have less accurate (if not wrong) answers.
The main reason, I believe, is the large number of control variables that each technique has. Even a slight change in one control variable will produce a different result.
How to improve your model after you check the cross-validation error? Well, it depends on your model. Hopefully, after trying a few times you will get some idea of the most important control variables and can manipulate them in order to find a low error.
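As a generic sketch of the check-the-cross-validation-error step, here is a small Python helper (hypothetical names, not any particular library's API); the fit/predict functions are supplied by the caller, so a decision tree or any other model can be plugged in:

```python
def kfold_cv_error(xs, ys, fit, predict, k=5):
    """Mean squared cross-validation error.

    `fit(train_x, train_y)` returns a model object, and
    `predict(model, x)` returns a prediction; both are supplied by
    the caller, so any model can be evaluated the same way.
    """
    n = len(xs)
    # Assign indices to folds round-robin.
    folds = [list(range(i, n, k)) for i in range(k)]
    sq_err, count = 0.0, 0
    for hold in folds:
        held = set(hold)
        train = [j for j in range(n) if j not in held]
        model = fit([xs[j] for j in train], [ys[j] for j in train])
        for j in hold:
            sq_err += (ys[j] - predict(model, xs[j])) ** 2
            count += 1
    return sq_err / count

# Example with the simplest possible "model": predict the training mean.
fit_mean = lambda tx, ty: sum(ty) / len(ty)
predict_mean = lambda m, x: m
```

Re-running this after each change to the control variables gives the error trace described above.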
|
14,267
|
What prior distributions could/should be used for the variance in a hierarchical bayesisan model when the mean variance is of interest?
|
I disagree with the way you interpret Gelman concerning the choice of the gamma for the scale parameter. The basis of hierarchical modeling is to relate individual parameters to a common one through a structure with unknown (typically mean and variance) parameters. In this sense, using a gamma distribution for the individual variance (or a lognormal for a heavier tail), conditioned on the mean variance and its dispersion, looks valid to me (at least with regard to Gelman's arguments).
Gelman's criticism of the gamma for the scale parameter concerns the fact that the gamma is used to approximate the Jeffreys prior by setting its parameters to extreme values. The problem is that depending on how extreme these values are (which is quite arbitrary), the posterior may be very different. This observation invalidates the use of this prior, at least when we don't have information to put in the prior. In his discussion, it looks to me that the gamma or inverse-gamma is never calibrated in terms of mean and variance from prior information or from a hierarchical structure. So his recommendation concerns a context which is quite different from yours, which, if I understand your purpose correctly, consists in using a hierarchical prior structure relating the individual variances through a structure whose mean and variance parameters are also estimated.
|
14,268
|
What prior distributions could/should be used for the variance in a hierarchical bayesisan model when the mean variance is of interest?
|
In short, Gelman outlines problems in using gamma distributions as vague (he uses the word noninformative) priors for the variance. On the contrary, your problem (and Kruschke's example) seems to refer to the case where some knowledge about the variance exists. Also notice that the pictured distribution of the variance $\tau_i$ is not flat at all.
|
14,269
|
How to tell if girlfriend can tell the future (i.e. predict stocks)?
|
Interesting question. This isn’t really an answer, but it’s too long to be a comment.
I think your experimental design is challenged for these reasons:
1) This does not reflect the way that stock picking is actually evaluated in the “real world”. As an extreme example, suppose stock picker A chose 1 stock that went up 1000%, and 9 that went down by 1%, and stock picker B chose 10 stocks that all went up 1%. If these stocks were actually used to construct an index, then clearly A would be the better performer, but B would do much better in your experiment. A more financially interesting challenge would be to construct a portfolio and compare its performance to that of the S&P 500. In turn, there is a commonly-used machinery for evaluating such performance: simply take a linear regression of the day-to-day returns of the portfolio against those of the S&P. The intercept term (often called “alpha”) measures the average performance “over and above the market”. Since it is a coefficient of a linear regression, it is a trivial matter to construct a 95% confidence interval if you so choose. Then compare this to the fees her bank would charge for this service.
2) Disregarding 1, since it sounds like you both have already agreed on the form of the experiment, consider how this could be gamed. Suppose I had a magic oracle that told me the probability of each stock being above its current price a month from now (say). Then I could just pick the n stocks with the highest such probabilities, and most likely over 50% of them would indeed go up. Now, such probabilities are encoded (imperfectly) in various options prices. For example, I can buy a so-called “binary option”, which is basically just a gamble on the event “Stock X will be above price Y on date Z”. The pricing of such an option implies a probability of this event (although the closer date Z is to the present, the less reliable this will be). Since blindly following the “wisdom of the crowds” requires no particular expertise, I would argue that the performance of a strategy like this should be considered “chance level” for your particular experiment. Alternatively, present her with a list of stocks of your choosing, and have her indicate whether she thinks each will be up or down, together with her confidence in each prediction. Then group all answers by confidence level and see how closely they align (i.e., of those stocks that she was 90% confident about, did she correctly predict 90% of them?). There’s a standard way to quantify this; I don’t remember offhand what it’s called, but you can read about it in Superforecasting by Philip Tetlock.
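The “alpha” regression in point 1 takes only a few lines; here is an illustrative Python sketch (pure stdlib, using a normal-approximation interval rather than the exact Student-t one):

```python
import math
from statistics import NormalDist

def alpha_with_ci(port, bench, conf=0.95):
    """OLS of daily portfolio returns on benchmark returns.

    Returns the intercept ("alpha") together with an approximate
    confidence interval for it; a normal quantile stands in for the
    exact t quantile, which is fine for long daily return series.
    """
    n = len(port)
    mx = sum(bench) / n
    my = sum(port) / n
    sxx = sum((x - mx) ** 2 for x in bench)
    sxy = sum((x - mx) * (y - my) for x, y in zip(bench, port))
    beta = sxy / sxx                  # market exposure
    alpha = my - beta * mx            # performance over and above the market
    resid = [y - (alpha + beta * x) for x, y in zip(bench, port)]
    s2 = sum(r * r for r in resid) / (n - 2)
    se = math.sqrt(s2 * (1.0 / n + mx * mx / sxx))
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return alpha, (alpha - z * se, alpha + z * se)
```

If the interval excludes zero (after fees), the performance is distinguishable from plain market exposure at the chosen confidence level.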
|
14,270
|
How to tell if girlfriend can tell the future (i.e. predict stocks)?
|
A very simple test would be as follows: whenever she picks a stock, you pick one stock as well. I reckon you don't think of yourself as being an expert in the stock market; hence, your choice will be approximately random.
Using this method, you can improve the statistical power by imposing some rules:
Both of you assign the same forecast (decrease or increase). She is allowed to choose which one.
You should define at what time you evaluate the stocks.
You should define how many stocks you have to buy (>20 would be nice) and that you have to buy them for the same amount of money. Hence, when she says she buys stock A, that implies that she will buy them for 10 000 dollars.
Things become more precise if both of you limit your choices to stocks in a particular index. Then you don't have to pick any stocks at all, but could run a simulation instead, and you could even evaluate the expected variance. However, you will need to store the stock data somewhere. An alternative would be that whenever she buys a stock, you pick 10 random stocks -- you just simulate the picks of ten random "experts". :)
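With more than 20 picks and an agreed up/down call on each, the comparison with guessing reduces to a one-sided binomial test. A minimal Python sketch (the numbers in the example are hypothetical):

```python
from math import comb

def binom_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for
    getting k or more directional calls right out of n by guessing."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# e.g. 15 correct calls out of 20 picks against a fair-coin guesser:
p_value = binom_tail(20, 15)  # about 0.021
```

A small p-value here means the hit rate is hard to explain by the coin-flip "expert" alternative.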
|
14,271
|
How to tell if girlfriend can tell the future (i.e. predict stocks)?
|
How much power do you want your statistical test to have? That is, if she does have the ability, with what probability do you want to detect the ability? Defining power is essential to determining sample size.
To provide an answer, let's make some assumptions
Let's assume we want a power of 80%, and confidence level of 95%, and a one sided test.
To prevent a degenerate strategy (e.g. predicting that every stock will go up), force her to predict n stocks that will go up and n stocks that will go down. This ensures that she can predict the ones that will go up as well as the ones that will go down.
We will test against a random guesser (50:50), i.e. $H_0: p = 0.5$ against the one-sided alternative $H_1: p > 0.5$.
Under this framework, she would have to pick 15 stocks that will go up and 15 stocks that will go down.
Link to calculator
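The sample-size arithmetic behind this can be reproduced without an online calculator. Below is a minimal sketch of the exact one-sided binomial power calculation; the true hit rate p1 = 0.8 is an illustrative assumption on my part, not something stated in the answer:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p); the empty sum is 0 when k > n."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def power_one_sided(n, p1, alpha=0.05):
    """Power of the exact one-sided binomial test of H0: p = 0.5
    against H1: p > 0.5 at level alpha, when the true hit rate is p1."""
    # smallest critical value k with P(X >= k | p = 0.5) <= alpha;
    # k = n + 1 (never reject) is always available for tiny n
    k_crit = next(k for k in range(n + 2) if binom_sf(k, n, 0.5) <= alpha)
    return binom_sf(k_crit, n, p1)

# smallest number of picks giving 80% power against a forecaster who is
# right 80% of the time (p1 = 0.8 is an illustrative assumption)
n = 1
while power_one_sided(n, 0.8) < 0.80:
    n += 1
print(n, power_one_sided(n, 0.8))
```

The discreteness of the binomial makes the power jump around as n grows, which is why exact calculations can disagree slightly with normal-approximation calculators.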
|
14,272
|
Plot and interpret ordinal logistic regression
|
My Regression Modeling Strategies course notes have two chapters on ordinal regression that may help. See also this tutorial.
The course notes go into detail about what model assumptions mean, how they are checked, and how to interpret the fitted model.
|
14,273
|
Standardization vs. Normalization for Lasso/Ridge Regression
|
Normalization is very important for methods with regularization. This is because the scale of the variables affects how much regularization is applied to each variable.
For example, suppose one variable is on a very large scale, say on the order of millions, and another variable ranges from 0 to 1. Then the regularization will have little effect on the first variable.
As long as we normalize in some way, whether we scale the features to [0, 1] or standardize them does not matter too much.
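A quick way to see the scale effect is to run ridge regression by hand on the same data twice, once with a feature blown up to the scale of millions and once standardized. This sketch uses made-up data and a hand-rolled ridge solver (illustrative, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
z1 = rng.normal(size=n)
z2 = rng.normal(size=n)
y = 3 * z1 + 3 * z2 + rng.normal(size=n)

def ridge(X, y, lam):
    """Ridge solution (X'X + lam*I)^(-1) X'y, no intercept."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

lam = 50.0
X_raw = np.column_stack([z1 * 1e6, z2])   # first column measured in "millions"
b_raw = ridge(X_raw, y, lam)
X_std = (X_raw - X_raw.mean(axis=0)) / X_raw.std(axis=0)
b_std = ridge(X_std, y - y.mean(), lam)

print(b_raw[0] * 1e6, b_raw[1])  # large-scale column barely shrunk, small one visibly shrunk
print(b_std)                     # standardized: both coefficients shrunk comparably
```

In raw units the penalty is negligible next to the huge sum of squares of the first column, so that coefficient is essentially unregularized, while the unit-scale column absorbs all the shrinkage; after standardizing, the penalty treats both symmetrically.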
|
14,274
|
From Bayesian Networks to Neural Networks: how multivariate regression can be transposed to a multi-output network
|
For the record, I don't view this as an answer, but just a long comment !
The PDE (heat equation) that is used to model the flow of heat through a metal rod can also be used to model option pricing. No one that I know of has ever tried to suggest a connection between option pricing and heat flow per se. I think that the quote from Danilov's link is saying the same thing. Both Bayesian graphs and neural nets use the language of graphs to express the relations between their different internal pieces. However, a Bayesian graph tells one about the correlation structure of the input variables, while the graph of a neural net tells one how to build the prediction function from the input variables. These are very different things.
Various methods used in DL attempt to 'choose' the most important variables, but that is an empirical issue. It also doesn't tell one about the correlation structure of either the entire set of variables or the remaining variables. It merely suggests that the surviving variables will be best for prediction.
For example, if one looks at neural nets, one will be led to the German credit data set, which has, if I recall correctly, 2000 data points and 5 dependent variables. Through trial and error I think you will find that a net with only 1 hidden layer and using only 2 of the variables gives the best results for prediction. However, this can only be discovered by building all the models and testing them on the independent testing set.
|
14,275
|
Clustered standard errors vs. multilevel modeling?
|
This post is based on personal experience, which might be specific to my data, so I'm not sure it qualifies as an answer.
I suggest using simulations, if possible, to assess which method works best for your data. I did this and was surprised to find that tests (regarding parameters in the first level) based on multilevel modelling outperformed every other method (power-wise), while retaining size even in small samples with few and unevenly sized "clusters". I have yet to find a paper that makes that point, and from how I see it, this is not really a niche topic and deserves more attention.
I think it is fairly under-researched how different methods compare vis-a-vis finite-sample or few/uneven clusters.
|
14,276
|
Estimation of ARMA: state space vs. alternatives
|
If you manage to use a Kalman filter, you can marginalize or optimize out the state at each time analytically. Thus the remaining likelihood is much simpler, having only the ARMA process variables, i.e., tens of parameters.
If you use the direct variables, you have one (or more) parameters per state, so if your time series has 1000 entries, you have a 1000-dimensional likelihood.
High-dimensional spaces are hard to explore.
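To make the "marginalize the state analytically" point concrete, here is a minimal Kalman-filter likelihood for the simplest case, a stationary AR(1) written in state-space form. This is a sketch with simulated data, not code from the original answer; the prediction-error decomposition it accumulates is exactly the likelihood with the state integrated out:

```python
import numpy as np

def ar1_loglik_kalman(y, phi, sigma2):
    """Exact Gaussian log-likelihood of a stationary AR(1), computed by
    running the Kalman filter and summing the prediction-error
    decomposition. The latent state is marginalized analytically, so the
    likelihood depends only on (phi, sigma2), not on a T-dimensional
    state path."""
    x_pred = 0.0
    P_pred = sigma2 / (1.0 - phi**2)   # stationary initialization
    ll = 0.0
    for yt in np.asarray(y, dtype=float):
        v = yt - x_pred                # innovation
        F = P_pred                     # innovation variance (y_t = x_t exactly)
        ll += -0.5 * (np.log(2.0 * np.pi * F) + v**2 / F)
        K = P_pred / F                 # Kalman gain (= 1 here, no obs. noise)
        x_filt = x_pred + K * v
        P_filt = P_pred * (1.0 - K)
        x_pred = phi * x_filt          # one-step-ahead prediction
        P_pred = phi**2 * P_filt + sigma2
    return ll

# usage: evaluate the likelihood of a short simulated series
rng = np.random.default_rng(0)
y = np.empty(200)
y[0] = rng.normal(0.0, np.sqrt(1.0 / (1.0 - 0.6**2)))
for t in range(1, 200):
    y[t] = 0.6 * y[t - 1] + rng.normal()
print(ar1_loglik_kalman(y, 0.6, 1.0))
```

For a full ARMA(p, q) the state is a vector and the same recursion runs with matrices, but the point stands: the optimizer only ever sees the handful of ARMA parameters.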
|
14,277
|
Physical/pictoral interpretation of higher-order moments
|
If by graphical representation you mean histograms, I gather this is the best approach to provide a visual display of the moments. Please be aware that we cannot specify just any value for kurtosis: there is a constraint linking kurtosis and skewness in random simulations.
With k = kurtosis and sk = skewness, the bound is: k >= sk^2 + 1.
Below, a histogram with the specifications.
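The skewness-kurtosis bound can be checked numerically on empirical moments; a small sketch (the sample distributions here are arbitrary illustrative choices, not from the original post):

```python
import numpy as np

# Numerical check of the moment inequality kurtosis >= skewness^2 + 1,
# which constrains which (skewness, kurtosis) pairs can be realized.
rng = np.random.default_rng(1)

def skew_kurt(x):
    """Skewness and (non-excess) kurtosis from empirical central moments."""
    m = x.mean()
    m2 = ((x - m) ** 2).mean()
    m3 = ((x - m) ** 3).mean()
    m4 = ((x - m) ** 4).mean()
    return m3 / m2 ** 1.5, m4 / m2 ** 2

for sample in (rng.normal(size=1000),
               rng.exponential(size=1000),
               rng.uniform(size=1000)):
    sk, k = skew_kurt(sample)
    print(round(sk, 2), round(k, 2), k >= sk ** 2 + 1)
```

The inequality holds for the empirical distribution of any sample (a Cauchy-Schwarz consequence), with equality only for two-point distributions.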
|
14,278
|
Why do we worry about overfitting even if "all models are wrong"?
|
The quote by Box is along the lines of "All models are wrong, but some are useful."
If we have bad overfitting, our model will not be useful in making predictions on new data.
|
14,279
|
Why do we worry about overfitting even if "all models are wrong"?
|
Why do we worry about overfitting even if “all models are wrong”?
Your question appears to be a variation of the Nirvana fallacy, implicitly suggesting that if there is no perfect model, then every model is equally satisfactory (and therefore flaws in models are irrelevant). Observe that you could just as easily ask this same question about any flaw in a model:
Why do we worry about maximum likelihood estimation even if “all models are wrong”?
Why do we worry about standard errors even if “all models are wrong”?
Why do we worry about cleaning our data even if “all models are wrong”?
Why do we worry about correct arithmetic even if “all models are wrong”?
The correct answer to all such questions is that we should not make the perfect the enemy of the good --- even if "all models are wrong", a model that is less wrong is still preferable to a model that is more wrong.
|
14,280
|
Why do we worry about overfitting even if "all models are wrong"?
|
The full quote is "All models are wrong, but some are useful". We care about overfitting, because we still want our models to be useful.
If you are familiar with the Bias-variance tradeoff, the "all models are wrong" statement is roughly equivalent to saying "all models have non-zero bias". Overfitting is the issue that while we can increase the number of parameters in a model to reduce the bias, typically the more parameters we have, the more variance there will be in our estimate. A useful model is one that balances between being flexible enough to reduce the bias, but not so flexible that the variance is too high.
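The tradeoff is easy to see in a small simulation: an underparameterized fit misses the signal (bias), a heavily overparameterized fit chases the noise (variance), and something in between does best out of sample. A sketch with an arbitrary true function and noise level (illustrative choices, not from the original answer):

```python
import numpy as np

# Bias-variance sketch: fit polynomials of increasing degree to noisy
# samples of a smooth function and measure average out-of-sample error.
rng = np.random.default_rng(42)
true_f = lambda x: np.sin(2 * np.pi * x)

def avg_test_mse(degree, n_train=20, n_reps=200, noise_sd=0.3):
    """Average squared error against the true function on a test grid,
    over many independently drawn training sets."""
    x_test = np.linspace(0.0, 1.0, 100)
    total = 0.0
    for _ in range(n_reps):
        x = rng.uniform(0.0, 1.0, n_train)
        y = true_f(x) + rng.normal(0.0, noise_sd, n_train)
        coefs = np.polyfit(x, y, degree)
        total += np.mean((np.polyval(coefs, x_test) - true_f(x_test)) ** 2)
    return total / n_reps

for d in (1, 3, 12):
    print(d, avg_test_mse(d))
# degree 1 underfits (high bias); degree 12 on 20 points overfits
# (high variance); an intermediate degree does best.
```

The printed errors typically show the characteristic U-shape of test error in model flexibility.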
|
14,281
|
Why do we worry about overfitting even if "all models are wrong"?
|
A Citroën 2CV is, in many respects, a poor car. Slow, unrefined and cheap. But it is versatile and can operate effectively on both paved road and freshly ploughed fields.
An F1 car by comparison, is seen as the pinnacle of automotive engineering. Fast, precise and using only the finest components. I wouldn't fancy driving one across an open field though.
The 2CV has general applicability, while the F1 car only has very specific applicability. The F1 car has been overfitted to the specific problem of going round a racetrack as quickly as possible with the benefit of a team of professional engineers to monitor, assess and problem solve any issues that may arise from high performance operation.
Similarly, an overfitted model will perform well in the situations it was fit to, but poorly (or not at all) elsewhere. A model with general applicability will be more useful if it will be exposed to different environments outside your control, even if it is not as good as the specialized models.
|
14,282
|
Why do we worry about overfitting even if "all models are wrong"?
|
As others have noted, the full quote is "all models are wrong, but some are useful."
When we overfit a data set, we create a model that is not useful. For instance, let's make up some data:
set.seed(123)
x1 <- rnorm(6)
x2 <- rnorm(6)
x3 <- rnorm(6)
x4 <- rnorm(6)
y <- rnorm(6)
which creates 5 variables, each a standard normal, each with N = 6.
Now, let's fit a model:
overfit <- lm(y~x1+x2+x3+x4)
The model has $R^2$ of 0.996. x1 has a significant p value and x4 is almost sig. (at the usual level of 0.05).
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.54317 0.08887 -6.112 0.1032
x1 2.01199 0.14595 13.785 0.0461 *
x2 0.14325 0.08022 1.786 0.3250
x3 0.45653 0.08997 5.074 0.1239
x4 1.21557 0.15086 8.058 0.0786 .
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.1601 on 1 degrees of freedom
Multiple R-squared: 0.9961, Adjusted R-squared: 0.9805
F-statistic: 64.01 on 4 and 1 DF, p-value: 0.09344
It fits the data almost perfectly e.g. try
plot(predict(overfit),y)
But it's all random noise.
If we try to apply this model to other data, we will get junk.
|
14,283
|
Why do we worry about overfitting even if "all models are wrong"?
|
Every model has an error. The best model is that which minimizes the error associated with its predictions. This is why models are typically constructed using only a proportion of the data (in-sample), and then applied to the remaining 'out of sample' data set. An over-fitted model will typically have a greater prediction error in practice than a well formulated one.
In addition, a model should be intellectually robust: there is no point constructing a model that works in one 'regime' if it does not work at all in the event of regime change. Such a model might appear to be very well formed until the regime changes, because essentially such a model has been constructed 'in-sample'. Another way of saying that is that the model's expected error must be well formulated too.
There is also the matter of 'Occam's Razor', which is the philosophical idea that the model should be the simplest possible, using the least number of variables required to describe the system being modelled. This serves as a useful guide rather than a set-in-stone rule, but I believe that this is the idea behind using the 'adjusted R squared' rather than the R squared, to adjust for the natural improvement in fit associated with using more variables (e.g. you would have a perfect fit, an R squared of 100%, if you had a separate variable for every piece of data!). It is also an idea that should be applied to modern ML techniques: throwing e.g. thousands of variables at an ML algorithm is dangerous unless you have millions of pieces of data (and even then you might be better off transforming your data to reduce the number of variables first).
One final point: every model requires belief. Even our laws of Physics are based on observation, and indeed they have required modification as we moved from Newtonian physics into the realms of the very small (Quantum mechanics) and the very large (General Relativity).
We cannot say with absolute certainty that our current laws of Physics will hold in the future, or even in the past (e.g. around the time of the big bang). But appealing to our philosophical belief in Occam's razor results in us accepting these models and ideas because they are the simplest models yet devised that fit our observations and data.
In summary, there are no hard and fast rules. Imagine a complex (chaotic?) dynamical system, for example, the global economy. You might construct a well-formed model that works well for a short period of time. But 'regime change' is a very real issue: the economic system is highly complex and non-linear, and there are far more variables than you can measure, that might be of no consequence in the in-sample regime, but of huge significance in another 'regime'. But within your short, essentially in-sample, period, you might find that linear regression works quite well. Common sense should prevail: sometimes a very complex model is required, but it should be heavily caveated if the error associated with its predictions is unknown.
I'm sure that a proper statistician can give a much better answer than this, but since none of the above points seem to have been made yet, I thought that I would stick my neck out ...
|
Why do we worry about overfitting even if "all models are wrong"?
|
Every model has an error. The best model is that which minimizes the error associated with its predictions. This is why models are typically constructed using only a proportion of the data (in-sampl
|
Why do we worry about overfitting even if "all models are wrong"?
Every model has an error. The best model is that which minimizes the error associated with its predictions. This is why models are typically constructed using only a proportion of the data (in-sample), and then applied to the remaining 'out of sample' data set. An over-fitted model will typically have a greater prediction error in practice than a well formulated one. In addition, a model should be intellectually robust: there is no point constructing a model that works in one 'regime' if it does not work at all in the event of regime change. Such a model might appear to be very well formed until such time as the regime changes because essentially such a model has been constructed 'in-sample'. Another way of saying that is that the model's expected error must be well formulated too. There is also the matter of 'Occam's Razor', which is a philosophical idea that essentially the model should be the simplest possible, using the least number of variables required to describe the system being modelled. This serves as a useful guide, rather than a set-in-stone rule, but I believe that this is the idea behind using the 'adjusted R squared' rather than the R squared, to adjust for the natural improvement in fit associated with using more variables (e.g. you would have perfect fit, an R squared of 100% if you had a separate variable for every piece of data!). It is also an idea that should be applied to modern ML techniques: throwing e.g. thousands of variables at an ML algorithm is dangerous unless you have millions of pieces of data (and even then ... you might be better off transforming your data to reduce the number of variables first). One final point: every model requires belief. Even our laws of Physics are based on observation, and indeed they have required modification as we moved from Newtonian physics into the realms of the very small (Quantum mechanics) and the very large (General Relativity). 
We cannot say with absolute certainty that our current laws of Physics will hold in the future, or even in the past (e.g. around the time of the big bang). But appealing to our philosophical belief in Occam's razor results in us accepting these models and ideas because they are the simplest models yet devised that fit our observations and data.
In summary, there are no hard and fast rules. Imagine a complex (chaotic?) dynamical system, for example, the global economy. You might construct a well-formed model that works well for a short period of time. But 'regime change' is a very real issue: the economic system is highly complex and non-linear, and there are far more variables than you can measure, that might be of no consequence in the in-sample regime, but of huge significance in another 'regime'. But within your short, essentially in-sample, period, you might find that linear regression works quite well. Common sense should prevail: sometimes a very complex model is required, but it should be heavily caveated if the error associated with its predictions is unknown.
I'm sure that a proper statistician can give a much better answer than this, but since none of the above points seem to have been made yet, I thought that I would stick my neck out ...
|
14,284
|
Why do we worry about overfitting even if "all models are wrong"?
|
All models are wrong, but some are less wrong than others.
Overfitting generally makes your model more wrong in dealing with real-world data.
If a doctor were to try to diagnose whether you have cancer, would you rather have them be wrong 50% of the time (very wrong) or 0.1% of the time (much less wrong)?
Or, let's say you give away something for free if your model predicts this will lead to the customer buying something later. Would you rather give away many things for free without this making a difference to whether customers buy things later (quite wrong) or have most customers come back to buy things later (less wrong)?
Clearly less wrong is better.
|
14,285
|
Do we actually take random line in first step of linear regression?
|
NO
What we want to find are the parameters that result in the least amount of error, and OLS defines error as the squared differences between observed values $y_i$ and predicted values $\hat y_i$. Error often gets denoted by an $L$ for "loss".
$$
L(y, \hat y) = \sum_{i = 1}^N \bigg(y_i - \hat y_i\bigg)^2
$$
We have our regression model, $\hat y_i =\hat\beta_0 + \hat\beta_1x_i$, so each $\hat y_i$ is a function of $\hat\beta_0$ and $\hat\beta_1$.
$$
L(y, \hat\beta_0, \hat\beta_1) = \sum_{i = 1}^N \bigg(y_i - (\hat\beta_0 + \hat\beta_1x_i)\bigg)^2
$$
We want to find the $\hat\beta_0$ and $\hat\beta_1$ that minimize $L$.
What the video does is simulate pieces of the entire "loss function". For $\hat\beta_0 = 1$ and $\hat\beta_1 = 7$, you get a certain loss value. For $\hat\beta_0 = 1$ and $\hat\beta_1 = 8$, you get another loss value. One approach to finding the minimum is to pick random values until you find one that results in a loss value that seems low enough (or you're tired of waiting). Much of the deep learning work uses variations of this, with tricks like stochastic gradient descent to make the algorithm get (close to) the right answer in a short amount of time.
In OLS linear regression, however, calculus gives us a solution to the minimization problem, and we do not have to play such games.
$$\hat\beta_1=\frac{cov(x,y)}{var(x)}\\
\hat\beta_0=\bar y-\hat\beta_1\bar x$$
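As a sketch (made-up data), the closed-form solution really does beat any randomly guessed pair of parameters on the loss $L$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 50)
y = 4 + 1.5 * x + rng.normal(0, 1, 50)

def loss(b0, b1):
    """Sum of squared errors for the line b0 + b1*x."""
    return np.sum((y - (b0 + b1 * x)) ** 2)

# Closed-form OLS estimates: beta1 = cov(x, y) / var(x)
b1_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
b0_hat = y.mean() - b1_hat * x.mean()

# Every one of 1000 randomly "guessed lines" has a loss at least as large
guesses = rng.normal(size=(1000, 2)) * 5
best_guess_loss = min(loss(b0, b1) for b0, b1 in guesses)
```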
|
14,286
|
Do we actually take random line in first step of linear regression?
|
We, sort of, do something like this effectively, especially in gradient-descent algorithms. A random line is simply a set of random parameters $\beta_0,\beta_1$. The gradient-descent algorithm has to start somewhere in its search for the optimal parameters, and a random set of parameters is one place to start.
So, in a way, we do start with a line, though we don’t draw it. Also, the algorithm itself is not exactly the one presented, of course. The instructor was probably trying to explain it without the notion of a gradient, and it’s tough. So, I’d give him a pass on a sloppy attempt.
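A minimal gradient-descent sketch with made-up data (not the instructor's actual algorithm): start from a random line and iteratively walk the parameters toward the least-squares solution.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 5, 40)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, 40)

b0, b1 = rng.normal(size=2)        # the "random line" we start from

lr = 0.01                          # learning rate (step size)
for _ in range(5000):
    resid = y - (b0 + b1 * x)
    # Gradient of the mean squared error with respect to each parameter
    b0 += lr * 2 * resid.mean()
    b1 += lr * 2 * (resid * x).mean()

# After enough steps, (b0, b1) agrees with the closed-form fit
slope_exact, intercept_exact = np.polyfit(x, y, 1)
```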
|
14,287
|
Do we actually take random line in first step of linear regression?
|
Sometimes it is more intuitive to show things graphically, mostly for beginners. You can do it this way, of course, but in practice this is not how it is done, as there is a closed-form solution, as Frank Harrell mentioned in the comment. If you have a single independent variable, as in simple linear regression, $\hat y_i = \hat\beta_0 + \hat\beta_1x_i$, you solve it analytically, through the equations below:
$\hat\beta_1=\frac{cov(x,y)}{var(x)}$
and
$\hat\beta_0=\bar y-\hat\beta_1\bar x$
By the way, it is possible that this question (Why is a regression coefficient covariance/variance?) is of your interest.
|
14,288
|
Do we actually take random line in first step of linear regression?
|
That example is definitely NOT the way linear regression is typically done, but I suppose it is an algorithm to find a regression line. As other answers have correctly stated, there is a closed form solution for finding the Least Squares Regression equation for a set of points.
That being said, what's being shown in the snippet is a method for algorithmically finding a line that gets close to the points by trial and error (i.e. iterations).
As a simple analogy to show the difference between a closed form solution and an algorithm: if I were to give you a mathematical equation, say $10 = 2x+4$, and asked you to solve for $x$, we know that you can solve this exactly using algebra.
$10 = 2x+4$
$\implies 2x=6$
$\implies x=3$ ** Exact solution **
Alternatively, an algorithmic approach to this could be used to solve this same equation by guessing a solution (e.g. start with a random guess: $x=0$) and systematically adjusting $x$ until your condition (statement of equality) is met, or approximately met.
$x = 0 \implies 10=4$ ** too low, adjust up **
$x = 1 \implies 10=6$ ** too low, adjust up **
$x = 2 \implies 10=8$ ** too low, adjust up **
$x = 3 \implies 10=10$ ** condition met, stop **
As this crude example shows, algorithms can sometimes approximate the answers returned by closed form solutions, but this isn't guaranteed to happen for all types of equations.
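The adjust-up steps above can be sketched as a loop:

```python
# Solve 10 = 2x + 4 by trial and error, as in the steps above
target = 10
x = 0                        # start from an initial guess
while 2 * x + 4 < target:    # too low: adjust up
    x += 1
# Loop stops once the condition 10 == 2x + 4 is met, i.e. at x = 3
```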
Personally, I don't find the snippet in the question to be pedagogically helpful to showing how regression lines work, and I think there are better examples of how algorithms can be used to find approximate solutions to mathematical equations.
|
14,289
|
Do we actually take random line in first step of linear regression?
|
This clearly looks like an attempt by an instructor to introduce some intuition behind linear regression and iterative optimisation to computer science students not familiar with derivatives or without a mathematical background in general.
If it were up to me I would do it in a slightly different way: start with some "goodness of fit" measure, then, since this is a simple linear regression with one covariate, perform a grid search over the intercept and slope, calculating the selected goodness-of-fit measure for every point on the grid computationally, using a loop.
This would give students some intuition, and they would even feel able to get an approximate answer themselves. After this step we can then mention that, for some goodness-of-fit measures, the problem can be solved precisely without the need for any iterations, but that doesn't change the goal or the intuition behind looking for the best-fitting solution and minimising residuals.
Sticking with the iterative procedure, however, does require a starting point, hence a random line. This is similar to how some neural-network optimisations start with an initialisation of random weights. Still, I feel that calculating the goodness of fit at every subsequent iteration would help clarify things further. Without it, it might seem unclear why the line needs to move at all, and how it is getting any better by moving towards the points.
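The grid search suggested above might look like this (made-up data; sum of squared errors as the goodness-of-fit measure):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 30)
y = 3 + 0.5 * x + rng.normal(0, 1, 30)

intercepts = np.linspace(-5, 10, 151)   # grid over the intercept
slopes = np.linspace(-2, 3, 101)        # grid over the slope

best = (0.0, 0.0, np.inf)
for b0 in intercepts:
    for b1 in slopes:
        sse = np.sum((y - (b0 + b1 * x)) ** 2)   # goodness of fit
        if sse < best[2]:
            best = (b0, b1, sse)

b0_grid, b1_grid, _ = best
# The grid minimum lands close to the exact least-squares solution
slope_exact, intercept_exact = np.polyfit(x, y, 1)
```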
|
14,290
|
Do we actually take random line in first step of linear regression?
|
To be clear, there's a closed-form solution for linear regression that is almost always used to find the fit, so there's no need for a "guess" to start with at all. This example is more of an illustrative example of how stochastic algorithms work than of how to best fit a linear regression model.
However, linear regression really is the exception to the rule in this case. For fitting most models, we do not have a closed form solution and we do need to start with an initial set of parameters and then iteratively improve them.
In such cases, choosing a good starting point, as you have suggested, will often help the algorithm converge faster. For some problems, choosing a good starting point is crucial for acceptable performance (both in terms of speed of convergence and the probability that the algorithm converges to an acceptable answer), while for other model/algorithm combinations the improvement may be so minor that it is not worth the extra effort to find good starting values, and initializing with random values is fine.
|
14,291
|
Do we actually take random line in first step of linear regression?
|
Some methods for robust regression, notably RANSAC (Random Sample Consensus), are actually built around fitting random lines. But this is, of course, far from what is happening here - I agree with those who say that
it is a pedagogical tool
the problem can be solved exactly (optimal least squares)
it is reminiscent of the gradient descent
In the above-mentioned robust methods one actually uses exact regression to fit a line to a random subset of data points, thus diminishing the influence of the outliers (to which the exact solution to linear least-squares regression is extremely sensitive).
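A toy RANSAC sketch with made-up data (real implementations, e.g. scikit-learn's `RANSACRegressor`, add refinements): propose lines through random point pairs, keep the one with the largest consensus set, then refit exactly on that set.

```python
import numpy as np

rng = np.random.default_rng(0)

# A line y = 2x + 1 with small noise, plus 10 gross outliers on the left
x = np.linspace(0, 10, 50)
y = 2 * x + 1 + rng.normal(0, 0.3, 50)
y[:10] += rng.uniform(20, 40, 10)

def fit_ls(x, y):
    """Exact least-squares fit; returns (slope, intercept)."""
    slope = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    return slope, y.mean() - slope * x.mean()

best_inliers = np.zeros(len(x), dtype=bool)
for _ in range(100):                        # RANSAC iterations
    i, j = rng.choice(len(x), size=2, replace=False)
    slope = (y[j] - y[i]) / (x[j] - x[i])   # line through two random points
    intercept = y[i] - slope * x[i]
    inliers = np.abs(y - (slope * x + intercept)) < 1.0
    if inliers.sum() > best_inliers.sum():
        best_inliers = inliers

# Exact refit on the consensus set resists the outliers; a naive fit does not
ransac_slope, _ = fit_ls(x[best_inliers], y[best_inliers])
naive_slope, _ = fit_ls(x, y)
```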
|
14,292
|
What are essential rules for designing and producing plots?
|
Substance over Form: Choose the appropriate plot, style, coloring or other graphical parameters to show what you want the plot to show, rather than what your graphing package necessarily allows.
|
14,293
|
What are essential rules for designing and producing plots?
|
Being familiar with the three dimensions of colour can be helpful.
If you use several colours, they should ideally differ on several of those dimensions, not just one.
Value. The graph should remain readable even in black and white.
This simple rule should account for colour blindness, low-quality printers
and bad lighting conditions.
Even if you use different hues, make sure that the values are sufficiently different.
In particular, the plot should be dark on a light background (or the opposite),
but not grey on grey.
The worst example would be a blue plot on a red background -- both are middle values,
i.e., they would give very similar greys after conversion to black and white.
Saturation.
Saturation should be used with moderation: a pure red line may be fine, but a thicker,
less saturated red line will be more readable
(the increased thickness helps distinguish colours and allows you to reduce saturation).
On the other hand, a pure red area is painful to look at: do not use saturated colours to fill areas.
The Brewer colour palettes
(designed for maps, not line plots) give examples of low-saturation colour choices.
The worst example would be, again, a saturated background (blue on red or red on blue).
Hue.
As mentioned by @gung, avoid the red/green (traffic lights) combination:
there are many more colour-blind people than you think.
Especially with hue, less is more. For instance, to plot "diverging" values
(i.e., quantities that can be positive or negative), only use two hues
(for positive and negative values),
so that the reader can immediately distinguish what is high and what is low.
Using a discrete gradient can result in a much more readable plot:
the boundaries between the colours become visible and form a contour plot.
You may want to read S. Few's
Practical Rules for Using Color in Charts
or refer to any material about "Colour Theory" for art or design students.
|
14,294
|
What are essential rules for designing and producing plots?
|
Place as much of the required information within the figure itself. Do not require the reader to reference the caption, e.g. to identify the meaning of various symbols or colors. Place whatever information (or supplementary information) that cannot go into the figure itself in the caption. The idea is to minimize the effort required by a graph viewer to extract the relevant information--best: graph is self-explanatory, next best: supplementary information required can quickly be gleaned from the caption, worst: the viewer must closely read through the whole results section searching for some crucial detail to figure out what is going on.
|
14,295
|
What are essential rules for designing and producing plots?
|
Make the plot as simple as possible. In Tufte's words, maximise the 'data-ink ratio': most of the ink should encode data, not decoration.
For example, avoid:
more colors or shapes than required
more tick marks than necessary
3-D effects on a 2-D plot.
using a legend when objects can be labeled directly
|
14,296
|
What are essential rules for designing and producing plots?
|
Leave time to edit. Making a good graph takes time and it often takes (at least for me) multiple tries.
|
14,297
|
What are essential rules for designing and producing plots?
|
Don't oppose red and green. Color can be helpful, but when using color always bear in mind that a substantial minority of people are red-green colorblind. I once was showing some data to someone, and he couldn't make out what was going on in my graphs--it was a waste and I felt pretty stupid. Other forms of colorblindness are very rare, but red-green is fairly common. This page has a lot of good information. Here are some tips:
If you only need two colors, use blue and yellow--don't use red and green.
If you need a gradient, go from blue to yellow while changing saturation and lightness simultaneously--don't use the rainbow.
If you need to encode more than two elements (e.g., points on a scatterplot from more than two groups, or several lines) back your colors up with different plotting symbols / line styles as well. For example, distinct plotting symbols: o + < s w, or lines: solid, dotted, dashed, dot-dashed, etc (you can also add plotting symbols to your lines or change line weights).
14,298
What are essential rules for designing and producing plots?
Don't use stacked bar graphs. And on a related note, if you have a Likert scale item, don't feel the need to show the proportion for every response to each item. Those graphs make my eyes bleed.
Don't use pie-charts.
Don't duplicate data that is contained in a graph by throwing in a table.
Use a sans serif font like Arial for graph titles, etc, because those types of fonts are designed to be used that way.
No post on design is complete without a book reference, I really like Statistical Rules of Thumb. Chapter 9 is the bit relevant to the discussion here, and the bits I point to when asked why I hate stacked bar graphs and pie charts. :)
Confession: in one of my first student consulting roles for a small NGO client I gave them a report that had lots of stacked bar graphs, printed in colour (this was the mid 1990s). I think I managed to get yellow, purple, and red into those puppies.
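A common alternative to a stacked bar graph is a grouped (side-by-side) bar chart, where every bar shares the same baseline and can be read directly off the axis. A hedged sketch, with matplotlib assumed available and the data invented for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

groups = ["A", "B", "C"]
yes = np.array([10, 14, 7])
no = np.array([5, 3, 9])

x = np.arange(len(groups))
width = 0.35
fig, ax = plt.subplots()
# Side-by-side bars: each one starts at zero, unlike stacked segments.
bars_yes = ax.bar(x - width / 2, yes, width, label="yes")
bars_no = ax.bar(x + width / 2, no, width, label="no")
ax.set_xticks(x)
ax.set_xticklabels(groups)
ax.legend()
```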
14,299
What are essential rules for designing and producing plots?
Don't mess with the axes. Don't cut off the first hundred units just because then the slope of the graph looks more impressive. The image will stick and people will remember a much larger effect than was actually measured.
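In matplotlib this means pinning the lower axis limit to zero rather than letting it autoscale to the data range. A minimal sketch (matplotlib assumed available; the numbers are invented):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

years = [2018, 2019, 2020, 2021]
values = [102, 104, 103, 106]  # invented numbers for illustration

fig, ax = plt.subplots()
ax.plot(years, values, marker="o")
ax.set_ylim(bottom=0)  # don't cut off the first hundred units
```

With autoscaling, the y-axis here would span roughly 102 to 106 and the tiny fluctuations would look dramatic; anchored at zero, the series is correctly seen as nearly flat.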
14,300
In cluster analysis, how does Gaussian mixture model differ from K Means when we know the clusters are spherical?
Ok, we need to start off by talking about models and estimators and algorithms.
A model is a set of probability distributions, usually chosen because you think the data came from a distribution like one in the set. Models typically have parameters that specify which model you mean from the set. I'll write $\theta$ for the parameters
An estimator of a parameter is something you can compute from the data that you think will be close to the parameter. Write $\hat\theta$ for an estimator of $\theta$
An algorithm is a recipe for computing something from the data, usually something you hope will be useful.
The Gaussian mixture model is a model. It is an assumption or approximation to how the data (and future data, often) were generated. Data from a Gaussian mixture model tend to fall into elliptical (or spherical) clumps
$k$-means is an algorithm. Given a data set, it divides it into $k$ clusters in a way that attempts to minimise the average Euclidean distance from a point to the centre of its clusters.
There's no necessary relationship between the two, but they are at least good friends. If your data are a good fit to a spherical Gaussian mixture model they come in roughly spherical clumps centered at the means of each mixture component. That's the sort of data where $k$-means clustering does well: it will tend to find clusters that each correspond to a mixture component, with cluster centres close to the mixture means.
However, you can use $k$-means clustering without any assumption about the data-generating process. As with other clustering tools, it can be used just to chop up data into convenient and relatively homogeneous pieces, with no philosophical commitment to those pieces being real things (eg, for market segmentation). You can prove things about what $k$-means estimates without assuming mixture models (eg, this and this by David Pollard)
You can fit Gaussian mixture models by maximum likelihood, which is a different estimator and different algorithm than $k$-means. Or with Bayesian estimators and their corresponding algorithms (see eg)
So: spherical Gaussian mixture models are quite closely connected to $k$-means clustering in some ways. In other ways they are not just different things but different kinds of things.
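The contrast can be seen by running both tools on data that really does come from a spherical two-component Gaussian mixture. This sketch assumes scikit-learn is available; the data are simulated here, and the centres around $(\pm 3, 0)$ are my own choice for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated spherical Gaussian clumps centred at (-3, 0) and (3, 0).
X = np.vstack([
    rng.normal(loc=(-3, 0), scale=1.0, size=(200, 2)),
    rng.normal(loc=(3, 0), scale=1.0, size=(200, 2)),
])

# k-means: an algorithm minimising within-cluster squared Euclidean distance.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Maximum likelihood in a spherical Gaussian mixture model, fitted by EM.
gm = GaussianMixture(n_components=2, covariance_type="spherical",
                     random_state=0).fit(X)

# Sort centres by x-coordinate so the two fits are comparable despite
# arbitrary cluster labelling; both should land near (-3, 0) and (3, 0).
km_centres = km.cluster_centers_[np.argsort(km.cluster_centers_[:, 0])]
gm_centres = gm.means_[np.argsort(gm.means_[:, 0])]
```

On data like this the two sets of centres nearly coincide; the fits diverge once the clumps become elongated or badly overlapping, which is where the model-based nature of the mixture fit starts to matter.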