5,801
Why is RSS distributed chi square times n-p?
There is a more general result that underlies many instances of the chi-squared distribution.
Quadratic form $Z^TAZ$ with standard normal $Z$ and symmetric idempotent $A$
Lemma: If $A$ is a symmetric and idempotent $n\times n$ real matrix and $Z\sim N(0,I_n)$ is a random vector of $n$ independent standard normal variables, then $Z^TAZ$ has chi-squared($r$) distribution, $r$ being the trace of $A$.
Proof. Use the decomposition lemma (below) to find an $n\times r$ matrix $U$ with orthonormal columns such that $A=UU^T$ and $r$ is the trace of $A$. Consider $N:=U^TZ$. Then $N$ is a random vector of $r$ variables having multivariate normal distribution with mean vector $0$ and covariance matrix $U^TU=I_r$.
It follows that
$$Z^T AZ = Z^TUU^TZ = N^TN$$
is the sum of squares of $r$ IID standard normal variables, so it has a chi-squared($r$) distribution.
Decomposition of symmetric idempotent matrix
Lemma: If $A$ is a symmetric and idempotent $n\times n$ real matrix, then $A=UU^T$ where $U$ is an $n\times r$ matrix with orthonormal columns, $r$ being the trace of $A$.
Proof. Since matrix $A$ is idempotent, its eigenvalues are zero and one, and the multiplicity of unit eigenvalues equals the rank $r$ of $A$, which in turn equals the trace of $A$. Apply the spectral theorem for symmetric matrices to write $A=UDU^T$ where $D$ is a diagonal matrix of the eigenvalues of $A$ and $U$ is an $n\times n$ orthogonal matrix whose columns are the corresponding eigenvectors. We can delete from $U$ the columns corresponding to zero eigenvalue, leaving an $n\times r$ matrix; $D$ then becomes the identity.
In the present situation, for the linear model $y=X\beta +\epsilon$ with $X$ of full rank $p$ and $\epsilon\sim N(0,\sigma^2 I_n)$, we establish that the residual vector $\hat\epsilon:=y-X\hat\beta$ can be written $\hat\epsilon=(I-H)\epsilon$, where the hat matrix $H:=X(X^TX)^{-1}X^T$ is symmetric and idempotent. The same is true for $I-H$, so $\operatorname{RSS}:=\hat\epsilon^T\hat\epsilon=\epsilon^T(I-H)\epsilon$. Writing $Z:=\epsilon/\sigma\sim N(0,I_n)$, the quadratic form lemma asserts that $\operatorname{RSS}/\sigma^2=Z^T(I-H)Z$ has a chi-squared($r$) distribution, with $r$ the trace of $I-H$. Since the trace of the hat matrix equals the rank of $X$, we conclude $r=\operatorname{tr}(I-H)=n-\operatorname{tr}(H)=n-p$.
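The derivation is easy to check numerically. A minimal Monte Carlo sketch in plain numpy (the sizes $n=50$, $p=3$, the value of $\sigma$, and the design matrix are all arbitrary choices for the illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma = 50, 3, 2.0

# Fixed full-rank design matrix (first column is an intercept).
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix, symmetric idempotent
M = np.eye(n) - H                      # I - H, also symmetric idempotent

# tr(I - H) = n - p, as claimed.
print(round(np.trace(M)), n - p)       # 47 47

# Simulate RSS/sigma^2 many times; residuals are (I - H) @ eps.
eps = sigma * rng.normal(size=(20000, n))
resid = eps @ M.T
draws = (resid ** 2).sum(axis=1) / sigma ** 2

# A chi-squared(n - p) variable has mean n - p = 47.
print(np.mean(draws))                  # close to 47
```

The simulated mean (and, if you check it, the variance $2(n-p)$) matches the chi-squared($n-p$) distribution.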
5,802
Assumptions of linear models and what to do if the residuals are not normally distributed
First off, I would get yourself a copy of this classic and approachable article and read it: Anscombe FJ. (1973) Graphs in statistical analysis. The American Statistician. 27:17–21.
On to your questions:
Answer 1: Neither the dependent nor independent variable needs to be normally distributed. In fact they can have all kinds of loopy distributions. The normality assumption applies to the distribution of the errors ($Y_{i} - \widehat{Y}_{i}$).
Answer 2: You are actually asking about two separate assumptions of ordinary least squares (OLS) regression:
One is the assumption of linearity. This means that the trend in $\overline{Y}$ across $X$ is expressed by a straight line (Right? Straight back to algebra: $y = a +bx$, where $a$ is the $y$-intercept, and $b$ is the slope of the line.) A violation of this assumption simply means that the relationship is not well described by a straight line (e.g., $\overline{Y}$ is a sinusoidal function of $X$, or a quadratic function, or even a straight line that changes slope at some point). My own preferred two-step approach to address non-linearity is to (1) perform some kind of non-parametric smoothing regression to suggest specific nonlinear functional relationships between $Y$ and $X$ (e.g., using LOWESS, or GAMs, etc.), and (2) to specify a functional relationship using either a multiple regression that includes nonlinearities in $X$, (e.g., $Y \sim X + X^{2}$), or a nonlinear least squares regression model that includes nonlinearities in parameters of $X$ (e.g., $Y \sim X + \max{(X-\theta,0)}$, where $\theta$ represents the point where the regression line of $\overline{Y}$ on $X$ changes slope).
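As an illustration of step (2), the broken-stick model $Y \sim X + \max(X-\theta,0)$ can be fitted by profiling the residual sum of squares over a grid of candidate $\theta$ values. A sketch in plain numpy, on simulated data with a known breakpoint (all values are invented for the illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data whose mean changes slope at x = 2.
x = rng.uniform(0, 5, size=200)
y = 1.0 + 0.5 * x + 1.5 * np.maximum(x - 2.0, 0) + rng.normal(scale=0.3, size=200)

def rss_at(theta):
    """Least-squares fit of y ~ 1 + x + max(x - theta, 0); returns the RSS."""
    Z = np.column_stack([np.ones_like(x), x, np.maximum(x - theta, 0)])
    coefs, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ coefs
    return resid @ resid

# Profile the breakpoint over a grid and keep the best fit.
grid = np.linspace(0.5, 4.5, 81)
theta_hat = min(grid, key=rss_at)
print(theta_hat)   # close to the true breakpoint 2.0
```

For each candidate $\theta$ the model is linear in the remaining parameters, so ordinary least squares does the inner fit; only the breakpoint needs the grid search.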
Another is the assumption of normally distributed residuals. Sometimes one can validly get away with non-normal residuals in an OLS context; see for example, Lumley T, Emerson S. (2002) The Importance of the Normality Assumption in Large Public Health Data Sets. Annual Review of Public Health. 23:151–69. Sometimes, one cannot (again, see the Anscombe article).
However, I would recommend thinking about the assumptions in OLS not so much as desired properties of your data, but rather as interesting points of departure for describing nature. After all, most of what we care about in the world is more interesting than $y$-intercept and slope. Creatively violating OLS assumptions (with the appropriate methods) allows us to ask and answer more interesting questions.
5,803
Assumptions of linear models and what to do if the residuals are not normally distributed
Your first problems are:
in spite of your assurances, the residual plot shows that the conditional expected response isn't linear in the fitted values; the model for the mean is wrong.
you don't have constant variance; the model for the variance is wrong.
you can't even assess normality while those problems are present.
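A minimal illustration of the non-constant-variance point, on simulated data (numpy only; the setup is invented for the sketch): when the error spread grows with the mean, the residual spread visibly grows with the fitted values.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(1, 10, size=500)

# Error standard deviation proportional to x: non-constant variance.
y = 2.0 + 3.0 * x + rng.normal(size=500) * (0.5 * x)

# Simple linear fit and residuals.
X = np.column_stack([np.ones_like(x), x])
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ coefs
resid = y - fitted

# Compare residual spread in the lower vs upper half of the fitted values.
lo = resid[fitted < np.median(fitted)]
hi = resid[fitted >= np.median(fitted)]
print(lo.std(), hi.std())   # the upper half is clearly more spread out
```

This is exactly the fan shape a residuals-versus-fitted plot would show, and it has to be fixed (e.g. by modelling the variance or transforming) before a normality assessment means anything.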
5,804
Assumptions of linear models and what to do if the residuals are not normally distributed
The most accessible exploration of the impact of non-normal errors that I have found is this paper by Schmidt and Finan.
Here is the summary of the results in the abstract:
Although outcome transformations bias point estimates, violations of the normality assumption in linear regression analyses do not. The normality assumption is necessary to unbiasedly estimate standard errors, and hence confidence intervals and P-values. However, in large sample sizes (e.g., where the number of observations per variable is >10), violations of this normality assumption often do not noticeably impact results. Contrary to this, assumptions on the parametric model, absence of extreme observations, homoscedasticity, and independence of the errors remain influential even in large sample size settings.
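In the same spirit, a small simulation (numpy; the design and error distribution are illustrative choices, not taken from the paper) shows that even strongly skewed errors leave the OLS slope estimate centered on the true value:

```python
import numpy as np

rng = np.random.default_rng(3)
n, true_slope = 200, 2.0
x = rng.uniform(0, 1, size=n)
X = np.column_stack([np.ones(n), x])

slopes = []
for _ in range(5000):
    # Centered exponential errors: heavily skewed, clearly non-normal.
    eps = rng.exponential(scale=1.0, size=n) - 1.0
    y = 1.0 + true_slope * x + eps
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    slopes.append(coefs[1])

print(np.mean(slopes))   # close to the true slope 2.0
```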
5,805
Assumptions of linear models and what to do if the residuals are not normally distributed
In addition to the previous answer, I would like to add some points to improve your model:
Sometimes non-normality of residuals indicates the presence of outliers. If this is the case, handle the outliers first.
Transformations may solve the problem; however, they have consequences. For example, the interpretation of the coefficients changes when we transform variables.
Additionally, to deal with multicollinearity, you can refer to https://www.researchgate.net/post/My_data_has_the_problem_of_multicolinearity_Removing_unique_variables_using_variance_inflation_factor_VIF_didnt_work_Any_solution
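One standard way to quantify multicollinearity is the variance inflation factor, $\mathrm{VIF}_j = 1/(1-R_j^2)$, where $R_j^2$ comes from regressing predictor $j$ on the remaining predictors. A sketch in plain numpy (the simulated predictors are invented for the illustration):

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of the predictor matrix X."""
    n, k = X.shape
    out = []
    for j in range(k):
        # Regress column j on an intercept plus the other columns.
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        coefs, *_ = np.linalg.lstsq(others, X[:, j], rcond=None)
        resid = X[:, j] - others @ coefs
        r2 = 1.0 - resid @ resid / ((X[:, j] - X[:, j].mean()) ** 2).sum()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(4)
a = rng.normal(size=300)
b = a + 0.1 * rng.normal(size=300)      # nearly collinear with a
c = rng.normal(size=300)                # independent of both
v = vif(np.column_stack([a, b, c]))
print(v)   # large VIFs for a and b, about 1 for c
```

A common rule of thumb flags VIFs above 5 or 10 as a sign of problematic collinearity.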
5,806
Assumptions of linear models and what to do if the residuals are not normally distributed
I wouldn't say the linear model is completely useless. However, this means that your model doesn't correctly or fully explain your data. At some point you have to decide whether the model is "good enough" or not.
For your first question: a linear regression model does not assume that your dependent and independent variables are normal. However, there is an assumption about the normality of the residuals.
For your second question, there are two different things you could consider:
Check different kinds of models. Another model might be better at explaining your data (for example, non-linear regression). You would still have to check that the assumptions of this "new model" are not violated.
Your data may not contain enough covariates (independent variables) to explain the response (outcome). In this case, there is not much else you can do. Sometimes we may accept checking whether the residuals follow a different distribution (e.g. a t-distribution), but that doesn't seem to be the case for you.
In addition to your question, I see that your QQ plot is not "normalized". It is usually easier to read when your residuals are standardised; see stdres:
stdres(lmobject)
I hope this helps; maybe someone else can explain it better than me.
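(`stdres` is an R function, from the MASS package.) A rough Python equivalent, computing internally standardized residuals $e_i/(s\sqrt{1-h_{ii}})$ on invented data, might look like this:

```python
import numpy as np

def standardized_residuals(X, y):
    """Internally standardized residuals e_i / (s * sqrt(1 - h_ii))."""
    n, p = X.shape
    H = X @ np.linalg.inv(X.T @ X) @ X.T   # hat matrix
    resid = y - H @ y                      # ordinary residuals
    s2 = resid @ resid / (n - p)           # residual variance estimate
    return resid / np.sqrt(s2 * (1 - np.diag(H)))

rng = np.random.default_rng(5)
x = rng.uniform(0, 1, size=100)
X = np.column_stack([np.ones(100), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=100)

r = standardized_residuals(X, y)
print(r.mean(), r.std())   # roughly 0 and 1 for a well-specified model
```

Plotting these against theoretical normal quantiles gives a QQ plot on a common scale.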
5,807
Assumptions of linear models and what to do if the residuals are not normally distributed
For your second question:
Something that happened to me in practice was that I was overfitting my response with many independent variables. In the overfitted model I had non-normal residuals. Moreover, the results established that there wasn't enough evidence to discard the possibility that some coefficients were zero (p-values greater than 0.2). So in a second model, dropping variables following a backward selection procedure, I got normal residuals, validated both graphically with a QQ plot and by hypothesis testing with a Shapiro–Wilk test. Check whether this could be your case.
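The formal check described above can be sketched in Python, with `scipy.stats.shapiro` playing the role of the Shapiro–Wilk test (the model and data here are invented for the illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
x = rng.uniform(0, 1, size=100)
X = np.column_stack([np.ones(100), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=100)

# OLS residuals.
coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coefs

# Shapiro-Wilk: a large p-value gives no evidence against normality.
stat, pval = stats.shapiro(resid)
print(stat, pval)
```

As always with such tests, a small p-value indicates non-normality, while a large one is only absence of evidence against it.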
5,808
Is machine learning less useful for understanding causality, thus less interesting for social science?
There are IMHO no formal differences that distinguish machine learning and statistics at the fundamental level of fitting models to data. There may be cultural differences in the choice of models, the objectives of fitting models to data, and to some extent the interpretations.
In the typical examples I can think of we always have
a collection of models $M_i$ for $i \in I$ for some index set $I$,
and for each $i$ an unknown component $\theta_i$ (the parameters, may be infinite dimensional) of the model $M_i$.
Fitting $M_i$ to data is almost always a mathematical optimization problem consisting of finding the optimal choice of the unknown component $\theta_i$ to make $M_i$ fit the data as measured by some favorite function.
The selection among the models $M_i$ is less standard, and there is a range of techniques available. If the objective of the model fitting is purely predictive, the model selection is done with an attempt to get good predictive performance, whereas if the primary objective is to interpret the resulting models, more easily interpretable models may be selected over other models even if their predictive power is expected to be worse.
What could be called old school statistical model selection is based on statistical tests perhaps combined with step-wise selection strategies, whereas machine learning model selection typically focuses on the expected generalization error, which is often estimated using cross-validation. Current developments in and understandings of model selection do, however, seem to converge towards a more common ground, see, for instance, Model Selection and Model Averaging.
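The cross-validation idea can be sketched in a few lines (plain numpy; polynomial degree stands in for model complexity, and all data are simulated):

```python
import numpy as np

rng = np.random.default_rng(7)
x = rng.uniform(-1, 1, size=120)
# True mean is quadratic (degree 2), plus noise.
y = 1.0 + 2.0 * x - 1.5 * x ** 2 + rng.normal(scale=0.3, size=120)

def cv_mse(degree, folds=5):
    """5-fold cross-validated mean squared error of a polynomial fit."""
    idx = rng.permutation(len(x))
    errs = []
    for f in range(folds):
        test = idx[f::folds]
        train = np.setdiff1d(idx, test)
        coefs = np.polyfit(x[train], y[train], degree)
        errs.append(np.mean((y[test] - np.polyval(coefs, x[test])) ** 2))
    return np.mean(errs)

scores = {d: cv_mse(d) for d in range(1, 8)}
best = min(scores, key=scores.get)
print(best)   # a low degree near the true complexity
```

A too-simple model underfits and a too-complex one overfits; the cross-validated error is smallest in between, which is exactly the estimate of generalization error the paragraph above refers to.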
Inferring causality from models
The crux of the matter is how we can interpret a model. If the data obtained are from a carefully designed experiment and the model is adequate, it is plausible that we can interpret the effect of a change of a variable in the model as a causal effect, and if we repeat the experiment and intervene on this particular variable we can expect to observe the estimated effect. If, however, the data are observational, we cannot expect that estimated effects in the model correspond to observable intervention effects. This will require additional assumptions irrespective of whether the model is a "machine learning model" or a "classical statistical model".
It may be that people trained in using classical statistical models with a focus on univariate parameter estimates and effect size interpretations are of the impression that a causal interpretation is more valid in this framework than in a machine learning framework. I would say it is not.
The area of causal inference in statistics does not really remove the problem, but it does make the assumptions upon which causal conclusions rest explicit. They are referred to as untestable assumptions. The paper Causal inference in statistics: An overview by Judea Pearl is a good paper to read. A major contribution from causal inference is the collection of methods for the estimation of causal effects under assumptions where there actually are unobserved confounders, which is otherwise a major concern. See Section 3.3 in the Pearl paper above. A more advanced example can be found in the paper Marginal Structural Models and Causal Inference in Epidemiology.
It is a subject matter question whether the untestable assumptions hold. They are precisely untestable because we can not test them using the data. To justify the assumptions other arguments are required.
As an example of where machine learning and causal inference meet, the ideas of targeted maximum-likelihood estimation as presented in Targeted Maximum Likelihood Learning by Mark van der Laan and Daniel Rubin typically exploit machine learning techniques for non-parametric estimation followed by the "targeting" towards a parameter of interest. The latter could very well be a parameter with a causal interpretation. The idea in Super Learner is to rely heavily on machine learning techniques for estimation of parameters of interest. It is an important point by Mark van der Laan (personal communication) that classical, simple and "interpretable" statistical models are often wrong, which leads to biased estimators and too optimistic assessment of the uncertainty of the estimates.
5,809
Is machine learning less useful for understanding causality, thus less interesting for social science?
There is a (fairly limited) set of statistical tools for so-called "causal inference". These are designed for actually assessing causal relationships and are proven to do this correctly. Excellent, but not for the meek of heart (or brain, for that matter).
Apart from that, in many instances, the ability to infer causality is much more a consequence of your design than of the techniques at hand: if you have control over 'all' the variables in your experiment, and you see something happening every time you (only) change one variable, it is reasonable to call the thing that happens a 'consequence' of the thing you change (unfortunately, in real research, these extreme cases rarely actually occur). Another intuitive but sound line of reasoning is time-based: if you randomly (but in a controlled manner) change a variable and another changes the day after, causality is also around the corner.
All of my second paragraph essentially works regardless of which methods you use to find which variables changed in which conditions, so at least in theory there is no reason why Machine Learning (ML) would be worse than Statistics based methods.
Disclaimer: Highly subjective paragraph following
However, in my experience, too often ML techniques are just let loose on a blob of data without consideration of where the data came from or how it was collected (i.e. disregarding the design). In those cases, every so often a result bobs up, but it will be extremely hard to say something useful about causality. This will be exactly the same when some statistically sound method is run upon that same data. However, people with a strong statistics background are trained to be critical towards these matters, and if all goes well, will avoid these pitfalls. Perhaps it is simply the mindset of early (but sloppy) adopters of ML techniques (typically not the developers of new techniques but those eager to 'prove' some results with them in their field of interest) that has given ML its bad reputation on this account. (Note that I am not saying statistics is better than ML, or that all people doing ML are sloppy and those doing stats aren't.)
|
5,810
|
Is machine learning less useful for understanding causality, thus less interesting for social science?
|
My view is that the models used in economics and the other social sciences are useful only insofar as they have predictive power in the real world - a model which doesn't predict the real world is just some clever math. A favorite saying of mine to colleagues is that "data is king".
It seems to me that your question raises two critiques of a predictive approach. First, you point out that the models produced by machine learning techniques may not be interpretable. Second, you suggest that the methods used by those in the social sciences are more useful for uncovering causal relationships than machine learning.
To address the first point, I'd offer the following counter argument. The present fad in machine learning favours methods (like SVMs and NN) which are not at all easy for a layperson to understand. This does not mean that all machine learning techniques have this property. For example, the venerable C4.5 decision tree is still widely used 20 years after reaching the final stage of its development, and produces as output a number of classification rules. I would argue that such rules lend themselves better to interpretation than do concepts like the log odds ratio, but that's a subjective claim. In any case, such models are interpretable.
In addressing the second point, I will concede that if you train a machine learning model in one environment, and test it in another, it will likely fail, however, there is no reason to suppose a priori that this is not also true of a more conventional model: if you build your model under one set of assumptions, and then evaluate it under another, you'll get bad results. To co-opt a phrase from computer programming: "garbage in, garbage out" applies equally well to both machine learning and designed models.
|
5,811
|
Is machine learning less useful for understanding causality, thus less interesting for social science?
|
No. Causal inference is an active area of research in machine learning, for instance see the proceedings of this workshop and this one. I would however point out that even if causal inference or model interpretation is your primary interest, it is still a good idea to try an opaque purely predictive approach in parallel, so that you will know if there is a significant performance penalty involved in insisting on an interpretable model.
|
5,812
|
Is machine learning less useful for understanding causality, thus less interesting for social science?
|
I will not re-iterate the very good points already made in other answers, but would like to add a somewhat different perspective. What I say here is somewhat philosophical, not necessarily drawn from professional experience, but from a mixed background in the physical sciences, complex systems theory and machine learning (and, I have to admit, largely undergraduate statistics).
One substantial difference between machine learning and classical statistical approaches (that I am aware of) is in the set of assumptions that are made. In classical statistics, many assumptions about the underlying processes and distributions are fixed and tend to be taken for granted. In machine learning, however, these assumptions are explicitly chosen for each model, resulting in a much broader set of possibilities and perhaps a greater awareness of the assumptions being made.
We are seeing more and more that systems in the world around us behave in complex, non-linear ways, and that many processes do not obey assumptions of normality etc. typically present in classical statistics. I would argue that, due to the flexibility and variety of model assumptions, machine learning approaches will often lead to a more robust model in such cases.
There are strong model assumptions built into phrases such as "magnitude of effect", "causal relation", and "degree to which one variable affects the outcome". In a complex system (such as an economy), these assumptions will only be valid within a certain window of possible system states. With some observables and processes, this window may be large, leading to relatively robust models. With others it may be small or even empty. Perhaps the greatest danger is the middle ground: a model may appear to be working, but when the system shifts, fail in sudden and surprising ways.
Machine learning is no panacea. Rather, I see it as a search for new ways of gleaning meaning from our observations, seeking new paradigms that are needed if we are to deal effectively with the complexity we are starting to perceive in the world around us.
|
5,813
|
Softmax layer in a neural network
|
I feel a little bit bad about providing my own answer for this because it is pretty well captured by amoeba and juampa, except for maybe the final intuition about how the Jacobian can be reduced back to a vector.
You correctly derived the diagonal entries of the Jacobian matrix, which is to say that
$ {\partial h_i \over \partial z_j}= h_i(1-h_j)\;\;\;\;\;\;: i = j $
and as amoeba stated it, you also have to derive the off diagonal entries of the Jacobian, which yield
$ {\partial h_i \over \partial z_j}= -h_ih_j\;\;\;\;\;\;: i \ne j $
These two definitions can be conveniently combined using a construct called the Kronecker Delta, so the definition of the gradient becomes
$ {\partial h_i \over \partial z_j}= h_i(\delta_{ij}-h_j) $
So the Jacobian is a square matrix $ \left[J \right]_{ij}=h_i(\delta_{ij}-h_j) $
All of the information up to this point is already covered by amoeba and juampa. The problem is, of course, that we need to get the input errors from the output errors that are already computed. Since each output $h_i$ depends on all of the inputs, the gradient with respect to the input $z_k$ collects a contribution from every output:
$[\nabla z]_k = \sum\limits_{i=1}^{n} J_{ki}\,[\nabla h]_i $
Given the Jacobian matrix defined above, this is implemented trivially as the product of the matrix and the output error vector:
$ \vec{\sigma_l} = J\vec{\sigma_{l+1}} $
If the softmax layer is your output layer, then combining it with the cross-entropy cost model simplifies the computation to simply
$ \vec{\sigma_l} = \vec{h}-\vec{t} $
where $\vec{t}$ is the vector of labels, and $\vec{h}$ is the output from the softmax function. Not only is the simplified form convenient, it is also extremely useful from a numerical stability standpoint.
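The matrix-vector view above can be checked numerically. Below is a minimal NumPy sketch (the vectors `z`, `sigma_out`, and the one-hot target `t` are made-up illustrative values, not from the original post) that builds the Jacobian $[J]_{ij}=h_i(\delta_{ij}-h_j)$, backpropagates a generic output error through it, and confirms that for the cross-entropy loss the product $J\,\nabla_h E$ collapses to $\vec{h}-\vec{t}$:

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability before exponentiating
    e = np.exp(z - z.max())
    return e / e.sum()

def softmax_jacobian(h):
    # [J]_ij = h_i * (delta_ij - h_j): diagonal h_i(1-h_i), off-diagonal -h_i*h_j
    return np.diag(h) - np.outer(h, h)

z = np.array([1.0, 2.0, 0.5])
h = softmax(z)
J = softmax_jacobian(h)

# generic backprop through the softmax layer: input error = J @ output error
sigma_out = np.array([0.1, -0.2, 0.3])
sigma_in = J @ sigma_out

# with cross-entropy E = -sum t_i log h_i and one-hot t, the chain rule
# J @ (dE/dh) reduces to the simple form h - t
t = np.array([0.0, 1.0, 0.0])
grad_ce_wrt_h = -t / h
assert np.allclose(J @ grad_ce_wrt_h, h - t)
```

Note that each row (and column) of $J$ sums to zero, which reflects the constraint that the softmax outputs always sum to one.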
|
5,814
|
Softmax layer in a neural network
|
The derivative is wrong. It should be,
$$\frac{\partial h_{j}}{\partial z_{k}} = h_{j}\delta_{kj}-h_{j}h_{k}$$
check your calculations again.
Also, the expression given by amoeba for the cross-entropy is not entirely correct. For a set of data samples drawn from $C$ different classes, it reads,
$$-\sum_{n}\sum_{k=1}^{C}t_{k}^{n}\ln y_{k}(\boldsymbol{x}^{n})$$
where the superscript $n$ runs over the sample set and $t_{k}^{n}$ is the value of the k-th component of the target for the n-th sample. Here it is assumed that you are using a 1-of-C coding scheme, that is, $t_{k}^{n}\in\{0,1\}$. In such case all t's are zero except for the component representing its corresponding class, which is one.
Note that the t's are constant. Hence minimizing this functional is equivalent to minimizing,
$$-\sum_{n}\sum_{k=1}^{C}t_{k}^{n}\ln y_{k}(\boldsymbol{x}^{n}) + \sum_{n}\sum_{k=1}^{C}t_{k}^{n}\ln t_{k}^{n} = -\sum_{n}\sum_{k=1}^{C}t_{k}^{n}\ln \frac{y_{k}(\boldsymbol{x}^{n})}{t_{k}^{n}}$$
which has the advantage that the gradient with respect to the softmax inputs takes a very convenient form, namely,
$$\frac{\partial E}{\partial z_{j}} = h_{j}-t_{j}$$
I would recommend you to get a copy of Bishop's Neural Networks for Pattern Recognition. IMHO still the best book on neural networks.
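The identity $\partial E/\partial z_j = h_j - t_j$ can be verified with a finite-difference check. The sketch below (the input vector `z` and the 1-of-C target `t` are arbitrary illustrative values) compares a central-difference gradient of the cross-entropy against the closed form:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(z, t):
    # E = -sum_k t_k * ln y_k(z), for a single sample
    return -np.sum(t * np.log(softmax(z)))

z = np.array([0.3, -1.2, 2.0, 0.5])
t = np.array([0.0, 0.0, 1.0, 0.0])    # 1-of-C coding

# central finite differences of E with respect to each z_j
eps = 1e-6
num_grad = np.array([
    (cross_entropy(z + eps * e, t) - cross_entropy(z - eps * e, t)) / (2 * eps)
    for e in np.eye(len(z))
])

analytic = softmax(z) - t             # dE/dz_j = h_j - t_j
assert np.allclose(num_grad, analytic, atol=1e-6)
```

Since both $\sum_j h_j = 1$ and $\sum_j t_j = 1$, the gradient components sum to zero, which is a handy sanity check in an implementation.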
|
5,815
|
Softmax layer in a neural network
|
Each output of the softmax depends on all the inputs, so the gradient is indeed a whole Jacobian matrix. You correctly computed $\partial_j h_j = \frac{\partial h_j}{\partial z_j}=h_j(1-h_j)$, but you also need $\partial_k h_j=-h_jh_k$ if $j \neq k$. I guess if you can derive the first expression, you should easily be able to derive the second one as well.
I am not sure what problem you see with back-propagating: in the softmax layer you have $j$ outputs and $j$ inputs, so an error from each output should be propagated to each input, and that is precisely why you need the whole Jacobian. On the other hand, usually you would have a cost function associated with the softmax output, e.g. $$C=-\sum_j t_j \log h_j, $$ where $t_j$ are your desired outputs (when you do classification, then often one of them is equal to 1, and others to 0). Then in fact you are interested in $\frac{\partial C}{\partial z_j}$, which can be computed with a chain rule resulting in a neat expression, and is indeed a vector (not a matrix).
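Both the diagonal and the off-diagonal entries claimed above can be confirmed numerically. This short sketch (the input vector `z` is a made-up example) builds the full Jacobian by central differences of the softmax itself and compares it to $h_j(\delta_{jk}-h_k)$:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

z = np.array([0.5, 1.5, -0.5])
h = softmax(z)

# numerical Jacobian: column k holds dh/dz_k via central differences
eps = 1e-6
num_J = np.column_stack([
    (softmax(z + eps * e) - softmax(z - eps * e)) / (2 * eps)
    for e in np.eye(len(z))
])

# analytic form: diagonal h_j(1-h_j), off-diagonal -h_j*h_k
analytic_J = np.diag(h) - np.outer(h, h)
assert np.allclose(num_J, analytic_J, atol=1e-6)
```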
|
5,816
|
Difference between naive Bayes & multinomial naive Bayes
|
The general term Naive Bayes refers to the strong independence assumptions in the model, rather than the particular distribution of each feature. A Naive Bayes model assumes that each of the features it uses is conditionally independent of the others given some class. More formally, if I want to calculate the probability of observing features $f_1$ through $f_n$, given some class c, under the Naive Bayes assumption the following holds:
$$ p(f_1,..., f_n|c) = \prod_{i=1}^n p(f_i|c)$$
This means that when I want to use a Naive Bayes model to classify a new example, the posterior probability is much simpler to work with:
$$ p(c|f_1,...,f_n) \propto p(c)p(f_1|c)...p(f_n|c) $$
Of course these assumptions of independence are rarely true, which may explain why some have referred to the model as the "Idiot Bayes" model, but in practice Naive Bayes models have performed surprisingly well, even on complex tasks where it is clear that the strong independence assumptions are false.
Up to this point we have said nothing about the distribution of each feature. In other words, we have left $p(f_i|c)$ undefined. The term Multinomial Naive Bayes simply lets us know that each $p(f_i|c)$ is a multinomial distribution, rather than some other distribution. This works well for data which can easily be turned into counts, such as word counts in text.
The distribution you had been using with your Naive Bayes classifier is a Gaussian p.d.f., so I guess you could call it a Gaussian Naive Bayes classifier.
In summary, Naive Bayes classifier is a general term which refers to conditional independence of each of the features in the model, while Multinomial Naive Bayes classifier is a specific instance of a Naive Bayes classifier which uses a multinomial distribution for each of the features.
References:
Stuart J. Russell and Peter Norvig. 2003. Artificial Intelligence: A Modern Approach (2 ed.). Pearson Education. See p. 499 for reference to "idiot Bayes" as well as the general definition of the Naive Bayes model and its independence assumptions
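The factorization $p(c|f_1,\dots,f_n) \propto p(c)\prod_i p(f_i|c)$ with multinomial likelihoods fits in a few lines of NumPy. The word-count matrix below is a made-up toy corpus, and add-one (Laplace) smoothing is an assumed but common choice; the class-independent multinomial coefficient is dropped since it cancels when comparing classes:

```python
import numpy as np

# toy word-count data: rows = documents, columns = vocabulary counts
X = np.array([[3, 0, 1],
              [2, 0, 0],
              [0, 4, 1],
              [0, 3, 2]])
y = np.array([0, 0, 1, 1])           # two classes

classes = np.unique(y)
log_prior = np.log(np.array([np.mean(y == c) for c in classes]))

# multinomial word probabilities per class, with add-one smoothing
log_lik = np.vstack([
    np.log((X[y == c].sum(axis=0) + 1.0) /
           (X[y == c].sum() + X.shape[1]))
    for c in classes
])

def predict(x):
    # log p(c|x) up to a constant: log p(c) + sum_i x_i * log p(word_i|c)
    scores = log_prior + log_lik @ x
    return classes[np.argmax(scores)]

assert predict(np.array([2, 0, 1])) == 0
assert predict(np.array([0, 3, 0])) == 1
```

Swapping the multinomial term for a per-feature Gaussian log-density would turn this into the Gaussian Naive Bayes variant mentioned above; the independence structure stays identical.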
|
5,817
|
Difference between naive Bayes & multinomial naive Bayes
|
In general, to train Naive Bayes for n-dimensional data and k classes you need to estimate $P(x_i | c_j)$ for each $1 \leq i \leq n$, $1 \leq j \leq k$. You can assume any probability distribution for any pair $(i,j)$ (although it is better not to assume a discrete distribution for $P(x_i|c_{j_1})$ and a continuous one for $P(x_i | c_{j_2})$ for the same feature). You can have a Gaussian distribution on one variable, Poisson on another, and some discrete distribution on yet another.
Multinomial Naive Bayes simply assumes a multinomial distribution for all the pairs, which seems to be a reasonable assumption in some cases, e.g. for word counts in documents.
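As a sketch of mixing distributions per feature, the example below (toy data invented for illustration; it assumes SciPy is available) fits a Gaussian to one feature and a Poisson to a count feature, then classifies by the largest log-posterior:

```python
import numpy as np
from scipy.stats import norm, poisson

# toy data: feature 0 is continuous (Gaussian per class),
# feature 1 is a count (Poisson per class)
X = np.array([[1.0, 2], [1.2, 3], [0.9, 2],
              [3.0, 7], [3.2, 8], [2.8, 6]])
y = np.array([0, 0, 0, 1, 1, 1])

# estimate per-class parameters for each feature's assumed distribution
params = {}
for c in (0, 1):
    Xc = X[y == c]
    params[c] = dict(prior=np.mean(y == c),
                     mu=Xc[:, 0].mean(), sd=Xc[:, 0].std(ddof=1),
                     lam=Xc[:, 1].mean())

def log_posterior(x, c):
    p = params[c]
    return (np.log(p["prior"])
            + norm.logpdf(x[0], p["mu"], p["sd"])      # Gaussian feature
            + poisson.logpmf(int(x[1]), p["lam"]))     # Poisson feature

def predict(x):
    return max((0, 1), key=lambda c: log_posterior(x, c))

assert predict(np.array([1.1, 2])) == 0
assert predict(np.array([3.1, 7])) == 1
```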
|
5,818
|
How to interpret F- and p-value in ANOVA?
|
To answer your questions:
You find the critical F value from an F distribution (here's a table). See an example. You have to be careful about one-way versus two-way, degrees of freedom of numerator and denominator.
Yes.
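Instead of a printed table, the critical F value can be looked up in software. A small sketch (the choice of $\alpha=0.05$, 3 groups, and 30 observations is an arbitrary example; it assumes SciPy is available):

```python
from scipy import stats

# critical value for a one-way ANOVA at alpha = 0.05
# with 3 groups (df1 = 3 - 1 = 2) and 30 observations (df2 = 30 - 3 = 27)
alpha, df1, df2 = 0.05, 2, 27
f_crit = stats.f.ppf(1 - alpha, df1, df2)

# reject H0 when the observed F exceeds f_crit; equivalently, when p < alpha
p_value = stats.f.sf(f_crit, df1, df2)
assert abs(p_value - alpha) < 1e-9
```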
|
5,819
|
How to interpret F- and p-value in ANOVA?
|
The F statistic is a ratio of 2 different measures of variance for the data. If the null hypothesis is true then these are both estimates of the same thing and the ratio will be around 1.
The numerator is computed by measuring the variance of the means and if the true means of the groups are identical then this is a function of the overall variance of the data. But if the null hypothesis is false and the means are not all equal, then this measure of variance will be larger.
The denominator is an average of the sample variances for each group, which is an estimate of the overall population variance (assuming all groups have equal variances).
So when the null of all means equal is true then the 2 measures (with some extra terms for degrees of freedom) will be similar and the ratio will be close to 1. If the null is false, then the numerator will be large relative to the denominator and the ratio will be greater than 1. Looking up this ratio on the F-table (or computing it with a function like pf in R) will give the p-value.
If you would rather use a rejection region than a p-value, then you can use the F table or the qf function in R (or other software). The F distribution has 2 types of degrees of freedom. The numerator degrees of freedom are based on the number of groups that you are comparing (for 1-way it is the number of groups minus 1) and the denominator degrees of freedom are based on the number of observations within the groups (for 1-way it is the number of observations minus the number of groups). For more complicated models the degrees of freedom get more complicated, but follow similar ideas.
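A minimal worked example of this ratio, computed by hand on made-up data for three groups (plain Python):

```python
from statistics import mean, variance

# Assumed toy data: three groups of five observations each
groups = [
    [4.1, 5.0, 4.8, 5.3, 4.6],
    [5.9, 6.2, 5.5, 6.8, 6.1],
    [4.9, 5.4, 5.1, 5.8, 5.2],
]

k = len(groups)                   # number of groups
n = sum(len(g) for g in groups)   # total observations
grand = mean(x for g in groups for x in g)

# Numerator: variability of the group means around the grand mean,
# scaled by group sizes and divided by df1 = k - 1
ms_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (k - 1)

# Denominator: pooled within-group variance, divided by df2 = n - k
ms_within = sum((len(g) - 1) * variance(g) for g in groups) / (n - k)

F = ms_between / ms_within
print(F)  # compare against the F(k - 1, n - k) = F(2, 12) distribution
```

Here the second group's mean is clearly shifted, so the numerator dwarfs the denominator and F lands well above 1.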
|
5,820
|
How to interpret F- and p-value in ANOVA?
|
The best way to think about the relationship between $F$, $p$, and the critical value is with a picture:
The curve here is an $F$ distribution, that is, the distribution of $F$ statistics that we'd see if the null hypothesis were true. In this diagram, the observed $F$ statistic is the distance from the black dashed line to the vertical axis. The $p$ value is the dark blue area under the curve from $F$ to infinity. Notice that every value of $F$ must correspond to a unique $p$ value, and that higher $F$ values correspond to lower $p$ values.
You should notice a couple of other things about the distribution under null hypothesis:
1) $F$ values approaching zero are highly unlikely (this is not always true, but it's true for the curve in this example)
2) After a certain point, the larger the $F$ is, the less likely it is. (The curve tapers off to the right.)
The critical value $C$ also makes an appearance in this diagram. The area under the curve from $C$ to infinity equals the significance level (here, 5%). You can tell that the $F$ statistic here would result in a failure to reject the null hypothesis because it is less than $C$, that is, its $p$ value is greater than .05. In this specific example, $p=0.175$, but you'd need a ruler to calculate that by hand :-)
Note that the shape of the $F$ distribution is contingent on its degrees of freedom, which for ANOVA correspond to the # of groups (minus 1) and # of observations (minus the # of groups). In general, the overall "shape" of the $F$ curve is determined by the first number, and its "flatness" is determined by the second number. The above example has a $df_1 = 3$ (4 groups), but you'll see that setting $df_1 = 2$ (3 groups) results in a markedly different curve:
You can see other variants of the curve on the Wikipedia page. One thing worth noting is that because the $F$ statistic is a ratio, large numbers are uncommon under the null hypothesis, even with large degrees of freedom. This is in contrast to $\chi^2$ statistics, which are not divided by the number of groups, and essentially grow with the degrees of freedom. (Otherwise $\chi^2$ is analogous to $F$ in the sense that $\chi^2$ is derived from normally distributed $z$ scores, whereas $F$ is derived from $t$-distributed $t$ statistics.)
That's a lot more than I meant to type, but I hope that covers your questions!
(If you're wondering where the diagrams came from, they were automatically generated by my desktop statistics package, Wizard.)
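The difference in shape near zero between $df_1 = 3$ and $df_1 = 2$ can be seen by evaluating the closed-form F density directly (plain Python sketch; the degrees of freedom are assumed values matching the discussion above):

```python
import math

def f_pdf(x, d1, d2):
    # Closed-form F density, written via log-gamma for stability:
    # f(x) = B(d1/2, d2/2)^-1 (d1/d2)^(d1/2) x^(d1/2-1) (1 + d1 x/d2)^-((d1+d2)/2)
    log_coef = (math.lgamma((d1 + d2) / 2) - math.lgamma(d1 / 2)
                - math.lgamma(d2 / 2) + (d1 / 2) * math.log(d1 / d2))
    return math.exp(log_coef + (d1 / 2 - 1) * math.log(x)
                    - ((d1 + d2) / 2) * math.log(1 + d1 * x / d2))

# Near zero the curve behaves very differently depending on df1:
print(f_pdf(1e-6, 3, 36))  # df1 = 3: density vanishes as x -> 0
print(f_pdf(1e-6, 2, 36))  # df1 = 2: density approaches exactly 1 at 0
```

This is one concrete sense in which the first degrees-of-freedom number controls the "shape" of the curve: for $df_1 = 3$ values near zero are highly unlikely, while for $df_1 = 2$ they are not.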
|
5,821
|
Relationship between Binomial and Beta distributions
|
Consider the order statistics $x_{[0]} \le x_{[1]} \le \cdots \le x_{[n]}$ of $n+1$ independent draws from a uniform distribution. Because order statistics have Beta distributions, the chance that $x_{[k]}$ does not exceed $p$ is given by the Beta integral
$$\Pr[x_{[k]} \le p] = \frac{1}{B(k+1, n-k+1)} \int_0^p{x^k(1-x)^{n-k}dx}.$$
(Why is this? Here is a non-rigorous but memorable demonstration. The chance that $x_{[k]}$ lies between $p$ and $p + dp$ is the chance that out of $n+1$ uniform values, $k$ of them lie between $0$ and $p$, at least one of them lies between $p$ and $p + dp$, and the remainder lie between $p + dp$ and $1$. To first order in the infinitesimal $dp$ we only need to consider the case where exactly one value (namely, $x_{[k]}$ itself) lies between $p$ and $p + dp$ and therefore $n - k$ values exceed $p + dp$. Because all values are independent and uniform, this probability is proportional to $p^k (dp) (1 - p - dp)^{n-k}$. To first order in $dp$ this equals $p^k(1-p)^{n-k}dp$, precisely the integrand of the Beta distribution. The term $\frac{1}{B(k+1, n-k+1)}$ can be computed directly from this argument as the multinomial coefficient ${n+1}\choose{k,1, n-k}$ or derived indirectly as the normalizing constant of the integral.)
By definition, the event $x_{[k]} \le p$ is that the $k+1^\text{st}$ value does not exceed $p$. Equivalently, at least $k+1$ of the values do not exceed $p$: this simple (and I hope obvious) assertion provides the intuition you seek. The probability of the equivalent statement is given by the Binomial distribution,
$$\Pr[\text{at least }k+1\text{ of the }x_i \le p] = \sum_{j=k+1}^{n+1}{{n+1}\choose{j}} p^j (1-p)^{n+1-j}.$$
In summary, the Beta integral breaks the calculation of an event into a series of calculations: finding at least $k+1$ values in the range $[0, p]$, whose probability we normally would compute with a Binomial cdf, is broken down into mutually exclusive cases where exactly $k$ values are in the range $[0, x]$ and 1 value is in the range $[x, x+dx]$ for all possible $x$, $0 \le x \lt p$, and $dx$ is an infinitesimal length. Summing over all such "windows" $[x, x+dx]$--that is, integrating--must give the same probability as the Binomial cdf.
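A quick numerical check of this equality, with assumed values $n+1 = 10$, $k = 3$, $p = 0.35$ (plain Python; the Beta side is integrated with a simple midpoint rule):

```python
import math

n_plus_1, k, p = 10, 3, 0.35   # 10 uniform draws (n = 9), event x_[k] <= p
n = n_plus_1 - 1

# Binomial side: at least k+1 of the n+1 uniforms fall in [0, p]
binom_tail = sum(math.comb(n_plus_1, j) * p**j * (1 - p)**(n_plus_1 - j)
                 for j in range(k + 1, n_plus_1 + 1))

# Beta side: integrate x^k (1-x)^(n-k) / B(k+1, n-k+1) over [0, p],
# where 1/B(k+1, n-k+1) = (n+1)! / (k! (n-k)!)
inv_B = math.factorial(n + 1) / (math.factorial(k) * math.factorial(n - k))
steps = 100000
dx = p / steps
beta_cdf = sum(inv_B * ((i + 0.5) * dx)**k * (1 - (i + 0.5) * dx)**(n - k) * dx
               for i in range(steps))

print(binom_tail, beta_cdf)  # the two probabilities agree
```

Summing the infinitesimal "windows" numerically reproduces the Binomial tail probability, as the argument above predicts.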
|
5,822
|
Relationship between Binomial and Beta distributions
|
Look at the pdf of Binomial as a function of $x$: $$f(x) = {n\choose{x}}p^{x}(1-p)^{n-x}$$ and the pdf of Beta as a function of $p$: $$g(p)=\frac{\Gamma(a+b)}{\Gamma(a)\Gamma(b)}p^{a-1}(1-p)^{b-1}$$
You probably can see that with an appropriate (integer) choice for $a$ and $b$ these are the same. As far as I can tell, that's all there is to this relationship: the way $p$ enters into the binomial pdf just happens to be called a Beta distribution.
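A small numerical illustration with assumed values $n = 12$, $x = 4$: viewed as functions of $p$, the two pdfs above differ only by a constant factor (here $n+1$), which is exactly the "same shape in $p$" observation:

```python
import math

n, x = 12, 4   # assumed toy data: 4 successes in 12 trials

def binom_pmf(p):
    return math.comb(n, x) * p**x * (1 - p)**(n - x)

def beta_pdf(p, a, b):
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * p**(a - 1) * (1 - p)**(b - 1)

# With a = x + 1 and b = n - x + 1 the ratio is constant in p:
for p in (0.1, 0.3, 0.5, 0.8):
    print(beta_pdf(p, x + 1, n - x + 1) / binom_pmf(p))  # always n + 1 = 13
```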
|
5,823
|
Relationship between Binomial and Beta distributions
|
As you noted, the Beta distribution describes the distribution of the trial probability parameter $F$, while the binomial distribution describes the distribution of the outcome parameter $I$. Rewriting your question, what you asked about was why
$$P(F \le \frac {i+1} n)+P(I \le fn-1)=1$$
$$P(Fn \le i+1)+P(I+1 \le fn)=1$$
$$P(Fn \le i+1)=P(fn<I+1)$$
That is, the probability that the scaled probability parameter $Fn$ falls below $i+1$ is the same as the probability that the observed count plus one, $I+1$, exceeds $fn$.
I admit that this may not help intuit the original formulation of the problem, but maybe it helps to at least see how the two distributions use the same underlying model of repeated Bernoulli trials to describe the behavior of different parameters.
|
5,824
|
Relationship between Binomial and Beta distributions
|
Summary: It is often said that the Beta distribution is a distribution on distributions! But what does that mean?
It essentially means that you may fix $n,k$ and think of $\mathbb P[Bin(n,p)\geqslant k]$ as a function of $p$. The calculation below shows that $\mathbb P[Bin(n,p)\geqslant k]$ increases from $0$ to $1$ as you tune $p$ from $0$ to $1$, and that its rate of increase at each $p$ is exactly the $\beta(k,n-k+1)$ density at that $p$.
Let $Bin(n,p)$ denote a Binomial random variable with $n$ samples and the probability of success $p$. Using basic algebra we have
$$\frac d{dp}\mathbb P[Bin(n,p)=i]=n\Big(\mathbb P[Bin(n-1,p)=i-1]-\mathbb P[Bin(n-1,p)=i]\Big).$$
It also has a nice combinatorial proof; think of it as an exercise!
So, we have:
$$\frac d{dp}\mathbb P[Bin(n,p)\geqslant k]=\sum_{i=k}^{n}\frac d{dp}\mathbb P[Bin(n,p)=i]=n\sum_{i=k}^{n}\Big(\mathbb P[Bin(n-1,p)=i-1]-\mathbb P[Bin(n-1,p)=i]\Big)$$
which is a telescoping series and can be simplified as
$$\frac d{dp}\mathbb P[Bin(n,p)\geqslant k]=n\mathbb P[Bin(n-1,p)=k-1]=\frac{n!}{(k-1)!(n-k)!}p^{k-1}(1-p)^{n-k}=\beta(k,n-k+1).$$
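The derivative identity above can be checked numerically with a central finite difference, using assumed values $n = 15$, $k = 6$, $p = 0.4$ (plain Python):

```python
import math

def binom_tail(n, k, p):
    # P[Bin(n, p) >= k]
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

def beta_pdf(p, a, b):
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * p**(a - 1) * (1 - p)**(b - 1)

n, k, p, h = 15, 6, 0.4, 1e-6

# Central finite difference of P[Bin(n, p) >= k] with respect to p ...
deriv = (binom_tail(n, k, p + h) - binom_tail(n, k, p - h)) / (2 * h)
# ... matches the Beta(k, n - k + 1) density at p
print(deriv, beta_pdf(p, k, n - k + 1))
```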
Remark To see an interactive version of the plot look at this. You may download the notebook or just use the Binder link.
|
5,825
|
Relationship between Binomial and Beta distributions
|
In Bayesian land, the Beta distribution is the conjugate prior for the p parameter of the Binomial distribution.
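Concretely, conjugacy means the posterior update is just parameter addition. A sketch with an assumed Beta(2, 2) prior and an assumed observation of 7 successes in 10 trials:

```python
# Beta-Binomial conjugate update: if the prior on p is Beta(a, b) and
# we observe k successes in n trials, the posterior is Beta(a+k, b+n-k).
def update(a, b, k, n):
    return a + k, b + n - k

a, b = 2, 2                  # assumed prior
a, b = update(a, b, 7, 10)   # observe 7 successes in 10 trials
print(a, b)                  # posterior is Beta(9, 5)
print(a / (a + b))           # posterior mean = 9/14
```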
|
5,826
|
Relationship between Binomial and Beta distributions
|
I can't comment on other answers, so I have to create my own answer.
Posterior = C * Likelihood * Prior (C is a constant that makes Posterior integrated to 1)
Given a model that uses a Binomial distribution for the likelihood and a Beta distribution for the prior, the product of the two, which generates the posterior, is also a Beta distribution. Since the prior and posterior are both Beta, they are conjugate distributions, and the prior (a Beta) is called the conjugate prior for the likelihood (a Binomial). For example, if you multiply a Beta with a Normal, the posterior is no longer a Beta. In summary, Beta and Binomial are two distributions that are frequently used in Bayesian inference. The Beta is the conjugate prior of the Binomial, but neither distribution is a subset or superset of the other.
The key idea of Bayesian inference is that we treat the parameter $p$ as a random variable ranging over $[0,1]$, contrary to the frequentist approach, where $p$ is treated as fixed. If you look closely at the properties of the Beta distribution, you will see that its mean and mode are determined solely by $\alpha$ and $\beta$, independent of the parameter $p$. This, coupled with its flexibility, is why the Beta is usually used as a prior.
|
5,827
|
Relationship between Binomial and Beta distributions
|
Here is an intuitive explanation that works for me:
$Binomial(n, p)$:
When repeating a Bernoulli trial with success probability $p$, $n$ times, the chance of exactly $k$ successes is:
$$Binomial_\mathit{pmf}(\pmb{k}, n, p) = {n\choose \pmb{k}} p^{\pmb{k}} (1-p)^{n-\pmb{k}}$$
$Beta(n, k)^*$:
For fixed $n$ and $k$, given probability $p$, calculate the probability $p'$ of getting $k$ in the former experiment. Then multiply this $p'$ by $n+1$ to get $k'$, the most probable (interpolated) outcome if we had done the experiment with $p'$ (conceptually this is like the mode of $Binomial(n, p')$, only it allows for non-integer values):
$$Beta_\mathit{pdf}(\pmb{p}, n, k) = \underbrace{(n+1) \overbrace{{n \choose k} \pmb{p}^k (1-\pmb{p})^{n-k}}^{p'=Binomial_\mathit{pmf}(k, n, \pmb{p})}}_{k' \approx mode(Binomial(n, p'))}$$
$\small{*}$ I'm using ${n \choose k}$ to emphasize the similarity with the $Binomial$. To get the actual $Beta$ function we need to replace $\cdot!$ with $\Gamma(\cdot+1)$, which interpolates the factorial for non-integer values.
Note 1: If $p$ is close to $k/n$, $k'$ is larger.
Note 2: If the parameter $n$ is larger we are more certain of the result (the concentration is higher).
Note 3: To get the common formulation of the $Beta(\alpha,\beta)$ function, substitute:
$$k \to \alpha - 1$$
$$n \to \alpha + \beta- 2$$
Note 4: When replacing $\cdot!$ by $\Gamma(\cdot+1)$, $Beta(n, k)$ is actually defined for real-valued $n$ and $k$, with ranges $n > k - 1$ and $k > -1$. We can think of things like -0.3 successes out of -1.1 Bernoulli trials as interpolations from the integer $0 \le k \le n$ cases.
Note 5: $Beta(n=0, k=0) \equiv Uniform(0,1)$
Note 6: $\int_0^1(n+1){n\choose k} p^k (1-p)^{n-k} \,dp = 1$
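Note 6 is easy to confirm numerically: with assumed values $n = 9$, $k = 3$, the scaled binomial pmf integrates to 1 over $p \in [0,1]$ (plain Python, midpoint rule):

```python
import math

n, k = 9, 3  # assumed example values

# Note 6: (n+1) * C(n,k) * p^k * (1-p)^(n-k) integrates to 1 over [0, 1]
steps = 200000
total = 0.0
for i in range(steps):
    p = (i + 0.5) / steps  # midpoint of the i-th subinterval
    total += (n + 1) * math.comb(n, k) * p**k * (1 - p)**(n - k) / steps
print(total)  # ~1.0
```

This is the same $(n+1)$ factor that turns the binomial pmf, viewed as a function of $p$, into a proper density.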
|
5,828
|
Analysis with complex data, anything different?
|
Summary
The generalization of least-squares regression to complex-valued variables is straightforward, consisting primarily of replacing matrix transposes by conjugate transposes in the usual matrix formulas. A complex-valued regression, though, corresponds to a complicated multivariate multiple regression whose solution would be much more difficult to obtain using standard (real variable) methods. Thus, when the complex-valued model is meaningful, using complex arithmetic to obtain a solution is strongly recommended. This answer also includes some suggested ways to display the data and present diagnostic plots of the fit.
For simplicity, let's discuss the case of ordinary (univariate) regression, which can be written
$$z_j = \beta_0 + \beta_1 w_j + \varepsilon_j.$$
I have taken the liberty of naming the independent variable $W$ and the dependent variable $Z$, which is conventional (see, for instance, Lars Ahlfors, Complex Analysis). All that follows is straightforward to extend to the multiple regression setting.
Interpretation
This model has an easily visualized geometric interpretation: multiplication by $\beta_1$ will rescale $w_j$ by the modulus of $\beta_1$ and rotate it around the origin by the argument of $\beta_1$. Subsequently, adding $\beta_0$ translates the result by this amount. The effect of $\varepsilon_j$ is to "jitter" that translation a little bit. Thus, regressing the $z_j$ on the $w_j$ in this manner is an effort to understand the collection of 2D points $(z_j)$ as arising from a constellation of 2D points $(w_j)$ via such a transformation, allowing for some error in the process. This is illustrated below with the figure titled "Fit as a Transformation."
Note that the rescaling and rotation are not just any linear transformation of the plane: they rule out skew transformations, for instance. Thus this model is not the same as a bivariate multiple regression with four parameters.
Ordinary Least Squares
To connect the complex case with the real case, let's write
$z_j = x_j + i y_j$ for the values of the dependent variable and
$w_j = u_j + i v_j$ for the values of the independent variable.
Furthermore, for the parameters write
$\beta_0 = \gamma_0 + i \delta_0$ and $\beta_1 = \gamma_1 +i \delta_1$.
Every one of the new terms introduced is, of course, real; $i$ (with $i^2 = -1$) is the imaginary unit; and $j=1, 2, \ldots, n$ indexes the data.
OLS finds $\hat\beta_0$ and $\hat\beta_1$ that minimize the sum of squares of deviations,
$$\sum_{j=1}^n ||z_j - \left(\hat\beta_0 + \hat\beta_1 w_j\right)||^2
= \sum_{j=1}^n \left(\bar z_j - \left(\bar{\hat\beta_0} + \bar{\hat\beta_1} \bar w_j\right)\right) \left(z_j - \left(\hat\beta_0 + \hat\beta_1 w_j\right)\right).$$
Formally this is identical to the usual matrix formulation: compare it to $\left(z - X\beta\right)'\left(z - X\beta\right).$ The only difference we find is that the transpose of the design matrix $X'$ is replaced by the conjugate transpose $X^* = \bar X '$. Consequently the formal matrix solution is
$$\hat\beta = \left(X^*X\right)^{-1}X^* z.$$
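For the univariate model this formal solution can be written out by hand. The sketch below (a hypothetical helper in plain Python, no linear-algebra library) solves the $2\times 2$ normal equations $X^*X\hat\beta = X^*z$ by Cramer's rule and recovers known coefficients from noise-free data:

```python
# Sketch of beta-hat = (X* X)^(-1) X* z for the model z_j = b0 + b1 w_j.
# Here X* X = [[n, sum(w)], [sum(conj(w)), sum(|w|^2)]] and
# X* z = [sum(z), sum(conj(w) z)], solved by Cramer's rule.
def complex_ols(w, z):
    n = len(w)
    s_w = sum(w)
    s_wc = s_w.conjugate()                      # sum of conj(w)
    s_ww = sum(abs(x) ** 2 for x in w)
    s_z = sum(z)
    s_wz = sum(x.conjugate() * y for x, y in zip(w, z))
    det = n * s_ww - s_w * s_wc                 # real and positive for nonconstant w
    b0 = (s_z * s_ww - s_w * s_wz) / det
    b1 = (n * s_wz - s_wc * s_z) / det
    return b0, b1

# Noise-free data generated from known coefficients is recovered exactly.
b0_true, b1_true = -20 + 5j, -0.75 + 1.3j
w = [complex(u, v) for u in range(-2, 3) for v in range(-2, 3)]
z = [b0_true + b1_true * x for x in w]
b0_hat, b1_hat = complex_ols(w, z)
assert abs(b0_hat - b0_true) < 1e-9 and abs(b1_hat - b1_true) < 1e-9
```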
At the same time, to see what might be accomplished by casting this into a purely real-variable problem, we may write the OLS objective out in terms of the real components:
$$\sum_{j=1}^n \left(x_j-\gamma_0-\gamma_1u_j+\delta_1v_j\right)^2
+ \sum_{j=1}^n\left(y_j-\delta_0-\delta_1u_j-\gamma_1v_j\right)^2.$$
Evidently this represents two linked real regressions: one regresses $x$ on $u$ and $v$, the other regresses $y$ on $u$ and $v$; and we require that the $v$ coefficient for $x$ be the negative of the $u$ coefficient for $y$, and that the $u$ coefficient for $x$ equal the $v$ coefficient for $y$. Moreover, because the combined sum of squared residuals from the two regressions is minimized, it will usually not be the case that either set of coefficients gives the best estimate for $x$ or $y$ alone. This is confirmed in the example below, which carries out the two real regressions separately and compares their solutions to the complex regression.
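The equivalence of the complex objective and the two real sums of squares is easy to verify numerically. A quick sketch (plain Python, entirely made-up parameter and data values; any choices would do):

```python
# Check: sum |z - (b0 + b1 w)|^2 equals the two real sums of squares above.
g0, d0, g1, d1 = -1.0, 2.0, 0.5, -0.25           # made-up parameter values
b0, b1 = complex(g0, d0), complex(g1, d1)
w = [1 + 2j, -3 + 0.5j, 0.25 - 1j]               # made-up data
z = [2 - 1j, 0 + 4j, -1.5 + 0.5j]

lhs = sum(abs(zj - (b0 + b1 * wj)) ** 2 for wj, zj in zip(w, z))
rhs = sum(
    (zj.real - g0 - g1 * wj.real + d1 * wj.imag) ** 2
    + (zj.imag - d0 - d1 * wj.real - g1 * wj.imag) ** 2
    for wj, zj in zip(w, z)
)
assert abs(lhs - rhs) < 1e-12
```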
This analysis makes it apparent that rewriting the complex regression in terms of the real parts (1) complicates the formulas, (2) obscures the simple geometric interpretation, and (3) would require a generalized multivariate multiple regression (with nontrivial correlations among the variables) to solve. We can do better.
Example
As an example, I take a grid of $w$ values at integral points near the origin in the complex plane. To the transformed values $w\beta$ are added iid errors having a bivariate Gaussian distribution: in particular, the real and imaginary parts of the errors are not independent.
It is difficult to draw the usual scatterplot of $(w_j, z_j)$ for complex variables, because it would consist of points in four dimensions. Instead we can view the scatterplot matrix of their real and imaginary parts.
Ignore the fit for now and look at the top four rows and four left columns: these display the data. The circular grid of $w$ is evident in the upper left; it has $81$ points. The scatterplots of the components of $w$ against the components of $z$ show clear correlations. Three of them have negative correlations; only the $y$ (the imaginary part of $z$) and $u$ (the real part of $w$) are positively correlated.
For these data, the true value of $\beta$ is $(-20 + 5i, -3/4 + 3/4\sqrt{3}i)$. It represents an expansion by $3/2$ and a counterclockwise rotation of 120 degrees followed by translation of $20$ units to the left and $5$ units up. I compute three fits: the complex least squares solution and two OLS solutions for $(x_j)$ and $(y_j)$ separately, for comparison.
Fit              Intercept          Slope(s)
True             -20    + 5    i    -0.75 + 1.30 i
Complex          -20.02 + 5.01 i    -0.83 + 1.38 i
Real only        -20.02             -0.75, -1.46
Imaginary only     5.01              1.30, -0.92
It will always be the case that the real-only intercept agrees with the real part of the complex intercept and the imaginary-only intercept agrees with the imaginary part of the complex intercept. It is apparent, though, that the real-only and imaginary-only slopes neither agree with the complex slope coefficients nor with each other, exactly as predicted.
Let's take a closer look at the results of the complex fit. First, a plot of the residuals gives us an indication of their bivariate Gaussian distribution. (The underlying distribution has marginal standard deviations of $2$ and a correlation of $0.8$.) Then, we can plot the magnitudes of the residuals (represented by sizes of the circular symbols) and their arguments (represented by colors exactly as in the first plot) against the fitted values: this plot should look like a random distribution of sizes and colors, which it does.
Finally, we can depict the fit in several ways. The fit appeared in the last rows and columns of the scatterplot matrix (q.v.) and may be worth a closer look at this point. Below on the left the fits are plotted as open blue circles and arrows (representing the residuals) connect them to the data, shown as solid red circles. On the right the $(w_j)$ are shown as open black circles filled in with colors corresponding to their arguments; these are connected by arrows to the corresponding values of $(z_j)$. Recall that each arrow represents an expansion by $3/2$ around the origin, rotation by $120$ degrees, and translation by $(-20, 5)$, plus that bivariate Gaussian error.
These results, the plots, and the diagnostic plots all suggest that the complex regression formula works correctly and achieves something different than separate linear regressions of the real and imaginary parts of the variables.
Code
The R code to create the data, fits, and plots appears below. Note that the actual solution of $\hat\beta$ is obtained in a single line of code. Additional work--but not too much of it--would be needed to obtain the usual least squares output: the variance-covariance matrix of the fit, standard errors, p-values, etc.
#
# Synthesize data.
# (1) the independent variable `w`.
#
w.max <- 5 # Max extent of the independent values
w <- expand.grid(seq(-w.max,w.max), seq(-w.max,w.max))
w <- complex(real=w[[1]], imaginary=w[[2]])
w <- w[Mod(w) <= w.max]
n <- length(w)
#
# (2) the dependent variable `z`.
#
beta <- c(-20+5i, complex(argument=2*pi/3, modulus=3/2))
sigma <- 2; rho <- 0.8 # Parameters of the error distribution
library(MASS) #mvrnorm
set.seed(17)
e <- mvrnorm(n, c(0,0), matrix(c(1,rho,rho,1)*sigma^2, 2))
e <- complex(real=e[,1], imaginary=e[,2])
z <- as.vector((X <- cbind(rep(1,n), w)) %*% beta + e)
#
# Fit the models.
#
print(beta, digits=3)
print(beta.hat <- solve(Conj(t(X)) %*% X, Conj(t(X)) %*% z), digits=3)
print(beta.r <- coef(lm(Re(z) ~ Re(w) + Im(w))), digits=3)
print(beta.i <- coef(lm(Im(z) ~ Re(w) + Im(w))), digits=3)
#
# Show some diagnostics.
#
par(mfrow=c(1,2))
res <- as.vector(z - X %*% beta.hat)
fit <- z - res
s <- sqrt(Re(mean(Conj(res)*res)))
col <- hsv((Arg(res)/pi + 1)/2, .8, .9)
size <- Mod(res) / s
plot(res, pch=16, cex=size, col=col, main="Residuals")
plot(Re(fit), Im(fit), pch=16, cex = size, col=col,
main="Residuals vs. Fitted")
plot(Re(c(z, fit)), Im(c(z, fit)), type="n",
main="Residuals as Fit --> Data", xlab="Real", ylab="Imaginary")
points(Re(fit), Im(fit), col="Blue")
points(Re(z), Im(z), pch=16, col="Red")
arrows(Re(fit), Im(fit), Re(z), Im(z), col="Gray", length=0.1)
col.w <- hsv((Arg(w)/pi + 1)/2, .8, .9)
plot(Re(c(w, z)), Im(c(w, z)), type="n",
main="Fit as a Transformation", xlab="Real", ylab="Imaginary")
points(Re(w), Im(w), pch=16, col=col.w)
points(Re(w), Im(w))
points(Re(z), Im(z), pch=16, col=col.w)
arrows(Re(w), Im(w), Re(z), Im(z), col="#00000030", length=0.1)
#
# Display the data.
#
par(mfrow=c(1,1))
pairs(cbind(w.Re=Re(w), w.Im=Im(w), z.Re=Re(z), z.Im=Im(z),
fit.Re=Re(fit), fit.Im=Im(fit)), cex=1/2)
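One possible sketch of that "additional work" (in plain Python for the univariate case, with a hypothetical helper name). It assumes circularly-symmetric errors with $E|\varepsilon|^2 = \sigma^2$, under which $\operatorname{Cov}(\hat\beta) = \sigma^2\left(X^*X\right)^{-1}$; the errors simulated above are correlated across real and imaginary parts, i.e. non-circular, so treat this only as an illustration of the mechanics:

```python
# Sketch: Cov(beta-hat) = sigma^2 (X* X)^(-1), with sigma^2 estimated from the
# residual sum of squared moduli on n - 2 degrees of freedom.
def complex_ols_cov(w, z, b0, b1):
    n = len(w)
    rss = sum(abs(zj - (b0 + b1 * wj)) ** 2 for wj, zj in zip(w, z))
    sigma2 = rss / (n - 2)
    s_w = sum(w)
    s_ww = sum(abs(x) ** 2 for x in w)
    det = n * s_ww - abs(s_w) ** 2            # det(X* X), real and positive
    inv = [[s_ww / det, -s_w / det],          # (X* X)^(-1), conjugate-symmetric
           [-s_w.conjugate() / det, n / det]]
    return [[sigma2 * inv[i][j] for j in range(2)] for i in range(2)]

# Toy usage: a symmetric grid (so the off-diagonal entries vanish) and a fixed,
# made-up error pattern of modulus-squared 0.02 per point in place of random noise.
w = [complex(u, v) for u in range(-2, 3) for v in range(-2, 3)]
z = [(-20 + 5j) + (-0.75 + 1.3j) * x + 0.1 * (-1) ** k * (1 + 1j)
     for k, x in enumerate(w)]
cov = complex_ols_cov(w, z, -20 + 5j, -0.75 + 1.3j)
```

Standard errors of the two coefficients are then the square roots of the diagonal entries.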
After a nice long google sesh, I found some relevant information on understanding the problem in an alternative manner. It turns out similar problems are somewhat common in statistical signal processing. Instead of starting with a Gaussian likelihood, which corresponds to linear least squares for real data, one starts with a complex normal distribution:
http://en.wikipedia.org/wiki/Complex_normal_distribution
Specifically, if you can assume that the distribution of your estimator $\hat{\beta}$ is multivariate normal, then in the case of complex data one would use the complex normal. The computation of the covariance of this estimator is a bit different, and given on the wiki page.
The textbook by Giri, Multivariate Statistical Analysis, also covers this.
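To make the distinction concrete: the complex normal is specified by a covariance $\Gamma = E|\varepsilon|^2$ and a pseudo-covariance $C = E[\varepsilon^2]$. For $\varepsilon = e_1 + ie_2$ with $\operatorname{Var}(e_1) = \operatorname{Var}(e_2) = s^2$ and correlation $\rho$, these work out to $\Gamma = 2s^2$ and $C = 2i\rho s^2$, so $\rho \neq 0$ (as in whuber's simulation) means the errors are not circularly symmetric. A Monte-Carlo sketch (plain Python, parameter values copied from that simulation):

```python
import math
import random

random.seed(17)
s, rho, n = 2.0, 0.8, 200_000   # parameter values from whuber's simulation

eps = []
for _ in range(n):
    e1 = random.gauss(0, s)
    # Conditional construction: e2 | e1 has the right variance and correlation.
    e2 = rho * e1 + math.sqrt(1 - rho ** 2) * random.gauss(0, s)
    eps.append(complex(e1, e2))

gamma = sum(abs(e) ** 2 for e in eps) / n    # estimates Gamma = 2 s^2 = 8
pseudo = sum(e * e for e in eps) / n         # estimates C = 2 i rho s^2 = 6.4i
assert abs(gamma - 8) < 0.2
assert abs(pseudo - 6.4j) < 0.3
```

A nonzero pseudo-covariance is exactly what the ordinary complex least squares formula ignores.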
This issue has come up again on the Mathematica StackExchange and my answer/extended comment there is that @whuber 's excellent answer should be followed.
My answer here is an attempt to extend @whuber 's answer just a little bit by making the error structure a little more explicit. The proposed least squares estimator is what one would use if the bivariate error distribution has a zero correlation between the real and imaginary components. (But the data generated has an error correlation of 0.8.)
If one has access to a symbolic algebra program, then some of the messiness of constructing maximum likelihood estimators of the parameters (both the "fixed" effects and the covariance structure) can be eliminated. Below I use the same data as in @whuber 's answer and construct the maximum likelihood estimates by assuming $\rho=0$ and then by assuming $\rho\neq0$. I've used Mathematica but I suspect any other symbolic algebra program can do something similar. (And I've first posted a picture of the code and output followed by the actual code in an appendix as I can't get the Mathematica code to look as it should with just using text.)
Now for the maximum likelihood estimates assuming $\rho=0$...
We see that the maximum likelihood estimates which assume $\rho=0$ match perfectly with the complex least squares estimates.
Now let the data determine an estimate for $\rho$:
We see that $\gamma_0$ and $\delta_0$ are essentially identical whether or not we allow for the estimation of $\rho$. But $\gamma_1$ is much closer to the value that generated the data (although inferences from a single simulated dataset shouldn't be considered definitive, to say the least) and the log of the likelihood is much higher.
My point in all of this is that the model being fit needs to be made completely explicit and that symbolic algebra programs can help alleviate the messiness. (And, of course, the maximum likelihood estimators assume a bivariate normal distribution which the least squares estimators do not assume.)
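The objective FindMaximum works with above can also be written compactly by hand. A sketch (plain Python, made-up residual pairs) of the bivariate normal log-likelihood with zero means, common standard deviation, and correlation $\rho$, showing that freeing $\rho$ can only raise it:

```python
import math

# Bivariate normal log-likelihood (zero means, common sd sigma, correlation rho)
# evaluated over residual pairs (r_x, r_y).
def loglik(resid, sigma, rho):
    c = -math.log(2 * math.pi * sigma ** 2 * math.sqrt(1 - rho ** 2))
    q = 2 * sigma ** 2 * (1 - rho ** 2)
    return sum(c - (x * x - 2 * rho * x * y + y * y) / q for x, y in resid)

# Made-up, positively correlated residual pairs.
resid = [(1.0, 0.8), (-0.5, -0.6), (2.0, 1.5), (-1.2, -0.9), (0.3, 0.4)]

# For correlated residuals, a nonzero rho fits strictly better than rho = 0.
assert loglik(resid, 1.0, 0.8) > loglik(resid, 1.0, 0.0)
```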
Appendix: The full Mathematica code
(* Predictor variable *)
w = {0 - 5 I, -3 - 4 I, -2 - 4 I, -1 - 4 I, 0 - 4 I, 1 - 4 I, 2 - 4 I,
3 - 4 I, -4 - 3 I, -3 - 3 I, -2 - 3 I, -1 - 3 I, 0 - 3 I, 1 - 3 I,
2 - 3 I, 3 - 3 I, 4 - 3 I, -4 - 2 I, -3 - 2 I, -2 - 2 I, -1 - 2 I,
0 - 2 I, 1 - 2 I, 2 - 2 I, 3 - 2 I,
4 - 2 I, -4 - 1 I, -3 - 1 I, -2 - 1 I, -1 - 1 I, 0 - 1 I, 1 - 1 I,
2 - 1 I, 3 - 1 I,
4 - 1 I, -5 + 0 I, -4 + 0 I, -3 + 0 I, -2 + 0 I, -1 + 0 I, 0 + 0 I,
1 + 0 I, 2 + 0 I, 3 + 0 I, 4 + 0 I,
5 + 0 I, -4 + 1 I, -3 + 1 I, -2 + 1 I, -1 + 1 I, 0 + 1 I, 1 + 1 I,
2 + 1 I, 3 + 1 I, 4 + 1 I, -4 + 2 I, -3 + 2 I, -2 + 2 I, -1 + 2 I,
0 + 2 I, 1 + 2 I, 2 + 2 I, 3 + 2 I,
4 + 2 I, -4 + 3 I, -3 + 3 I, -2 + 3 I, -1 + 3 I, 0 + 3 I, 1 + 3 I,
2 + 3 I, 3 + 3 I, 4 + 3 I, -3 + 4 I, -2 + 4 I, -1 + 4 I, 0 + 4 I,
1 + 4 I, 2 + 4 I, 3 + 4 I, 0 + 5 I};
(* Add in a "1" for the intercept *)
w1 = Transpose[{ConstantArray[1 + 0 I, Length[w]], w}];
z = {-15.83651 + 7.23001 I, -13.45474 + 4.70158 I, -13.63353 +
4.84748 I, -14.79109 + 4.33689 I, -13.63202 +
9.75805 I, -16.42506 + 9.54179 I, -14.54613 +
12.53215 I, -13.55975 + 14.91680 I, -12.64551 +
2.56503 I, -13.55825 + 4.44933 I, -11.28259 +
5.81240 I, -14.14497 + 7.18378 I, -13.45621 +
9.51873 I, -16.21694 + 8.62619 I, -14.95755 +
13.24094 I, -17.74017 + 10.32501 I, -17.23451 +
13.75955 I, -14.31768 + 1.82437 I, -13.68003 +
3.50632 I, -14.72750 + 5.13178 I, -15.00054 +
6.13389 I, -19.85013 + 6.36008 I, -19.79806 +
6.70061 I, -14.87031 + 11.41705 I, -21.51244 +
9.99690 I, -18.78360 + 14.47913 I, -15.19441 +
0.49289 I, -17.26867 + 3.65427 I, -16.34927 +
3.75119 I, -18.58678 + 2.38690 I, -20.11586 +
2.69634 I, -22.05726 + 6.01176 I, -22.94071 +
7.75243 I, -28.01594 + 3.21750 I, -24.60006 +
8.46907 I, -16.78006 - 2.66809 I, -18.23789 -
1.90286 I, -20.28243 + 0.47875 I, -18.37027 +
2.46888 I, -21.29372 + 3.40504 I, -19.80125 +
5.76661 I, -21.28269 + 5.57369 I, -22.05546 +
7.37060 I, -18.92492 + 10.18391 I, -18.13950 +
12.51550 I, -22.34471 + 10.37145 I, -15.05198 +
2.45401 I, -19.34279 - 0.23179 I, -17.37708 +
1.29222 I, -21.34378 - 0.00729 I, -20.84346 +
4.99178 I, -18.01642 + 10.78440 I, -23.08955 +
9.22452 I, -23.21163 + 7.69873 I, -26.54236 +
8.53687 I, -16.19653 - 0.36781 I, -23.49027 -
2.47554 I, -21.39397 - 0.05865 I, -20.02732 +
4.10250 I, -18.14814 + 7.36346 I, -23.70820 +
5.27508 I, -25.31022 + 4.32939 I, -24.04835 +
7.83235 I, -26.43708 + 6.19259 I, -21.58159 -
0.96734 I, -21.15339 - 1.06770 I, -21.88608 -
1.66252 I, -22.26280 + 4.00421 I, -22.37417 +
4.71425 I, -27.54631 + 4.83841 I, -24.39734 +
6.47424 I, -30.37850 + 4.07676 I, -30.30331 +
5.41201 I, -28.99194 - 8.45105 I, -24.05801 +
0.35091 I, -24.43580 - 0.69305 I, -29.71399 -
2.71735 I, -26.30489 + 4.93457 I, -27.16450 +
2.63608 I, -23.40265 + 8.76427 I, -29.56214 - 2.69087 I};
(* whuber 's least squares estimates *)
{a, b} = Inverse[ConjugateTranspose[w1].w1].ConjugateTranspose[w1].z
(* {-20.0172+5.00968 \[ImaginaryI],-0.830797+1.37827 \[ImaginaryI]} *)
(* Break up into the real and imaginary components *)
x = Re[z];
y = Im[z];
u = Re[w];
v = Im[w];
n = Length[z]; (* Sample size *)
(* Construct the real and imaginary components of the model *)
(* This is the messy part you probably don't want to do too often with paper and pencil *)
model = \[Gamma]0 + I \[Delta]0 + (\[Gamma]1 + I \[Delta]1) (u + I v);
modelR = Table[
Re[ComplexExpand[model[[j]]]] /. Im[h_] -> 0 /. Re[h_] -> h, {j, n}];
(* \[Gamma]0+u \[Gamma]1-v \[Delta]1 *)
modelI = Table[
Im[ComplexExpand[model[[j]]]] /. Im[h_] -> 0 /. Re[h_] -> h, {j, n}];
(* v \[Gamma]1+\[Delta]0+u \[Delta]1 *)
(* Construct the log of the likelihood as we are estimating the parameters associated with a bivariate normal distribution *)
logL = LogLikelihood[
BinormalDistribution[{0, 0}, {\[Sigma]1, \[Sigma]2}, \[Rho]],
Transpose[{x - modelR, y - modelI}]];
mle0 = FindMaximum[{logL /. {\[Rho] ->
0, \[Sigma]1 -> \[Sigma], \[Sigma]2 -> \[Sigma]}, \[Sigma] >
0}, {\[Gamma]0, \[Delta]0, \[Gamma]1, \[Delta]1, \[Sigma]}]
(* {-357.626,{\[Gamma]0\[Rule]-20.0172,\[Delta]0\[Rule]5.00968,\[Gamma]1\[Rule]-0.830797,\[Delta]1\[Rule]1.37827,\[Sigma]\[Rule]2.20038}} *)
(* Now suppose we don't want to restrict \[Rho]=0 *)
mle1 = FindMaximum[{logL /. {\[Sigma]1 -> \[Sigma], \[Sigma]2 -> \[Sigma]}, \[Sigma] > 0 && -1 < \[Rho] <
1}, {\[Gamma]0, \[Delta]0, \[Gamma]1, \[Delta]1, \[Sigma], \[Rho]}]
(* {-315.313,{\[Gamma]0\[Rule]-20.0172,\[Delta]0\[Rule]5.00968,\[Gamma]1\[Rule]-0.763237,\[Delta]1\[Rule]1.30859,\[Sigma]\[Rule]2.21424,\[Rho]\[Rule]0.810525}} *)
|
Analysis with complex data, anything different?
|
This issue has come up again on the Mathematica StackExchange and my answer/extended comment there is that @whuber 's excellent answer should be followed.
My answer here is an attempt to extend @whube
|
Analysis with complex data, anything different?
This issue has come up again on the Mathematica StackExchange and my answer/extended comment there is that @whuber 's excellent answer should be followed.
My answer here is an attempt to extend @whuber 's answer just a little bit by making the error structure a little more explicit. The proposed least squares estimator is what one would use if the bivariate error distribution has a zero correlation between the real and imaginary components. (But the data generated has a error correlation of 0.8.)
If one has access to a symbolic algebra program, then some of the messiness of constructing maximum likelihood estimators of the parameters (both the "fixed" effects and the covariance structure) can be eliminated. Below I use the same data as in @whuber 's answer and construct the maximum likelihood estimates by assuming $\rho=0$ and then by assuming $\rho\neq0$. I've used Mathematica but I suspect any other symbolic algebra program can do something similar. (And I've first posted a picture of the code and output followed by the actual code in an appendix as I can't get the Mathematica code to look as it should with just using text.)
Now for the maximum likelihood estimates assuming $\rho=0$...
We see that the maximum likelihood estimates which assume that $\rho=0$ match perfectly with the total least squares estimates.
Now let the data determine an estimate for $\rho$:
We see that $\gamma_0$ and $\delta_0$ are essentially identical whether or not we allow for the estimation of $\rho$. But $\gamma_1$ is much closer to the value that generated the data (although inferences with a sample size of 1 shouldn't be considered definitive to say the least) and the log of the likelihood is much higher.
My point in all of this is that the model being fit needs to be made completely explicit and that symbolic algebra programs can help alleviate the messiness. (And, of course, the maximum likelihood estimators assume a bivariate normal distribution which the least squares estimators do not assume.)
Appendix: The full Mathematica code
(* Predictor variable *)
w = {0 - 5 I, -3 - 4 I, -2 - 4 I, -1 - 4 I, 0 - 4 I, 1 - 4 I, 2 - 4 I,
3 - 4 I, -4 - 3 I, -3 - 3 I, -2 - 3 I, -1 - 3 I, 0 - 3 I, 1 - 3 I,
2 - 3 I, 3 - 3 I, 4 - 3 I, -4 - 2 I, -3 - 2 I, -2 - 2 I, -1 - 2 I,
0 - 2 I, 1 - 2 I, 2 - 2 I, 3 - 2 I,
4 - 2 I, -4 - 1 I, -3 - 1 I, -2 - 1 I, -1 - 1 I, 0 - 1 I, 1 - 1 I,
2 - 1 I, 3 - 1 I,
4 - 1 I, -5 + 0 I, -4 + 0 I, -3 + 0 I, -2 + 0 I, -1 + 0 I, 0 + 0 I,
1 + 0 I, 2 + 0 I, 3 + 0 I, 4 + 0 I,
5 + 0 I, -4 + 1 I, -3 + 1 I, -2 + 1 I, -1 + 1 I, 0 + 1 I, 1 + 1 I,
2 + 1 I, 3 + 1 I, 4 + 1 I, -4 + 2 I, -3 + 2 I, -2 + 2 I, -1 + 2 I,
0 + 2 I, 1 + 2 I, 2 + 2 I, 3 + 2 I,
4 + 2 I, -4 + 3 I, -3 + 3 I, -2 + 3 I, -1 + 3 I, 0 + 3 I, 1 + 3 I,
2 + 3 I, 3 + 3 I, 4 + 3 I, -3 + 4 I, -2 + 4 I, -1 + 4 I, 0 + 4 I,
1 + 4 I, 2 + 4 I, 3 + 4 I, 0 + 5 I};
(* Add in a "1" for the intercept *)
w1 = Transpose[{ConstantArray[1 + 0 I, Length[w]], w}];
z = {-15.83651 + 7.23001 I, -13.45474 + 4.70158 I, -13.63353 +
4.84748 I, -14.79109 + 4.33689 I, -13.63202 +
9.75805 I, -16.42506 + 9.54179 I, -14.54613 +
12.53215 I, -13.55975 + 14.91680 I, -12.64551 +
2.56503 I, -13.55825 + 4.44933 I, -11.28259 +
5.81240 I, -14.14497 + 7.18378 I, -13.45621 +
9.51873 I, -16.21694 + 8.62619 I, -14.95755 +
13.24094 I, -17.74017 + 10.32501 I, -17.23451 +
13.75955 I, -14.31768 + 1.82437 I, -13.68003 +
3.50632 I, -14.72750 + 5.13178 I, -15.00054 +
6.13389 I, -19.85013 + 6.36008 I, -19.79806 +
6.70061 I, -14.87031 + 11.41705 I, -21.51244 +
9.99690 I, -18.78360 + 14.47913 I, -15.19441 +
0.49289 I, -17.26867 + 3.65427 I, -16.34927 +
3.75119 I, -18.58678 + 2.38690 I, -20.11586 +
2.69634 I, -22.05726 + 6.01176 I, -22.94071 +
7.75243 I, -28.01594 + 3.21750 I, -24.60006 +
8.46907 I, -16.78006 - 2.66809 I, -18.23789 -
1.90286 I, -20.28243 + 0.47875 I, -18.37027 +
2.46888 I, -21.29372 + 3.40504 I, -19.80125 +
5.76661 I, -21.28269 + 5.57369 I, -22.05546 +
7.37060 I, -18.92492 + 10.18391 I, -18.13950 +
12.51550 I, -22.34471 + 10.37145 I, -15.05198 +
2.45401 I, -19.34279 - 0.23179 I, -17.37708 +
1.29222 I, -21.34378 - 0.00729 I, -20.84346 +
4.99178 I, -18.01642 + 10.78440 I, -23.08955 +
9.22452 I, -23.21163 + 7.69873 I, -26.54236 +
8.53687 I, -16.19653 - 0.36781 I, -23.49027 -
2.47554 I, -21.39397 - 0.05865 I, -20.02732 +
4.10250 I, -18.14814 + 7.36346 I, -23.70820 +
5.27508 I, -25.31022 + 4.32939 I, -24.04835 +
7.83235 I, -26.43708 + 6.19259 I, -21.58159 -
0.96734 I, -21.15339 - 1.06770 I, -21.88608 -
1.66252 I, -22.26280 + 4.00421 I, -22.37417 +
4.71425 I, -27.54631 + 4.83841 I, -24.39734 +
6.47424 I, -30.37850 + 4.07676 I, -30.30331 +
5.41201 I, -28.99194 - 8.45105 I, -24.05801 +
0.35091 I, -24.43580 - 0.69305 I, -29.71399 -
2.71735 I, -26.30489 + 4.93457 I, -27.16450 +
2.63608 I, -23.40265 + 8.76427 I, -29.56214 - 2.69087 I};
(* whuber 's least squares estimates *)
{a, b} = Inverse[ConjugateTranspose[w1].w1].ConjugateTranspose[w1].z
(* {-20.0172+5.00968 \[ImaginaryI],-0.830797+1.37827 \[ImaginaryI]} *)
(* Break up into the real and imaginary components *)
x = Re[z];
y = Im[z];
u = Re[w];
v = Im[w];
n = Length[z]; (* Sample size *)
(* Construct the real and imaginary components of the model *)
(* This is the messy part you probably don't want to do too often with paper and pencil *)
model = \[Gamma]0 + I \[Delta]0 + (\[Gamma]1 + I \[Delta]1) (u + I v);
modelR = Table[
Re[ComplexExpand[model[[j]]]] /. Im[h_] -> 0 /. Re[h_] -> h, {j, n}];
(* \[Gamma]0+u \[Gamma]1-v \[Delta]1 *)
modelI = Table[
Im[ComplexExpand[model[[j]]]] /. Im[h_] -> 0 /. Re[h_] -> h, {j, n}];
(* v \[Gamma]1+\[Delta]0+u \[Delta]1 *)
(* Construct the log of the likelihood as we are estimating the parameters associated with a bivariate normal distribution *)
logL = LogLikelihood[
BinormalDistribution[{0, 0}, {\[Sigma]1, \[Sigma]2}, \[Rho]],
Transpose[{x - modelR, y - modelI}]];
mle0 = FindMaximum[{logL /. {\[Rho] ->
0, \[Sigma]1 -> \[Sigma], \[Sigma]2 -> \[Sigma]}, \[Sigma] >
0}, {\[Gamma]0, \[Delta]0, \[Gamma]1, \[Delta]1, \[Sigma]}]
(* {-357.626,{\[Gamma]0\[Rule]-20.0172,\[Delta]0\[Rule]5.00968,\[Gamma]1\[Rule]-0.830797,\[Delta]1\[Rule]1.37827,\[Sigma]\[Rule]2.20038}} *)
(* Now suppose we don't want to restrict \[Rho]=0 *)
mle1 = FindMaximum[{logL /. {\[Sigma]1 -> \[Sigma], \[Sigma]2 -> \[Sigma]}, \[Sigma] > 0 && -1 < \[Rho] <
1}, {\[Gamma]0, \[Delta]0, \[Gamma]1, \[Delta]1, \[Sigma], \[Rho]}]
(* {-315.313,{\[Gamma]0\[Rule]-20.0172,\[Delta]0\[Rule]5.00968,\[Gamma]1\[Rule]-0.763237,\[Delta]1\[Rule]1.30859,\[Sigma]\[Rule]2.21424,\[Rho]\[Rule]0.810525}} *)
|
5,831
|
Analysis with complex data, anything different?
|
While @whuber has a beautifully-illustrated and well-explained answer, I think it's a simplified model that misses some of the power of the complex space.
Linear least-squares regression on reals is equivalent to the following model with input $w$, parameters $\beta$, and target $z$:
$$z = \beta_0 + \beta_1 w + \epsilon$$
where $\epsilon$ is normally-distributed with zero mean and some (typically constant) variance.
I suggest that complex linear regression be defined as follows:
$$z = \beta_0 + \beta_1 w + \beta_2 \overline w + \epsilon$$
There are two major differences.
First, there is an additional degree of freedom $\beta_2$ that allows phase sensitivity. You might not want that, but you can easily have that.
Second, $\epsilon$ is a complex normal distribution with zero mean and some variance and “pseudo-variance”.
Going back to the real model, the ordinary least squares solution comes out minimizing the loss, which is the negative log-likelihood. For a normal distribution, this is the parabola:
$$y = ax^2 + cx + d.$$
where $x = z - (\beta_0 + \beta_1 w)$, $a$ is fixed (typically), $c$ is zero as per the model, and $d$ doesn't matter since loss functions are invariant under constant addition.
Back to the complex model, the negative log-likelihood is
\begin{align}
y = a{|x|}^2 + \Re({bx^2 + cx}) + d.
\end{align}
$c$ and $d$ are zero as before. $a$ is the curvature and $b$ is the “pseudo-curvature”. $b$ captures anisotropic components. If the $\Re$ function bothers you, then an equivalent way of writing this is
\begin{align}
{\begin{bmatrix}x-\mu \\ \overline{x-\mu}\end{bmatrix}}^H
\begin{bmatrix}s & u \\ \overline{u} & \overline{s}\end{bmatrix}^{-1}\!
\begin{bmatrix}x-\mu \\ \overline{x-\mu}\end{bmatrix} + d
\end{align}
for another set of parameters $s, u, \mu, d$.
Here $s$ is the variance and $u$ is the pseudo-variance. $\mu$ is zero as per our model.
Here's an image of a complex normal distribution's density:
Notice how it's asymmetric. Without the $b$ parameter, it can't be asymmetric.
This complicates the regression although I'm pretty sure the solution is still analytical. I solved it for the case of one input, and I'm happy to transcribe my solution here, but I have a feeling that whuber might solve the general case.
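For concreteness, here is a minimal numerical sketch of the proposed widely-linear model (the simulated coefficients and variable names are my own): stack $1$, $w$, and $\overline w$ as regressors and solve the complex least-squares problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
w = rng.normal(size=n) + 1j * rng.normal(size=n)

# Hypothetical true parameters for z = b0 + b1*w + b2*conj(w) + eps
b0, b1, b2 = 1 - 2j, 0.5 + 1j, -0.3 + 0.2j
eps = 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))
z = b0 + b1 * w + b2 * np.conj(w) + eps

# Design matrix with columns [1, w, conj(w)]; lstsq handles complex
# matrices (it uses the conjugate transpose internally)
X = np.column_stack([np.ones(n), w, np.conj(w)])
beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
print(beta_hat)  # close to (b0, b1, b2)
```

Dropping the conj(w) column recovers the simpler model from the other answers; keeping it is what buys the phase sensitivity discussed above.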
|
5,832
|
Analysis with complex data, anything different?
|
Start by replacing the transpose with the conjugate transpose in the normal equations:
$$ {\hat\beta} = (X^H X)^{-1} (X^H y) $$
The residual vector remains simply:
$$ r = y - X {\hat\beta} $$
The covariance matrix and the additional pseudo-covariance matrix will be:
$$ K = (X^H X)^{-1} \sigma^2_K $$
$$ J = (X^T X)^{-1} \sigma^2_J $$
The reduced chi-squared scalars are:
$$ \sigma^2_K = (r^H r) / \nu $$
$$ \sigma^2_J = (r^T r) / \nu $$
The degrees of freedom $\nu=m-n$ come from the size of the Jacobian (design) matrix $X$ (number of rows minus number of columns).
If you need to report the precision of real and imaginary components of $\hat\beta$, obtain their covariance matrices as:
$$ K_r = 0.5\cdot \Re\{K+J\} $$
$$ K_i = 0.5\cdot \Re\{K-J\} $$
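These formulas translate almost line-for-line into code; a sketch on simulated data (the parameter values are mine):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 200, 2                                   # rows and columns of X
X = np.column_stack([np.ones(m),
                     rng.normal(size=m) + 1j * rng.normal(size=m)])
beta_true = np.array([-20 + 5j, -0.8 + 1.4j])   # made-up parameters
y = X @ beta_true + rng.normal(size=m) + 1j * rng.normal(size=m)

XH = X.conj().T
beta_hat = np.linalg.solve(XH @ X, XH @ y)      # (X^H X)^{-1} X^H y
r = y - X @ beta_hat                            # residual vector
nu = m - n                                      # degrees of freedom

sigma2_K = (r.conj() @ r) / nu                  # r^H r / nu (real, >= 0)
sigma2_J = (r @ r) / nu                         # r^T r / nu (complex)

K = np.linalg.inv(XH @ X) * sigma2_K            # covariance matrix
J = np.linalg.inv(X.T @ X) * sigma2_J           # pseudo-covariance matrix

K_r = 0.5 * np.real(K + J)                      # covariance of Re(beta_hat)
K_i = 0.5 * np.real(K - J)                      # covariance of Im(beta_hat)
```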
More information:
https://en.wikipedia.org/wiki/Complex_random_vector#Covariance_matrix_and_pseudo-covariance_matrix
https://en.wikipedia.org/wiki/Complex_normal_distribution#Relationships_between_covariance_matrices
|
5,833
|
What is the difference between a population and a sample?
|
The population is the set of entities under study. For example, the mean height of men. This is a hypothetical population because it includes all men that have lived, are alive and will live in the future. I like this example because it drives home the point that we, as analysts, choose the population that we wish to study. Typically it is impossible to survey/measure the entire population because not all members are observable (e.g. men who will exist in the future). If it is possible to enumerate the entire population it is often costly to do so and would take a great deal of time. In the example above we have a population "men" and a parameter of interest, their height.
Instead, we could take a subset of this population called a sample and use this sample to draw inferences about the population under study, given some conditions. Thus we could measure the mean height of men in a sample of the population which we call a statistic and use this to draw inferences about the parameter of interest in the population. It is an inference because there will be some uncertainty and inaccuracy involved in drawing conclusions about the population based upon a sample. This should be obvious - we have fewer members in our sample than our population therefore we have lost some information.
There are many ways to select a sample and the study of this is called sampling theory. A commonly used method is called Simple Random Sampling (SRS). In SRS each member of the population has an equal probability of being included in the sample, hence the term "random". There are many other sampling methods e.g. stratified sampling, cluster sampling, etc which all have their advantages and disadvantages.
It is important to remember that the sample we draw from the population is only one from a large number of potential samples. If ten researchers were all studying the same population, drawing their own samples, then they may obtain different answers. Returning to our earlier example, each of the ten researchers may come up with a different mean height of men, i.e. the statistic in question (mean height) varies from sample to sample -- it has a distribution called a sampling distribution. We can use this distribution to understand the uncertainty in our estimate of the population parameter.
The sampling distribution of the sample mean is (approximately, by the central limit theorem) a normal distribution with a standard deviation equal to the sample standard deviation divided by the square root of the sample size. Because this could easily be confused with the standard deviation of the sample, it is more common to call the standard deviation of the sampling distribution the standard error.
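A quick simulation (sample sizes and parameters chosen for illustration) shows the spread of the sampling distribution of the mean matching $\sigma/\sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(42)
pop_sd, n, n_samples = 10.0, 25, 20000

# Draw many independent samples of size n and record each sample mean
means = rng.normal(loc=175, scale=pop_sd, size=(n_samples, n)).mean(axis=1)

se_theory = pop_sd / np.sqrt(n)      # sigma / sqrt(n) = 10 / 5 = 2
se_empirical = means.std(ddof=1)     # spread of the sampling distribution
print(se_theory, se_empirical)       # both close to 2.0
```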
|
5,834
|
What is the difference between a population and a sample?
|
The population is the whole set of values, or individuals, you are interested in. The sample is a subset of the population, and is the set of values you actually use in your estimation.
So, for example, if you want to know the average height of the residents of China, that is your population, ie, the population of China. The thing is, this is quite large a number, and you wouldn't be able to get data for everyone there. So you draw a sample, that is, you get some observations, or the height of some of the people in China (a subset of the population, the sample) and do your inference based on that.
|
5,835
|
What is the difference between a population and a sample?
|
The population is everything in the group of study. For example, if you are studying the price of Apple's shares, it is the historical, current, and even all future stock prices. Or, if you run an egg factory, it is all the eggs made by the factory.
You don't always have to sample, and do statistical tests. If your population is your immediate living family, you don't need to sample, as the population is small.
Sampling is popular for a variety of reasons:
it is cheaper than a census (sampling the whole population)
you don't have access to future data, so must sample the past
you have to destroy some items by testing them, and don't want to destroy them all (say, eggs)
|
5,836
|
What is the difference between a population and a sample?
|
When we think of the term “population,” we usually think of people in our town, region, state or country and their respective characteristics such as gender, age, marital status, ethnic membership, religion and so forth. In statistics the term “population” takes on a slightly different meaning. The “population” in statistics includes all members of a defined group that we are studying or collecting information on for data driven decisions.
A part of the population is called a sample. It is a proportion of the population, a slice of it, a part of it and all its characteristics. A sample is a scientifically drawn group that actually possesses the same characteristics as the population – if it is drawn randomly. (This may be hard for you to believe, but it is true!)
Randomly drawn samples must have two characteristics:
* Every person has an equal opportunity to be selected for your sample; and,
* Selection of one person is independent of the selection of another person.
What is great about random samples is that you can generalize to the population that you are interested in. So if you sample 500 households in your community, you can generalize to the 50,000 households that live there. If you match some of the demographic characteristics of the 500 with the 50,000, you will see that they are surprisingly similar.
|
5,837
|
What is the difference between a population and a sample?
|
A population includes all of the elements from a set of data.
A sample consists of one or more observations from the population.
BOA, A.(2012, 17)
|
5,838
|
PCA objective function: what is the connection between maximizing variance and minimizing error?
|
Let $\newcommand{\X}{\mathbf X}\X$ be a centered data matrix with $n$ observations in rows. Let $\newcommand{\S}{\boldsymbol \Sigma}\S=\X^\top\X/(n-1)$ be its covariance matrix. Let $\newcommand{\w}{\mathbf w}\w$ be a unit vector specifying an axis in the variable space. We want $\w$ to be the first principal axis.
According to the first approach, the first principal axis maximizes the variance of the projection $\X \w$ (the variance of the first principal component). This variance is given by $$\mathrm{Var}(\X\w)=\w^\top\X^\top \X \w/(n-1)=\w^\top\S\w.$$
According to the second approach, the first principal axis minimizes the reconstruction error between $\X$ and its reconstruction $\X\w\w^\top$, i.e. the sum of squared distances between the original points and their projections onto $\w$. The square of the reconstruction error is given by
\begin{align}\newcommand{\tr}{\mathrm{tr}}
\|\X-\X\w\w^\top\|^2
&=\tr\left((\X-\X\w\w^\top)(\X-\X\w\w^\top)^\top\right) \\
&=\tr\left((\X-\X\w\w^\top)(\X^\top-\w\w^\top\X^\top)\right) \\
&=\tr(\X\X^\top)-2\tr(\X\w\w^\top\X^\top)+\tr(\X\w\w^\top\w\w^\top\X^\top) \\
&=\mathrm{const}-\tr(\X\w\w^\top\X^\top) \\
&=\mathrm{const}-\tr(\w^\top\X^\top\X\w) \\
&=\mathrm{const} - \mathrm{const} \cdot \w^\top \S \w. \end{align}
Notice the minus sign before the main term. Because of that, minimizing the reconstruction error amounts to maximizing $\w^\top \S \w$, which is the variance. So minimizing reconstruction error is equivalent to maximizing the variance; both formulations yield the same $\w$.
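This equivalence is easy to check numerically. A short sketch (data simulated for illustration; variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3)) @ np.diag([3.0, 1.0, 0.5])  # toy data
X = X - X.mean(axis=0)                       # center, as assumed above

S = X.T @ X / (len(X) - 1)                   # covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)
w = eigvecs[:, -1]                           # first principal axis

def variance(u):                             # u^T S u
    return u @ S @ u

def recon_error(u):                          # ||X - X u u^T||^2
    return np.linalg.norm(X - np.outer(X @ u, u)) ** 2

# Every other unit vector gives lower variance AND higher error
for _ in range(100):
    u = rng.normal(size=3)
    u /= np.linalg.norm(u)
    assert variance(u) <= variance(w) + 1e-9
    assert recon_error(u) >= recon_error(w) - 1e-9

# The derived identity: error = const - (n-1) * w^T S w
const = np.trace(X.T @ X)
assert np.isclose(recon_error(w), const - (len(X) - 1) * variance(w))
```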
|
5,839
|
How to model non-negative zero-inflated continuous data?
|
There are a variety of solutions to the case of zero-inflated (semi-)continuous distributions:
Tobit regression: assumes that the data come from a single underlying Normal distribution, but that negative values are censored and stacked on zero (e.g. censReg package). Here is a good book about the Tobit model; see chapters 1 and 5.
see this answer for other censored-Gaussian alternatives
hurdle or "two-stage" model: use a binomial model to predict whether the values are 0 or >0, then use a linear model (or Gamma, or truncated Normal, or log-Normal) to model the observed non-zero values (typically you need to roll your own by running two separate models; combined versions where you fit the zero component and the non-zero component at the same time exist for count distributions such as Poisson (e.g glmmTMB, pscl); glmmTMB will also do 'zero-inflated'/hurdle models for Beta or Gamma responses)
Tweedie distributions: distributions in the exponential family that for a given range of shape parameters ($1<p<2$) have a point mass at zero and a skewed positive distribution for $x>0$ (e.g. tweedie, cplm, glmmTMB packages)
Or, if your data structure is simple enough, you could just use linear models and use permutation tests or some other robust approach to make sure that your inference isn't being messed up by the interesting distribution of the data.
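As an illustration of the hurdle idea, here is an intercept-only two-stage sketch on simulated data (a toy, not a substitute for the packages above):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 100_000

# Simulate: point mass at zero plus a log-normal positive part
p_pos, mu, sigma = 0.3, 1.0, 0.5            # made-up parameters
positive = rng.random(n) < p_pos
y = np.where(positive, rng.lognormal(mu, sigma, size=n), 0.0)

# Stage 1 (binomial part): probability of a non-zero observation
p_hat = (y > 0).mean()

# Stage 2 (continuous part): log-normal fit to non-zero values only
logy = np.log(y[y > 0])
mu_hat, sigma_hat = logy.mean(), logy.std(ddof=1)

# Combine the stages: E[y] = P(y > 0) * E[y | y > 0]
mean_hat = p_hat * np.exp(mu_hat + sigma_hat**2 / 2)
print(p_hat, mu_hat, sigma_hat, mean_hat)
```

With covariates, stage 1 becomes a logistic regression and stage 2 a Gamma/log-normal/truncated-Normal regression on the non-zero subset, as described above.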
There are R packages/solutions available for most of these cases.
There are other questions on SE about zero-inflated (semi)continuous data (e.g. here, here, and here), but they don't seem to offer a clear general answer ...
See also Min & Agresti, 2002, Modeling Nonnegative Data with Clumping at
Zero: A Survey for an overview.
|
5,840
|
How to model non-negative zero-inflated continuous data?
|
You can also use the Poisson Pseudo-Maximum Likelihood (PPML). It was first developed by Santos Silva and Tenreyro (2006) for the application of international trade among countries. In 2011, the same authors extended the analysis of the PPML's performance (see here). They also have this page with some material about the model. Later, it was used in many other applications. In my field, it has been used in energy economics, policy and regulation (for instance, Zhao et al. (2013), De Groote et al. (2016), Gautier and Jacqmin (2020)).
In Stata you can use it with the ppmlhdfe command and its implementation is here.
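To make the idea concrete, here is a bare-bones Newton-iteration sketch of the Poisson pseudo-ML estimator on simulated data (a toy of my own, not the ppmlhdfe implementation):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 0.8])            # made-up parameters

# Non-negative continuous outcome with ~30% exact zeros, constructed so
# that E[y | x] = exp(x'beta); PPML only needs this mean to be right
mu = np.exp(X @ beta_true)
nonzero = rng.random(n) < 0.7
e = rng.lognormal(-0.125, 0.5, size=n)      # mean-one multiplicative noise
y = np.where(nonzero, mu * e / 0.7, 0.0)

# Poisson pseudo-ML via plain Newton-Raphson iterations
beta = np.array([np.log(y.mean()), 0.0])    # crude starting value
for _ in range(50):
    m = np.exp(X @ beta)
    step = np.linalg.solve(X.T @ (m[:, None] * X),  # X' diag(mu) X
                           X.T @ (y - m))           # score vector
    beta = beta + step
    if np.max(np.abs(step)) < 1e-10:
        break
print(beta)   # close to beta_true
```

The point of the simulation: even though y is continuous with a mass at zero (nothing like Poisson counts), the Poisson score equations recover the parameters of the conditional mean.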
|
5,841
|
What is meant by 'weak learner'?
|
A 'weak' learner (classifier, predictor, etc.) is just one which performs relatively poorly--its accuracy is above chance, but just barely. There is often, but not always, the added implication that it is computationally simple. "Weak learner" also suggests that many instances of the algorithm are being pooled (via boosting, bagging, etc.) to create a "strong" ensemble classifier.
It's mentioned in the original AdaBoost paper by Freund & Schapire:
Perhaps the most surprising of these applications is the derivation of a new application for "boosting", i.e., converting a "weak" PAC learning algorithm that performs just slightly better than random guessing into one with arbitrarily high accuracy. --(Freund & Schapire, 1995)
but I think the phrase is actually older than that--I've seen people cite a term paper(?!) by Michael Kearns from the 1980s.
The classic example of a Weak Learner is a Decision Stump, a one-level decision tree (1R or OneR is another commonly-used weak learner; it's fairly similar). It would be somewhat strange to call a SVM a 'weak learner', even in situations where it performs poorly, but it would be perfectly reasonable to call a single decision stump a weak learner even when it performs surprisingly well by itself.
Adaboost is an iterative algorithm and $T$ typically denotes the number of iterations or "rounds". The algorithm starts by training/testing a weak learner on the data, weighting each example equally. The examples which are misclassified get their weights increased for the next round(s), while those that are correctly classified get their weights decreased.
I'm not sure there's anything magical about $T=10$. In the 1995 paper, $T$ is given as a free parameter (i.e., you set it yourself).
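To make this concrete, here is a toy Python sketch of boosting decision stumps in the AdaBoost style (the dataset, the `best_stump` search, and the constants are illustrative simplifications, not Freund & Schapire's exact pseudocode):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D data: labels in {-1, +1}, not separable by any single threshold.
x = rng.uniform(0, 10, size=200)
y = np.where((x > 3) & (x < 7), 1.0, -1.0)

def best_stump(x, y, w):
    """Fit a decision stump (one threshold, either polarity) minimizing weighted error."""
    best = (None, None, np.inf)              # (threshold, polarity, weighted error)
    for t in np.unique(x):
        for polarity in (1.0, -1.0):
            pred = np.where(x > t, polarity, -polarity)
            err = np.sum(w[pred != y])
            if err < best[2]:
                best = (t, polarity, err)
    return best

def adaboost(x, y, T=10):
    n = len(x)
    w = np.full(n, 1.0 / n)                  # round 1: weight each example equally
    stumps = []
    for _ in range(T):
        t, polarity, err = best_stump(x, y, w)
        err = max(err, 1e-12)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(x > t, polarity, -polarity)
        w *= np.exp(-alpha * y * pred)       # up-weight mistakes, down-weight hits
        w /= w.sum()
        stumps.append((t, polarity, alpha))
    return stumps

def predict(stumps, x):
    score = sum(alpha * np.where(x > t, polarity, -polarity)
                for t, polarity, alpha in stumps)
    return np.sign(score)

stumps = adaboost(x, y, T=10)
accuracy = np.mean(predict(stumps, x) == y)
```

Each stump alone is weak (the best single threshold still misclassifies one side of the interval), but the weighted vote of ten of them recovers the interval-shaped decision rule.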
|
5,842
|
What is meant by 'weak learner'?
|
A weak learner is a learner that, no matter what the distribution over the training data is, will always do better than chance when it tries to label the data.
Doing better than chance means its error rate is always less than 1/2.
This means that the learning algorithm is always going to learn something, though not always completely accurately; i.e., it is weak and poor when it comes to learning the relationships between $X$ (inputs) and $Y$ (target).
Then comes boosting: we start by looking over the training data and generating some distributions over it, then find a set of weak learners (classifiers) with low errors; each learner outputs some hypothesis $H_x$ that generates a $Y$ (class label), and at the end we combine the set of good hypotheses to generate a final hypothesis.
This eventually improves the weak learners and converts them to strong learners.
For more information: https://youtu.be/zUXJb1hdU0k.
|
5,843
|
What is meant by 'weak learner'?
|
Weak learner is the same as weak classifier, or weak predictor. The idea is that you use a classifier that is, well..., not that good, but at least better than random. The benefit is that the classifier will be robust to overfitting. Of course you don't use just one, but a large set of them, each slightly better than random. The exact way you select/combine them depends on the methodology/algorithm, e.g. AdaBoost.
In practice, as a weak classifier you use something like a simple threshold on a single feature: if the feature is above the threshold, you predict it belongs to the positives, otherwise to the negatives. Not sure about the T=10, since there is no context, but I can assume it is an example of thresholding some feature.
|
5,844
|
If only prediction is of interest, why use lasso over ridge?
|
You are right to ask this question. In general, when a proper accuracy scoring rule is used (e.g., mean squared prediction error), ridge regression will outperform lasso. Lasso spends some of the information trying to find the "right" predictors and it's not even great at doing that in many cases. Relative performance of the two will depend on the distribution of true regression coefficients. If you have a small fraction of nonzero coefficients in truth, lasso can perform better. Personally I use ridge almost all the time when interested in predictive accuracy.
|
5,845
|
If only prediction is of interest, why use lasso over ridge?
|
I think the specific setup of the example you reference is key to understanding why lasso outperforms ridge: only 2 of 45 predictors are actually relevant.
This borders on a pathological case: lasso, specifically intended to make reductions to zero easy, performs exactly as intended, while ridge has to deal with a large number of useless terms (even if their effect is reduced close to zero, it is still a non-zero effect).
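The effect is easy to reproduce numerically. Below is a small sketch in plain NumPy mirroring this setup (45 predictors, only 2 truly nonzero); `ridge` is the closed-form solution and `lasso_ista` is a basic proximal-gradient (ISTA) solver, with penalty strengths picked arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# The setting from the example: 45 predictors, only 2 truly nonzero.
n, p = 200, 45
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:2] = [3.0, -2.0]
y = X @ beta_true + rng.normal(size=n)

def ridge(X, y, lam):
    """Closed-form ridge solution (X'X + lam I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def lasso_ista(X, y, lam, n_iter=5000):
    """Lasso via ISTA: gradient step on 0.5||y - Xb||^2, then soft-thresholding."""
    L = np.linalg.norm(X, 2) ** 2            # Lipschitz constant of the gradient
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y)
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return beta

b_ridge = ridge(X, y, lam=10.0)
b_lasso = lasso_ista(X, y, lam=50.0)
n_zero_ridge = int(np.sum(np.abs(b_ridge) < 1e-8))
n_zero_lasso = int(np.sum(np.abs(b_lasso) < 1e-8))
```

Ridge shrinks every coefficient but leaves none exactly zero, while lasso zeroes out almost all of the 43 irrelevant predictors and keeps the two real ones (slightly shrunk).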
|
5,846
|
Why are non zero-centered activation functions a problem in backpropagation?
|
$$f=\sum w_ix_i+b$$ $$\frac{df}{dw_i}=x_i$$ $$\frac{dL}{dw_i}=\frac{dL}{df}\frac{df}{dw_i}=\frac{dL}{df}x_i$$
because $x_i>0$, the gradient $\dfrac{dL}{dw_i}$ always has the same sign as $\dfrac{dL}{df}$ (all positive or all negative).
Update
Say there are two parameters $w_1$ and $w_2$. If the gradients of two dimensions are always of the same sign (i.e., either both are positive or both are negative), it means we can only move roughly in the direction of northeast or southwest in the parameter space.
If our goal happens to be in the northwest, we can only move in a zig-zagging fashion to get there, just like parallel parking in a narrow space. (forgive my drawing)
Therefore all-positive or all-negative activation functions (relu, sigmoid) can be difficult for gradient based optimization. To solve this problem we can normalize the data in advance to be zero-centered as in batch/layer normalization.
Another solution I can think of is to add a bias term for each input so the layer becomes
$$f=\sum w_i(x_i+b_i).$$
The gradient is then
$$\frac{dL}{dw_i}=\frac{dL}{df}(x_i+b_i),$$
so the sign no longer depends solely on $x_i$.
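A tiny numerical check of the sign argument (made-up numbers; one linear unit under squared loss): with all-positive inputs, every component of the weight gradient shares the sign of $dL/df$.

```python
import numpy as np

rng = np.random.default_rng(0)

x = rng.uniform(0.1, 1.0, size=5)    # all-positive inputs, e.g. sigmoid outputs
w = rng.normal(size=5)
b = 0.0
target = -1.0

f = w @ x + b                        # linear unit f = w.x + b
dL_df = 2.0 * (f - target)           # upstream gradient of the loss (f - target)^2
dL_dw = dL_df * x                    # dL/dw_i = (dL/df) * x_i

signs = np.sign(dL_dw)               # every entry matches the sign of dL_df
```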
|
5,847
|
Why is Laplace prior producing sparse solutions?
|
The relation of the Laplace prior to the median (or the L1 norm) was found by Laplace himself, who discovered that with such a prior you estimate the median rather than the mean, as with the Normal distribution (see Stigler, 1986, or Wikipedia). This means that regression with a Laplace error distribution estimates the median (like, e.g., quantile regression), while Normal errors correspond to the OLS estimate.
The robust priors you asked about were also described by Tibshirani (1996), who noticed that robust Lasso regression in a Bayesian setting is equivalent to using a Laplace prior. Such a prior on the coefficients is centered around zero (with centered variables) and has wide tails, so most regression coefficients estimated using it end up exactly zero. This is clear if you look closely at the picture below: the Laplace distribution has a peak around zero (a greater distribution mass concentrated there), while the Normal distribution is more diffuse around zero, so under it non-zero values have greater probability mass. Other possibilities for robust priors are the Cauchy or $t$-distributions.
Using such priors you are more prone to end up with many zero-valued coefficients, some moderate-sized, and some large-sized (long tail), while with a Normal prior you get more moderate-sized coefficients that are not exactly zero, but also not that far from zero.
(image source Tibshirani, 1996)
Stigler, S.M. (1986). The History of Statistics: The Measurement of Uncertainty Before 1900. Cambridge, MA: Belknap Press of Harvard University Press.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society. Series B (Methodological), 267-288.
Gelman, A., Jakulin, A., Pittau, M.G., and Su, Y.-S. (2008). A weakly informative default prior distribution for logistic and other regression models. The Annals of Applied Statistics, 2(4), 1360-1383.
Norton, R.M. (1984). The Double Exponential Distribution: Using Calculus to Find a Maximum Likelihood Estimator. The American Statistician, 38(2): 135-136.
|
5,848
|
Why is Laplace prior producing sparse solutions?
|
Frequentist view 👀
In one sense, we can think of both regularizations as "shrinking the weights"; L2 penalizes the Euclidean norm of the weights, while L1 penalizes the Manhattan norm. Following this line of thinking, we can reason that the equipotentials of L1 and L2 are diamond-shaped and spherical respectively, so L1 is more likely to lead to sparse solutions, as illustrated in Bishop's Pattern Recognition and Machine Learning:
Bayesian view 👀
However, in order to understand how priors relate to the linear model, we need to understand the Bayesian interpretation of ordinary linear regression. Katherine Bailey's blogpost is an excellent read for this. In a nutshell, we assume normally-distributed i.i.d. errors in our linear model
$$\mathbf{y} = \mathbf{\theta}^\top\mathbf{X} + \mathbf\epsilon$$
i.e. each of our $N$ measurements $y_i, i = 1, 2, \ldots, N$ has noise $\epsilon_i\sim \mathcal{N}(0,\sigma)$.
Then we can say that our linear model has a Gaussian likelihood too! The likelihood of $\mathbf{y}$ is
\begin{equation}
p(\mathbf{y}|\mathbf{X}, \mathbf{\theta}; \mathbf{\epsilon}) = \mathcal{N}(\mathbf{\theta}^\top\mathbf{X}, \mathbf{\sigma})
\end{equation}
As it turns out, the maximum likelihood estimator is identical to minimizing the squared error between predicted and actual output values under the normality assumption for the error:
\begin{align*}
{\bf \hat{\theta}_{\text{MLE}}} &= \arg\max_{\bf \theta} \log P(y | \theta) \\
&=\underset{\theta}{\arg\min} \sum_{i=1}^n(y_i - \theta^\top{\mathbf{x}_i})^2
\end{align*}
Regularization as putting priors on weights
If we were to place a non-uniform prior $P(\theta)$ on the weights of linear regression, the maximum a posteriori probability (MAP) estimate would be:
\begin{equation*}
{\bf \hat{\theta}_{\text{MAP}}} = \arg\max_{\bf \theta} \log P(y | \theta) + \log P(\theta)
\end{equation*}
As derived in Brian Keng's blogpost, if $P(\theta)$ is a Laplace distribution it's equivalent to L1 regularization on $\theta$.
Similarly, if $P(\theta)$ is a Gaussian distribution, it's equivalent to L2 regularization on $\theta$.
Now we have another view into why putting a Laplace prior on the weights is more likely to induce sparsity: because the Laplace distribution is more concentrated around zero, our weights are more likely to be zero.
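To spell out that last step (a short derivation in the same notation as above; $\lambda$ is the rate parameter of the Laplace prior and $\sigma$ the noise scale): with independent priors $P(\theta_j) = \frac{\lambda}{2}e^{-\lambda|\theta_j|}$,
\begin{align*}
\log P(\theta) &= -\lambda\sum_j|\theta_j| + \text{const},
\end{align*}
so the MAP objective becomes
\begin{align*}
{\bf \hat{\theta}_{\text{MAP}}} &= \underset{\theta}{\arg\min} \sum_{i=1}^n(y_i - \theta^\top{\mathbf{x}_i})^2 + 2\sigma^2\lambda\sum_j|\theta_j|,
\end{align*}
i.e. lasso with penalty $2\sigma^2\lambda$. Swapping in a Gaussian prior $\mathcal{N}(0,\tau^2)$ replaces the penalty term with $\frac{\sigma^2}{\tau^2}\sum_j\theta_j^2$, i.e. ridge.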
|
5,849
|
Why does increasing the sample size lower the (sampling) variance?
|
Standard deviations of averages are smaller than standard deviations of individual observations. [Here I will assume independent identically distributed
observations with finite population variance; something similar can be said if you relax the first two conditions.]
It's a consequence of the simple fact that the standard deviation of the sum of two random variables is no larger than the sum of their standard deviations (equality holds only when the two variables are perfectly correlated).
In fact, when you're dealing with uncorrelated random variables, we can say something more specific: the variance of a sum of variates is the sum of their variances.
This means that with $n$ independent (or even just uncorrelated) variates with the same distribution, the variance of the mean is the variance of an individual divided by the sample size.
Correspondingly with $n$ independent (or even just uncorrelated) variates with the same distribution, the standard deviation of their mean is the standard deviation of an individual divided by the square root of the sample size:
$\sigma_{\bar{X}}=\sigma/\sqrt{n}$.
So as you add more data, you get increasingly precise estimates of group means. A similar effect applies in regression problems.
Since we can get more precise estimates of averages by increasing the sample size, we are more easily able to tell apart means which are close together -- even though the distributions overlap quite a bit, by taking a large sample size we can still estimate their population means accurately enough to tell that they're not the same.
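A quick numerical check of $\sigma_{\bar{X}}=\sigma/\sqrt{n}$ (a Python sketch with arbitrary numbers):

```python
import numpy as np

rng = np.random.default_rng(0)

sigma, n, n_rep = 10.0, 25, 100_000

# Draw n_rep independent samples of size n and look at the spread of their means.
means = rng.normal(0.0, sigma, size=(n_rep, n)).mean(axis=1)
empirical_sd = means.std()
theoretical_sd = sigma / np.sqrt(n)   # 10 / sqrt(25) = 2
```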
|
5,850
|
Why does increasing the sample size lower the (sampling) variance?
|
The variability that's shrinking when N increases is the variability of the sample mean, often expressed as the standard error. Or, in other terms, our certainty in the sample mean as an estimate of the population mean is increasing.
Imagine you run an experiment where you collect 3 men and 3 women and measure their heights. How certain are you that the mean heights of each group are the true mean of the separate populations of men and women? I should think that you wouldn't be very certain at all. You could easily collect new samples of 3 and find new means several inches from the first ones. Quite a few of the repeated experiments like this might even result in women being pronounced taller than men because the means would vary so much. With a low N you don't have much certainty in the mean from the sample and it varies a lot across samples.
Now imagine 10,000 observations in each group. It's going to be pretty hard to find new samples of 10,000 that have means that differ much from each other. They will be far less variable and you'll be more certain of their accuracy.
If you can accept this line of thinking then we can insert it into the calculations of your statistics as the standard error. As you can see from its equation, it's an estimate of a parameter, $\sigma$ (which should become more accurate as n increases), divided by a value that always increases with n, $\sqrt n$. That standard error represents the variability of the means or effects in your calculations. The smaller it is, the more powerful your statistical test.
Here's a little simulation in R to demonstrate the relation between the standard error and the standard deviation of the means of many, many replications of the initial experiment. In this case we'll start with a population mean of 100 and standard deviation of 50.
mu <- 100
s <- 50
n <- 5
nsim <- 10000 # number of simulations
# theoretical standard error
s / sqrt(n)
# simulation of experiment and the standard deviations of their means
y <- replicate( nsim, mean( rnorm(n, mu, s) ) )
sd(y)
Note how the final standard deviation is close to the theoretical standard error. By playing with the n variable here you can see the variability measure will get smaller as n increases.
[As an aside, kurtosis in the graphs isn't really changing (assuming they are normal distributions). Lowering the variance doesn't change the kurtosis but the distribution will look narrower. The only way to visually examine the kurtosis changes is put the distributions on the same scale.]
|
Why does increasing the sample size lower the (sampling) variance?
|
The variability that's shrinking when N increases is the variability of the sample mean, often expressed as standard error. Or, in other terms, the certainty of the veracity of the sample mean is inc
|
Why does increasing the sample size lower the (sampling) variance?
The variability that's shrinking when N increases is the variability of the sample mean, often expressed as standard error. Or, in other terms, the certainty of the veracity of the sample mean is increasing.
Imagine you run an experiment where you collect 3 men and 3 women and measure their heights. How certain are you that the mean heights of each group are the true mean of the separate populations of men and women? I should think that you wouldn't be very certain at all. You could easily collect new samples of 3 and find new means several inches from the first ones. Quite a few of the repeated experiments like this might even result in women being pronounced taller than men because the means would vary so much. With a low N you don't have much certainty in the mean from the sample and it varies a lot across samples.
Now imagine 10,000 observations in each group. It's going to be pretty hard to find new samples of 10,000 that have means that differ much from each other. They will be far less variable and you'll be more certain of their accuracy.
If you can accept this line of thinking then we can insert it into the calculations of your statistics as standard error. As you can see from its equation, it is an estimate of a parameter, $\sigma$ (which should become more accurate as n increases), divided by a value that always increases with n, $\sqrt n$. That standard error represents the variability of the means or effects in your calculations. The smaller it is, the more powerful your statistical test.
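That equation can be sketched in one line: for $n$ independent observations, each with variance $\sigma^2$,

$$\operatorname{Var}(\bar{X}) = \operatorname{Var}\!\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n^{2}} \sum_{i=1}^{n} \operatorname{Var}(X_i) = \frac{\sigma^{2}}{n}, \qquad \operatorname{SE}(\bar{X}) = \frac{\sigma}{\sqrt{n}}.$$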
Here's a little simulation in R to demonstrate the relation between a standard error and the standard deviation of the means of many many replications of the initial experiment. In this case we'll start with a population mean of 100 and standard deviation of 50.
mu <- 100
s <- 50
n <- 5
nsim <- 10000 # number of simulations
# theoretical standard error
s / sqrt(n)
# simulation of experiment and the standard deviations of their means
y <- replicate( nsim, mean( rnorm(n, mu, s) ) )
sd(y)
Note how the final standard deviation is close to the theoretical standard error. By playing with the n variable here you can see the variability measure will get smaller as n increases.
[As an aside, kurtosis in the graphs isn't really changing (assuming they are normal distributions). Lowering the variance doesn't change the kurtosis, but the distribution will look narrower. The only way to visually examine the kurtosis changes is to put the distributions on the same scale.]
Why does increasing the sample size lower the (sampling) variance?
If you wanted to know what is the average weight of american citizens, then in the ideal case you'd immediately ask every citizen to step on the scales, and collect the data. You'd get an exact answer. This is very difficult, so maybe you could get a few citizens to step on scale, compute the average and get an idea of what is the average of the population. Would you expect that the sample average be exactly equal to the population average? I hope not.
Now, would you agree that if you got more and more people, at some point we'd be getting closer to the population mean? We should, right? In the end, the most people we can get is the entire population, and its mean is what we're looking for. This is the intuition.
This was an idealized thought experiment. In reality, there are complications. I'll give you two.
Imagine that the data is coming from a Cauchy distribution. You can increase your sample infinitely, yet the variance will not decrease. This distribution has no population variance. In fact, strictly speaking, it has no sample mean either. It's sad. Amazingly, this distribution is quite real, it pops up here and there in physics.
Imagine that you decided to go on with a task of determining the average weight of american citizens. So, you take your scale and go from home to home. This will take you many many years. By the time you collect million observations, some of the citizens in your data set will have changed their weight a lot, some had died etc. The point is that increasing sample size in this case doesn't help you.
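To see the Cauchy complication concretely, here's a small R sketch (my own illustration, using base R's rcauchy): the spread of the sample means refuses to shrink no matter how large n gets.

```r
# The sample mean of standard Cauchy data is itself standard Cauchy for any n,
# so the spread of the means does not shrink as n grows.
set.seed(1)
for (n in c(10, 1000, 100000)) {
  m <- replicate(500, mean(rcauchy(n)))
  cat("n =", n, " IQR of sample means:", IQR(m), "\n")  # stays near 2
}
```

(The IQR is used instead of the standard deviation because the Cauchy distribution has no variance to estimate.)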
Why does increasing the sample size lower the (sampling) variance?
I believe that the Law of Large Numbers explains why the variance (standard error) goes down when the sample size increases. Wikipedia's article on this says:
According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and will tend to become closer as more trials are performed.
In terms of the Central Limit Theorem:
When drawing a single random sample, the larger the sample is the closer the sample mean will be to the population mean (in the above quote, think of "number of trials" as "sample size", so each "trial" is an observation). Therefore, when drawing an infinite number of random samples, the variance of the sampling distribution will be lower the larger the size of each sample is.
In other words, the bell shape will be narrower when each sample is large instead of small, because in that way each sample mean will be closer to the center of the bell.
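Stated symbolically (a standard formulation, not taken from the quoted article): for i.i.d. observations with mean $\mu$ and finite variance $\sigma^2$,

$$\sqrt{n}\,\left(\bar{X}_n - \mu\right) \;\xrightarrow{d}\; N(0,\sigma^2), \qquad \text{so for large } n,\quad \bar{X}_n \;\approx\; N\!\left(\mu,\; \frac{\sigma^2}{n}\right).$$

The approximating bell narrows at rate $1/\sqrt{n}$, which is exactly the narrowing described above.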
Why does increasing the sample size lower the (sampling) variance?
As the sample size increases, the sample variance (variation between observations) settles toward the population variance, but the variance of the sample mean (the standard error) decreases and hence precision increases.
Why does increasing the sample size lower the (sampling) variance?
First, I'd like to blow your mind: Increasing sample size does not decrease the variance of an estimate. What you call variance can, for example, stay essentially flat, even as $n$ goes to infinity. But let's come back to that.
We need to define terms. What you're referring to as "variance" is generally called standard error, the standard deviation of the sampling distribution of the statistic. A statistic's sampling distribution is just the distribution of all possible values the estimate could take, over all possible samples of a given size drawn from the population of interest. So, when you ask why variance decreases with sample size, you're really asking why the sampling distribution of a statistic is wider for smaller $n$ and narrower for larger $n$. For the vast majority of practical statistics, your assumption that this will be the case is correct.
The reason this tends to be the case is, as others have said, the Central Limit Theorem. However, what others have neglected to tell you is that there are many limit theorems. Which one applies depends on a) the family of distributions a statistic's sampling distribution belongs to, and b) the asymptotic behavior of the distribution as $n$ goes to infinity.
The vast majority of statistics have a sampling distribution that is approximately and asymptotically normal, so the famous Central Limit Theorem applies. Normal distributions get skinnier as n increases, for reasons others detail. Having a normal sampling distribution gives a statistic a property called efficiency, which just means its observed value gets closer to its expected value with increasing $n$.
That's exactly what we want. We therefore purposely choose statistics with this property when we have the option. It's easy to assume that all statistics have this property when all statistics you see have it, but I guess that's what you'd call selection bias. (It's also called stats being oversimplified for beginners because it's hard enough as it is!)
A particularly interesting counterexample is a certain function of the Hamming distance, computed between a pair of bivariate normal random vectors that have been converted to ranks without ties. That is, suppose you draw $n$ pairs at random from a bivariate normal population with correlation parameter $\rho$, $(X, Y)$. You replace each real number in $X$ with an integer indicating its relative order in $X$ after sorting the vector (first $= 1$, second $= 2$, and so on), and the same for $Y$.
You then count the number of bivariate observations with equal ranks. (So, the count equals $n$ minus the Hamming distance between $X$ and $Y$.) This count's sampling distribution is approximately and asymptotically Poisson with parameter $np$, where $p$ is the probability of obtaining at least one pair of equal ranks (Zolotukhina and Latyshev, 1987).
It is well known that $np = 1$ when $\rho = 0$ (As first shown by Montmort in 1708). Because a Poisson variable's parameter is equal to both its mean and its variance, the standard error of the count only gets closer and closer to $1$ as sample size increases, when $\rho = 0$ (Rae, 1987; Rae and Spencer, 1991). Cool, right? In general, for $\rho \geq 0$, the variance converges to $1/(1 - \rho)$ as $n$ goes to infinity (Zolotukhina and Latyshev, 1987) under bivariate normality. It does NOT decrease for non-negative $\rho$.
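A quick R sketch of this counterexample (my own illustration, not from the cited papers): for independent samples, the count of positions with matching ranks behaves approximately like Poisson(1), so its standard deviation hovers near 1 however large $n$ is.

```r
# Count positions where the ranks of X and Y coincide; with rho = 0 this is
# the number of fixed points of a random permutation, approximately Poisson(1).
set.seed(42)
for (n in c(10, 100, 1000)) {
  counts <- replicate(2000, {
    x <- rnorm(n); y <- rnorm(n)
    sum(rank(x) == rank(y))
  })
  cat("n =", n, " mean:", mean(counts), " sd:", sd(counts), "\n")
}
```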
How do I fit a constrained regression in R so that coefficients total = 1?
If I understand correctly, your model is
$$ Y = \pi_1 X_1 + \pi_2 X_2 + \pi_3 X_3 + \varepsilon, $$
with $\sum_k \pi_k = 1$ and $\pi_k\ge0$. You need to minimize
$$\sum_i \left(Y_i - (\pi_1 X_{i1} + \pi_2 X_{i2} + \pi_3 X_{i3}) \right)^2 $$
subject to these constraints. This kind of problem is known as quadratic programming.
Here are a few lines of R code giving a possible solution ($X_1, X_2, X_3$ are the columns of X; the true values of the $\pi_k$ are 0.2, 0.3 and 0.5).
library("quadprog");
X <- matrix(runif(300), ncol=3)
Y <- X %*% c(0.2,0.3,0.5) + rnorm(100, sd=0.2)
Rinv <- solve(chol(t(X) %*% X));
C <- cbind(rep(1,3), diag(3))
b <- c(1,rep(0,3))
d <- t(Y) %*% X
solve.QP(Dmat = Rinv, factorized = TRUE, dvec = d, Amat = C, bvec = b, meq = 1)
$solution
[1] 0.2049587 0.3098867 0.4851546
$value
[1] -16.0402
$unconstrained.solution
[1] 0.2295507 0.3217405 0.5002459
$iterations
[1] 2 0
$Lagrangian
[1] 1.454517 0.000000 0.000000 0.000000
$iact
[1] 1
I don’t know any results on the asymptotic distribution of the estimators, etc. If someone has pointers, I’ll be curious to get some (if you wish I can open a new question on this).
How do I fit a constrained regression in R so that coefficients total = 1?
As mentioned by whuber, if you are interested only in the equality constraints, you can also just use the standard lm() function by rewriting your model:
\begin{eqnarray}
Y&=&\beta_1 X_1+\beta_2 X_2+\beta_3 X_3+\epsilon\\
&=& \beta_1 X_1+\beta_2 X_2+(1-\beta_1-\beta_2) X_3+\epsilon\\
&=& \beta_1( X_1-X_3) +\beta_2 (X_2-X_3)+ X_3+\epsilon
\end{eqnarray}
But this does not guarantee that your inequality constraints are satisfied! In this case they happen to be, so you get exactly the same result as with the quadratic programming example above (moving $X_3$ to the left-hand side):
X <- matrix(runif(300), ncol=3)
Y <- X %*% c(0.2,0.3,0.5) + rnorm(100, sd=0.2)
X1 <- X[,1]; X2 <-X[,2]; X3 <- X[,3]
lm(Y-X3~-1+I(X1-X3)+I(X2-X3))
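The third coefficient can then be recovered from the constraint as $1-\beta_1-\beta_2$; a minimal sketch of the full round trip:

```r
# Fit the reparameterized model and recover all three constrained coefficients
set.seed(1)
X <- matrix(runif(300), ncol = 3)
Y <- X %*% c(0.2, 0.3, 0.5) + rnorm(100, sd = 0.2)
X1 <- X[,1]; X2 <- X[,2]; X3 <- X[,3]
fit <- lm(Y - X3 ~ -1 + I(X1 - X3) + I(X2 - X3))
b <- c(coef(fit), 1 - sum(coef(fit)))  # third coefficient implied by the constraint
b        # estimates of pi_1, pi_2, pi_3
sum(b)   # exactly 1 by construction
```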
How do I fit a constrained regression in R so that coefficients total = 1?
Old question but since I'm facing the same problem I thought to post my 2p...
Use quadratic programming as suggested by @Elvis, but using lsqlincon from the pracma package. I think the advantage over quadprog::solve.QP is a simpler user interface to specify the constraints. (In fact, lsqlincon is a wrapper around solve.QP.)
Example:
library(pracma)
set.seed(1234)
# Test data
X <- matrix(runif(300), ncol=3)
Y <- X %*% c(0.2, 0.3, 0.5) + rnorm(100, sd=0.2)
# Equality constraint: We want the sum of the coefficients to be 1.
# I.e. Aeq x == beq
Aeq <- matrix(rep(1, ncol(X)), nrow= 1)
beq <- c(1)
# Lower and upper bounds of the parameters, i.e [0, 1]
lb <- rep(0, ncol(X))
ub <- rep(1, ncol(X))
# And solve:
lsqlincon(X, Y, Aeq= Aeq, beq= beq, lb= lb, ub= ub)
[1] 0.1583139 0.3304708 0.5112153
Same results as Elvis's:
library(quadprog)
Rinv <- solve(chol(t(X) %*% X));
C <- cbind(rep(1,3), diag(3))
b <- c(1,rep(0,3))
d <- t(Y) %*% X
solve.QP(Dmat = Rinv, factorized = TRUE, dvec = d, Amat = C, bvec = b, meq = 1)$solution
EDIT To try to address gung's comment, here's some explanation. lsqlincon emulates matlab's lsqlin, which has a nice help page. Here are the relevant bits with some (minor) edits of mine:
X Multiplier matrix, specified as a matrix of doubles. X represents the multiplier of the solution x in the expression X*x - Y. X is M-by-N, where M is the number of equations, and N is the number of elements of x.
Y Constant vector, specified as a vector of doubles. Y represents the additive constant term in the expression X*x - Y. Y is M-by-1, where M is the number of equations.
Aeq: Linear equality constraint matrix, specified as a matrix of doubles. Aeq represents the linear coefficients in the constraints Aeq*x = beq. Aeq has size Meq-by-N, where Meq is the number of constraints and N is the number of elements of x
beq Linear equality constraint vector, specified as a vector of doubles. beq represents the constant vector in the constraints Aeq*x = beq. beq has length Meq, where Aeq is Meq-by-N.
lb Lower bounds, specified as a vector of doubles. lb represents the lower bounds elementwise in lb ≤ x ≤ ub.
ub Upper bounds, specified as a vector of doubles. ub represents the upper bounds elementwise in lb ≤ x ≤ ub.
How do I fit a constrained regression in R so that coefficients total = 1?
As I understand your model, you're seeking to find
$$
\bar{\bar{x}} \cdot \bar{b} = \bar{y}
$$
such that
$$
\sum \left [ \begin{matrix} \bar{b} \end{matrix} \right ] =1
$$
I've found the easiest way to treat these sorts of problems is to use matrices' associative properties to treat $\bar{b}$ as a function of other variables.
E.g. $\bar{b}$ is a function of $\bar{c}$ via the transform block $\bar{\bar{T_c}}$. In your case, $r$ below is $1$.
$$
\bar{b} = \left [
\begin{matrix}
k_0 \\
k_1 \\
k_2
\end{matrix}
\right ] =
\bar{\bar{T_c}} \cdot \bar{c} =
\left [
\begin{matrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
-1 & -1 & 1
\end{matrix}
\right ] \cdot
\left[
\begin{matrix}
k_0 \\
k_1 \\
r
\end{matrix}
\right ]
$$
Here we can separate our $k$nowns and $u$nknowns.
$$
\bar{c} =
\left[
\begin{matrix}
k_0 \\
k_1 \\
r
\end{matrix}
\right ] =
\bar{\bar{S_u}} \cdot
\bar{c_u} +
\bar{\bar{S_k}} \cdot
\bar{c_k} =
\left[
\begin{matrix}
1 & 0 \\
0 & 1 \\
0 & 0
\end{matrix}
\right] \cdot
\left [
\begin{matrix}
k_0 \\
k_1
\end{matrix}
\right ] +
\left [
\begin{matrix}
0 \\ 0 \\ 1
\end{matrix}
\right ] \cdot
r
$$
While I could combine the different transform/separation blocks, that gets cumbersome with more intricate models. These blocks allow knowns and unknowns to be separated.
$$
\bar{\bar{x}} \cdot
\bar{\bar{T_c}} \cdot
\left (
\bar{\bar{S_u}} \cdot \bar{c_u} + \bar{\bar{S_k}} \cdot \bar{c_k}
\right ) =
\bar{y}
\\
\bar{\bar{v}} = \bar{\bar{x}} \cdot \bar{\bar{T_c}} \cdot \bar{\bar{S_u}}
\\
\bar{w} = \bar{y} - \bar{\bar{x}} \cdot \bar{\bar{T_c}} \cdot \bar{\bar{S_k}} \cdot \bar{c_k}
$$
Finally the problem is in a familiar form.
$$
\bar{\bar{v}} \cdot \bar{c_u} = \bar{w}
$$
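A hedged R sketch of this construction (matrix names follow the answer's notation; it assumes a three-coefficient model whose coefficients must sum to $r = 1$):

```r
set.seed(1)
X  <- matrix(runif(300), ncol = 3)
Y  <- X %*% c(0.2, 0.3, 0.5) + rnorm(100, sd = 0.2)
Tc <- rbind(c(1, 0, 0),
            c(0, 1, 0),
            c(-1, -1, 1))                  # b = Tc %*% c, with c = (k0, k1, r)
Su <- rbind(c(1, 0), c(0, 1), c(0, 0))     # selects the unknowns (k0, k1)
Sk <- matrix(c(0, 0, 1), ncol = 1)         # selects the known r
r  <- 1                                    # the constraint: coefficients sum to 1
V  <- X %*% Tc %*% Su                      # multiplier of the unknowns
w  <- Y - X %*% Tc %*% Sk %*% r            # known terms moved to the other side
cu <- qr.solve(V, w)                       # least-squares solution of V cu = w
b  <- Tc %*% (Su %*% cu + Sk %*% r)        # back-transform; sum(b) == 1
```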
|
How do I fit a constrained regression in R so that coefficients total = 1?
|
As I understand your model, you're seeking to find
$$
\bar{\bar{x}} \cdot \bar{b} = \bar{y}
$$
such that
$$
\sum \left [ \begin{matrix} \bar{b} \end{matrix} \right ] =1
$$
I've found the easiest wa
|
How do I fit a constrained regression in R so that coefficients total = 1?
As I understand your model, you're seeking to find
$$
\bar{\bar{x}} \cdot \bar{b} = \bar{y}
$$
such that
$$
\sum \left [ \begin{matrix} \bar{b} \end{matrix} \right ] =1
$$
I've found the easiest way to treat these sorts of problems is to use matrices' associative properties to treat $\bar{b}$ as a function of other variables.
E.g. $\bar{b}$ is a function of $\bar{c}$ via the transform block $\bar{\bar{T_c}}$. In your case, $r$ below is $1$.
$$
\bar{b} = \left [
\begin{matrix}
k_0 \\
k_1 \\
k_2
\end{matrix}
\right ] =
\bar{\bar{T_c}} \cdot \bar{c} =
\left [
\begin{matrix}
1 & 0 & 0 \\
0 & 1 & 0 \\
-1 & -1 & 1
\end{matrix}
\right ] \cdot
\left[
\begin{matrix}
k_0 \\
k_1 \\
r
\end{matrix}
\right ]
$$
Here we can separate our $k$nowns and $u$nknowns.
$$
\bar{c} =
\left[
\begin{matrix}
k_0 \\
k_1 \\
r
\end{matrix}
\right ] =
\bar{\bar{S_u}} \cdot
\bar{c_u} +
\bar{\bar{S_k}} \cdot
\bar{c_k} =
\left[
\begin{matrix}
1 & 0 \\
0 & 1 \\
0 & 0
\end{matrix}
\right] \cdot
\left [
\begin{matrix}
k_0 \\
k_1
\end{matrix}
\right ] +
\left [
\begin{matrix}
0 \\ 0 \\ 1
\end{matrix}
\right ] \cdot
r
$$
While I could combine the different transform/separation blocks, that gets cumbersome with more intricate models. These blocks allow knowns and unknowns to be separated.
$$
\bar{\bar{x}} \cdot
\bar{\bar{T_c}} \cdot
\left (
\bar{\bar{S_u}} \cdot \bar{c_u} + \bar{\bar{S_k}} \cdot \bar{c_k}
\right ) =
\bar{y}
\\
\bar{\bar{v}} = \bar{\bar{x}} \cdot \bar{\bar{T_c}} \cdot \bar{\bar{S_u}}
\\
\bar{w} = \bar{y} - \bar{\bar{x}} \cdot \bar{\bar{T_c}} \cdot \bar{\bar{S_k}} \cdot \bar{c_k}
$$
Finally the problem is in a familiar form.
$$
\bar{\bar{v}} \cdot \bar{c_u} = \bar{w}
$$
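A minimal base-R sketch of the blocks above (names mirror the text; the design matrix and response are simulated purely for illustration):

```r
# Enforce sum(b) = 1 by reparameterizing b = T_c %*% c and solving v %*% c_u = w.
set.seed(1)
x <- matrix(rnorm(50 * 3), 50, 3)
y <- drop(x %*% c(0.2, 0.3, 0.5) + rnorm(50, sd = 0.1))

T_c <- rbind(c(1, 0, 0), c(0, 1, 0), c(-1, -1, 1))  # b = T_c %*% c
S_u <- rbind(c(1, 0), c(0, 1), c(0, 0))             # selects unknowns k0, k1
S_k <- c(0, 0, 1)                                   # selects the known r

v <- x %*% T_c %*% S_u
w <- y - drop(x %*% T_c %*% S_k) * 1                # c_k = r = 1
c_u <- coef(lm(w ~ v - 1))                          # least squares, no intercept

b <- drop(T_c %*% c(c_u, 1))                        # recover the full coefficients
sum(b)                                              # 1, up to floating-point error
```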
|
How do I fit a constrained regression in R so that coefficients total = 1?
As I understand your model, you're seeking to find
$$
\bar{\bar{x}} \cdot \bar{b} = \bar{y}
$$
such that
$$
\sum \left [ \begin{matrix} \bar{b} \end{matrix} \right ] =1
$$
I've found the easiest wa
|
5,859
|
How do I fit a constrained regression in R so that coefficients total = 1?
|
Using matrix algebra it is possible to write the following formula if you want to relax the non-negative coefficients constraint:
$\beta=(X^{T}X)^{-1}X^{T}y+(X^{T}X)^{-1}\mathbf{1}\left[\frac{1-\mathbf{1}^{T}(X^{T}X)^{-1}X^{T}y}{\mathbf{1}^{T}(X^{T}X)^{-1}\mathbf{1}}\right].$
This might be helpful in case of a need for a quick, simple and exact solution.
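As a quick check of this closed form, here is a base-R sketch (variable names are my own; `X` and `y` are simulated, with no separate intercept column):

```r
# Constrained least squares with sum(beta) = 1, via the closed-form correction
# of the unconstrained OLS estimate.
set.seed(1)
X <- matrix(rnorm(100 * 3), 100, 3)
y <- drop(X %*% c(0.2, 0.3, 0.5) + rnorm(100, sd = 0.1))

XtX_inv <- solve(crossprod(X))            # (X'X)^{-1}
b_ols   <- XtX_inv %*% crossprod(X, y)    # unconstrained OLS coefficients
ones    <- rep(1, ncol(X))
adj     <- (1 - sum(b_ols)) / drop(t(ones) %*% XtX_inv %*% ones)
b_con   <- b_ols + XtX_inv %*% ones * adj

sum(b_con)                                # 1, up to floating-point error
```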
|
5,860
|
References for survival analysis
|
I like:
Survival Analysis: Techniques for Censored and Truncated Data (Klein & Moeschberger)
Modeling Survival Data: Extending the Cox Model (Therneau)
The first does a good job of straddling theory and model building issues. It's mostly focused on semi-parametric techniques, but there is reasonable coverage of parametric methods. It doesn't really provide any R or other code examples, if that's what you're after.
The second is heavy with modeling on the Cox PH side (as the title might indicate). It's by the author of the survival package in R and there are plenty of R examples and mini-case studies. I think both books complement each other, but I'd recommend the first for getting started.
A quick way to get started in R is David Diez's guide.
|
5,861
|
References for survival analysis
|
For a very clear, succinct and applied approach, I highly recommend Event History Modeling by Box-Steffensmeier and Jones
|
5,862
|
References for survival analysis
|
"Survival analysis using SAS: a practical guide" by Paul D. Allison provides a good guide to the connection between the math and SAS code - how to think about your information, how to code, how to interpret results. Even if you are using R, there will be parallels that could prove useful.
|
5,863
|
References for survival analysis
|
David Collett. Modelling Survival Data in Medical Research, Second Edition. Chapman & Hall/CRC. 2003. ISBN 978-1584883258
Software section focuses on SAS not R though.
|
5,864
|
References for survival analysis
|
Take a look at the course page for Sociology 761: Statistical Applications in Social Research. Professor John Fox at McMaster University has course notes on survival analysis as well as an example R script and several data files.
For another perspective, see Models for Quantifying Risk, 3/e, the standard textbook for actuarial exam 3/MLC. The bulk of the book, chapters 3-10, covers survival-contingent payment models.
|
5,865
|
References for survival analysis
|
I learned from Hosmer, Lemeshow & May "Applied Survival Analysis: Regression Modeling of Time-to-Event Data" (2nd ed., 2008), which covers the basics. It also helped that I found a really cheap copy...
|
5,866
|
References for survival analysis
|
Survival Analysis: A Self-Learning Text
by Kleinbaum and Klein
is pretty good. It depends on what you want. This is more of a non-technical introduction. It's focused on practical applications and minimizes the mathematics. Pedagogically, it's also intended for learning outside of the classroom.
|
5,867
|
References for survival analysis
|
I found "Analysis of survival data" by Cox and Oakes (Chapman and Hall Monographs on Statistics and Applied Probability - vol. 21) to be very readable and informative. No material on survival analysis in R though.
|
5,868
|
References for survival analysis
|
Dirk F. Moore, Applied Survival Analysis Using R.
|
5,869
|
References for survival analysis
|
The Sage book Introducing Survival and Event History Analysis by Melinda Mills has been built for an R users' audience.
|
5,870
|
References for survival analysis
|
I'm surprised no one has mentioned it, but there is a book that exactly meets your specifications:
Tableman & Kim. Survival Analysis using S. Chapman & Hall/CRC.
|
5,871
|
References for survival analysis
|
For survival analysis with R, see Event History Analysis with R by Broström, which has a lot of R examples of survival analysis on historical demographic data.
|
5,872
|
References for survival analysis
|
The textbook we used is called
Applied Survival Analysis by David W. Hosmer
This book is from a biostat perspective, and I found it covered almost everything I used in my work. They also have R/Stata/SAS code on their website for the examples in the book.
|
5,873
|
References for survival analysis
|
The book "Survival Analysis, Techniques for Censored and Truncated Data" written by Klein & Moeschberger (2003) is always the 1st reference I would recommend for the people who are interested in learning, practicing and studying survival analysis. This book not only provides comprehensive discussions to the problems we will face when analyzing the time-to-event data, with lots of examples of variety, and useful techniques we can apply to correct the "bias" induced from the above problems, but also prepares tons of practical notes and theoretical notes to lead us to the front door of the beautiful applications and methodologies in survival analysis.
The 2nd book I would recommend is "The Statistical Analysis of Failure Time Data" by Kalbfleisch & Prentice (2002). Both professors are masters in this challenging field, and in this book they lecture not-so-trivial concepts in a very clear way and derive lots of state-of-the-art techniques at that time, with their guidance we are well prepared to step into the abundant world of survival analysis.
If we really spend quality time to study these two books, we can acquire lots of fundamental and deep knowledge to analyze censored and/or truncated data, which will cause seriously biased conclusions if we just ignore these problems inherently almost everywhere in real-world applications. Enjoy reading.
|
5,874
|
References for survival analysis
|
Tutz & Schmid "Modeling Discrete Time-to-Event Data" (2016). It is fairly terse, dry and technical but has only 200+ pages and contains some references to R packages and functions. The authors suggest in the preface that while most of the textbooks focus on continuous time, this one focuses on discrete time. An advantage of discrete time modelling is amenability to smoothing and regularization methods that have proliferated in recent decades.
|
5,875
|
Using LASSO from lars (or glmnet) package in R for variable selection
|
Using glmnet is really easy once you get the grasp of it thanks to its excellent vignette in http://web.stanford.edu/~hastie/glmnet/glmnet_alpha.html (you can also check the CRAN package page).
As for the best lambda for glmnet, the rule of thumb is to use
cvfit <- glmnet::cv.glmnet(x, y)
coef(cvfit, s = "lambda.1se")
instead of lambda.min.
To do the same for lars you have to do it by hand. Here is my solution
cv <- lars::cv.lars(x, y, plot.it = FALSE, mode = "step")
idx <- which.max(cv$cv - cv$cv.error <= min(cv$cv))
coef(lars::lars(x, y))[idx,]
Bear in mind that this is not exactly the same, because this is stopping at a lasso knot (when a variable enters) instead of at any point.
Please note that glmnet is the preferred package now, it is actively maintained, more so than lars, and that there have been questions about glmnet vs lars answered before (algorithms used differ).
As for your question of using lasso to choose variables and then fit OLS, it is an ongoing debate. Google for OLS post Lasso and there are some papers discussing the topic. Even the authors of Elements of Statistical Learning admit it is possible.
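To make the idea concrete, here is a hedged base-R sketch of the refit step. The selected index set is hard-coded purely for illustration; in practice it would come from the nonzero coefficients of the lasso fit:

```r
# "OLS post Lasso": refit the lasso-selected columns by ordinary least squares.
set.seed(1)
x <- matrix(rnorm(100 * 10), 100, 10)
y <- drop(x[, 1:3] %*% c(2, -1, 0.5) + rnorm(100))

sel <- c(1, 2, 3)                         # stand-in for the lasso's active set
refit <- lm(y ~ x[, sel, drop = FALSE])   # unpenalized refit on those columns
length(coef(refit))                       # 4: intercept plus one slope per column
```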
Edit: Here is the code to reproduce more accurately in lars what glmnet does
cv <- lars::cv.lars(x, y, plot.it = FALSE)
ideal_l1_ratio <- cv$index[which.max(cv$cv - cv$cv.error <= min(cv$cv))]
obj <- lars::lars(x, y)
scaled_coefs <- scale(obj$beta, FALSE, 1 / obj$normx)
l1 <- apply(X = scaled_coefs, MARGIN = 1, FUN = function(x) sum(abs(x)))
coef(obj)[which.max(l1 / tail(l1, 1) > ideal_l1_ratio),]
|
5,876
|
Using LASSO from lars (or glmnet) package in R for variable selection
|
I'm returning to this question from a while ago since I think I've found the correct solution.
Here's a replica using the mtcars dataset:
library(glmnet)
`%ni%` <- Negate(`%in%`)
data(mtcars)
x<-model.matrix(mpg~.,data=mtcars)
x=x[,-1]
glmnet1<-cv.glmnet(x=x,y=mtcars$mpg,type.measure='mse',nfolds=5,alpha=.5)
c<-coef(glmnet1,s='lambda.min',exact=TRUE)
inds<-which(c!=0)
variables<-row.names(c)[inds]
variables<-variables[variables %ni% '(Intercept)']
'variables' gives you the list of variables in the best solution.
|
5,877
|
Using LASSO from lars (or glmnet) package in R for variable selection
|
Perhaps the comparison with forward stepwise selection will help (see the following link to a site by one of the authors: http://www-stat.stanford.edu/~tibs/lasso/simple.html). This is the approach used in Chapter 3.4.4 of The Elements of Statistical Learning (available online for free). I thought Chapter 3.6 of that book helped in understanding the relationship between least squares, best subset, and the lasso (plus a couple of other procedures). I also find it helpful to take the transpose of the coefficient matrix, t(coef(model)), and write.csv it, so that I can open it in Excel along with a copy of plot(model) on the side. You might want to sort by the last column, which contains the least squares estimate. Then you can see clearly how each variable gets added at each piecewise step and how the coefficients change as a result. Of course this is not the whole story, but hopefully it will be a start.
|
5,878
|
Using LASSO from lars (or glmnet) package in R for variable selection
|
lars and glmnet operate on raw matrices. To include interaction terms, you will have to construct the matrices yourself. That means one column per interaction (which is per level, per factor, if you have factors). Look into lm() to see how it does it (warning: there be dragons).
To make an interaction term manually, you could (but maybe shouldn't, because it's slow) do something like:
int = D["x1"] * D["x2"]   # element-wise product of the two columns
names(int) = c("x1*x2")   # name the new column
D = cbind(D, int)
Then to use this in lars (assuming you have a y kicking around):
lars(as.matrix(D), as.matrix(y))
I wish I could help you more with the other questions. I found this one because lars is giving me grief and the documentation in it and on the web is very thin.
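As an alternative to building columns by hand, one option (my suggestion, not part of the original answer) is to let model.matrix() expand the interactions and pass the resulting matrix to lars:

```r
# model.matrix() builds interaction (and factor dummy) columns automatically
# from a formula; drop the intercept column before handing the matrix to lars.
set.seed(1)
D <- data.frame(x1 = rnorm(20), x2 = rnorm(20))
X <- model.matrix(~ x1 * x2, data = D)[, -1]
colnames(X)  # "x1" "x2" "x1:x2"
```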
|
5,879
|
Using LASSO from lars (or glmnet) package in R for variable selection
|
LARS solves the ENTIRE solution path. The solution path is piecewise linear -- there are a finite number of "notch" points (i.e., values of the regularization parameter) at which the solution changes.
So the matrix of solutions you're getting is all the possible solutions. In the list that it returns, it should also give you the values of the regularization parameter.
|
5,880
|
Intuition behind tensor product interactions in GAMs (MGCV package in R)
|
I'll (try to) answer this in three steps: first, let's identify exactly what we mean by a univariate smooth. Next, we will describe a multivariate smooth (specifically, a smooth of two variables). Finally, I'll make my best attempt at describing a tensor product smooth.
1) Univariate smooth
Let's say we have some response data $y$ that we conjecture is an unknown function $f$ of a predictor variable $x$ plus some error $ε$. The model would be:
$$y=f(x)+ε$$
Now, in order to fit this model, we have to identify the functional form of $f$. The way we do this is by identifying basis functions, which are superposed in order to represent the function $f$ in its entirety. A very simple example is linear regression, in which the basis functions are just $1$ (the intercept) and $x$, with coefficients $β_1$ and $β_2$. Applying the basis expansion, we have
$$y=β_1+β_2x+ε$$
In matrix form, we would have:
$$Y=Xβ+ε$$
Where $Y$ is an n-by-1 column vector, $X$ is an n-by-2 model matrix, $β$ is a 2-by-1 column vector of model coefficients, and $ε$ is an n-by-1 column vector of errors. $X$ has two columns because there are two terms in our basis expansion: the linear term and the intercept.
The same principle applies for basis expansion in MGCV, although the basis functions are much more sophisticated. Specifically, individual basis functions need not be defined over the full domain of the independent variable $x$. Such is often the case when using knot-based bases (see "knot based example"). The model is then represented as the sum of the basis functions, each of which is evaluated at every value of the independent variable. However, as I mentioned, some of these basis functions take on a value of zero outside of a given interval and thus do not contribute to the basis expansion outside of that interval. As an example, consider a cubic spline basis in which each basis function is symmetric about a different value (knot) of the independent variable -- in other words, every basis function looks the same but is just shifted along the axis of the independent variable (this is an oversimplification, as any practical basis will also include an intercept and a linear term, but hopefully you get the idea).
To be explicit, a basis expansion of dimension $i-2$ could look like:
$$y=β_1+β_2x+β_3f_1(x)+β_4f_2(x)+...+β_if_{i-2} (x)+ε$$
where each function $f$ is, perhaps, a cubic function of the independent variable $x$.
The matrix equation $Y=Xβ+ε$ can still be used to represent our model. The only difference is that $X$ is now an n-by-i matrix; that is, it has a column for every term in the basis expansion (including the intercept and linear term). Since the process of basis expansion has allowed us to represent the model in the form of a matrix equation, we can use linear least squares to fit the model and find the coefficients $β$.
This is an example of unpenalized regression, and one of the main strengths of MGCV is its smoothness estimation via a penalty matrix and smoothing parameter. In other words, instead of:
$$β=(X^TX)^{-1}X^TY$$
we have:
$$β=(X^TX+λS)^{-1}X^TY$$
where $S$ is a quadratic $i$-by-$i$ penalty matrix and $λ$ is a scalar smoothing parameter. I will not go into the specification of the penalty matrix here, but it should suffice to say that for any given basis expansion of some independent variable and definition of a quadratic "wiggliness" penalty (for example, a second-derivative penalty), one can calculate the penalty matrix $S$.
MGCV can use various means of estimating the optimal smoothing parameter $λ$. I will not go into that subject since my goal here was to give a broad overview of how a univariate smooth is constructed, which I believe I have done.
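The penalized fit above is easy to sketch numerically. The following is a minimal NumPy illustration of $β=(X^TX+λS)^{-1}X^TY$; the truncated-cubic basis and the second-difference penalty on the coefficients are simple stand-ins for mgcv's actual bases and integrated second-derivative penalties, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 12                     # observations, basis dimension
x = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, n)

# Illustrative basis: intercept, linear term, and truncated cubics
knots = np.linspace(0, 1, k - 1)[:-1]          # knots at 0, 0.1, ..., 0.9
X = np.column_stack([np.ones(n), x] +
                    [np.clip(x - t, 0, None) ** 3 for t in knots])

# Quadratic "wiggliness" penalty: squared second differences of coefficients
D = np.diff(np.eye(X.shape[1]), n=2, axis=0)
S = D.T @ D                                    # k-by-k penalty matrix

lam = 1.0                                      # smoothing parameter
beta = np.linalg.solve(X.T @ X + lam * S, X.T @ y)   # (X'X + λS)^{-1} X'y
fitted = X @ beta
```

Larger `lam` shrinks the fit toward the (unpenalized) intercept-plus-linear part of the basis.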
2) Multivariate smooth
The above explanation can be generalized to multiple dimensions. Let's go back to our model that gives the response $y$ as a function $f$ of predictors $x$ and $z$. The restriction to two independent variables will prevent cluttering the explanation with arcane notation. The model is then:
$$y=f(x,z)+ε$$
Now, it should be intuitively obvious that we are going to represent $f(x,z)$ with a basis expansion (that is, a superposition of basis functions) just like we did in the univariate case of $f(x)$ above. It should also be obvious that at least one, and almost certainly many more, of these basis functions must be functions of both $x$ and $z$ (if this was not the case, then implicitly $f$ would be separable such that $f(x,z)=f_x(x)+f_z(z)$). A visual illustration of a multidimensional spline basis can be found here. A full two dimensional basis expansion of dimension $i-3$ could look something like:
$$y=β_1+β_2x+β_3z+β_4f_1(x,z)+...+β_if_{i-3} (x,z)+ε$$
I think it's pretty clear that we can still represent this in matrix form with:
$$Y=Xβ+ε$$
by simply evaluating each basis function at every unique combination of $x$ and $z$. The solution is still:
$$β=(X^TX)^{-1}X^TY$$
Computing the second derivative penalty matrix is very much the same as in the univariate case, except that instead of integrating the second derivative of each basis function with respect to a single variable, we integrate the sum of all second derivatives (including partials) with respect to all independent variables. The details of the foregoing are not especially important: the point is that we can still construct penalty matrix $S$ and use the same method to get the optimal value of smoothing parameter $λ$, and given that smoothing parameter, the vector of coefficients is still:
$$β=(X^TX+λS)^{-1}X^TY$$
Now, this two-dimensional smooth has an isotropic penalty: this means that a single value of $λ$ applies in both directions. This works fine when both $x$ and $z$ are on approximately the same scale, such as a spatial application. But what if we replace spatial variable $z$ with temporal variable $t$? The units of $t$ may be much larger or smaller than the units of $x$, and this can throw off the integration of our second derivatives because some of those derivatives will contribute disproportionately to the overall integration (for example, if we measure $t$ in nanoseconds and $x$ in light years, the integral of the second derivative with respect to $t$ may be vastly larger than the integral of the second derivative with respect to $x$, and thus "wiggliness" along the $x$ direction may go largely unpenalized). Slide 15 of the "smooth toolbox" I linked has more detail on this topic.
It is worth noting that we did not decompose the basis functions into marginal bases of $x$ and $z$. The implication here is that multivariate smooths must be constructed from bases supporting multiple variables. Tensor product smooths support construction of multivariate bases from univariate marginal bases, as I explain below.
3) Tensor product smooths
Tensor product smooths address the issue of modeling responses to interactions of multiple inputs with different units. Let's suppose we have a response $y$ that is a function $f$ of spatial variable $x$ and temporal variable $t$. Our model is then:
$$y=f(x,t)+ε$$
What we'd like to do is construct a two-dimensional basis for the variables $x$ and $t$. This will be a lot easier if we can represent $f$ as:
$$f(x,t)=f_x(x)f_t(t)$$
In an algebraic / analytical sense, this is not necessarily possible. But remember, we are discretizing the domains of $x$ and $t$ (imagine a two-dimensional "lattice" defined by the locations of knots on the $x$ and $t$ axes) such that the "true" function $f$ is represented by the superposition of basis functions. Just as we assumed that a very complex univariate function may be approximated by a simple cubic function on a specific interval of its domain, we may assume that the non-separable function $f(x,t)$ may be approximated by the product of simpler functions $f_x(x)$ and $f_t(t)$ on an interval—provided that our choice of basis dimensions makes those intervals sufficiently small!
Our basis expansion, given an $i$-dimensional basis in $x$ and $j$-dimensional basis in $t$, would then look like:
\begin{align}
y = &β_{1} + β_{2}x + β_{3}f_{x1}(x)+β_{4}f_{x2}(x)+...+ \\
&β_{i}f_{x(i-3)}(x)+ β_{i+1}t + β_{i+2}tx + β_{i+3}tf_{x1}(x)+β_{i+4}tf_{x2}(x)+...+ \\
&β_{2i}tf_{x(i-3)}(x)+ β_{2i+1}f_{t1}(t) + β_{2i+2}f_{t1}(t)x + β_{2i+3}f_{t1}(t)f_{x1}(x)+β_{2i+4}f_{t1}(t)f_{x2}(x){\small +...+} \\
&β_{3i}f_{t1}(t)f_{x(i-3)}(x)+\ldots+ \\
&β_{ij}f_{t(j-3)}(t)f_{x(i-3)}(x) + ε
\end{align}
Which may be interpreted as a tensor product. Imagine that we evaluated each basis function in $x$ and $t$, thereby constructing n-by-i and n-by-j model matrices $X$ and $T$, respectively. We could then compute the $n^2$-by-$ij$ tensor product $X \otimes T$ of these two model matrices and reorganize into columns, such that each column represents a unique pairing of one $x$ basis function with one $t$ basis function. Recall that the marginal model matrices had $i$ and $j$ columns, respectively. These values correspond to their respective basis dimensions. Our new two-variable basis should then have dimension $ij$, and therefore the same number of columns in its model matrix.
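As a concrete sketch (NumPy rather than R, with random numbers standing in for evaluated marginal bases): in practice one only needs the rows of the tensor product where the same observation is paired with itself, i.e. the row-wise Kronecker product, which gives the n-by-$ij$ model matrix directly:

```python
import numpy as np

rng = np.random.default_rng(1)
n, i, j = 50, 4, 3
X = rng.normal(size=(n, i))   # marginal model matrix in x (stand-in values)
T = rng.normal(size=(n, j))   # marginal model matrix in t (stand-in values)

# Row-wise Kronecker product: row r of XT equals np.kron(X[r], T[r]),
# one column per (x-basis, t-basis) pair -> an n-by-(i*j) model matrix
XT = np.einsum('na,nb->nab', X, T).reshape(n, i * j)

print(XT.shape)  # (50, 12)
```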
NOTE: I'd like to point out that since we explicitly constructed the tensor product basis functions by taking products of marginal basis functions, tensor product bases may be constructed from marginal bases of any type. They need not support more than one variable, unlike the multivariate smooth discussed above.
In reality, this process results in an overall basis expansion of dimension $ij-i-j+1$ because the full multiplication includes multiplying every $t$ basis function by the x-intercept $β_{x1}$ (so we subtract $j$) as well as multiplying every $x$ basis function by the t-intercept $β_{t1}$ (so we subtract $i$), but we must add the intercept back in by itself (so we add 1). This is known as applying an identifiability constraint.
So we can represent this as:
$$y=β_1+β_2x+β_3t+β_4f_1(x,t)+β_5f_2(x,t)+...+β_{ij-i-j+1}f_{ij-i-j-2}(x,t)+ε$$
Where each of the multivariate basis functions $f$ is the product of a pair of marginal $x$ and $t$ basis functions. Again, it's pretty clear having constructed this basis that we can still represent this with the matrix equation:
$$Y=Xβ+ε$$
Which (still) has the solution:
$$β=(X^TX)^{-1}X^TY$$
Where the model matrix $X$ has $ij-i-j+1$ columns. As for the penalty matrices $J_x$ and $J_t$, these are constructed separately for each independent variable as follows:
$$J_x=β^T (I_j \otimes S_x) β$$
and,
$$J_t=β^T (S_t \otimes I_i) β$$
This allows for an overall anisotropic (different in each direction) penalty (Note: the penalties on the second derivative of $x$ are added up at each knot on the $t$ axis, and vice versa). The smoothing parameters $λ_x$ and $λ_t$ may now be estimated in much the same way as the single smoothing parameter was for the univariate and multivariate smooths. The result is that the overall shape of a tensor product smooth is invariant to rescaling of its independent variables.
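The Kronecker construction of the two penalty matrices above can be sketched as follows (NumPy, with small second-difference marginal penalties as illustrative stand-ins; note the ordering of the Kronecker factors must match the column ordering of the tensor-product model matrix):

```python
import numpy as np

i, j = 4, 3  # marginal basis dimensions in x and t

# Illustrative marginal penalty matrices (second-difference penalties)
Dx = np.diff(np.eye(i), n=2, axis=0)
Sx = Dx.T @ Dx                      # i-by-i penalty for the x margin
Dt = np.diff(np.eye(j), n=2, axis=0)
St = Dt.T @ Dt                      # j-by-j penalty for the t margin

# Expand each marginal penalty to the ij-dimensional coefficient space
Px = np.kron(np.eye(j), Sx)         # I_j (x) S_x: penalizes wiggliness in x
Pt = np.kron(St, np.eye(i))         # S_t (x) I_i: penalizes wiggliness in t

# Each direction gets its own smoothing parameter -> anisotropic penalty
lam_x, lam_t = 0.5, 2.0
P = lam_x * Px + lam_t * Pt         # ij-by-ij overall penalty matrix
```

`P` would then replace $λS$ in the penalized least-squares solve shown earlier.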
I recommend reading all the vignettes on the MGCV website, as well as "Generalized Additive Models: An Introduction with R." Long live Simon Wood.
|
5,881
|
Reviewing statistics in papers [closed]
|
I am not sure about which area of science you are referring to (I'm sure the answer would be really different if dealing with biology vs physics for instance...)
Anyway, as a biologist, I will answer from a "biological" point of view:
How much effort should we put in to understand the application area?
I tend at least to read the previous papers from the same authors and look for a few reviews on the subject if I am not too familiar with it. This is especially true when dealing with new techniques I don't know, because I need to understand whether they did all the proper controls etc.
How much time should I spend on a report?
As much as needed (OK, dumb answer, I know! :P)
In general I would not want someone reviewing my paper to do a cursory job just because they have other things to do, so I try not to do that myself.
How picky are you when looking at figures/tables.
Quite picky. Figures are the first thing you look at when browsing through a paper. They need to be consistent (e.g. right titles on the axes, correct legend etc.). On occasion I have suggested to use a different kind of plot to show data when I thought the one used was not the best. This happens a lot in biology, a field that is dominated by the "barplot +/- SEM" type of graph.
I'm also quite picky on the "materials and methods" section: a perfect statistical analysis on an inherently wrong biological model is completely useless.
How do you cope with the data not being available.
You just do, and trust the authors, I guess. In many cases in biology there's not much you can do, especially when dealing with things like imaging or animal behaviour and similar. Unless you want people to publish tons of images, videos etc. (which you most likely would not go through anyway), but that may be very impractical. If you think the data are really necessary, ask the authors to provide them as supplementary data/figures.
Do you try and rerun the analysis used.
Only if I have serious doubts on the conclusions drawn by the authors.
In biology there's often a difference between what is (or not) "statistically significant" and what is "biologically significant". I prefer a thinner statistical analysis with good biological reasoning than the other way around. But again, in the very unlikely event that I were to review a bio-statistics paper (ahah, that would be some fun!!!) I would probably pay much more attention to the stats than to the biology in there.
|
5,882
|
Reviewing statistics in papers [closed]
|
This addresses the new question #6: "What's the maximum number of papers you would review in a year?" I'm responding as a member of several editorial boards. The perennial problem is finding enough reviewers. Depending on the journal, every submitted paper needs one to three peer reviewers, usually three. If the journal has an $x$% acceptance rate, then the mean number of reviews per accepted paper obviously is around $3/(x/100)$. E.g., if the acceptance rate is 33%, the editors need to obtain nine reviews for every paper published. If you, as an author, take this seriously, then you should attempt to provide nine reviews (or whatever the number turns out to be for your target journals) for every paper you publish!
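The back-of-the-envelope arithmetic here is easy to sketch (Python; the function name is purely illustrative):

```python
def reviews_per_published_paper(acceptance_rate_pct, reviewers_per_submission=3):
    """Mean number of reviews editors must solicit for every accepted paper."""
    return reviewers_per_submission / (acceptance_rate_pct / 100)

# At a 33% acceptance rate with three reviewers per submission,
# roughly nine reviews are needed per published paper.
print(round(reviews_per_published_paper(33)))  # 9
```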
I was moved to write this due to the strong parallel with voting on this site: in order for you to garner a reputation of $r$, other people have to upvote some combination of $r/10$ of your answers and $r/5$ of your questions. Thus, if you're pulling your weight, a check of your profile should show at least $r/10$ upvotes. That is the case for many but certainly not all of the most active members of this site. Something to think about... Remember to vote!
|
5,883
|
Reviewing statistics in papers [closed]
|
My POV would be reviewing a paper in psychology or forecasting on its statistical merits. I'll mostly second Nico's very good remarks.
How much effort should we put in to understand the application area?
Quite a lot, actually. I wouldn't trust myself to comment on more than the most basic statistical problems without having understood the area. Fortunately, this is often not very hard in many branches of psychology.
How much time should I spend on a report?
I'll go out on a limb and state a specific time: I'll spend anything between two and eight hours on a review, sometimes more. If I find that I'm spending more than a day on a paper, it probably means that I'm really not qualified to understand it, so I'll recommend the journal find someone else (and try to suggest some people).
How picky are you when looking at figures/tables.
Very picky indeed. The figures are going to be what people remember of a paper and what ends up in lecture presentations without much context, so these really need to be done well.
How do you cope with the data not being available.
In psychology, the data are usually not shared - measuring 50 people by MRI is very expensive, and the authors will want to use these data for further papers, so I kind of understand their reluctance to just give out the data. So anyone who does share their data gets a big bonus in my book, but not sharing is understandable.
In forecasting, many datasets are publicly available. In this case I usually recommend that the authors share their code (and do so myself).
Do you try and rerun the analysis used.
Without the data, there is only so much one can learn from this. I'll play around with simulated data if something is very surprising about the paper's results; otherwise one can often tell appropriate from inappropriate methods without the data (once one understands the area, see above).
What's the maximum number of papers
you would review in a year?
There is really little to add to whuber's point above - assuming that every paper I (co-)submit, with on average n coauthors, gets 3 reviews, one should really aim at reviewing at least 3/(n+1) papers for each of one's own submissions (counting submissions rather than papers, since a paper may be rejected and resubmitted). And of course, the number of submissions as well as the number of coauthors varies strongly with the discipline.
|
5,884
|
What are the differences between hidden Markov models and neural networks?
|
What is hidden and what is observed
The thing that is hidden in a hidden Markov model is the same as the thing that is hidden in a discrete mixture model, so for clarity, forget about the hidden state's dynamics and stick with a finite mixture model as an example. The 'state' in this model is the identity of the component that caused each observation. In this class of model such causes are never observed, so 'hidden cause' is translated statistically into the claim that the observed data have marginal dependencies which are removed when the source component is known. And the source components are estimated to be whatever makes this statistical relationship true.
The thing that is hidden in a feedforward multilayer neural network with sigmoid middle units is the states of those units, not the outputs which are the target of inference. When the output of the network is a classification, i.e., a probability distribution over possible output categories, these hidden units values define a space within which categories are separable. The trick in learning such a model is to make a hidden space (by adjusting the mapping out of the input units) within which the problem is linear. Consequently, non-linear decision boundaries are possible from the system as a whole.
Generative versus discriminative
The mixture model (and HMM) is a model of the data generating process, sometimes called a likelihood or 'forward model'. When coupled with some assumptions about the prior probabilities of each state you can infer a distribution over possible values of the hidden state using Bayes theorem (a generative approach). Note that, while called a 'prior', both the prior and the parameters in the likelihood are usually learned from data.
In contrast to the mixture model (and HMM) the neural network learns a posterior distribution over the output categories directly (a discriminative approach). This is possible because the output values were observed during estimation. And since they were observed, it is not necessary to construct a posterior distribution from a prior and a specific model for the likelihood such as a mixture. The posterior is learnt directly from data, which is more efficient and less model dependent.
Mix and match
To make things more confusing, these approaches can be mixed together, e.g. when mixture model (or HMM) state is sometimes actually observed. When that is true, and in some other circumstances not relevant here, it is possible to train discriminatively in an otherwise generative model. Similarly it is possible to replace the mixture model mapping of an HMM with a more flexible forward model, e.g., a neural network.
The questions
So it's not quite true that both models predict hidden state. HMMs can be used to predict hidden state, albeit only of the kind that the forward model is expecting. Neural networks can be used to predict a not yet observed state, e.g. future states for which predictors are available. This sort of state is not hidden in principle, it just hasn't been observed yet.
When would you use one rather than the other? Well, neural networks make rather awkward time series models in my experience. They also assume you have observed output. HMMs don't but you don't really have any control of what the hidden state actually is. Nevertheless they are proper time series models.
|
5,885
|
What are the differences between hidden Markov models and neural networks?
|
Hidden Markov Models can be used to generate a language, that is, to list elements from a family of strings. For example, if you have an HMM that models a set of sequences, you would be able to generate members of this family by listing sequences that would fall into the group of sequences we are modelling.
Neural networks take an input from a high-dimensional space and simply map it to a lower-dimensional space (the way a neural network maps this input depends on its training, its topology and other factors). For example, you might take a 64-bit image of a number and map it to a true / false value that describes whether this number is 1 or 0.
Whilst both methods are able to (or can at least try to) discriminate whether an item is a member of a class or not, Neural Networks cannot generate a language as described above.
There are alternatives to Hidden Markov Models available; for example, you might be able to use a more general Bayesian Network, a different topology or a Stochastic Context-Free Grammar (SCFG) if you believe that the problem lies in the HMM's lack of power to model your problem - that is, if you need an algorithm that is able to discriminate between more complex hypotheses and/or describe the behaviour of data that is much more complex.
|
5,886
|
What are the differences between hidden Markov models and neural networks?
|
The best answer to this question that I have found is this: Is deep learning a Markov chain in disguise? This is exactly what I understood, but since there was already a discussion elsewhere on the Internet, I am putting the link here.
Markov chains model:
$$p(x_1....x_n) = p(x_1)p(x_2 | x_1)p(x_3 | x_2) ...$$
RNNs attempt to model:
$$p(x_1 .... x_n) = p(x_1)p(x_2 | x_1)p(x_3 | x_2, x_1)p(x_4 | x_3, x_2, x_1) ... $$
We can use a character sequence as the input instead of a single character. This way, we can capture the state better (depending on the context).
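To make the contrast concrete, here is a toy sketch (my own, not from the linked discussion): a first-order Markov model conditions each character only on its predecessor, while a full-history model conditions on the entire prefix, so the two can disagree even after the same final character.

```python
# Toy illustration (mine): first-order Markov vs. full-history conditioning.
from collections import Counter, defaultdict

corpus = "abracadabra"

# First-order Markov model: estimate p(x_t | x_{t-1}) from bigram counts.
bigrams = Counter(zip(corpus, corpus[1:]))
prev_counts = Counter(corpus[:-1])

def p_markov(nxt, prev):
    return bigrams[(prev, nxt)] / prev_counts[prev]

# Full-history model: estimate p(x_t | x_1..x_{t-1}) from whole-prefix counts
# (on one short string this just memorizes; the point is the conditioning set).
prefix_next = defaultdict(Counter)
for t in range(1, len(corpus)):
    prefix_next[corpus[:t]][corpus[t]] += 1

def p_full(nxt, prefix):
    counts = prefix_next[prefix]
    return counts[nxt] / sum(counts.values())

# Both queries end in "a", yet the probabilities differ:
print(p_markov("b", "a"))   # 0.5 - two of the four non-final a's precede b
print(p_full("b", "a"))     # 1.0 - the prefix "a" was always followed by b
```

The Markov model pools all occurrences of the last character, while the full-history model distinguishes prefixes; an RNN approximates the latter with a fixed-size hidden state.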
|
5,887
|
PCA and Correspondence analysis in their relation to Biplot
|
SVD
Singular-value decomposition is at the root of the three kindred techniques. Let $\bf X$ be an $r \times c$ table of real values. SVD is $\bf X = U_{r\times r}S_{r\times c}V_{c\times c}'$. We may use just the $m$ $[m \le\min(r,c)]$ first latent vectors and roots to obtain $\bf X_{(m)}$ as the best $m$-rank approximation of $\bf X$: $\bf X_{(m)} = U_{r\times m}S_{m\times m}V_{c\times m}'$. Further, we'll notate $\bf U=U_{r\times m}$, $\bf V=V_{c\times m}$, $\bf S=S_{m\times m}$.
Singular values $\bf S$ and their squares, the eigenvalues, represent scale, also called inertia, of the data. Left eigenvectors $\bf U$ are the coordinates of the rows of the data onto the $m$ principal axes; while right eigenvectors $\bf V$ are the coordinates of the columns of the data onto those same latent axes. The entire scale (inertia) is stored in $\bf S$ and so the coordinates $\bf U$ and $\bf V$ are unit-normalized (column SS=1).
Principal Component Analysis by SVD
In PCA, it is agreed upon to consider rows of $\bf X$ as random observations (which can come or go), but to consider columns of $\bf X$ as a fixed number of dimensions or variables. Hence it is appropriate and convenient to remove the effect of the number of rows (and only rows) on the results, particularly on the eigenvalues, by svd-decomposing $\mathbf Z=\mathbf X/\sqrt{r}$ instead of $\bf X$. Note that this corresponds to eigen-decomposition of $\mathbf {X'X}/r$, $r$ being the sample size $n$. (Often, mostly with covariances - to make them unbiased - we'll prefer to divide by $r-1$, but it is a nuance.)
The multiplication of $\bf X$ by a constant affected only $\bf S$; $\bf U$ and $\bf V$ remain to be the unit-normalized coordinates of rows and of columns.
From here and everywhere below we redefine $\bf S$, $\bf U$ and $\bf V$ as given by svd of $\bf Z$, not of $\bf X$; $\bf Z$ being a normalized version of $\bf X$, and the normalization varies between types of analysis.
By computing $\mathbf U_* = \mathbf U\sqrt{r}$ we bring the mean square in the columns of $\bf U_*$ to 1. Given that rows are random cases to us, it is logical. We've thus obtained what is called in PCA standard or standardized principal component scores of observations, $\bf U_*$. We do not do the same thing with $\bf V$ because variables are fixed entities.
We then can confer rows with all the inertia, to obtain unstandardized row coordinates, also called in PCA raw principal component scores of observations: $\bf U_*S$. This formula we'll call "direct way". The same result is returned by $\bf XV$; we'll label it "indirect way".
Analogously, we can confer columns with all the inertia, to obtain unstandardized column coordinates, also called in PCA the component-variable loadings: $\bf VS'$ [may ignore transpose if $\bf S$ is square], - the "direct way". The same result is returned by $\bf Z'U$, - the "indirect way". (The above standardized principal component scores can also be computed from the loadings as $\bf X(AS^{-2})$, where $\bf A$ are the loadings and $\bf S^2$ is the diagonal matrix of eigenvalues.)
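The PCA-by-SVD computations above can be checked numerically with a minimal numpy sketch (my own, with a random table standing in for $\bf X$): the direct and indirect ways coincide, and the standardized scores have column mean square 1.

```python
# Minimal numpy sketch (mine) of PCA-by-SVD as described above.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))      # r=20 observations, c=5 variables
r, c = X.shape
m = 3                                  # keep m latent vectors and roots

Z = X / np.sqrt(r)                     # PCA normalization: by rows only
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
U, S, V = U[:, :m], np.diag(s[:m]), Vt[:m].T

U_star = U * np.sqrt(r)                # standardized component scores

raw_direct    = U_star @ S             # raw scores, "direct way"
raw_indirect  = X @ V                  # raw scores, "indirect way"
load_direct   = V @ S                  # loadings, "direct way"
load_indirect = Z.T @ U                # loadings, "indirect way"
```

Both pairs of matrices agree to machine precision, which is exactly the equivalence of the direct and indirect formulas.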
Biplot
Consider biplot in a sense of a dimensionality reduction analysis on its own, not simply as "a dual scatterplot". This analysis is very similar to PCA. Unlike PCA, both rows and columns are treated, symmetrically, as random observations, which means that $\bf X$ is being seen as a random two-way table of varying dimensionality. Then, naturally, normalize it by both $r$ and $c$ before svd: $\mathbf Z=\mathbf X/\sqrt{rc}$.
After svd, compute standard row coordinates as we did it in PCA: $\mathbf U_*=\mathbf U\sqrt{r}$. Do the same thing (unlike PCA) with column vectors, to obtain standard column coordinates: $\mathbf V_*=\mathbf V\sqrt{c}$. Standard coordinates, both of rows and of columns, have mean square 1.
We may confer rows and/or columns coordinates with inertia of eigenvalues like we do it in PCA. Unstandardized row coordinates: $\bf U_*S$ (direct way). Unstandardized column coordinates: $\bf V_*S'$ (direct way). What about the indirect way? You can easily deduce by substitutions that the indirect formula for the unstandardized row coordinates is $\mathbf {XV_*}/c$, and for the unstandardized column coordinates is $\mathbf {X'U_*}/r$.
PCA as a particular case of Biplot. From the above descriptions you probably learned that PCA and biplot differ only in how they normalize $\bf X$ into $\bf Z$ which is then decomposed. Biplot normalizes by both the number of rows and the number of columns; PCA normalizes only by the number of rows. Consequently, there is a little difference between the two in the post-svd computations. If in doing biplot you set $c=1$ in its formulas you will get exactly PCA results. Thus, biplot can be seen as a generic method and PCA as a particular case of biplot.
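The biplot computations, and the PCA-as-particular-case claim, can be verified with a short numpy sketch (mine; the data are arbitrary random numbers):

```python
# Sketch (mine) of biplot sensu stricto, and of PCA as its particular case.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((15, 4))
r, c = X.shape

Z = X / np.sqrt(r * c)                 # biplot: normalize by both dimensions
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
V = Vt.T

U_star = U * np.sqrt(r)                # standard row coordinates
V_star = V * np.sqrt(c)                # standard column coordinates
S = np.diag(s)

rows_direct, rows_indirect = U_star @ S, X @ V_star / c
cols_direct, cols_indirect = V_star @ S, X.T @ U_star / r

# PCA on the same data (normalize by r only). Its singular values are the
# biplot's times sqrt(c), i.e. PCA eigenvalues = biplot eigenvalues * c.
# (Only eigenvalues are compared: eigenvector signs may flip between SVDs.)
s_pca = np.linalg.svd(X / np.sqrt(r), compute_uv=False)
```

Setting $c=1$ in the biplot formulas would reproduce the PCA results exactly, as the text states; the eigenvalue scaling above is the general form of that relationship.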
[Column centering. Some user may say: Stop, but doesn't PCA require also and first of all the centering of the data columns (variables) in order for it to explain variance, while biplot may not do the centering? My answer: only PCA-in-narrow-sense does the centering and explains variance; I'm discussing linear PCA-in-general-sense, a PCA which explains some sort of sum of squared deviations from the origin chosen; you might choose it to be the data mean, the native 0 or whatever you like. Thus, the "centering" operation isn't what could distinguish PCA from biplot.]
Passive rows and columns
In biplot or PCA, you can set some rows and/or columns to be passive, or supplementary. Passive row or column does not influence the SVD and therefore does not influence the inertia or the coordinates of other rows/columns, but receives its coordinates in the space of principal axes produced by the active (not passive) rows/columns.
To set some points (rows/columns) to be passive, (1) define $r$ and $c$ be the number of active rows and columns only. (2) Set to zero passive rows and columns in $\bf Z$ before svd. (3) Use the "indirect" ways to compute coordinates of passive rows/columns, since their eigenvector values will be zero.
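Steps (1)-(3) can be sketched as follows (my own illustration; the last row of a random table is made passive):

```python
# Sketch (mine): a passive row in the biplot of a random table.
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((11, 4))
r, c = 10, 4                     # (1) r and c count active rows/columns only

Z = X / np.sqrt(r * c)           # biplot normalization with active counts
Z[-1] = 0.0                      # (2) zero the passive row before svd
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
V = Vt.T
U_star = U * np.sqrt(r)
V_star = V * np.sqrt(c)

# (3) The indirect way yields principal coordinates for every row, the
# passive one included; the passive row's eigenvector values in U are zero,
# so the direct way would give nothing for it.
rows = X @ V_star / c
```

For the active rows, `rows` agrees with the direct computation `U_star @ diag(s)`; the last row receives its coordinates purely by projection onto the axes built from the active points.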
In PCA, when you compute component scores for new incoming cases with the help of loadings obtained on old observations (using the score coefficient matrix), you are actually doing the same thing as taking these new cases in PCA and keeping them passive. Similarly, computing correlations/covariances of some external variables with the component scores produced by a PCA is equivalent to taking those variables in that PCA and keeping them passive.
Arbitrary spreading of inertia
The column mean squares (MS) of standard coordinates are 1. The column mean squares (MS) of unstandardized coordinates are equal to the inertia of the respective principal axes: all the inertia of eigenvalues was donated to eigenvectors to produce the unstandardized coordinates.
In biplot: row standard coordinates $\bf U_*$ have MS=1 for each principal axis. Row unstandardized coordinates, also called row principal coordinates $\mathbf {U_*S} = \mathbf {XV_*}/c$ have MS = corresponding eigenvalue of $\bf Z$. The same is true for column standard and unstandardized (principal) coordinates.
Generally, it is not required that one endows coordinates with inertia either in full or in none. Arbitrary spreading is allowed, if needed for some reason. Let $p_1$ be the proportion of inertia which is to go to rows. Then the general formula of row coordinates is: $\bf U_*S^{p1}$ (direct way) = $\mathbf {XV_*S^{p1-1}}/c$ (indirect way). If $p_1=0$ we get standard row coordinates, whereas with $p_1=1$ we get principal row coordinates.
Likewise $p_2$ be the proportion of inertia which is to go to columns. Then the general formula of column coordinates is: $\bf V_*S^{p2}$ (direct way) = $\mathbf {X'U_*S^{p2-1}}/r$ (indirect way). If $p_2=0$ we get standard column coordinates, whereas with $p_2=1$ we get principal column coordinates.
The general indirect formulas are universal in that they allow to compute coordinates (standard, principal or in-between) also for the passive points, if there are any.
If $p_1+p_2=1$ they say the inertia is distributed between row and column points. The $p_1=1,p_2=0$, i.e. row-principal-column-standard, biplots are sometimes called "form biplots" or "row-metric preservation" biplots.
The $p_1=0,p_2=1$, i.e. row-standard-column-principal, biplots are often called within PCA literature "covariance biplots" or "column-metric preservation" biplots; they display variable loadings (which are juxtaposed to covariances) plus standardized component scores, when applied within PCA.
In correspondence analysis, $p_1=p_2=1/2$ is often used and is called "symmetric" or "canonical" normalization by inertia - it allows one (albeit at some expense of euclidean geometric strictness) to compare proximity between row and column points, like we can do on a multidimensional unfolding map.
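A sketch (mine) of the general spreading formulas: for any $p_1$ the direct and indirect ways agree, and $p_1=0$, $p_1=1$ recover the standard and principal row coordinates.

```python
# Sketch (mine) of spreading inertia by an arbitrary proportion p1.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((12, 5))
r, c = X.shape

Z = X / np.sqrt(r * c)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
V = Vt.T
U_star, V_star = U * np.sqrt(r), V * np.sqrt(c)

def row_coords(p1):
    """Row coordinates with proportion p1 of inertia; direct == indirect."""
    direct = U_star @ np.diag(s ** p1)
    indirect = X @ V_star @ np.diag(s ** (p1 - 1)) / c
    assert np.allclose(direct, indirect)
    return direct

symmetric = row_coords(0.5)      # the "symmetric"/"canonical" CA choice
```

The analogous function for columns would use `V_star` with `X.T @ U_star @ np.diag(s ** (p2 - 1)) / r` as the indirect way.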
Correspondence Analysis (Euclidean model)
Two-way (=simple) correspondence analysis (CA) is a biplot used to analyze a two-way contingency table, that is, a non-negative table whose entries bear the meaning of some sort of affinity between a row and a column. When the table contains frequencies, the chi-square model correspondence analysis is used. When the entries are, say, means or other scores, the simpler Euclidean model CA is used.
Euclidean model CA is just the biplot described above, only that the table $\bf X$ is additionally preprocessed before it enters the biplot operations. In particular, the values are normalized not only by $r$ and $c$ but also by the total sum $N$.
The preprocessing consists of centering, then normalizing by the mean mass. Centering can be various, most often: (1) centering of columns; (2) centering of rows; (3) two-way centering, which is the same operation as computation of frequency residuals; (4) centering of columns after equalizing column sums; (5) centering of rows after equalizing row sums. Normalizing by the mean mass is dividing by the mean cell value of the initial table. At the preprocessing step, passive rows/columns, if any exist, are standardized passively: they are centered/normalized by the values computed from active rows/columns.
Then the usual biplot is done on the preprocessed $\bf X$, starting from $\mathbf Z=\mathbf X/\sqrt{rc}$.
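A sketch (mine) of this preprocessing with two-way centering, on the first four rows of the illustration table further below; after two-way centering both margins are zero, so the last singular value vanishes, as noted later for this centering variant.

```python
# Sketch (mine) of Euclidean-model CA preprocessing, two-way centering variant.
import numpy as np

T = np.array([[6., 8., 6., 2., 9., 9.],
              [0., 3., 8., 5., 1., 3.],
              [2., 3., 9., 2., 4., 7.],
              [2., 4., 2., 2., 7., 7.]])
mean_mass = T.mean()

# Two-way centering (= computing "frequency residuals"):
Xc = T - T.mean(axis=0) - T.mean(axis=1, keepdims=True) + mean_mass
X = Xc / mean_mass               # normalize by the mean mass

r, c = X.shape
Z = X / np.sqrt(r * c)           # then the usual biplot takes over
s = np.linalg.svd(Z, compute_uv=False)
```

Both row and column sums of `X` are zero after two-way centering, which is what forces one eigenvalue to zero and caps the dimensionality at $\min(r-1,c-1)$.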
Weighted Biplot
Imagine that the activity or importance of a row or a column can be any number between 0 and 1, and not only 0 (passive) or 1 (active) as in the classic biplot discussed so far. We could weight the input data by these row and column weights and perform weighted biplot. With weighted biplot, the greater the weight, the more influential that row or column is regarding all the results - the inertia and the coordinates of all the points onto the principal axes.
The user supplies row weights and column weights. These and those are first normalized separately to sum to 1. Then the normalization step is $\mathbf{Z_{ij} = X_{ij}}\sqrt{w_i w_j}$, with $w_i$ and $w_j$ being the weights for row i and column j. Exactly zero weight designates the row or the column to be passive.
At that point we may discover that classic biplot is simply this weighted biplot with equal weights $1/r$ for all active rows and equal weights $1/c$ for all active columns; $r$ and $c$ the numbers of active rows and active columns.
Perform svd of $\bf Z$. All operations are the same as in classic biplot, the only difference being that $w_i$ is in place of $1/r$ and $w_j$ is in place of $1/c$. Standard row coordinates: $\mathbf {U_{*i}=U_i}/\sqrt{w_i}$ and standard column coordinates: $\mathbf {V_{*j}=V_j}/\sqrt{w_j}$. (These are for rows/columns with nonzero weight. Leave values as 0 for those with zero weight and use the indirect formulas below to obtain standard or whatever coordinates for them.)
Give inertia to coordinates in the proportion you want (with $p_1=1$ and $p_2=1$ the coordinates will be fully unstandardized, or principal; with $p_1=0$ and $p_2=0$ they will stay standard). Rows: $\bf U_*S^{p1}$ (direct way) = $\bf X[Wj]V_*S^{p1-1}$ (indirect way). Columns: $\bf V_*S^{p2}$ (direct way) = $\bf ([Wi]X)'U_*S^{p2-1}$ (indirect way). Matrices in brackets here are the diagonal matrices of the column and the row weights, respectively. For passive points (that is, with zero weights) only the indirect way of computation is suited. For active (positive weights) points you may go either way.
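The weighted biplot computations can be sketched as follows (mine; the weights are arbitrary positive numbers normalized to sum to 1):

```python
# Sketch (mine) of the weighted biplot with arbitrary positive weights.
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((10, 4))
r, c = X.shape
wi = rng.uniform(0.5, 1.5, size=r); wi /= wi.sum()   # row weights, sum 1
wj = rng.uniform(0.5, 1.5, size=c); wj /= wj.sum()   # column weights, sum 1

Z = X * np.sqrt(np.outer(wi, wj))      # Z_ij = X_ij * sqrt(w_i * w_j)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
V = Vt.T
U_star = U / np.sqrt(wi)[:, None]      # standard row coordinates
V_star = V / np.sqrt(wj)[:, None]      # standard column coordinates

# Principal coordinates (p1 = p2 = 1), direct and indirect ways:
rows_direct, rows_indirect = U_star @ np.diag(s), X @ np.diag(wj) @ V_star
cols_direct, cols_indirect = V_star @ np.diag(s), (np.diag(wi) @ X).T @ U_star
```

With `wi` set to `1/r` everywhere and `wj` to `1/c`, `Z` becomes `X / sqrt(r*c)` and all formulas collapse to the classic biplot. Note also that the weighted mean square of each standard-coordinate column is 1, the weighted analogue of the MS=1 property.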
PCA as a particular case of Biplot revisited. When considering unweighted biplot earlier I mentioned that PCA and biplot are equivalent, the only difference being that biplot sees columns (variables) of the data as random cases symmetrically to observations (rows). Having extended now biplot to more general weighted biplot we may once again claim it, observing that the only difference is that (weighted) biplot normalizes the sum of column weights of input data to 1, and (weighted) PCA - to the number of (active) columns. So here is the weighted PCA introduced. Its results are proportionally identical to those of weighted biplot. Specifically, if $c$ is the number of active columns, then the following relationships are true, for weighted as well as classic versions of the two analyses:
eigenvalues of PCA = eigenvalues of biplot $\cdot c$;
loadings = column coordinates under "principal normalization" of columns;
standardized component scores = row coordinates under "standard normalization" of rows;
eigenvectors of PCA = column coordinates under "standard normalization" of columns $/ \sqrt c$;
raw component scores = row coordinates under "principal normalization" of rows $\cdot \sqrt c$.
Correspondence Analysis (Chi-square model)
This is technically a weighted biplot where the weights are computed from the table itself rather than supplied by the user. It is used mostly to analyze frequency cross-tables. This biplot will approximate, by euclidean distances on the plot, chi-square distances in the table. Chi-square distance is mathematically the euclidean distance inversely weighted by the marginal totals. I will not go further into the details of chi-square model CA geometry.
The preprocessing of frequency table $\bf X$ is as follows: divide each frequency by the expected frequency, then subtract 1. It is the same as to first obtain the frequency residual and then to divide by the expected frequency. Set row weights to $w_i=R_i/N$ and column weights to $w_j=C_j/N$, where $R_i$ is the marginal sum of row i (active columns only), $C_j$ is the marginal sum of column j (active rows only), $N$ is the table total active sum (the three numbers come from the initial table).
Then do weighted biplot: (1) Normalize $\bf X$ into $\bf Z$. (2) The weights are never zero (zero $R_i$ and $C_j$ are not allowed in CA); however you can force rows/columns to become passive by zeroing them in $\bf Z$, so their weights are ineffective at svd. (3) Do svd. (4) Compute standard and inertia-vested coordinates as in weighted biplot.
In chi-square model CA, as well as in Euclidean model CA using two-way centering, one last eigenvalue is always 0, so the maximal possible number of principal dimensions is $\min(r-1,c-1)$.
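A sketch (mine) of chi-square model CA on a small made-up frequency table. Two known properties serve as checks: the sum of eigenvalues equals the table's chi-square statistic divided by $N$, and the last eigenvalue is zero.

```python
# Sketch (mine) of chi-square model CA as a weighted biplot.
import numpy as np

F = np.array([[20.,  5., 10.],
              [10., 15.,  5.],
              [ 5., 10., 20.]])
N = F.sum()
R, C = F.sum(axis=1), F.sum(axis=0)   # marginal totals
E = np.outer(R, C) / N                 # expected frequencies
X = F / E - 1.0                        # divide by expected, subtract 1
wi, wj = R / N, C / N                  # row and column masses as weights

Z = X * np.sqrt(np.outer(wi, wj))
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
V = Vt.T

# Symmetric ("canonical") spreading of inertia: p1 = p2 = 1/2.
rows = (U / np.sqrt(wi)[:, None]) @ np.diag(np.sqrt(s))
cols = (V / np.sqrt(wj)[:, None]) @ np.diag(np.sqrt(s))
```

The entries of `Z` work out to the familiar standardized residuals $(F_{ij}-E_{ij})/\sqrt{R_iC_j}$, which is why the total inertia equals $\chi^2/N$.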
See also a nice overview of chi-square model CA in this answer.
Illustrations
Here is some data table.
row A B C D E F
1 6 8 6 2 9 9
2 0 3 8 5 1 3
3 2 3 9 2 4 7
4 2 4 2 2 7 7
5 6 9 9 3 9 6
6 6 4 7 5 5 8
7 7 9 6 6 4 8
8 4 4 8 5 3 7
9 4 6 7 3 3 7
10 1 5 4 5 3 6
11 1 5 6 4 8 3
12 0 6 7 5 3 1
13 6 9 6 3 5 4
14 1 6 4 7 8 4
15 1 1 5 2 4 3
16 8 9 7 5 5 9
17 2 7 1 3 4 4
18 5 3 3 9 6 4
19 6 7 6 2 9 6
20 10 7 4 4 8 7
Several dual scatterplots (in 2 first principal dimensions) built on analyses of these values follow. Column points are connected with the origin by spikes for visual emphasis. There were no passive rows or columns in these analyses.
The first biplot is SVD results of the data table analyzed "as is"; the coordinates are the row and the column eigenvectors.
Below is one of the possible biplots coming from PCA. PCA was done on the data "as is", without centering the columns; however, as is adopted in PCA, normalization by the number of rows (the number of cases) was done initially. This specific biplot displays principal row coordinates (i.e. raw component scores) and principal column coordinates (i.e. variable loadings).
Next is biplot sensu stricto: The table was initially normalized both by the number of rows and the number of columns. Principal normalization (inertia spreading) was used for both row and column coordinates - as with PCA above. Note the similarity with the PCA biplot: the only difference is due to the difference in the initial normalization.
Chi-square model correspondence analysis biplot. The data table was preprocessed in the special manner described above: two-way centering and normalization using the marginal totals. It is a weighted biplot. Inertia was spread over the row and the column coordinates symmetrically - both are halfway between "principal" and "standard" coordinates.
The coordinates displayed on all these scatterplots (the coordinate pairs _1 to _4 correspond, in order, to the four analyses above: raw SVD, PCA, biplot, chi-square CA):
point dim1_1 dim2_1 dim1_2 dim2_2 dim1_3 dim2_3 dim1_4 dim2_4
1 .290 .247 16.871 3.048 6.887 1.244 -.479 -.101
2 .141 -.509 8.222 -6.284 3.356 -2.565 1.460 -.413
3 .198 -.282 11.504 -3.486 4.696 -1.423 .414 -.820
4 .175 .178 10.156 2.202 4.146 .899 -.421 .339
5 .303 .045 17.610 .550 7.189 .224 -.171 -.090
6 .245 -.054 14.226 -.665 5.808 -.272 -.061 -.319
7 .280 .051 16.306 .631 6.657 .258 -.180 -.112
8 .218 -.248 12.688 -3.065 5.180 -1.251 .322 -.480
9 .216 -.105 12.557 -1.300 5.126 -.531 .036 -.533
10 .171 -.157 9.921 -1.934 4.050 -.789 .433 .187
11 .194 -.137 11.282 -1.689 4.606 -.690 .384 .535
12 .157 -.384 9.117 -4.746 3.722 -1.938 1.121 .304
13 .235 .099 13.676 1.219 5.583 .498 -.295 -.072
14 .210 -.105 12.228 -1.295 4.992 -.529 .399 .962
15 .115 -.163 6.677 -2.013 2.726 -.822 .517 -.227
16 .304 .103 17.656 1.269 7.208 .518 -.289 -.257
17 .151 .147 8.771 1.814 3.581 .741 -.316 .670
18 .198 -.026 11.509 -.324 4.699 -.132 .137 .776
19 .259 .213 15.058 2.631 6.147 1.074 -.459 .005
20 .278 .414 16.159 5.112 6.597 2.087 -.753 .040
A .337 .534 4.387 1.475 4.387 1.475 -.865 -.289
B .461 .156 5.998 .430 5.998 .430 -.127 .186
C .441 -.666 5.741 -1.840 5.741 -1.840 .635 -.563
D .306 -.394 3.976 -1.087 3.976 -1.087 .656 .571
E .427 .289 5.556 .797 5.556 .797 -.230 .518
F .451 .087 5.860 .240 5.860 .240 -.176 -.325
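The remark that PCA and biplot differ only in the initial normalization can be made exact with a short numpy check (random data; names are mine): under principal normalization the column coordinates of the two analyses coincide, while the row coordinates differ by the factor $\sqrt c$. This is consistent with the _2 and _3 coordinate pairs in the table above, where the column points have identical values and the row values differ by $\sqrt 6 \approx 2.449$.

```python
import numpy as np

rng = np.random.default_rng(0)
r, c = 20, 6
X = rng.random((r, c))

# PCA: decompose X / sqrt(r); principal coordinates on both sides
U, s, Vt = np.linalg.svd(X / np.sqrt(r), full_matrices=False)
pca_rows = (U * np.sqrt(r)) * s      # raw component scores, U* S
pca_cols = Vt.T * s                  # variable loadings, V S

# Biplot: decompose X / sqrt(r c); principal coordinates on both sides
Ub, sb, Vbt = np.linalg.svd(X / np.sqrt(r * c), full_matrices=False)
bi_rows = (Ub * np.sqrt(r)) * sb     # principal row coordinates, U* S
bi_cols = (Vbt.T * np.sqrt(c)) * sb  # principal column coordinates, V* S

# Compare absolute values, since the sign of each svd axis is arbitrary
assert np.allclose(np.abs(pca_cols), np.abs(bi_cols))
assert np.allclose(np.abs(pca_rows), np.sqrt(c) * np.abs(bi_rows))
```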
PCA and Correspondence analysis in their relation to Biplot
Are pooling layers added before or after dropout layers?
Edit: As @Toke Faurby correctly pointed out, the default implementation in tensorflow actually uses an element-wise dropout. What I described earlier applies to a specific variant of dropout in CNNs, called spatial dropout:
In a CNN, each neuron produces one feature map. Since spatial dropout works per-neuron, dropping a neuron means that the corresponding feature map is dropped - i.e. every position takes the same value (usually 0). So each feature map is either fully dropped or not dropped at all.
Pooling usually operates separately on each feature map, so it should not make any difference whether you apply spatial dropout before or after pooling. At least this is the case for pooling operations like max-pooling or averaging.
Edit: However, if you actually use element-wise dropout (which seems to be the default in tensorflow), it does make a difference whether you apply dropout before or after pooling. However, there is not necessarily a wrong way of doing it. Consider the average pooling operation: if you apply dropout before pooling, you effectively scale the resulting neuron activations by 1.0 - dropout_probability, but most neurons will be non-zero (in general). If you apply dropout after average pooling, you generally end up with a fraction of (1.0 - dropout_probability) non-zero "unscaled" neuron activations and a fraction of dropout_probability zero neurons. Both seem viable to me; neither is outright wrong.
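A small numpy sketch makes the point concrete (the 2x2 max-pool helper and the fixed masks are mine, standing in for random dropout masks): dropping whole feature maps commutes with per-map max-pooling, whereas an element-wise mask that hits the per-window maxima changes the pooled output.

```python
import numpy as np

def max_pool_2x2(x):
    # x: (channels, H, W) with even H, W; per-channel 2x2 max pooling
    ch, h, w = x.shape
    return x.reshape(ch, h // 2, 2, w // 2, 2).max(axis=(2, 4))

x = np.arange(2 * 4 * 4, dtype=float).reshape(2, 4, 4)  # toy activations

# Spatial dropout: drop whole channels -> order does not matter
chan_mask = np.array([1.0, 0.0])[:, None, None]   # keep channel 0, drop channel 1
before = max_pool_2x2(x * chan_mask)              # dropout, then pool
after = max_pool_2x2(x) * chan_mask               # pool, then dropout
assert np.array_equal(before, after)

# Element-wise dropout: zeroing each window's maximum changes the result
elem_mask = np.ones_like(x)
elem_mask[:, 1::2, 1::2] = 0.0                    # positions of the window maxima here
assert not np.array_equal(max_pool_2x2(x * elem_mask), max_pool_2x2(x))
```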
This tutorial uses pooling before dropout and gets good results.
That doesn't necessarily mean the other order doesn't work of course. My experience is limited, I've only used them on dense layers without pooling.
Example of VGG-like convnet from Keras (dropout used after pooling):
import numpy as np
import keras
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras.optimizers import SGD
# Generate dummy data
x_train = np.random.random((100, 100, 100, 3))
y_train = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)
x_test = np.random.random((20, 100, 100, 3))
y_test = keras.utils.to_categorical(np.random.randint(10, size=(20, 1)), num_classes=10)
model = Sequential()
# input: 100x100 images with 3 channels -> (100, 100, 3) tensors.
# this applies 32 convolution filters of size 3x3 each.
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(100, 100, 3)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(10, activation='softmax'))
sgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy', optimizer=sgd)
model.fit(x_train, y_train, batch_size=32, epochs=10)
score = model.evaluate(x_test, y_test, batch_size=32)
How to set up neural network to output ordinal data?
I think the approach to only encode the ordinal labels as
class 1 is represented as [0 0 0 0 ...]
class 2 is represented as [1 0 0 0 ...]
class 3 is represented as [1 1 0 0 ...]
and use binary cross-entropy as the loss function is suboptimal. As mentioned in the comments, it might happen that the predicted vector is for example [1 0 1 0 ...]. This is undesirable for making predictions.
The paper Rank-consistent ordinal regression for neural networks describes how to restrict the neural network to make rank-consistent predictions. You have to make sure that the last layer shares its weights across the output units but gives each unit its own bias. You can implement this in Tensorflow by adding the following as the last part of the network (credit to https://stackoverflow.com/questions/59656313/how-to-share-weights-and-not-biases-in-keras-dense-layers):
import tensorflow as tf
from tensorflow import keras

class BiasLayer(tf.keras.layers.Layer):
    """A layer that only adds a trainable per-unit bias to its input."""
    def __init__(self, units, *args, **kwargs):
        super(BiasLayer, self).__init__(*args, **kwargs)
        self.bias = self.add_weight('bias',
                                    shape=[units],
                                    initializer='zeros',
                                    trainable=True)

    def call(self, x):
        return x + self.bias

# Add the following as the output of the Sequential model
model.add(keras.layers.Dense(1, use_bias=False))  # one shared weight column
model.add(BiasLayer(4))                           # K-1 = 4 separate biases
model.add(keras.layers.Activation("sigmoid"))
Note that the number of ordinal classes here is 5, hence the $K-1$ biases.
I tested the difference in performance on actual data, and the predictive accuracy improved substantially. Hope this helps.
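To see why weight sharing with separate biases produces rank-consistent outputs, here is a small numpy illustration (names are mine; it assumes the biases are ordered, which the CORAL paper argues holds for the trained model): with one shared slope and decreasing biases, the $K-1$ sigmoid outputs are non-increasing, so every thresholded prediction has the form [1, ..., 1, 0, ..., 0].

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 1))             # one shared weight column (Dense(1))
b = np.sort(rng.normal(size=4))[::-1]   # K-1 = 4 biases, assumed decreasing
x = rng.normal(size=(100, 8))           # a batch of feature vectors

logits = x @ W + b                      # (100, 4): shared slope, per-unit bias
probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid outputs
pred = probs > 0.5

# Each row is of the form [1,...,1,0,...,0]: the predictions are rank-consistent
assert np.all(pred[:, :-1] >= pred[:, 1:])
```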
Update: Meanwhile, I delved more into the topic and even wrote a package implementing many ordinal losses from the literature. It includes the loss I mention here (ordinal encoding) but many others as well.
I believe what most people do is to simply treat ordinal classification as a generic multi-class classification. So, if they have $K$ classes, they will have $K$ outputs, and simply use cross-entropy as the loss.
But some people have managed to invent a clever encoding for your ordinal classes (see this stackoverflow answer). It's a sort of one-hot encoding,
class 1 is represented as [0 0 0 0 ...]
class 2 is represented as [1 0 0 0 ...]
class 3 is represented as [1 1 0 0 ...]
i.e. each neuron is predicting the cumulative probability $P(\hat y > k)$. You still have to use a sigmoid as the activation function, but I think this helps the network pick up on the continuity between classes. Afterwards, you do a post-processing step (np.sum) to convert the binary output into your classes.
This strategy resembles the ensemble from Frank and Hall, which I believe was the first publication of this approach.
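The encoding and the np.sum post-processing can be sketched in a few lines of numpy (the function names here are my own, for illustration):

```python
import numpy as np

def ordinal_encode(y, num_classes):
    """Encode integer labels 1..K as cumulative binary vectors of length K-1.

    class 1 -> [0 0 0 0], class 2 -> [1 0 0 0], class 3 -> [1 1 0 0], ...
    """
    y = np.asarray(y)
    thresholds = np.arange(1, num_classes)               # 1 .. K-1
    return (y[:, None] > thresholds[None, :]).astype(float)

def ordinal_decode(probs, threshold=0.5):
    """Turn the K-1 sigmoid outputs back into a class label via np.sum."""
    return 1 + np.sum(np.asarray(probs) > threshold, axis=1)

targets = ordinal_encode([1, 2, 3, 5], num_classes=5)
print(targets[1])               # [1. 0. 0. 0.]
print(ordinal_decode(targets))  # [1 2 3 5]
```

The same decode step works directly on the network's sigmoid outputs, since thresholding at 0.5 turns them into the binary pattern above.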
|
5,893
|
Are smaller p-values more convincing?
|
Are smaller $p$-values "more convincing"? Yes, of course they are.
In the Fisher framework, $p$-value is a quantification of the amount of evidence against the null hypothesis. The evidence can be more or less convincing; the smaller the $p$-value, the more convincing it is. Note that in any given experiment with fixed sample size $n$, the $p$-value is monotonically related to the effect size, as @Scortchi nicely points out in his answer (+1). So smaller $p$-values correspond to larger effect sizes; of course they are more convincing!
In the Neyman-Pearson framework, the goal is to obtain a binary decision: either the evidence is "significant" or it is not. By choosing the threshold $\alpha$, we guarantee that the false positive rate will not exceed $\alpha$. Note that different people can have different $\alpha$ in mind when looking at the same data; perhaps when I read a paper from a field that I am skeptical about, I would not personally consider as "significant" results with e.g. $p=0.03$ even though the authors do call them significant. My personal $\alpha$ might be set to $0.001$ or something. Obviously the lower the reported $p$-value, the more skeptical readers it will be able to convince! Hence, again, lower $p$-values are more convincing.
The currently standard practice is to combine the Fisher and Neyman-Pearson approaches: if $p<\alpha$, then the results are called "significant" and the $p$-value is [exactly or approximately] reported and used as a measure of convincingness (by marking it with stars, using expressions such as "highly significant", etc.); if $p>\alpha$, then the results are called "not significant" and that's it.
This is usually referred to as a "hybrid approach", and indeed it is hybrid. Some people argue that this hybrid is incoherent; I tend to disagree. Why would it be invalid to do two valid things at the same time?
Further reading:
Is the "hybrid" between Fisher and Neyman-Pearson approaches to statistical testing really an "incoherent mishmash"? -- my question about the "hybrid". It generated some discussion, but I am still not satisfied with any of the answers, and plan to get back to that thread at some point.
Is it wrong to refer to results as being "highly significant"? -- see my answer from yesterday, which essentially says: it isn't wrong (but perhaps a bit sloppy).
Why are lower p-values not more evidence against the null? Arguments from Johansson 2011 -- an example of an anti-Fisher paper arguing that $p$-values do not provide evidence against the null; the top answer by @Momo does a good job in debunking the arguments. My answer to the title question is: But of course they are.
|
5,894
|
Are smaller p-values more convincing?
|
I don't know what's meant by smaller p-values being "better", or by us being "more confident in" them. But regarding p-values as a measure of how surprised we should be by the data, if we believed the null hypothesis, seems reasonable enough; the p-value is a monotonic function of the test statistic you've chosen to measure discrepancy with the null hypothesis in a direction you're interested in, calibrating it with respect to its properties under a relevant procedure of sampling from a population or random assignment of experimental treatments. "Significance" has become a technical term to refer to p-values' being either above or below some specified value; thus even those with no interest in specifying significance levels & accepting or rejecting hypotheses tend to avoid phrases such as "highly significant"—mere adherence to convention.
Regarding the dependence of p-values on sample size & effect size, perhaps some confusion arises because e.g. it might seem that 474 heads out of 1000 tosses should be less surprising than 2 out of 10 to someone who thinks the coin is fair—after all the sample proportion only deviates a little from 50% in the former case—yet the p-values are about the same. But true or false don't admit of degrees; the p-value's doing what's asked of it: often confidence intervals for a parameter are really what's wanted to assess how precisely an effect's been measured, & the practical or theoretical importance of its estimated magnitude.
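That coin-tossing claim is easy to check numerically, for instance with scipy's exact binomial test (binomtest is available in scipy >= 1.7):

```python
from scipy.stats import binomtest

# Exact two-sided tests of H0: the coin is fair (p = 0.5)
p_small = binomtest(2, n=10, p=0.5).pvalue      # 2 heads out of 10
p_large = binomtest(474, n=1000, p=0.5).pvalue  # 474 heads out of 1000

print(round(p_small, 3), round(p_large, 3))     # both are roughly 0.11
```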
|
5,895
|
Are smaller p-values more convincing?
|
Thank you for the comments and suggested readings. I've had some more time to ponder on this problem and I believe I've managed to isolate my main sources of confusion.
Initially I thought there was a dichotomy between viewing the p-value as a measure of surprise versus stating that it's not an absolute measure. Now I realise these statements don't necessarily contradict each other. The former allows us to be more or less confident in the extremeness (unlikeness even?) of an observed effect, compared to other hypothetical results of the same experiment. Whereas the latter only tells us that what might be considered a convincing p-value in one experiment, might not be impressive at all in another one, e.g. if the sample sizes differ.
The fact that some fields of science utilise a different baseline for strong p-values could either reflect differences in typical sample sizes (astronomy, clinical, psychological experiments) or be an attempt to convey effect size through a p-value. The latter, however, is an incorrect conflation of the two.
Significance is a yes/no question based on the alpha that was chosen prior to the experiment. A p-value can therefore not be more significant than another one, since they are either smaller or larger than the chosen significance level. On the other hand, a smaller p-value will be more convincing than a larger one (for a similar sample size/identical experiment, as mentioned in my first point).
Confidence intervals inherently convey the effect size, making them a nice choice to guard against the issues mentioned above.
|
5,896
|
Are smaller p-values more convincing?
|
The p-value cannot be a measure of surprise because it is only a measure of probability when the null is true. If the null is true then each possible value of p is equally likely. One cannot be surprised at any p-value prior to deciding to reject the null. Once one decides there is an effect then the p-value's meaning vanishes. One merely reports it as a link in a relatively weak inductive chain to justify the rejection, or not, of the null. But if it was rejected it actually no longer has any meaning.
|
5,897
|
Is standardisation before Lasso really necessary?
|
Lasso regression puts a constraint on the size of the coefficients associated with each variable. However, this value depends on the magnitude of each variable, so it is necessary to center and scale, i.e. standardize, the variables.
Centering the variables also means that there is no longer an intercept to estimate. This applies equally to ridge regression, by the way.
Another good explanation is this post: Need for centering and standardizing data in regression
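As a sketch of how this is typically done in practice (Python with scikit-learn here; note that scikit-learn's Lasso does not standardize by default, and the data below are synthetic):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=200, n_features=5, noise=1.0, random_state=0)
X[:, 0] *= 1000.0  # put one variable on a much larger scale than the others

# Putting the scaler inside a pipeline ensures it is re-fit on the training
# fold only when the model is later cross-validated.
model = make_pipeline(StandardScaler(), Lasso(alpha=1.0))
model.fit(X, y)
print(model.named_steps["lasso"].coef_)  # coefficients on the standardized scale
```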
|
5,898
|
Is standardisation before Lasso really necessary?
|
The L1 penalty is a sum of the absolute values of the beta coefficients. If the variables are measured in different units, this sum mixes incompatible scales: there is no mathematical error, but the penalty is not really meaningful.
However, I don't see dummy/categorical variables suffering from this issue, and I think they need not be standardized; standardizing them may just reduce the interpretability of the coefficients.
|
5,899
|
Is standardisation before Lasso really necessary?
|
If by standardize you mean transform all variables to z-scores (as is often the case), then you may want to consider that z-scoring a pre-scaled dataset may amplify noise. That is, variables with low variance may have their measurement noise amplified by z-scoring.
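A tiny synthetic demonstration of that point (numpy, illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
# A pre-scaled feature that is nearly constant: tiny true variance
# plus a small amount of measurement noise.
signal = np.full(1000, 0.5)
noise = rng.normal(0.0, 1e-3, size=1000)
x = signal + noise

# z-scoring divides by the (tiny) standard deviation, so the standardized
# feature is essentially the measurement noise blown up to unit variance.
z = (x - x.mean()) / x.std()
print(round(z.std(), 3))  # 1.0 -- the noise now dominates the feature
```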
|
5,900
|
Compendium of cross-validation techniques
|
You can add to that list:
Repeated cross-validation
Leave-group-out cross-validation
Out-of-bag (for random forests and other bagged models)
The 632+ bootstrap
I don't really have a lot of advice as far as how to use these techniques or when to use them. You can use the caret package in R to compare CV, Boot, Boot632, leave-one-out, leave-group-out, and out-of-bag cross-validation.
In general, I usually use the bootstrap because it is less computationally intensive than repeated k-fold CV or leave-one-out CV. Boot632 is my algorithm of choice because it doesn't require much more computation than the bootstrap, and it has been shown to be better than cross-validation or the basic bootstrap in certain situations.
I almost always use out-of-bag error estimates for random forests, rather than cross-validation. Out-of-bag errors are generally unbiased, and random forests take long enough to compute as it is.
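The caret comparison above is R-specific; in Python, a rough scikit-learn sketch of repeated k-fold, random-split ("leave-group-out") CV and the out-of-bag estimate could look like this (scikit-learn does not ship the 632+ bootstrap):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedKFold, ShuffleSplit, cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)

# Repeated k-fold cross-validation (5 folds, 3 repeats)
rkf = cross_val_score(clf, X, y,
                      cv=RepeatedKFold(n_splits=5, n_repeats=3, random_state=0))
# Leave-group-out / Monte Carlo CV: repeated random 80/20 splits
lgo = cross_val_score(clf, X, y,
                      cv=ShuffleSplit(n_splits=10, test_size=0.2, random_state=0))
# Out-of-bag estimate: a single fit, no resampling loop required
clf.fit(X, y)
print(rkf.mean(), lgo.mean(), clf.oob_score_)
```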
|