Is the linearity assumption in linear regression merely a definition of $\epsilon$?

Is Greene being sloppy? Should he actually have written: $E(y|X)=X\beta$? This is a "linearity assumption" that actually puts
structure on the model.
In a sense, yes and no. On the one hand, yes: given modern causality research he is sloppy, but so are most econometrics textbooks, in the sense that they do not make a clear distinction between causal and observational quantities, which leads to common confusions like this very question. On the other hand, no: the assumption is not sloppy in the sense that it is indeed different from simply assuming $E(y|X)=X\beta$.
The crux of the matter here is the difference between the conditional expectation, $E(y|X)$, and the structural (causal) equation of $y$, as well as its structural (causal) expectation $E[Y|do(X)]$. The linearity assumption in Greene is a structural assumption. Let's see a simple example. Imagine the structural equation is:
$$
y= \beta x + \gamma x^2 + \epsilon
$$
Now let $E[\epsilon |x] = \delta x - \gamma x^2$. Then we would have:
$$
E[y|x] = \beta'x
$$
where $\beta' = \beta + \delta$. Moreover, we can write $y = \beta'x + \epsilon'$ and we would have $E[\epsilon'|x] = 0$. This shows we can have a correctly specified linear conditional expectation $E[y|x]$ which by definition is going to have an orthogonal disturbance, yet the structural equation would be nonlinear.
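This can be checked by simulation. The sketch below uses illustrative values not taken from the text ($\beta=1$, $\gamma=0.5$, $\delta=2$), draws $x$ as standard normal, builds a disturbance satisfying $E[\epsilon|x]=\delta x-\gamma x^{2}$, and shows that OLS on $x$ recovers $\beta'=\beta+\delta$, not the structural $\beta$:

```python
import numpy as np

rng = np.random.default_rng(0)
beta, gamma, delta = 1.0, 0.5, 2.0   # illustrative values, not from the text

x = rng.normal(size=100_000)
# Structural disturbance with E[eps | x] = delta*x - gamma*x**2
eps = delta * x - gamma * x**2 + rng.normal(size=x.size)
y = beta * x + gamma * x**2 + eps    # the (nonlinear) structural equation

# OLS of y on x (no intercept needed here) recovers beta' = beta + delta = 3,
# not the structural beta = 1.
beta_hat = (x @ y) / (x @ x)
```

The linear conditional expectation is correctly specified, yet the estimated slope says nothing about the structural coefficient.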
Or do I have to accept that the linearity assumption does not put structure on the model but only defines an $\epsilon$, where the other
assumptions will use that definition of $\epsilon$ to put structure on
the model?
The linearity assumption does define an $\epsilon$, that is, $\epsilon := y - X\beta = y - E[Y|do(X)]$ by definition, where $\epsilon$ represents the deviations of $y$ from its expectation when we experimentally set $X$ (see Pearl, section 5.4). The other assumptions are used either for identification of the structural parameters (for instance, the exogeneity of $\epsilon$ allows you to identify the structural expectation $E[Y|do(X)]$ with the conditional expectation $E[Y|X]$) or for deriving statistical properties of the estimators (for instance, homoskedasticity guarantees OLS is BLUE, normality makes it easy to derive finite-sample results for inference, etc.).
However, the linearity assumption by itself does not put any
structure on our model, since $\epsilon$ can be completely arbitrary.
For any variables $X, y$ whatsoever, no matter what the relation between the two we could define an $\epsilon$ such that the linearity
assumption holds.
Your statement here gets at the main problem of causal inference in general! As shown in the simple example above, we can cook up structural disturbances that make the conditional expectation of $y$ given $x$ linear. In general, several different structural (causal) models can share the same observational distribution; you can even have causation without observed association. Therefore, in this sense, you are correct --- we need more assumptions on $\epsilon$ in order to put "more structure" into the problem and identify the structural parameters $\beta$ from observational data.
Side note
It's worth mentioning most econometrics textbooks are confusing when it comes to the distinction between regression and structural equations and their meaning. This has been documented lately. You can check a paper by Chen and Pearl here as well as an extended survey by Chris Auld. Greene is one of the books examined.
Is the linearity assumption in linear regression merely a definition of $\epsilon$?

edited after comments by OP and Matthew Drury
To answer this question I assume Greene, and OP, have the following definition of linearity in mind:
Linearity means that for every one-unit increase in this predictor, the outcome increases by beta ($\beta$), wherever on the range of possible predictor values this one-unit increase occurs. I.e., the function $y=f(x)$ is $y=a+bx$ and not, e.g., $y=a+bx^2$ or $y=a+\sin(x)$. Further, this assumption is focused on the betas and thus applies to the predictors (a.k.a. independent variables).
The expectation of the residuals conditional on the model, $E(\epsilon|X)$, is something else. Yes, it is true that the math behind a linear regression defines/tries to define $E(\epsilon|X)=0$. However, this is usually set over the entire range of fitted/predicted values for $y$. If you look at specific parts of the linear predictor and the predicted value of $y$, you might notice heteroscedasticity (areas where the variation of $\epsilon$ is larger than elsewhere), or areas where $E(\epsilon|X)\neq 0$. A non-linear association between the $x$'s and $y$ might be the cause of this, but it is not the only reason heteroscedasticity or $E(\epsilon|X)\neq 0$ might occur (see for example omitted-predictor bias).
From the comments: the OP states "the linearity assumption does not restrict the model in any way, given that epsilon is arbitrary and can be any function of $X$ whatsoever", to which I would agree. I think this is made clear by linear regression being able to fit any data, whether or not the linearity assumption is violated. I'm speculating here, but that might be the reason why Greene chose to keep the error $\epsilon$ in the formula - saving the $E(\epsilon|X)=0$ for later - to denote that, in assuming linearity, $y$ (and not the expected $y$) can be defined based on $X$ but maintains some error $\epsilon$, regardless of what values $\epsilon$ takes. I can only hope that he would later go on to state the relevance of $E(\epsilon|X)=0$.
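To illustrate the point about region-specific residual behavior, here is a hedged toy sketch (data and thresholds are assumptions for illustration): a straight-line OLS fit to data generated from a quadratic has residuals that average to zero over the whole range, yet $E(\epsilon|X)$ is clearly non-zero in a specific region of $x$:

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1.0, 1.0, 1000)
y = x**2 + 0.1 * rng.normal(size=x.size)   # truly quadratic in x

# OLS straight-line fit (intercept + slope) via least squares.
X = np.column_stack([np.ones_like(x), x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef

overall_mean = resid.mean()                  # ~0 by construction of OLS
middle_mean = resid[np.abs(x) < 0.3].mean()  # systematically negative
```

The overall residual mean is forced to zero by the normal equations, while the middle region's residual mean is far below zero: a local violation of $E(\epsilon|X)=0$ that the global fit hides.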
In short (admittedly, without fully reading Greene's book and checking his argumentation):
Greene probably refers to the betas being constant over the entire range of the predictor (emphasis should be placed on the beta in the $y=X\beta + \epsilon$ or $E(y|X)=X\beta$ equations);
The linearity assumption does put some structure on the model. You should, however, note that transformations or additions such as splines prior to modelling can make non-linear associations conform to the linear regression framework.
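As a sketch of that spline/transformation point (using a cubic polynomial basis as a simple stand-in for splines; all data here is illustrative): expanding $x$ into the columns $1, x, x^2, x^3$ lets ordinary linear regression fit a non-linear association, because the model remains linear in the betas:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 200)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)  # non-linear association

# Design matrix with basis columns 1, x, x^2, x^3: still linear in beta.
X = np.vander(x, N=4, increasing=True)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
rmse = np.sqrt(np.mean(resid**2))   # close to the noise level
```

The fitted residuals are near the noise floor even though $y$ is not linear in $x$ itself.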
Is the linearity assumption in linear regression merely a definition of $\epsilon$?

I was a little confused by the answer above, hence I'll give it another shot. I think the question is not actually about 'classical' linear regression but about the style of that particular source. On the classical regression part:
However, the linearity assumption by itself does not put any structure on our model
That is absolutely correct. As you have stated, $\epsilon$ might just as well cancel the linear relation and add something completely independent of $X$, so that we cannot compute any model at all.
Is Greene being sloppy? Should he actually have written: $E(y|X)=X\beta$
I do not want to answer the first question but let me sum up the assumptions you need for usual linear regression:
Let us assume that you observe (you are given) data points $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}$ for $i=1,...,n$. You need to assume that the data $(x_i, y_i)$ you have observed come from independent, identically distributed random variables $(X_i, Y_i)$ such that ...
There exists a fixed (independent of $i$) $\beta \in \mathbb{R}^d$ such that $Y_i = \beta X_i + \epsilon_i$ for all $i$ and the random variables $\epsilon_i$ are such that
The $\epsilon_i$ are iid as well and $\epsilon_i$ is distributed as $\mathcal{N}(0, \sigma)$ ($\sigma$ must be independent of $i$ as well)
For $X = (X_1, ..., X_n)$ and $Y = (Y_1, ..., Y_n)$ the variables $X, Y$ have a common density, i.e. the single random variable $(X, Y)$ has a density $f_{X,Y}$
Now you can run down the usual path and compute
$$f_{Y|X}(y|x) = f_{Y,X}(y,x)/f_X(x) = \left(\frac{1}{\sqrt{2\pi \sigma}}\right)^n \exp{\left( \frac{-\sum_{i=1}^n (y_i - \beta x_i)^2}{2\sigma}\right)} $$
so that by the usual 'duality' between machine learning (minimization of error functions) and probability theory (maximization of likelihoods) you minimize $-\log f_{Y|X}(y|x)$ in $\beta$, which, in fact, gives you the usual "RMSE" stuff.
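That duality can be checked on toy data (all values here are illustrative assumptions): over a grid of candidate betas, the value minimizing the sum of squared residuals is exactly the one maximizing the Gaussian log-likelihood, since $-\log f_{Y|X}$ is the squared-error sum scaled by $1/(2\sigma)$ plus a constant:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = 1.5 * x + rng.normal(size=x.size)   # true beta = 1.5 (illustrative)
sigma = 1.0                              # variance, assumed known

betas = np.linspace(0.0, 3.0, 301)      # grid of candidate betas
sse = np.array([np.sum((y - b * x) ** 2) for b in betas])
loglik = -sse / (2 * sigma)             # log-likelihood up to a constant

b_min_sse = betas[np.argmin(sse)]       # least-squares choice
b_max_lik = betas[np.argmax(loglik)]    # maximum-likelihood choice
```

Both criteria pick the same grid value, near the true slope.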
Now as stated: if the author of the book you are quoting wants to make this point (which you have to do if you ever want to be able to compute the 'best possible' regression line in the basic setup), then yes, he must make this assumption on the normality of the $\epsilon$ somewhere in the book.
There are different possibilities now:
He does not write this assumption down in the book. Then it is an error in the book.
He does write it down in the form of a 'global' remark like 'whenever I write $+ \epsilon$, the $\epsilon$ are iid normally distributed with mean zero unless stated otherwise'. Then IMHO it is bad style, because it causes exactly the confusion that you feel right now. That is why I tend to write the assumptions in some shortened form in every theorem. Only then can every building block be viewed cleanly in its own right.
He does write it down close to the part you are quoting and you/we just did not notice it (also a possibility :-))
However, also in a strict mathematical sense, the normal error is something canonical (the distribution with the highest entropy once the variance is fixed, hence producing the strongest models), so that some authors tend to skip this assumption but use it nonetheless. Formally, you are absolutely right: they are using mathematics in the "wrong way".
Whenever they want to come up with the equation for the density $f_{Y|X}$ as stated above, they need to know $\epsilon$ pretty well; otherwise you just have properties of it flying around in every meaningful equation that you try to write down.
Imputing missing values on a testing set

Yes.
It is fine to perform mean imputation; however, make sure to calculate the mean (or any other statistic) only on the training data, to avoid data leakage into your test set.
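A minimal sketch of that rule (the arrays are hypothetical toy data; `NaN` marks a missing value): the imputation value is computed from the training split only and then reused on the test split:

```python
import numpy as np

# Hypothetical toy data; NaN marks a missing entry.
train = np.array([1.0, 2.0, np.nan, 4.0])
test = np.array([np.nan, 3.0])

# Compute the imputation value from the TRAINING split only.
train_mean = np.nanmean(train)   # (1 + 2 + 4) / 3

# Reuse that same value on both splits -- never the test-set mean.
train_filled = np.where(np.isnan(train), train_mean, train)
test_filled = np.where(np.isnan(test), train_mean, test)
```

Using `np.nanmean(test)` instead would leak test-set information into preprocessing.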
Imputing missing values on a testing set

Is it ok to impute mean based missing values with the mean whenever implementing the model?
Yes, as long as you use the mean of your training set---not the mean of the testing set---to impute. Likewise, if you remove values above some threshold in the test case, make sure that the threshold is derived from the training and not test set.
You might also consider holding out two "test" sets and trying all of the methods described above on one of them (using this set to "select" a method) and using the second to estimate error of the method that works best (using this set to "evaluate" the selected method). You would then have a train-validation-test split, which is good practice.
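The train-validation-test idea can be sketched as a split of row indices (the 60/20/20 proportions and sample size are illustrative assumptions, not prescribed by the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100                       # illustrative sample size
idx = rng.permutation(n)      # shuffle row indices once

train_idx = idx[:60]          # fit candidate methods here
val_idx = idx[60:80]          # "select": compare methods here
test_idx = idx[80:]           # "evaluate": score the chosen method once
```

Any imputation statistic would then be computed from `train_idx` rows alone and applied unchanged to the other two splits.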
Why is it hard to train deep neural networks?

Resources
The chapter Why are deep neural networks hard to train? (in the book "Neural Networks and Deep Learning" by Michael Nielsen) is probably the best answer to your question that I have encountered, but hopefully my answer contains the gist of the chapter.
The paper On the difficulty of training recurrent neural networks contains a proof that some condition is sufficient to cause the vanishing gradient problem in a simple recurrent neural network (RNN). I will give an explanation similar to the proof, but for the case of a simple deep feedforward neural network.
The chapter How the backpropagation algorithm works (in the same book by Nielsen) explains clearly and rigorously how backpropagation works, and I will use its notations, definitions, and conclusions in my explanation.
Unstable Gradient Problem
Nielsen claims that when training a deep feedforward neural network using Stochastic Gradient Descent (SGD) and backpropagation, the main difficulty in the training is the "unstable gradient problem". Here is Nielsen's explanation of this problem:
[...] the gradient in early layers is the product of terms from all the later layers. When there are many layers, that's an intrinsically unstable situation. The only way all layers can learn at close to the same speed is if all those products of terms come close to balancing out. Without some mechanism or underlying reason for that balancing to occur, it's highly unlikely to happen simply by chance. In short, the real problem here is that neural networks suffer from an unstable gradient problem. As a result, if we use standard gradient-based learning techniques, different layers in the network will tend to learn at wildly different speeds.
Next, we would use equations that Nielsen proved to show that "gradient in early layers is the product of terms from all the later layers".
For that, we need some notations and definitions:
Layer $1$ is the input layer.
Layer $L$ is the output layer.
$x$ is a vector of inputs in a single training example.
$y$ is a vector of desired outputs in a single training example.
$a^l$ is a vector of the activations of the neurons in layer $l$.
$C\equiv\frac{1}{2}||y-a^{L}||^{2}$ is the cost function with regard to a single training example $(x, y)$. (This is a simplification. In a real implementation, we would use mini-batches instead.)
$w^l$ is a matrix of weights for the connections from layer $l-1$ to layer $l$.
$b^l$ is a vector of the biases used while computing the weighted inputs to the neurons in layer $l$.
$z^{l}\equiv w^{l}a^{l-1}+b^{l}$ is a vector of the weighted inputs to the neurons in layer $l$.
$\sigma$ is the activation function.
$a^l\equiv \sigma(z^l)$, while $\sigma$ is applied element-wise.
$\delta^{l}\equiv\frac{\partial C}{\partial z^{l}}$
$\Sigma'\left(z^{l}\right)$ is a diagonal matrix whose diagonal is $\sigma'(z^l)$ (while $\sigma'$ is applied element-wise).
Nielsen proved the following equations:
(34): $\delta^{l}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\delta^{l+1}$
(30): $\delta^{L}=\left(a^{L}-y\right)\odot\sigma'\left(z^{L}\right)$, which is equivalent to $\delta^{L}=\Sigma'\left(z^{L}\right)\left(a^{L}-y\right)$
Thus: $$\delta^{l}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\cdots\Sigma'\left(z^{L-1}\right)\left(w^{L}\right)^{T}\delta^{L}\\\downarrow\\\delta^{l}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\cdots\Sigma'\left(z^{L-1}\right)\left(w^{L}\right)^{T}\Sigma'\left(z^{L}\right)\left(a^{L}-y\right)$$
Nielsen also proved:
(BP3): $\frac{\partial C}{\partial b_{j}^{l}}=\delta_{j}^{l}$
(BP4): $\frac{\partial C}{\partial w_{jk}^{l}}=\delta_{j}^{l}a_{k}^{l-1}$
Therefore (this is my notation, so don't blame Nielsen in case it is ugly):
$$\frac{\partial C}{\partial b^{l}}\equiv\left(\begin{gathered}\frac{\partial C}{\partial b_{1}^{l}}\\
\frac{\partial C}{\partial b_{2}^{l}}\\
\vdots
\end{gathered}
\right)=\delta^{l}$$
$$\frac{\partial C}{\partial w^{l}}\equiv\left(\begin{matrix}\frac{\partial C}{\partial w_{11}^{l}} & \frac{\partial C}{\partial w_{12}^{l}} & \cdots\\
\frac{\partial C}{\partial w_{21}^{l}} & \frac{\partial C}{\partial w_{22}^{l}} & \cdots\\
\vdots & \vdots & \ddots
\end{matrix}\right)=\delta^{l}\left(a^{l-1}\right)^{T}$$
From these conclusions, we deduce the components of the gradient in layer $l$:
$$\frac{\partial C}{\partial b^{l}}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\cdots\Sigma'\left(z^{L-1}\right)\left(w^{L}\right)^{T}\Sigma'\left(z^{L}\right)\left(a^{L}-y\right)\\\frac{\partial C}{\partial w^{l}}=\frac{\partial C}{\partial b^{l}}\left(a^{l-1}\right)^{T}$$
Indeed, both components (i.e. partial derivatives with regard to weights and biases) of the gradient in layer $l$ are products that include all of the weight matrices of the next layers, and also the derivatives of the activation function of the next layers.
Vanishing Gradient Problem
If you are still not convinced that the "unstable gradient problem" is real or that it actually matters, we will next show why the "vanishing gradient problem" is probable in a deep feedforward neural network.
As in the proof in the paper, we can use vector norms and induced matrix norms to get a rough upper bound on $\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|$ and $\left|\left|\frac{\partial C}{\partial w^{l}}\right|\right|$.
In the case of induced matrix norms, both $\left|\left|ABx\right|\right|\le\left|\left|A\right|\right|\cdot\left|\left|B\right|\right|\cdot\left|\left|x\right|\right|$ and $\left|\left|AB\right|\right|\le\left|\left|A\right|\right|\cdot\left|\left|B\right|\right|$ hold for any matrices $A,B$ and vector $x$ such that $ABx$ is defined.
Therefore:
Why is it hard to train deep neural networks?
Resources
The chapter Why are deep neural networks hard to train? (in the book "Neural Networks and Deep Learning" by Michael Nielsen) is probably the best answer to your question that I have encountered, but hopefully my answer contains the gist of the chapter.
The paper On the difficulty of training recurrent neural networks contains a proof that some condition is sufficient to cause the vanishing gradient problem in a simple recurrent neural network (RNN). I will give a similar explanation, but for the case of a simple deep feedforward neural network.
The chapter How the backpropagation algorithm works (in the same book by Nielsen) explains clearly and rigorously how backpropagation works; I will use its notations, definitions and conclusions in my explanation.
Unstable Gradient Problem
Nielsen claims that when training a deep feedforward neural network using Stochastic Gradient Descent (SGD) and backpropagation, the main difficulty in the training is the "unstable gradient problem". Here is Nielsen's explanation of this problem:
[...] the gradient in early layers is the product of terms from all the later layers. When there are many layers, that's an intrinsically unstable situation. The only way all layers can learn at close to the same speed is if all those products of terms come close to balancing out. Without some mechanism or underlying reason for that balancing to occur, it's highly unlikely to happen simply by chance. In short, the real problem here is that neural networks suffer from an unstable gradient problem. As a result, if we use standard gradient-based learning techniques, different layers in the network will tend to learn at wildly different speeds.
Next, we will use equations that Nielsen proved to show that "the gradient in early layers is the product of terms from all the later layers".
For that, we need some notations and definitions:
Layer $1$ is the input layer.
Layer $L$ is the output layer.
$x$ is a vector of inputs in a single training example.
$y$ is a vector of desired outputs in a single training example.
$a^l$ is a vector of the activations of the neurons in layer $l$.
$C\equiv\frac{1}{2}||y-a^{L}||^{2}$ is the cost function with regard to a single training example $(x, y)$. (This is a simplification. In a real implementation, we would use mini-batches instead.)
$w^l$ is a matrix of weights for the connections from layer $l-1$ to layer $l$.
$b^l$ is a vector of the biases used while computing the weighted inputs to the neurons in layer $l$.
$z^{l}\equiv w^{l}a^{l-1}+b^{l}$ is a vector of the weighted inputs to the neurons in layer $l$.
$\sigma$ is the activation function.
$a^l\equiv \sigma(z^l)$, while $\sigma$ is applied element-wise.
$\delta^{l}\equiv\frac{\partial C}{\partial z^{l}}$
$\Sigma'\left(z^{l}\right)$ is a diagonal matrix whose diagonal is $\sigma'(z^l)$ (while $\sigma'$ is applied element-wise).
Nielsen proved the following equations:
(34): $\delta^{l}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\delta^{l+1}$
(30): $\delta^{L}=\left(a^{L}-y\right)\odot\sigma'\left(z^{L}\right)$, which is equivalent to $\delta^{L}=\Sigma'\left(z^{L}\right)\left(a^{L}-y\right)$
Thus: $$\delta^{l}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\cdots\Sigma'\left(z^{L-1}\right)\left(w^{L}\right)^{T}\delta^{L}\\\downarrow\\\delta^{l}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\cdots\Sigma'\left(z^{L-1}\right)\left(w^{L}\right)^{T}\Sigma'\left(z^{L}\right)\left(a^{L}-y\right)$$
Nielsen also proved:
(BP3): $\frac{\partial C}{\partial b_{j}^{l}}=\delta_{j}^{l}$
(BP4): $\frac{\partial C}{\partial w_{jk}^{l}}=\delta_{j}^{l}a_{k}^{l-1}$
Therefore (this is my notation, so don't blame Nielsen in case it is ugly):
$$\frac{\partial C}{\partial b^{l}}\equiv\left(\begin{gathered}\frac{\partial C}{\partial b_{1}^{l}}\\
\frac{\partial C}{\partial b_{2}^{l}}\\
\vdots
\end{gathered}
\right)=\delta^{l}$$
$$\frac{\partial C}{\partial w^{l}}\equiv\left(\begin{matrix}\frac{\partial C}{\partial w_{11}^{l}} & \frac{\partial C}{\partial w_{12}^{l}} & \cdots\\
\frac{\partial C}{\partial w_{21}^{l}} & \frac{\partial C}{\partial w_{22}^{l}} & \cdots\\
\vdots & \vdots & \ddots
\end{matrix}\right)=\delta^{l}\left(a^{l-1}\right)^{T}$$
From these conclusions, we deduce the components of the gradient in layer $l$:
$$\frac{\partial C}{\partial b^{l}}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\cdots\Sigma'\left(z^{L-1}\right)\left(w^{L}\right)^{T}\Sigma'\left(z^{L}\right)\left(a^{L}-y\right)\\\frac{\partial C}{\partial w^{l}}=\frac{\partial C}{\partial b^{l}}\left(a^{l-1}\right)^{T}$$
Indeed, both components (i.e. partial derivatives with regard to weights and biases) of the gradient in layer $l$ are products that include all of the weight matrices of the next layers, and also the derivatives of the activation function of the next layers.
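To see these products concretely, here is a small numpy sketch that runs the backward recursion $\delta^{l}=\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\delta^{l+1}$ for a random sigmoid network and prints the per-layer gradient norms $||\partial C/\partial b^l||$. The layer sizes, seed, and target vector are arbitrary choices of mine, not from Nielsen.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

sizes = [4, 10, 10, 10, 10, 10, 3]   # widths of layers 1..L (arbitrary choice)
L = len(sizes)
w = {l: rng.normal(size=(sizes[l - 1], sizes[l - 2])) for l in range(2, L + 1)}
b = {l: rng.normal(size=sizes[l - 1]) for l in range(2, L + 1)}

def cost(b, x, y):
    # C = 0.5 * ||y - a^L||^2 for a single training example.
    act = x
    for l in range(2, L + 1):
        act = sigmoid(w[l] @ act + b[l])
    return 0.5 * np.sum((y - act) ** 2)

# Forward pass for one training example (x, y).
x = rng.normal(size=sizes[0])
y = np.zeros(sizes[-1]); y[0] = 1.0
a, z = {1: x}, {}
for l in range(2, L + 1):
    z[l] = w[l] @ a[l - 1] + b[l]
    a[l] = sigmoid(z[l])

# Backward pass: (30) gives delta^L, (34) gives delta^l from delta^{l+1}.
sp = {l: a[l] * (1.0 - a[l]) for l in range(2, L + 1)}  # sigma'(z^l) = a(1 - a)
delta = {L: sp[L] * (a[L] - y)}
for l in range(L - 1, 1, -1):
    delta[l] = sp[l] * (w[l + 1].T @ delta[l + 1])

# By (BP3), dC/db^l = delta^l; the norms typically shrink for early layers.
grad_norms = {l: float(np.linalg.norm(delta[l])) for l in range(2, L + 1)}
print(grad_norms)
```

The gradient returned by the recursion can be confirmed against a finite-difference derivative of the cost, which is exactly what backpropagation is meant to compute.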
Vanishing Gradient Problem
If you are still not convinced that the "unstable gradient problem" is real or that it actually matters, we will next show why the "vanishing gradient problem" is probable in a deep feedforward neural network.
As in the proof in the paper, we can use vector norms and induced matrix norms to get a rough upper bound on $\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|$ and $\left|\left|\frac{\partial C}{\partial w^{l}}\right|\right|$.
In the case of induced matrix norms, both $\left|\left|ABx\right|\right|\le\left|\left|A\right|\right|\cdot\left|\left|B\right|\right|\cdot\left|\left|x\right|\right|$ and $\left|\left|AB\right|\right|\le\left|\left|A\right|\right|\cdot\left|\left|B\right|\right|$ hold for any matrices $A,B$ and vector $x$ such that $ABx$ is defined.
Therefore:
$$\begin{gathered}\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|=\left|\left|\Sigma'\left(z^{l}\right)\left(w^{l+1}\right)^{T}\cdots\Sigma'\left(z^{L-1}\right)\left(w^{L}\right)^{T}\Sigma'\left(z^{L}\right)\left(a^{L}-y\right)\right|\right|\le\\
\left|\left|\Sigma'\left(z^{l}\right)\right|\right|\left|\left|\left(w^{l+1}\right)^{T}\right|\right|\cdots\left|\left|\Sigma'\left(z^{L-1}\right)\right|\right|\left|\left|\left(w^{L}\right)^{T}\right|\right|\left|\left|\Sigma'\left(z^{L}\right)\right|\right|\left|\left|a^{L}-y\right|\right|\\
\downarrow\\
\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|\le\overset{L}{\underset{r=l}{\prod}}\left|\left|\Sigma'\left(z^{r}\right)\right|\right|\cdot\overset{L}{\underset{r=l+1}{\prod}}\left|\left|\left(w^{r}\right)^{T}\right|\right|\cdot\left|\left|a^{L}-y\right|\right|
\end{gathered}
$$
and also:
$$\begin{gathered}\left|\left|\frac{\partial C}{\partial w^{l}}\right|\right|\le\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|\left|\left|\left(a^{l-1}\right)^{T}\right|\right|\\
\downarrow\\
\left(*\right)\\
\left|\left|\frac{\partial C}{\partial w^{l}}\right|\right|\le\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|\left|\left|a^{l-1}\right|\right|
\end{gathered}
$$
It turns out that $||A||=||A^T||$ for any square matrix $A$, as shown here (which uses what is shown here).
Thus:
$$\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|\le\overset{L}{\underset{r=l}{\prod}}\left|\left|\Sigma'\left(z^{r}\right)\right|\right|\cdot\overset{L}{\underset{r=l+1}{\prod}}\left|\left|w^{r}\right|\right|\cdot\left|\left|a^{L}-y\right|\right|$$
Let $\gamma\equiv\text{sup}\left\{ \sigma'\left(\alpha\right)\,:\,\alpha\in\mathbb{R}\right\} $.
The norm of a diagonal matrix is the largest absolute value of the elements in the matrix. (This is quite immediate from the claim that the norm of a symmetric matrix is equal to its spectral radius.)
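This claim about diagonal matrices, along with the submultiplicative property and the $||A||=||A^T||$ identity used above, is easy to sanity-check numerically (the random matrices are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Induced 2-norm of a diagonal matrix equals the largest |diagonal entry|.
d = rng.normal(size=6)
assert np.isclose(np.linalg.norm(np.diag(d), ord=2), np.abs(d).max())

# Submultiplicativity ||AB|| <= ||A|| ||B|| of the induced norm.
A = rng.normal(size=(5, 6))
B = rng.normal(size=(6, 4))
assert np.linalg.norm(A @ B, 2) <= np.linalg.norm(A, 2) * np.linalg.norm(B, 2) + 1e-12

# ||A|| = ||A^T|| for the induced 2-norm (largest singular value).
assert np.isclose(np.linalg.norm(A, 2), np.linalg.norm(A.T, 2))
print("norm identities verified")
```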
So $\left|\left|\Sigma'\left(z\right)\right|\right|\le\gamma$ for any $z$, and thus:
$$\begin{gathered}\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|\le\overset{L}{\underset{r=l}{\prod}}\gamma\cdot\overset{L}{\underset{r=l+1}{\prod}}\left|\left|w^{r}\right|\right|\cdot\left|\left|a^{L}-y\right|\right|\\
\downarrow\\
\left(**\right)\\
\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|\le\gamma^{L-l+1}\cdot\overset{L}{\underset{r=l+1}{\prod}}\left|\left|w^{r}\right|\right|\cdot\left|\left|a^{L}-y\right|\right|
\end{gathered}
$$
Now, consider the derivatives of the sigmoid (green in the original plot) and $\text{tanh}$ (red) functions.
In case $\sigma$ is the sigmoid function, $\gamma=0.25$, and so from $(*)$ and $(**)$ we can deduce that $\left|\left|\frac{\partial C}{\partial b^{l}}\right|\right|$ and $\left|\left|\frac{\partial C}{\partial w^{l}}\right|\right|$ would probably be very small for a high $L-l$. I.e. for an early layer in a deep network with many layers, the gradient would be quite small.
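A quick numeric illustration of the factor $\gamma^{L-l+1}$ in $(**)$ for the sigmoid case: unless the weight norms $||w^r||$ are large enough to compensate, the bound collapses geometrically as the distance $L-l$ from the output layer grows.

```python
# gamma = sup sigma'(alpha) = 0.25 for the sigmoid; the factor gamma**(L-l+1)
# in (**) shrinks geometrically with depth.
gamma = 0.25
for depth in (1, 5, 10, 20):          # depth = L - l + 1
    print(depth, gamma ** depth)
# e.g. 0.25**10 is about 9.5e-7: the early-layer gradient bound is tiny unless
# the weight norms ||w^r|| exceed 1/gamma = 4 to compensate.
```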
$(*)$ and $(**)$ won't help much in showing that the vanishing gradient problem is also probable for the case that $\sigma$ is $\text{tanh}$, but using the same approach and some approximations would work.
24,307 | Left-hand & right-hand side nomenclature in regression models | This is an excellent question. Actually, it is so good that there is no answer to it. To the best of my knowledge, there is no true "agnostic" term for describing Y.
In my experience and readings, I have found that the terminology is domain-specific and also model-objective-specific.
Econometricians will use the term Dependent variable when building a model that is explanatory. They may use the terms Predicted, Fitted, or Estimated variable when building a forecasting model that is more focused on accurate estimation/prediction than on theoretical explanatory power.
The Big Data/Deep Learning crowd uses a completely different language, and will typically use the terms Response variable or Target variable. Their models are such black boxes that they typically do not attempt to explain a phenomenon but rather to predict and estimate it accurately. But, somehow, they wouldn't be caught using the term Predicted; they far prefer the terms Response or Target.
I am less familiar with the term Outcome variable. It may be prevalent in other areas I am less exposed to, such as the social sciences, including psychology, medicine, clinical trials, and epidemiology.
In view of the above, I could not provide you with any "agnostic" term for describing Y. Instead, I provided a bit of information on which terms to use when catering to different audiences, also reflecting the objective of your model. In summary, I don't think anyone gets hurt if you talk about the Dependent variable with econometricians and the Response or Target variable with Deep Learning types. Hopefully, you can keep those crowds apart, otherwise you could have a verbal food fight on your hands.
24,308 | Finite $k$th moment for a random vector | The answer is in the negative, but the problem can be fixed up.
To see what goes wrong, let $X$ have a Student t distribution with two degrees of freedom. Its salient properties are that $\mathbb{E}(|X|)$ is finite but $\mathbb{E}(|X|^2)=\infty$. Consider the bivariate distribution of $(X,X)$. Let $f(x,y)dxdy$ be its distribution element (which is singular: it is supported only on the diagonal $x=y$). Along the diagonal, $||(x,y)||=|x|\sqrt{2}$, whence
$$\mathbb{E}\left(||(X,X)||^1\right) = \mathbb{E}\left(\sqrt{2}|X|\right) \lt \infty$$
whereas
$$\iint x^1 y^1 f(x,y) dx dy = \int x^2 f(x,x) dx = \infty.$$
Analogous computations in $p$ dimensions should make it clear that $$\int\cdots\int |x_1|^k|x_2|^k\cdots |x_p|^k f(x_1,\ldots, x_p)dx_1\cdots dx_p$$
really is a moment of order $pk$, not $k$. For more about multivariate moments, please see Let $\mathbf{Y}$ be a random vector. Are $k$th moments of $\mathbf{Y}$ considered?.
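The counterexample can be checked numerically. The $t_2$ density is $f(x)=\tfrac{1}{2\sqrt{2}}\left(1+x^2/2\right)^{-3/2}$; the truncated first absolute moment converges (to $\sqrt 2$) while the truncated second moment keeps growing like $2\log T$. A sketch, where the grid sizes are arbitrary choices:

```python
import numpy as np

def f(x):
    # Density of the Student t distribution with 2 degrees of freedom.
    return (1.0 + x * x / 2.0) ** (-1.5) / (2.0 * np.sqrt(2.0))

def trapezoid(y, x):
    # Plain trapezoidal rule (avoids numpy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * (x[1:] - x[:-1])))

def truncated_moments(T):
    # Integrate over [0, T] and double (the density is symmetric about 0).
    x = np.concatenate([np.linspace(0.0, 1.0, 20_001),
                        np.geomspace(1.0, T, 400_000)])
    m1 = 2.0 * trapezoid(x * f(x), x)        # E[|X| ; |X| <= T]
    m2 = 2.0 * trapezoid(x * x * f(x), x)    # E[X^2 ; |X| <= T]
    return m1, m2

for T in (1e2, 1e4, 1e6):
    m1, m2 = truncated_moments(T)
    print(f"T={T:g}  E[|X|;|X|<=T]={m1:.4f}  E[X^2;|X|<=T]={m2:.2f}")
# m1 settles near sqrt(2) = E|X|, while m2 never settles (it grows ~ 2 log T).
```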
To find out what the relationships ought to be between the multivariate moments and the moments of the norm, we will need two inequalities. Let $x=(x_1, \ldots, x_p)$ be any $p$-dimensional vector and let $k_1, k_2, \ldots, k_p$ be positive numbers. Write $k=k_1+k_2+\cdots+k_p$ for their sum (implying $k_i/k \le 1$ for all $i$). Let $q \gt 0$ be any positive number (in the application, $q=2$ for the Euclidean norm, but it turns out there's nothing special about the value $2$). As is customary, write
$$||x||_q = \left(\sum_i |x_i|^q\right)^{1/q}.$$
First, let's apply the AM-GM inequality to the non-negative numbers $|x_i|^q$ with weights $k_i$. This asserts that the weighted geometric mean cannot exceed the weighted arithmetic mean:
$$\left(\prod_i (|x_i|^q)^{k_i}\right)^{1/k} \le \frac{1}{k}\sum_i k_i|x_i|^q.$$
Overestimate the right hand side by replacing each $k_i/k$ by $1$ and take the $k/q$ power of both sides:
$$\prod_i |x_i|^{k_i} = \left(\left(\prod_i (|x_i|^q)^{k_i}\right)^{1/k}\right)^{k/q} \le \left(\sum_i |x_i|^q\right)^{k/q} = ||x||_q^k.\tag{1}$$
Now let's overestimate $||x||_q$ by replacing each term $|x_i|^q$ by the largest among them, $\max(|x_i|^q) = \max(|x_i|)^q$:
$$||x||_q \le \left(\sum_i \max(|x_i|^q)\right)^{1/q} = \left(p \max(|x_i|)^q\right)^{1/q} = p^{1/q} \max(|x_i|).$$
Taking $k^\text{th}$ powers yields
$$||x||_q^k \le p^{k/q} \max(|x_i|^k) \le p^{k/q} \sum_i |x_i|^k.\tag{2}$$
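Inequalities $(1)$ and $(2)$ are purely deterministic, so they can be sanity-checked on random vectors (the orders $k_i$, the seed, and the sampling distribution are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
p = 4
ks = np.array([1.0, 2.0, 0.5, 1.5])   # positive orders k_1, ..., k_p
k = ks.sum()
q = 2.0                               # Euclidean norm; any q > 0 works

for _ in range(1000):
    ax = np.abs(rng.standard_normal(p))
    norm_q = (ax ** q).sum() ** (1.0 / q)
    # (1): prod_i |x_i|^{k_i} <= ||x||_q^k
    assert np.prod(ax ** ks) <= norm_q ** k * (1 + 1e-9)
    # (2): ||x||_q^k <= p^{k/q} * sum_i |x_i|^k
    assert norm_q ** k <= p ** (k / q) * (ax ** k).sum() * (1 + 1e-9)
print("inequalities (1) and (2) hold on all samples")
```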
As a matter of notation, write
$$\mu(k_1,k_2,\ldots,k_p) = \int\cdots \int |x_1|^{k_1}|x_2|^{k_2}\cdots|x_p|^{k_p} f(x)\,dx.$$
This is the moment of order $(k_1,k_2,\ldots,k_p)$ (and total order $k$). By integrating against $f$, inequality $(1)$ establishes
$$\mu(k_1,\ldots,k_p) \le \int\cdots\int ||x||_q^k f(x)\,dx = \mathbb{E}(||X||_q^{k})\tag{3}$$
and inequality $(2)$ gives $$\mathbb{E}(||X||_q^{k})\le p^{k/q}\left(\mu(k,0,\ldots,0) + \mu(0,k,0,\ldots,0) + \cdots + \mu(0,\ldots,0,k)\right).\tag{4}$$
Its right hand side is, up to a constant multiple, the sum of the univariate $k^\text{th}$ moments. Together, $(3)$ and $(4)$ show
Finiteness of all univariate $k^\text{th}$ moments implies finiteness of $\mathbb{E}(||X||_q^{k})$.
Finiteness of $\mathbb{E}(||X||_q^{k})$ implies finiteness of all $\mu(k_1,\ldots,k_p)$ for which $k_1+\cdots +k_p=k$.
Indeed, these two conclusions combine as a syllogism to show that finiteness of the univariate moments of order $k$ implies finiteness of all multivariate moments of total order $k$.
Thus,
For all $q \gt 0$, the $k^\text{th}$ moment of the $L_q$ norm $\mathbb{E}(||X||_q^{k})$ is finite if and only if all moments of total order $k$ are finite.
24,309 | Finite $k$th moment for a random vector | @whuber 's answer is correct and well-composed.
I wrote this answer only to elaborate on why such a problem can be better addressed in the language of tensors. I previously thought that the tensor viewpoint was widely accepted in the statistics community; now I know this is not the case.
On pp. 46-47 of [McCullagh], he states how we can view moments as tensors. My explanation below basically follows his words.
Let $\boldsymbol{X}=(X_{1},\cdots,X_{p})$ be a random vector, and we can discuss its (central) moments $\kappa^{i,j}=E(X_{i}-EX_{i})(X_{j}-EX_{j})$. If we take affine transformations $Y_{r}=\boldsymbol{A}_{r}\boldsymbol{X}+b_{r}$ (equivalently, in matrix notation, $\boldsymbol{Y=AX+b}$) in the probability space, then the resulting (central) moment of $Y_{r},Y_{s}$ is $$\kappa^{r,s}=\frac{\partial Y_{r}}{\partial X_{i}}\frac{\partial Y_{s}}{\partial X_{j}}\kappa^{i,j}$$ by the transformation formula. So the moment behaves like a contravariant tensor of order 2. If we accept such a tensor view, then the $L^{p}$ norm/the moments of a random variable can be treated as a tensor norm. So, as a matter of fact, a multi-index tensor norm of the highest order does not necessarily bound lower-order multi-index tensor norms. Now, since the tensor is given by first-order differential operators, Sobolev tensor norms come into play naturally, e.g. in wavelets. And there are many counterexamples where the highest-order norm does not bound lower-order norms in Sobolev-Besov spaces. (MO post)
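For the second-order case, the transformation rule is just the familiar identity $\operatorname{Cov}(\boldsymbol{AX}+\boldsymbol{b})=\boldsymbol{A}\operatorname{Cov}(\boldsymbol{X})\boldsymbol{A}^{T}$, which can be checked numerically. The particular $\boldsymbol{A}$, $\boldsymbol{b}$, and covariance below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(3)

# A covariance (second-order central moment tensor) kappa^{i,j} for X.
cov_x = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, -0.2],
                  [0.0, -0.2, 0.5]])
A = rng.standard_normal((2, 3))      # the affine map Y = A X + b
b = rng.standard_normal(2)           # b drops out of central moments

# kappa^{r,s} = (dY_r/dX_i)(dY_s/dX_j) kappa^{i,j}, with dY_r/dX_i = A[r, i]:
cov_y = np.einsum("ri,sj,ij->rs", A, A, cov_x)

assert np.allclose(cov_y, A @ cov_x @ A.T)   # matrix form of the same rule
print(cov_y)
```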
As for the reason why we should adopt such a view, the story is much longer, but a brief comment follows.
The classic reference establishing this view is [McCullagh], along with later scattered works in the "machine learning" literature. But the origin of such a view was actually pursued much earlier in Bayesian work [Jeffreys]. Such a view definitely helps visualization and probably motivated some research in statistical shape analysis, like those early works by Mardia.
$\blacksquare$ References
[McCullagh] McCullagh, Peter. Tensor Methods in Statistics. Chapter 1: http://www.stat.uchicago.edu/~pmcc/tensorbook/ch1.pdf
[Jeffreys] Jeffreys, Harold. Cartesian Tensors. Cambridge University Press, 1931.
24,310 | Using lm for 2-sample proportion test | It's not to do with how they solve the optimization problems that correspond to fitting the models, it's to do with the actual optimization problems the models pose.
Specifically, in large samples, you can effectively consider it as comparing two weighted least squares problems.
The linear model (lm) one assumes (when unweighted) that the variance of the proportions is constant. The glm assumes that the variance of the proportions comes from the binomial assumption $\text{Var}(\hat{p})=\text{Var}(X/n) = p(1-p)/n$. This weights the data points differently, and so comes to somewhat different estimates* and different variance of differences.
* at least in some situations, though not necessarily in a straight comparison of proportions
24,311 | Using lm for 2-sample proportion test | In terms of calculation, compare the standard error of the treatmentB coefficient for lm vs. binomial glm. You have the formula for the standard error of the treatmentB coefficient in the binomial glm (the denominator of z_unpooled). The standard error of the treatmentB coefficient in the standard lm is (SE_lm):
# Fit the two-group comparison with lm() and recover the SE of the
# treatmentB coefficient by hand:
test <- lm(outcome ~ treatment, data = df)
treat_B <- as.numeric(df$treatment == "B")
SE_lm <- sqrt(sum(test$residuals^2) / (n_A + n_B - 2) /
              sum((treat_B - mean(treat_B))^2))
See this post for a derivation, the only difference being that here the error variance is estimated from the sample instead of $\sigma^2$ being known (i.e. subtract 2 from $n_A+n_B$ for lost degrees of freedom). Without that $-2$, the lm and binomial glm standard errors actually seem to match when $n_A = n_B$.
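The relationship can be checked in closed form. For 0/1 outcomes, the per-group residual sum of squares under lm is $n\hat p(1-\hat p)$. In the sketch below the counts are hypothetical numbers of mine, and `se_glm` is the unpooled Wald standard error (the denominator of z_unpooled in the question):

```python
import numpy as np

# Hypothetical counts (successes out of trials per treatment group).
n_a, x_a = 40, 16
n_b, x_b = 40, 28
p_a, p_b = x_a / n_a, x_b / n_b

# Binomial-variance (unpooled Wald / glm-style) standard error.
se_glm = np.sqrt(p_a * (1 - p_a) / n_a + p_b * (1 - p_b) / n_b)

# lm standard error: pooled residual variance times (1/n_a + 1/n_b).
ss_res = n_a * p_a * (1 - p_a) + n_b * p_b * (1 - p_b)  # RSS for 0/1 outcomes
s2 = ss_res / (n_a + n_b - 2)
se_lm = np.sqrt(s2 * (1 / n_a + 1 / n_b))

# With n_a == n_b, the two differ only by the lost-degrees-of-freedom factor.
ratio = se_lm / se_glm
print(se_glm, se_lm, ratio, np.sqrt((n_a + n_b) / (n_a + n_b - 2)))
```

With equal group sizes the ratio equals $\sqrt{(n_A+n_B)/(n_A+n_B-2)}$, which is exactly the $-2$ discrepancy described above.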
24,312 | How to find derivative of softmax function for the purpose of gradient descent? | As whuber points out, $\delta_{ij}$ is the Kronecker delta (https://en.wikipedia.org/wiki/Kronecker_delta):
$$
\begin{align}
\delta_{ij} = \begin{cases}
0\: \text{when } i \ne j \\
1 \: \text{when } i = j
\end{cases}
\end{align}
$$
... and remember that a softmax has multiple inputs, a vector of inputs; and also gives a vector output, where the length of the input and output vectors are identical.
Each of the values in the output vector will change if any of the input vector values change. So the output vector values are each a function of all the input vector values:
$$
y_{k'} = f_{k'}(a_1, a_2, a_3,\dots, a_K)
$$
where $k'$ is the index into the output vector, the vectors are of length $K$, and $f_{k'}$ is some function. So, the input vector is length $K$ and the output vector is length $K$, and both $k$ and $k'$ take values $\in \{1,2,3,...,K\}$.
When we differentiate $y_{k'}$, we differentiate partially with respect to each of the input vector values. So we will have:
$\frac{\partial y_{k'}}{\partial a_1}$
$\frac{\partial y_{k'}}{\partial a_2}$
etc ...
Rather than calculating individually for each $a_1$, $a_2$, etc., we'll just use $k$ to represent the indices 1, 2, 3, etc., i.e. we will calculate:
$$
\frac{\partial y_{k'}}{\partial a_k}
$$
...where:
$k \in \{1,2,3,\dots,K\}$ and
$k' \in \{1,2,3\dots K\}$
When we do this differentiation, eg see https://eli.thegreenplace.net/2016/the-softmax-function-and-its-derivative/ , the derivative will be:
$$
\frac{\partial y_{k'}}{\partial a_k} = \begin{cases}
y_k(1 - y_{k'}) &\text{when }k = k'\\
- y_k y_{k'} &\text{when }k \ne k'
\end{cases}
$$
We can then write this using the Kronecker delta, which is simply for notational convenience, to avoid having to write out the 'cases' statement each time.
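As a sanity check of the 'cases' expression — my own Python/NumPy illustration, not part of the original answer — the Jacobian assembled from the Kronecker-delta form can be compared against a central finite-difference approximation:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())        # subtract max for numerical stability
    return e / e.sum()

def softmax_jacobian(a):
    # J[k', k] = dy_{k'}/da_k = y_{k'} * (delta_{k k'} - y_k)
    y = softmax(a)
    return np.diag(y) - np.outer(y, y)

a = np.array([0.5, -1.0, 2.0])     # arbitrary activations
J = softmax_jacobian(a)

# central finite differences, one input coordinate at a time
eps = 1e-6
J_num = np.zeros((3, 3))
for k in range(3):
    d = np.zeros(3)
    d[k] = eps
    J_num[:, k] = (softmax(a + d) - softmax(a - d)) / (2 * eps)

print(np.allclose(J, J_num, atol=1e-8))  # True
```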
24,313 | How to find derivative of softmax function for the purpose of gradient descent? | The author's formula violates the Einstein convention, since repeated indices imply summation.
A better way to write the result is
$$
\frac{\partial y}{\partial a} = {\rm Diag}(y) - yy^T
$$
where the Diag function creates a diagonal matrix by putting the vector $y$ along the main diagonal and zeros elsewhere.
If you wish to use the summation convention, you'll need to define a third-order tensor $T_{ijk}$ whose elements are equal to 1 when $i=j=k$, and zero otherwise.
With this tensor you can write
$$
\frac{\partial y_i}{\partial a_j} = T_{ijk}\,y_k - y_i\,y_j
$$
where the repeated index $k$ is summed.
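A quick numerical check (my own sketch, with an arbitrary probability vector $y$) that the tensor contraction $T_{ijk}\,y_k - y_i\,y_j$ reproduces ${\rm Diag}(y) - yy^T$:

```python
import numpy as np

n = 4
# third-order tensor with T[i, j, k] = 1 exactly when i == j == k
T = np.zeros((n, n, n))
for i in range(n):
    T[i, i, i] = 1.0

y = np.array([0.1, 0.2, 0.3, 0.4])          # an arbitrary probability vector

# contract over the repeated index k, then subtract the outer product
J_tensor = np.einsum('ijk,k->ij', T, y) - np.outer(y, y)
J_matrix = np.diag(y) - np.outer(y, y)      # Diag(y) - y y^T

print(np.allclose(J_tensor, J_matrix))  # True
```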
24,314 | How to find derivative of softmax function for the purpose of gradient descent? | This is the partial derivative of the softmax function $y_{k'}$ with respect to its activation $a_k$. Someone on this site has already written an excellent answer that explains the full evaluation of this derivative, just with slightly different notation:
Derivative of Softmax with respect to weights
I'm confused by the delta kk' and i have never seen anything like it.
As others have mentioned, the Kronecker delta function $\delta_{kk'}=1$ when the indices match (i.e. $k=k'$) and is $0$ elsewhere. It's a handy way to "select" only part of the multivariate input when performing derivatives over multiply-connected paths like a neural network.
Think of it almost like an if-else statement for a particular link. Also, it's not the only way you can represent this mathematically - some texts use an indicator function like $I[k=k']$ to represent the same logic.
In our case, the partial derivative of the specific activation $y_{k'}$ (indexed by $k'$) is not just with respect to any activation, but to the specific activation $a_k$ indexed by $k$. That's why a lot of the terms in the partial derivative are set to $0$ when $k \ne k'$, using the Kronecker delta.
Another question is do we consider the summation while taking the
derivative, why or why not?
If I understand your question correctly - we do consider the summation in the denominator, $\sum_{k'=1}^K e^{a_{k'}}$. The derivation I linked to above shows how that is done.
https://math.stackexchange.com/questions/945871/derivative-of-softmax-loss-function
is a bit relevant, but the result of differentiation is different.
It is indeed related, but focusses on a different partial derivative. If you had a Loss function $L$ that is a function of your softmax output $y_{k'}$, then you could go one step further and evaluate this using the chain rule
$$\frac{\partial L}{\partial a_k} = \frac{\partial L}{\partial y_{k'}} \frac{\partial y_{k'}}{\partial a_k}$$
The last term in the above, $\frac{\partial y_{k'}}{\partial a_k}$, is what you are focussing on in your question, while the question you linked to is trying to evaluate $\frac{\partial L}{\partial a_k}$. So they are related but focussed on different terms.
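To make the chain rule concrete, here is a sketch of my own that assumes the common cross-entropy loss $L = -\sum_{k'} t_{k'} \log y_{k'}$ with a one-hot target $t$ (an assumption for illustration, not part of the original answers). Summing $\frac{\partial L}{\partial y_{k'}} \frac{\partial y_{k'}}{\partial a_k}$ over $k'$ collapses to the familiar $y - t$:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

a = np.array([1.0, 0.0, -2.0])      # arbitrary activations
t = np.array([0.0, 1.0, 0.0])       # assumed one-hot target
y = softmax(a)

# Jacobian dy_{k'}/da_k and upstream gradient dL/dy for L = -sum t*log(y)
J = np.diag(y) - np.outer(y, y)     # J[k', k] = dy_{k'} / da_k
dL_dy = -t / y

# chain rule: dL/da_k = sum_{k'} (dL/dy_{k'}) (dy_{k'}/da_k)
dL_da = J.T @ dL_dy

print(np.allclose(dL_da, y - t))  # True: the well-known softmax + cross-entropy gradient
```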
24,315 | How to find derivative of softmax function for the purpose of gradient descent? | First thing to remember is that when you differentiate an ($n,1$) vector $y$ with respect to an ($n,1$) vector $a$, you get an ($n,n$) matrix, whose first column is the differentiation w.r.t. $a_1$, the 2nd w.r.t. $a_2$ ... etc., all the way to $a_n$.
\begin{pmatrix}
\frac{\partial y_1}{\partial a_1} & \frac{\partial y_1}{\partial a_2} & \frac{\partial y_1}{\partial a_3} & \cdots & \frac{\partial y_1}{\partial a_n} \\
\frac{\partial y_2}{\partial a_1} & \frac{\partial y_2}{\partial a_2} &\frac{\partial y_2}{\partial a_3} & \cdots & \frac{\partial y_2}{\partial a_n} \\
\vdots & \vdots& \vdots & \ddots & \vdots \\
\frac{\partial y_n}{\partial a_1} & \frac{\partial y_n}{\partial a_2} & \frac{\partial y_n}{\partial a_3} & \cdots & \frac{\partial y_n}{\partial a_n}
\end{pmatrix}
Now, since $y_i = \frac{e^{a_i}}{\sum_je^{a_j}}$, let's see the general form of its derivatives:
$$\frac{\partial y_i}{\partial a_i} = \frac{e^{a_i} \sum_je^{a_j} - e^{a_i}e^{a_i}}{(\sum_je^{a_j})^2} = \frac{e^{a_i}}{\sum_je^{a_j}} \frac{\sum_je^{a_j} - e^{a_i}}{\sum_je^{a_j}} = y_i(1-y_i) = y_i - y_i y_i\\
\frac{\partial y_i}{\partial a_k} = \frac{- e^{a_i}e^{a_k}}{(\sum_je^{a_j})^2} = -\frac{e^{a_i}}{\sum_je^{a_j}}\frac{e^{a_k}}{\sum_je^{a_j}} = -y_i y_k
$$
So, you can separate the matrix into 2 matrices:
$$ \left (
\begin{matrix}
y_1 & 0 & 0 & \cdots & 0 \\
0 & y_2 & 0 & \cdots & 0 \\
\vdots & \vdots& \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & y_n
\end{matrix} \right) -
\left (
\begin{matrix}
y_1 y_1 & y_1 y_2 & y_1 y_3 & \cdots & y_1 y_n \\
y_2 y_1 & y_2 y_2 & y_2 y_3 & \cdots & y_2 y_n \\
\vdots & \vdots& \vdots & \ddots & \vdots \\
y_n y_1 & y_n y_2 & y_n y_3 & \cdots & y_n y_n
\end{matrix} \right)
$$
You can see that the first matrix corresponds to the first term in your equation, $y_k \delta_{k k'}$, and the second matrix corresponds to the 2nd term, $y_k y_{k'}$.
You can check this blog post for more information including graphs and code.
24,316 | Use Cases For Coefficient of Variation vs Index of Dispersion | Note that the coefficient of variation (CV) is always dimensionless
and is scale invariant. On the other hand, the index of dispersion
(ID) is not scale invariant and is dimensionless only when it applies
to a dimensionless variable such as a count, as is the case in
practice. Both CV and ID are for non-negative variables, but they are
used in different contexts.
The sample and theoretical CVs provide nice indications about
continuous distributions and samples. The exponential distribution has
unit CV and can be seen as a reference within some families of
distributions. The gamma, the Weibull and the Generalised Pareto (GP)
families embed distributions with arbitrary CVs, and there is a
one-to-one relation between their shape parameter and the CV. In the
three families, $\text{CV}<1$ indicates a tail which is thinner than
exponential, while $\text{CV}>1$ is for a tail thicker than
exponential, and even a heavy tail in the case of the GP.
The sample and theoretical IDs are most often used for discrete variables
with non-negative integer values such as counts. The reference
distribution with $\text{ID} = 1$ is now the Poisson distribution,
notably in the family made of the three distributions: Binomial,
Poisson and Negative Binomial. The binomial is underdispersed
($\text{ID} < 1$) and the Negative Binomial is overdispersed
($\text{ID} > 1$). The ID is often used in the theory of point
processes where the Poisson distribution plays a major role.
An interesting relationship between the two notions is provided by the
renewal process: given a sequence of i.i.d. positive r.v.s $X_i$
usually representing lifetimes, the interest is on the sum $S_n := X_1
+ X_2 + \dots +X_n$ for large $n$, and on the number $N_t$ of renewals
$S_n$ falling in the interval $(0,\,t)$. When the $X_i$ are
exponential, $N_t$ is Poisson. Under quite general assumptions the ID
of $N_t$ tends for large $t$ to the square of the CV of $X$ so $N_t$
is overdispersed when the CV of $X$ is $> 1$.
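The renewal-process claim can be illustrated by simulation. The sketch below is my own (in Python/NumPy, with arbitrarily chosen parameters): it uses Gamma-distributed lifetimes with shape $4$ and mean $1$, so $\text{CV}^2(X) = 1/4$, and estimates the ID of $N_t$ at $t = 1000$ over many replications.

```python
import numpy as np

rng = np.random.default_rng(0)
shape, t, reps = 4.0, 1000.0, 2000
cv2 = 1.0 / shape                 # CV^2 of a Gamma(shape) lifetime with mean 1

counts = np.empty(reps)
for r in range(reps):
    # i.i.d. lifetimes with mean 1; 1500 of them are plenty to pass t = 1000
    arrivals = rng.gamma(shape, 1.0 / shape, size=1500).cumsum()
    counts[r] = np.searchsorted(arrivals, t)   # N_t = renewals in (0, t)

id_Nt = counts.var() / counts.mean()
print(id_Nt)  # close to cv2 = 0.25, i.e. N_t is underdispersed
```

With exponential lifetimes (shape $1$) the same simulation gives an ID near $1$, the Poisson reference case.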
24,317 | Covariance between two random matrices | The most common thing to do is probably to simply consider the covariance between the entries of the matrices. Defining $\DeclareMathOperator{\vec}{\mathrm{vec}}\vec(A)$ to be the vectorization of a matrix $A$ (that is, stack up the columns into a single column vector), you can look at $\DeclareMathOperator{\Cov}{\mathrm{Cov}}\Cov(\vec(X), \vec(Y))$. This is then an $mn \times mn$ matrix.
If you preferred, you could instead define an $m \times n \times m \times n$ tensor, which would be essentially the same thing, just reshaped.
In e.g. the matrix normal distribution, we assume that the covariance matrix of the single random matrix $X$ factors as the Kronecker product of an $m \times m$ row covariance $U$ and an $n \times n$ column covariance $V$, in which case you can often just work with $U$ or $V$.
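A small sketch (my own illustration in Python/NumPy, with simulated data) of the first construction: flatten each $m \times n$ sample column-major (the $\vec$ operator) and estimate the resulting $mn \times mn$ cross-covariance matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n, N = 2, 3, 1000

X = rng.normal(size=(N, m, n))
Y = X + 0.5 * rng.normal(size=(N, m, n))   # make Y correlated with X

# vec() stacks columns: flatten each sample matrix in column-major order
vecX = X.transpose(0, 2, 1).reshape(N, m * n)
vecY = Y.transpose(0, 2, 1).reshape(N, m * n)

# sample cross-covariance Cov(vec(X), vec(Y)), one row/column per matrix entry
dX = vecX - vecX.mean(axis=0)
dY = vecY - vecY.mean(axis=0)
C = dX.T @ dY / (N - 1)

print(C.shape)  # (6, 6)
```

Note that a cross-covariance matrix need not be symmetric; symmetry only holds for the covariance of a vector with itself.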
24,318 | Are the terms probability density function and probability distribution (or just "distribution") interchangeable? | The phrase probability density function (pdf) means a specific thing: a function $f_X(\cdot)$ for a specific random variable $X$ (that's what
that subscript there is for, to distinguish this function from
the pdfs of other random variables) with the property that for all
real numbers $a$ and $b$ such that $a < b$,
$$P\{a < X \leq b\} = \int_a^b f_X(u)\,\mathrm du
= \int_a^b f_X(v)\,\mathrm dv = \int_a^b f_X(t)\,\mathrm dt.$$
The different integrals are intended to serve as a reminder that
it does not matter in the least what symbol we use as the argument
of $f_X(\cdot)$ and that it is not the case (as is regrettably
far too often believed by those starting on this subject) that
the argument must be the lower-case letter corresponding to
the upper-case letter that denotes the random variable. We also
insist that
$$\int_{-\infty}^\infty f_X(u)\,\mathrm du = 1.$$
If $P\{X = \alpha\} > 0$ for some real number $\alpha$, then
$X$ does not have a pdf except for those who incorporate
Dirac deltas into their probability calculus.
The cumulative probability distribution function (cdf or CDF)
$F_X(\cdot)$ of $X$ is the function defined as
$$F_X(\alpha) = P\{X \leq \alpha\}, -\infty < \alpha < \infty.$$
It is related to the pdf (for random variables that do have a pdf) through
$$F_X(\alpha) = \int_{-\infty}^\alpha f_X(u)\,\mathrm du.$$
=======
While there might be a very restrictive definition of
the phrase probability distribution that some people insist
on, the colloquial use of the term broadly encompasses the
pdf and the CDF and the pmf (probability mass function which
is also called the ddf or discrete density function) and whatever
else we might want to include as descriptive of the probabilistic
behavior of a random variable. For example, the phrase
the probability distribution of $X$ is uniform on
$(a,b)$
will hardly ever be interpreted as meaning that the CDF of
$X$ has constant value on $(a,b)~$!! Although it is the
distribution which is alleged to be uniform, everyone
in his/her right mind will take that as meaning that the
density of $X$ has constant value $(b-a)^{-1}$ on the
interval $(a,b)$ (and has value $0$ elsewhere). Similarly,
for "$X$ is uniformly distributed on $(a,b)$" when what
is meant is that the pdf of $X$ has constant value
on $(a,b)$.
As another instance of colloquial usage of distribution to
mean density, consider this quote from a recently
posted answer
by Moderator Glen_b.
"Saying the mode implies that the distribution has one and only one."
A density might possess a unique mode but a CDF cannot have a unique
mode (in the unextended reals). However, no one reading that quote
is likely to think that Glen_b meant the CDF when he wrote "distribution".
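As a numerical illustration of the defining property and its relation to the CDF — my own sketch, taking $f_X$ to be the standard normal density (an assumed example) — the integral of the pdf over $(a, b]$ matches $F_X(b) - F_X(a)$:

```python
import math
import numpy as np

# assumed example: f_X is the standard normal density
f = lambda u: np.exp(-u ** 2 / 2) / math.sqrt(2 * math.pi)

a, b = -1.0, 2.0
u = np.linspace(a, b, 100001)
w = f(u)
# trapezoid rule for the integral of f_X from a to b
prob = float(np.sum((w[:-1] + w[1:]) / 2) * (u[1] - u[0]))

# F_X(b) - F_X(a) in closed form via the error function
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))
print(round(prob, 6) == round(Phi(b) - Phi(a), 6))  # True
```

The dummy variable in the integral is of course irrelevant, exactly as the answer emphasizes.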
24,319 | Are the terms probability density function and probability distribution (or just "distribution") interchangeable? | In terms of common usage, consider parsing the terminology used in R. The Description on the Distributions {stats} help page says:
Density, cumulative distribution function, quantile function and random variate generation for many standard probability distributions are available in the stats package.
For each of the built-in Distributions, it provides (according to the individual help pages) the "density" (e.g. dnorm for Normal, dbinom for Binomial) and the "distribution function" (e.g., pnorm, pbinom; called the "cumulative distribution function" on the main Distributions page, as quoted above).
So one might interpret that "probability distribution" describes (perhaps a member of) a family of distributions, "density" can be used for discrete distributions like the binomial, and the phrase "distribution function" might be preferred over "distribution" when the cumulative distribution function is what is intended.
Alternatively, one might argue that common usage even among the experienced often depends on context for clarity.
24,320 | Are the terms probability density function and probability distribution (or just "distribution") interchangeable? | No.
"probability density function" is used only for continuous distributions. A discrete distribution can't have a pdf (though it can be approximated with a pdf). "probability distribution" is often used for discrete distributions, e.g., the binomial distribution.
"probability distribution" has a meaning for both discrete and continuous distributions, but a probability distribution is directly applicable only for discrete distributions. When the word is used with continuous distributions, it refers to an underlying mathematical construct such as the normal distribution, which must for most purposes be instantiated in a function, typically a probability density function or a cumulative density function, before it can be applied. | Are the terms probability density function and probability distribution (or just "distribution") int | No.
24,321 | Why are activation functions needed in neural networks? [duplicate] | In general, non-linearities are used to flexibly squash the input through a function and pave the way for recovering a higher-level abstraction structure, but allow me to become more specific.
One of the most informative illustrations I have found about ReLU activations is the following:
The picture is from the work of A. Punjani and P. Abbeel, and depicts a simple neural network with a ReLU activation unit. Now imagine you have a 2D input space; as can be seen, the ReLU unit $\phi$ actually partitions the input space and regresses towards your desired outcome, where the space partitioning is a non-linear operation. That's the reason it is so powerful: it allows combinations of different input transformations, which are learnt from the dataset itself.
More specifically, the author's description is:
A pictorial representation of the flexibility of the ReLU Network Model. Each hidden unit can be thought of as defining a hyperplane (line) in the 2D input space pictured. The data points (grey) each fall somewhere in input space and each has a value we wish to regress (not pictured). Consider the hidden unit $i$, drawn in purple. The purple arrow points in the direction of weight vector $W_i$, and has length according to $B_i$. Points on one side of this line do not activate unit $i$, while points on the other side (shaded) cause positive output. This effectively partitions the input space, and the partitions generated by considering many hidden units (blue) together split the space into regions. These regions give the model flexibility to capture structure in the input data. In the ReLU Network Model, the partitions are learned from data.
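The space-partitioning idea can be sketched in a few lines of Python (the hyperplanes below are made up for illustration, not taken from the paper): each hidden unit's on/off pattern assigns every input point to a region, and the network output is linear within each region.

```python
def relu(z):
    return max(0.0, z)

# three hypothetical hidden units, each a hyperplane w.x + b in 2D input space
units = [((1.0, 0.0), -0.5), ((0.0, 1.0), -0.5), ((1.0, 1.0), -1.2)]

def activation_pattern(x, y):
    # which units fire: each distinct pattern is one linear region of the model
    return tuple(int(w0 * x + w1 * y + b > 0) for (w0, w1), b in units)

def output(x, y, c=(1.0, 1.0, 1.0)):
    # the network's output: a piecewise-linear function of (x, y)
    return sum(ci * relu(w0 * x + w1 * y + b) for ci, ((w0, w1), b) in zip(c, units))
```

With learned weights, those region boundaries move to wherever the data demands them.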
24,322 | What is the "variance component parameter" in mixed effect model? | The variance-component parameter vector $\theta$ is estimated iteratively to minimise the model deviance $\widetilde{d}$ according to eq. 1.10 (p. 14).
The relative covariance factor, $\Lambda_\theta$, is a $q \times q$ matrix (dimensions are explained in the excerpt you posted). For a model with a simple scalar random-effects term, (p. 15, Fig. 1.3) it is calculated as a multiple of $\theta$ and identity matrix of dimensions $q \times q$:
$$\Lambda_\theta = \theta \times {I_q}$$
This is the general way to calculate $\Lambda_\theta$, and it is modified according to the number of random-effects and their covariance structure. For a model with two uncorrelated random-effects terms in a crossed design, as on pp. 32-34, it is block diagonal with two blocks each of which is a multiple of $\theta$ and identity (p. 34, Fig. 2.4):
Same with two nested random-effects terms (p. 43, Fig. 2.10, not shown here).
For a longitudinal (repeated-measures) model with a random intercept and a random slope which are allowed to correlate $\Lambda_\theta$ consists of triangular blocks representing both random-effects and their correlation (p. 62, Fig. 3.2):
Modelling the same dataset with two uncorrelated random-effects terms (p. 65, Fig. 3.3) returns $\Lambda_\theta$ of the same structure as shown previously, in Fig. 2.4:
Additional notes:
$\theta_i = \frac{\sigma_i}{\sigma}$
Where $\sigma_i$ refers to the square root of the i-th random-effect variance, and $\sigma$ refers to the square root of the residual variance (compare with pp. 32-34).
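As a sketch of that relationship (the standard deviations below are invented, not taken from the book's fits): $\theta$ is the ratio of the random-effect SD to the residual SD, and for a simple scalar term $\Lambda_\theta$ is just $\theta$ times the identity.

```python
# hypothetical standard deviations (not from the book's examples)
sd_random = 30.0   # sigma_i, SD of the random effect
sd_resid = 50.0    # sigma, residual SD

theta = sd_random / sd_resid   # theta_i = sigma_i / sigma

q = 6  # number of random-effect levels
# Lambda_theta = theta * I_q for a simple scalar random-effects term
Lambda = [[theta if i == j else 0.0 for j in range(q)] for i in range(q)]
```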
The book version from June 25, 2010 refers to a version of lme4 which has since been modified. One of the consequences is that in the current version (1.1-10) the random-effects model object class merMod has a different structure, and $\Lambda_\theta$ is accessed in a different way, using the method getME:
image(getME(fm01ML, "Lambda"))
24,323 | What is the "variance component parameter" in mixed effect model? | It's hierarchical reasoning. There are a bunch of parameters in your linear model, the components of b. In a pure fixed effects model you would just get estimates of these and that would be that. Instead, you imagine that the values in b themselves are drawn from a multivariate normal distribution with a covariance matrix that is parameterized by theta. Here is a simple example. Suppose we look at animal counts at five different time periods at 10 different locations. We would get a linear model (I'm using R talk here) that would look like count ~ time + factor(location), so that you would have (in this case) a common slope for all of the regressions (one at each location) but a different intercept at each location. We could just punt and call it a fixed effect model and estimate all of the intercepts. However, we might not care about the particular locations if they were 10 locations selected from a large number of possible locations. So we put a covariance model on the intercepts. For instance, we declare the intercepts to be multivariate normal and independent with common variance sigma2. Then sigma2 is the "theta" parameter, because it characterizes the population of intercepts at each location (which are thus random effects).
24,324 | Overall p-value and pairwise p-values? | That is, since I am already "0.007 confident" that $\beta_1=\beta_3$ does not hold, I should be "more confident" that $\beta_1=\beta_2=\beta_3$ does not hold. So my p should go down
Short answer: Your likelihood should go down. But here, the p-values do not measure the likelihood, but whether the release of some constraints provides a significant improvement in the likelihood. That's why it's not necessarily easier to reject $\beta_1=\beta_2=\beta_3$ than to reject $\beta_1=\beta_3$, because you need to show much better likelihood improvements in the most constrained model to prove that the release of 2 degrees of freedom to reach the full model was "worth it".
Elaboration:
Let's draw a graph of likelihood improvements.
The only constraint to avoid a contradiction is that the likelihood improvements must be equal with the sum of likelihood improvement from the indirect path. That's how I found the p-value from the step 1 of the indirect path : $$\frac{L_3}{L_1}=\frac{L_3}{L_2}\times\frac{L_2}{L_1}$$ By likelihood improvements, I mean the log likelihood ratio represented by the $\Delta$Chi-squared, that's why they are summed in the graph.
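A numerical sketch of this additivity (the log-likelihood values below are invented for illustration): deviances add along the indirect path, yet the 1-df step can still have a smaller p-value than the 2-df direct test.

```python
import math

def chi2_sf(x, df):
    # chi-square survival function, enough for df in {1, 2}
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if df == 2:
        return math.exp(-x / 2.0)
    raise ValueError("this sketch only handles df = 1 or 2")

# invented log-likelihoods for the three nested models
ll_full      = -100.0   # beta1, beta2, beta3 all free
ll_b1_eq_b3  = -103.6   # only beta1 = beta3 imposed
ll_all_equal = -103.9   # beta1 = beta2 = beta3

d_direct = 2 * (ll_full - ll_all_equal)       # 2-df test
d_step1  = 2 * (ll_b1_eq_b3 - ll_all_equal)   # 1-df: release beta2
d_step2  = 2 * (ll_full - ll_b1_eq_b3)        # 1-df: release beta1 = beta3

# deviances add along the indirect path: d_direct = d_step1 + d_step2
p_direct = chi2_sf(d_direct, 2)
p_step2  = chi2_sf(d_step2, 1)
# p_step2 < p_direct here: the 1-df test rejects more strongly than the 2-df one
```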
With this schema, one can discard the apparent contradiction because much of the likelihood improvement of the direct path comes from the release of only one degree of freedom ($\beta_1=\beta_3$).
I would suggest two factors that can contribute to this pattern.
$\beta_2$ has a large confidence interval in the full model
$\beta_2$ is around the mean of $\beta_3$ and $\beta_1$ in the full model
Under these conditions, there is not a big likelihood improvement by releasing one degree of freedom from the $\beta_3=\beta_1=\beta_2$ model to the $\beta_3=\beta_1$ model, because in the latter model the estimation of $\beta_2$ can be close to the two other coefficients.
From this analysis and the two other p-values you gave, one could suggest that maybe $\frac{\beta_3+\beta_1}{2}=\beta_2$ can provide a good fit.
24,325 | Understanding fractional-differencing formula | Yes it seems to be correct. The fractional filter is defined by the binomial expansion:
$\Delta^{d}=\left(1-L\right)^{d}=1-dL+\frac{d\left(d-1\right)}{2!}L^{2}-\frac{d\left(d-1\right)\left(d-2\right)}{3!}L^{3}+\cdots$
Note that $L$ is the lag operator and that this filter cannot be simplified when $0<d<1$. Now consider the process:
$\Delta^{d}X_{t}=\left(1-L\right)^{d}X_{t}=\varepsilon_{t}$
Expanding, we get:
$\Delta^{d}X_{t}=\left(1-L\right)^{d}X_{t}=X_{t}-dLX_{t}+\frac{d\left(d-1\right)}{2!}L^{2}X_{t}-\frac{d\left(d-1\right)\left(d-2\right)}{3!}L^{3}X_{t}+\cdots=\varepsilon_{t}$
which can be written as:
$X_{t}=dX_{t-1}-\frac{d\left(d-1\right)}{2!}X_{t-2}+\frac{d\left(d-1\right)\left(d-2\right)}{3!}X_{t-3}-\cdots+\varepsilon_{t}$
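The binomial-expansion weights above can be generated with a simple recursion (a minimal Python sketch; $\pi_0=1$, $\pi_k=\pi_{k-1}\,(k-1-d)/k$ gives the coefficients of $(1-L)^d$):

```python
def fracdiff_weights(d, n):
    # coefficients pi_k of (1 - L)^d = sum_k pi_k L^k
    # pi_0 = 1, pi_k = pi_{k-1} * (k - 1 - d) / k
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (k - 1 - d) / k)
    return w

w = fracdiff_weights(0.4, 5)
# w[1] = -d, w[2] = d(d-1)/2!, w[3] = -d(d-1)(d-2)/3!, matching the expansion
```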
See Asset Price Dynamics, Volatility and Prediction by Stephen J. Taylor (p. 243 in the 2007 ed.) or Time Series: Theory and Methods by Brockwell and Davis for further references.
24,326 | When do coefficients estimated by logistic and logit-linear regression differ? | Perhaps this can be answered in the "reverse" fashion, i.e. when are they the same?
Now the IRLS algorithm used in logistic regression provides some insight here. At convergence you can express the model coefficients as:
$$\hat {\beta}_{logistic}=\left (X^TWX\right)^{-1} X^TWz$$
where $W$ is a diagonal weight matrix with ith term $W_{ii}=n_ip_i(1-p_i)$ and $z$ is a pseudo response that has ith element $z_i=x_i^T\hat{\beta}_{logistic}+\frac{y_i-n_ip_i}{n_ip_i(1-p_i)}$. Note that $\operatorname{var}(z_i-x_i^T\hat{\beta})=W_{ii}^{-1}$, which makes logistic regression seem very similar to weighted least squares on a "logit type" of quantity. Note that all the relationships are implicit in logistic regression (e.g. $z$ depends on $\beta$, which depends on $z$).
So I would suggest that the difference is mostly in using weighted least squares (logistic) vs unweighted least squares (OLS on logits). If you weighted the logits $\log(y)-\log(n-y)$ by $y(1-y/n)$ (where $y$ is the number of "events" and $n$ the number of "trials") in the lm() call you would get more similar results.
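The suggested weighting can be sketched directly (a minimal Python illustration with invented grouped binomial data; the answer itself talks about R's lm()): least squares of the empirical logits on $x$, optionally weighted by $y(1-y/n)$.

```python
import math

# invented grouped binomial data: (x, number of events y, number of trials n)
data = [(0.0, 10, 40), (1.0, 20, 40), (2.0, 32, 40)]

def logit_regression(data, weighted=True):
    # least squares of the empirical logits log(y) - log(n - y) on x,
    # optionally weighted by y * (1 - y/n) as suggested above
    pts = []
    for x, y, n in data:
        lg = math.log(y) - math.log(n - y)
        w = y * (1 - y / n) if weighted else 1.0
        pts.append((x, lg, w))
    sw = sum(w for _, _, w in pts)
    xbar = sum(w * x for x, _, w in pts) / sw
    lbar = sum(w * lg for _, lg, w in pts) / sw
    slope = (sum(w * (x - xbar) * (lg - lbar) for x, lg, w in pts)
             / sum(w * (x - xbar) ** 2 for x, _, w in pts))
    return lbar - slope * xbar, slope  # intercept, slope
```

The weighted fit mimics one IRLS-style step; the unweighted fit is the plain "OLS on logits" approach, and the two slopes differ whenever the weights are unequal.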
24,327 | When do coefficients estimated by logistic and logit-linear regression differ? | Please don't hesitate to point it out if I am wrong.
First, I have to say, in the second fit, you call glm in a wrong way! To fit a logistic regression by glm, the response should be a (binary) categorical variable, but you use p, a numeric variable! I have to say the warning is just too gentle to let users know their mistake...
And, as you might expect, you get similar estimates of coefficients by the two fits just by COINCIDENCE. If you replace logit.p <- a + b*x + rnorm(1000, 0, 0.2) with logit.p <- a + b*x + rnorm(1000, 0, 0.7), i.e., changing the standard deviation of the error term from 0.2 to 0.7, then the results of the two fits will be greatly different, although the second fit (glm) is meaningless anyway...
Logistic regression is used for (binary) classification, so you should have a categorical response, as stated above. For example, the observations of the response should be a series of "success" or "failure", rather than a series of "probability (frequency)" as in your data. For a given categorical data set, you can calculate only one overall frequency for "response=success" or "response=failure", rather than a series. In the data you generate, there is no categorical variable at all, so it is impossible to apply logistic regression. Now you can see, although they have a similar appearance, logit-linear regression (as you call it) is just an ordinary linear REGRESSION problem (i.e., the response is a numeric variable) using a transformed response (just like a square or square-root transformation), and logistic regression is a CLASSIFICATION problem (i.e., the response is a categorical variable; don't get confused by the word "regression" in "logistic regression").
Typically, linear regression is fitted through Ordinary Least Squares (OLS), which minimizes the square loss for the regression problem; logistic regression is fitted through Maximum Likelihood Estimation (MLE), which minimizes log-loss for the classification problem. Here is a reference on loss functions: Loss Function, Deva Ramanan.
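The two loss functions can be written down directly (a minimal sketch, per observation):

```python
import math

def square_loss(y, yhat):
    # what OLS minimizes (summed over observations)
    return (y - yhat) ** 2

def log_loss(y, p):
    # negative Bernoulli log-likelihood; what logistic-regression MLE minimizes
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))
```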
In the first example, you regard p as the response, and fit an ordinary linear regression model through OLS; in the second example, you tell R that you are fitting a logistic regression model by family=binomial, so R fits the model by MLE. As you can see, in the first model, you get t-tests and an F-test, which are classical outputs of an OLS fit for linear regression. In the second model, the significance test of a coefficient is based on z instead of t, which is the classical output of an MLE fit of logistic regression.
First, I have so say, in the second fit, you call glm in a wrong way! To fit a logistic regression by glm, the response should be (binary) categori | When do coefficients estimated by logistic and logit-linear regression differ?
Please don't hesitate to point it out if I am wrong.
First, I have so say, in the second fit, you call glm in a wrong way! To fit a logistic regression by glm, the response should be (binary) categorical variable, but you use p, a numeric variable! I have to say warning is just too gentle to let users know their mistakes...
And, as you might expect, you get similar estimates of coefficients by the two fits just by COINCIDENCE. If you replace logit.p <- a + b*x + rnorm(1000, 0, 0.2) with logit.p <- a + b*x + rnorm(1000, 0, 0.7), ie, changing the variance of the error term from 0.2 to 0.7, then the results of the two fits will be greatly different, although the second fit (glm) is meaningless at all...
Logistic regression is used for (binary) classification, so you should have categorical response, as is stated above. For example, the observations of the response should be a series of "success" or "failure", rather than a series of "probability (frequency)" as in your data. For a given categorical data set, you can calculate only one overall frequency for "response=success" or "response=failure", rather than a series. In the data you generate, there is no categorical variable at all, so it is impossible to apply logistic regression. Now you can see, although they have similar appearance, logit-linear regression (as you call it) is just an ordinary linear REGRESSION problem (ie, response is a numeric variable) using transformed response (just like sqr or sqrt transformation), and logistic regression is a CLASSIFICATION problem (ie, response is a categorical variable; don't get confused by the word "regression" in "logistic regression").
Typically, linear regression is fitted through Ordinary Least Squares (OLS), which minimizes the square loss for regression problem; logistic regression is fitted through Maximum Likelihood Estimate (MLE), which minimizes log-loss for classification problem. Here is a reference on loss functions Loss Function, Deva Ramanan.
In the first example, you regard p as the response, and fit a ordinary linear regression model through OLS; in the second example, you tell R that you are fitting a logistic regression model by family=binomial, so R fit the model by MLE. As you can see, in the first model, you get t-test and F-test, which are classical outputs of OLS fit for linear regression. In the second model, the significance test of coefficient is based on z instead of t, which is the classical output of MLE fit of logistic regression. | When do coefficients estimated by logistic and logit-linear regression differ?
Please don't hesitate to point it out if I am wrong.
First, I have so say, in the second fit, you call glm in a wrong way! To fit a logistic regression by glm, the response should be (binary) categori |
24,328 | Regression with different frequency | Three possibilities follow. Depending on the situation, any one could be suitable.
Time aggregation or dis-aggregation.
This is perhaps the simplest approach in which you convert the high-frequency data (monthly) into annual data by, say, taking sums, averages, or end of period values. The low frequency (annual) data could, of course, be converted into monthly data by using some interpolation technique; for example, using the Chow-Lin procedure. It might be useful to refer to the tempdisagg package for this: http://cran.r-project.org/web/packages/tempdisagg/index.html.
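The aggregation direction is trivial to sketch (a minimal Python illustration with an invented monthly series; R's tempdisagg handles the harder dis-aggregation direction):

```python
# invented monthly series for two years (24 values)
monthly = [float(i) for i in range(1, 25)]

def annualize(series, how="sum"):
    # collapse each block of 12 monthly values into one annual value
    out = []
    for start in range(0, len(series), 12):
        year = series[start:start + 12]
        if how == "sum":
            out.append(sum(year))
        elif how == "mean":
            out.append(sum(year) / len(year))
        elif how == "last":          # end-of-period value
            out.append(year[-1])
    return out
```

Which of sum, mean, or end-of-period is appropriate depends on whether the series is a flow or a stock.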
Mi(xed) da(ta) s(ampling) (MIDAS).
Midas regressions, popularized by Eric Ghysels, are a second option. There are two main ideas here. The first is frequency alignment. The second is to tackle the curse of dimensionality by specifying an appropriate polynomial. The unrestricted MIDAS model is the simplest from within the class of models and can be estimated by ordinary least squares. Further details and how to implement these models in R using the midasr package can be found here: http://mpiktas.github.io/midasr/. For MATLAB, refer to Ghysels' page: http://www.unc.edu/~eghysels/.
Kalman filter methods.
This is a state-space modelling approach, which involves treating the low-frequency data as containing NAs and filling them in using a Kalman filter. This is my personal preference, but it does have the difficulty of specifying the correct state-space model.
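A minimal sketch of that idea (a local-level model with invented noise variances; a real application needs a properly specified state-space model): missing low-frequency observations simply skip the update step of the filter.

```python
def kalman_level(ys, q=1.0, r=1.0):
    # local-level Kalman filter; None entries are treated as missing (no update)
    m, p = 0.0, 1e6          # diffuse-ish initial state mean and variance
    out = []
    for y in ys:
        p = p + q            # predict step: state variance grows
        if y is not None:
            k = p / (p + r)  # Kalman gain
            m = m + k * (y - m)
            p = (1 - k) * p
        out.append(m)
    return out
```

When an observation is missing, the filtered mean is just carried forward with increased uncertainty, which is exactly how the NA-filling works.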
For a more in-depth look at the pros and cons of these methods, refer to State Space Models and MIDAS Regressions by Jennie Bai, Eric Ghysels and Jonathan H. Wright (2013).
Time aggregation or dis-aggregation.
This is perhaps the simplest approach in which you convert the high-frequency | Regression with different frequency
Three possibilities follow. Depending on the situation, any one could be suitable.
Time aggregation or dis-aggregation.
This is perhaps the simplest approach in which you convert the high-frequency data (monthly) into annual data by, say, taking sums, averages, or end of period values. The low frequency (annual) data could, of course, be converted into monthly data by using some interpolation technique; for example, using the Chow-Lin procedure. It might be useful to refer to the tempdisagg package for this: http://cran.r-project.org/web/packages/tempdisagg/index.html.
Mi(xed) da(ta) s(ampling) (MIDAS).
Midas regressions, popularized by Eric Ghysels, are a second option. There are two main ideas here. The first is frequency alignment. The second is to tackle the curse of dimensionality by specifying an appropriate polynomial. The unrestricted MIDAS model is the simplest from within the class of models and can be estimated by ordinary least squares. Further details and how to implement these models in R using the midasr package can be found here: http://mpiktas.github.io/midasr/. For MATLAB, refer to Ghysels' page: http://www.unc.edu/~eghysels/.
Kalman filter methods.
This is a state-space modelling approach, which involves treating the low-frequency data as containing NAs and filling them in using a Kalman filter. This is my personal preference, but it does have the difficulty of specifying the correct state-space model.
For a more in-depth look at the pros and cons of these methods, refer to State Space Models and MIDAS Regressions by Jennie Bai, Eric Ghysels and Jonathan H. Wright (2013). | Regression with different frequency
Three possibilities follow. Depending on the situation, any one could be suitable.
Time aggregation or dis-aggregation.
This is perhaps the simplest approach in which you convert the high-frequency |
24,329 | Male and Female Chess Players - Expected Discrepancies at Tails of Distributions | I think you are misreading the paper; they do not claim what you say. Their claims are not based on the number of top players, but on their ratings. If the statistical distribution of strength is the same among men and women, and women make up 6% of the total population, then the expected number of women among the top 100 is 6. Some citations from the paper:
A popular explanation for the small number of women at the top level
of intellectually demanding activities from chess to science appeals
to biological differences in the intellectual abilities of men and
women. An alternative explanation is that the extreme values in a
large sample are likely to be greater than those in a small one.
That is indeed true. You would expect the rating of the best man to be above the rating of the best woman. The paper goes on to try to compute by how much, a result which will depend very heavily on the assumed distribution.
In section 3 (results), they go on to pair the best man with the best woman, the next best with the next best, and so on, for the first 100 such pairs. Then they calculate the rating difference and compare it to the expected rating difference given the fact that there are many more male than female players. All of this seems correct, and is very different from how you present it. It might well be that their analysis is not very robust, and that a more thorough analysis could be done, but their basic idea is correct.
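The small-sample argument the paper builds on is easy to check numerically. A quick sketch (the rating numbers are invented, purely for illustration, not taken from the paper): draw two groups of different sizes from the same distribution and compare their maxima.

```python
import numpy as np

# Invented numbers, purely for illustration: two groups drawn from the
# SAME "rating" distribution, one group much larger than the other.
rng = np.random.default_rng(0)
n_large, n_small = 100_000, 6_000   # the small group is ~6% of the combined pool
trials = 200

gaps = []
for _ in range(trials):
    large = rng.normal(1500, 300, n_large)
    small = rng.normal(1500, 300, n_small)
    gaps.append(large.max() - small.max())

# The best of the large group beats the best of the small group on average,
# even though both groups share one distribution.
print(f"average gap between the two best players: {np.mean(gaps):.0f} points")
```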
24,330 | The econometrics of a Bayesian approach to event study methodology | As mentioned in the comments, the model you're looking for is Bayesian linear regression. And since we can use BLR to calculate the posterior predictive distribution $p(r_t|t, \mathcal{D}_\text{ref})$ for any time $t$, we can numerically evaluate the distribution $p(\text{CAR}|\mathcal{D}_\text{event}, \mathcal{D}_\text{ref})$.
The thing is, I don't think a distribution over $\text{CAR}$ is what you really want. The immediate problem is that $p(\text{CAR} = 0|\mathcal{D}_\text{event}, \mathcal{D}_\text{ref})$ has probability zero. The underlying problem is that the "Bayesian version of hypothesis tests" is comparing models via their Bayes factor, but that requires you to define two competing models. And $\text{CAR} = 0, \text{CAR} \neq 0$ are not models (or at least, they're not models without some extremely unnatural number juggling).
From what you've said in the comments, I think what you actually want to answer is
Are $\mathcal{D}_\text{ref}$ and $\mathcal{D}_\text{event}$ better explained by the same model or by different ones?
which has a neat Bayesian answer: define two models
$M_0$: all the data in $\mathcal{D}_\text{ref}, \mathcal{D}_\text{event}$ is drawn from the same BLR. To calculate the marginal likelihood $p(\mathcal{D}_\text{ref}, \mathcal{D}_\text{event}|M_0)$ of this model, you'd calculate the marginal likelihood of a BLR fit to all the data.
$M_1$: the data in $\mathcal{D}_\text{ref}$ and $\mathcal{D}_\text{event}$ are drawn from two different BLRs. To calculate the marginal likelihood $p(\mathcal{D}_\text{ref}, \mathcal{D}_\text{event}|M_1)$ of this model, you'd fit BLRs to $\mathcal{D}_\text{ref}$ and $\mathcal{D}_\text{event}$ independently (though using the same hyperparameters!), then take the product of the two BLR marginal likelihoods.
Having done that, you can then calculate the Bayes factor
$$\frac{p(\mathcal{D}_\text{ref}, \mathcal{D}_\text{event}|M_1)}{p(\mathcal{D}_\text{ref}, \mathcal{D}_\text{event}|M_0)}$$
to decide which model is more believable.
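Under simplifying assumptions (a Gaussian prior $w \sim N(0, \tau^2 I)$ on the weights and a known noise variance, so the BLR marginal likelihood is available in closed form as $y \sim N(0, \sigma^2 I + \tau^2 X X^\top)$), the $M_0$-versus-$M_1$ comparison can be sketched as follows; all data and hyperparameter values here are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal

# Closed-form log marginal likelihood of a BLR with prior w ~ N(0, tau^2 I)
# and KNOWN noise sd sigma: marginally, y ~ N(0, sigma^2 I + tau^2 X X^T).
def log_evidence(X, y, tau=1.0, sigma=0.5):
    cov = sigma**2 * np.eye(len(y)) + tau**2 * X @ X.T
    return multivariate_normal(mean=np.zeros(len(y)), cov=cov).logpdf(y)

rng = np.random.default_rng(1)
X_ref = np.c_[np.ones(50), rng.normal(size=50)]   # reference-window design
X_evt = np.c_[np.ones(10), rng.normal(size=10)]   # event-window design
w = np.array([0.1, 1.2])
y_ref = X_ref @ w + rng.normal(0, 0.5, 50)
y_evt = X_evt @ (w + np.array([1.0, 0.0])) + rng.normal(0, 0.5, 10)

# M0: one BLR for all the data; M1: independent BLRs, same hyperparameters.
log_m0 = log_evidence(np.vstack([X_ref, X_evt]), np.concatenate([y_ref, y_evt]))
log_m1 = log_evidence(X_ref, y_ref) + log_evidence(X_evt, y_evt)
print("log Bayes factor (M1 vs M0):", log_m1 - log_m0)
```

With an unknown noise variance you would use a conjugate normal-inverse-gamma prior instead, which also yields a closed-form evidence.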
24,331 | The econometrics of a Bayesian approach to event study methodology | You cannot do an event study with a single firm.
Unfortunately you need panel data for any event study. Event studies focus on returns for individual time periods before and after events. Without multiple firm observations per time period before and after the event, it's impossible to distinguish noise (firm-specific variation) from the effects of the event. Even with only a few firms, noise will dominate the event effect, as StasK points out.
That being said, with a panel of many firms you can still do Bayesian work.
How to estimate normal and abnormal returns
I'm going to assume that the model you use for normal returns looks something like a standard arbitrage model. If it doesn't you should be able to adapt the rest of this discussion. You'll want to augment your "normal" return regression with a series of dummies for date relative to the announcement date, $S$:
$$r_{it}=\alpha_{i}+\gamma_{t-S}+r_{m,t}^T\beta_i+e_{it}$$
EDIT: It should be that $\gamma_{s}$ is only included if $s>0$. One problem with this approach is that $\beta_i$ will be informed by data before and after the event. This does not map precisely to traditional event studies, where the expected returns are calculated only from data before the event.
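The dummy part of that design is mechanical to build. A sketch (the toy panel and column names are made up; dummies are created here for every relative date, which you would then restrict per the note above):

```python
import numpy as np
import pandas as pd

# Toy panel: two firms, five dates each, with firm-specific announcement dates S.
df = pd.DataFrame({
    "firm": ["A"] * 5 + ["B"] * 5,
    "t":    list(range(5)) * 2,
    "S":    [2] * 5 + [3] * 5,
})
df["event_time"] = df["t"] - df["S"]                        # date relative to announcement
dummies = pd.get_dummies(df["event_time"], prefix="gamma")  # one column per gamma_s
print(dummies.columns.tolist())
```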
This regression allows you to talk about something similar to the kind of CAR series we usually see, where we have a plot of average abnormal returns before and after an event with maybe some standard errors around it:
[figure omitted: plot of average cumulative abnormal returns around the event date (shamelessly taken from Wikipedia)]
You'll need to come up with a distribution and error structure for the $e_{it}$'s, probably normally distributed, with some variance-covariance structure. You can then set up a prior distribution for $\alpha_{i}$, $\beta_i$ and $\gamma_s$ and run Bayesian linear regression as was mentioned above.
Examining announcement effects
On the date of announcement it is reasonable to think there might be some abnormal returns ($\gamma_0\neq 0$). New information has just been released into the market, so reactions are not generally a violation of any kind of arbitrage or efficiency theorems. Neither you nor I know what announcement effects are likely to be. There isn't always much theoretical guidance either. So testing $\gamma_0=0$ may require much more specific knowledge than we have at our disposal (see below).
But part of the attraction of Bayesian analysis is that you can examine the entire posterior distribution of $\gamma_0$. This allows you to answer in some ways more interesting questions like "How likely is it that announcement excess returns are negative?" So for abnormal returns on the announcement date I would suggest abandoning strict hypothesis tests. You're not interested in them anyways - with most event studies you really want to know what the price reaction to an announcement may be, not what it is not!
In this vein, one interesting summary of your posteriors might be the probability that $\gamma_0\geq 0$. Another could be the probability that $\gamma_0$ is higher than a variety of threshold values, or the quantiles of the posterior distribution for $\gamma_0$. Finally, you can always plot the posterior of $\gamma_0$ along with its mean, median and mode. But again, strict hypothesis tests may not be what you want.
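Given MCMC draws of $\gamma_0$ from the fitted model, these summaries are one-liners. A sketch using simulated stand-in draws (the numbers are invented, not output from a real sampler):

```python
import numpy as np

# Stand-in for posterior draws of gamma_0 from a real sampler.
rng = np.random.default_rng(0)
gamma0_draws = rng.normal(0.012, 0.008, 5000)

p_nonneg = np.mean(gamma0_draws >= 0)                         # P(gamma_0 >= 0 | data)
q05, q50, q95 = np.quantile(gamma0_draws, [0.05, 0.5, 0.95])  # posterior quantiles
print(f"P(gamma_0 >= 0) = {p_nonneg:.3f}")
print(f"median {q50:.4f}, 90% credible interval [{q05:.4f}, {q95:.4f}]")
```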
However, for dates before and after the announcement, strict hypothesis testing can play an important role, because these returns can be viewed as tests of strong and semi-strong form efficiency.
Testing for violations of semi-strong-form efficiency
Semi-strong-form efficiency and an absence of transaction costs imply that share prices should not continue to adjust after the announcement of the event. This corresponds to an intersection of sharp hypotheses that $\gamma_{s> 0}=0$.
Bayesians are uncomfortable with tests of this form, $\gamma_{s}=0$, called "sharp" tests. Why? Let's take this out of the context of finance for a second. If I asked you to form a prior over the average income of American citizens, $\bar x$ you would probably give me a continuous distribution, $f$ over possible incomes, maybe peaking around \$60,000. If you then took a sample of American incomes $X=\{x_i\}_{i=1}^n$ and tried to test the hypothesis that the population average was exactly $\$60,000$ you would use a Bayes factor:
$$P(\bar{x}=\$60,000|X)=\dfrac{\int_{\bar{x}=\$60,000} P(X|\bar{x}) f(\bar{x})\,d\bar{x}}{\int_{\bar{x}\neq \$60,000} P(X|\bar{x}) f(\bar{x})\,d\bar{x}}$$
The integral on top is zero, because the probability of any single point under a continuous prior distribution is zero, so $P(\bar{x}=\$60,000|X)=0$ regardless of the value of the denominator. This occurs because of the continuous prior, not because of anything essential to the nature of Bayesian inference.
In many ways tests that $\gamma_{s> 0}=0$ are asset pricing tests. Asset pricing is weird for Bayesians. Why is it weird? Because, in contrast to my prior over incomes, strict application of some efficiency hypotheses predicts an intercept of exactly 0 after the event. Any positive or negative $\gamma_{s>0}$ is a violation of semi-strong form efficiency, and potentially a huge profit making opportunity. So a valid prior could put positive probability on $\gamma_{s>0} =0$. This is exactly the approach taken in Harvey and Zhou (1990). More generally, imagine you have a prior with two parts. With probability $p$ you believe in strong-form efficiency ($\gamma_{s\neq 0} =0$) and with probability $1-p$ you don't believe in strong-form efficiency. Conditional on knowing strong-form efficiency is false, you think that there is a continuous distribution over $\gamma_{s>0}$, $f$. Then you can construct the Bayes factor test:
$$P(\gamma_{s> 0} =0|\text{data}) = \dfrac{P(\text{data}|\gamma_{s> 0}=0)p}{\int_{\gamma_{s> 0}\neq 0} P(\text{data}|\gamma_{s> 0})(1-p)f(\gamma_{s> 0})}>0$$
This test works because conditional on strong-form being true you would know that $\gamma_{s>0}=0$. In this case your prior is now a mixture of continuous and discrete distributions.
That a sharp test exists does not preclude you from using more subtle tests. There is no reason you cannot examine the distribution of $\gamma_{s> 0}$ the same way I suggested for $\gamma_{s=0}$. This may be more interesting, especially since it is not dependent on a belief that transaction costs are non-existent. Credible intervals could be formed, and based on your beliefs about transaction costs you could construct model tests based on intervals for $\gamma_{s>0}$. Following Brav (2000), you could also use predictive densities based on the "normal" return model ($\gamma_s=0$) to compare with actual returns, as a bridge between Bayesian and frequentist methods.
Cumulative abnormal returns
Everything so far has been a discussion of abnormal returns. So I'm going to go quickly into CAR:
$$\text{CAR}_\tau=\sum_{t=0}^\tau \gamma_{t}$$
This is a close counterpart to the average cumulative abnormal returns based on residuals that you are used to. You can find the posterior distribution using either numerical or analytic integration, depending on your prior. Because there is no reason to assume $\gamma_0=0$, there is no reason to assume $\text{CAR}_{t>0}=0$, so I would advocate the same analysis as with announcement effects, with no sharp hypothesis testing.
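With posterior draws of the $\gamma_t$'s in hand, the posterior of $\text{CAR}_\tau$ is just the running sum computed draw by draw. A sketch using simulated stand-in draws (invented numbers, not real sampler output):

```python
import numpy as np

# Stand-in for joint posterior draws of gamma_0, ..., gamma_9 from a real sampler.
rng = np.random.default_rng(0)
n_draws, horizon = 5000, 10
gamma_draws = rng.normal(0.002, 0.01, size=(n_draws, horizon))

car_draws = gamma_draws.cumsum(axis=1)                  # CAR_tau for each draw
car_mean = car_draws.mean(axis=0)
lo, hi = np.quantile(car_draws, [0.05, 0.95], axis=0)   # pointwise 90% bands
print("posterior mean CAR by horizon:", np.round(car_mean, 4))
```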
How to implement in Matlab
For a simple version of these models, you just need regular old Bayesian linear regression. I don't use Matlab but it looks like there's a version here. It's likely this works only with conjugate priors.
For more complicated versions, for instance the sharp hypothesis test, you will likely need a Gibbs sampler. I'm not aware of any out-of-the-box solutions for Matlab. You can check for interfaces to JAGS or BUGS.
24,332 | Normalized RMSE | You also have other choices that are commonly used in such cases, e.g. relative absolute error
$$ \text{RAE} = \frac{ \sum^N_{i=1} | \hat{\theta}_i - \theta_i | } { \sum^N_{i=1} | \overline{\theta} - \theta_i | } $$
root relative squared error
$$ \text{RRSE} = \sqrt{ \frac{ \sum^N_{i=1} \left( \hat{\theta}_i - \theta_i \right)^2 } { \sum^N_{i=1} \left( \overline{\theta} - \theta_i \right)^2 }} $$
mean absolute percentage error
$$ \text{MAPE} = \frac{1}{N} \sum^N_{i=1} \left| \frac{\theta_i - \hat{\theta}_i}{\theta_i} \right| $$
where $\theta$ is the true value, $\hat \theta$ is the forecast and $\overline{\theta}$ is the mean of $\theta$ (see also https://www.otexts.org/fpp/2/5).
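For concreteness, the three measures above can be implemented directly (toy numbers for illustration; note MAPE is undefined whenever a true value is zero):

```python
import numpy as np

def rae(actual, forecast):
    return np.sum(np.abs(forecast - actual)) / np.sum(np.abs(actual.mean() - actual))

def rrse(actual, forecast):
    return np.sqrt(np.sum((forecast - actual) ** 2) / np.sum((actual.mean() - actual) ** 2))

def mape(actual, forecast):
    return np.mean(np.abs((actual - forecast) / actual))  # breaks if any actual == 0

actual = np.array([3.0, 5.0, 2.5, 7.0, 4.5])
forecast = np.array([2.5, 5.0, 3.0, 8.0, 4.0])
print(rae(actual, forecast), rrse(actual, forecast), mape(actual, forecast))
```

Forecasting with the mean of the actuals gives RAE and RRSE of exactly 1, so values below 1 beat that naive baseline.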
24,333 | Normalized RMSE | A possible way would be to normalize the RMSE with the standard deviation of $Y$:
$NRMSE = \frac{RMSE}{\sigma(Y)}$
If this value is larger than 1, you'd obtain a better model by simply generating a random time series of the same mean and standard deviation as $Y$.
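As a sketch, with the population standard deviation as the normalizer, a model that always predicts the mean of $Y$ scores exactly 1:

```python
import numpy as np

def nrmse(y, y_hat):
    rmse = np.sqrt(np.mean((y - y_hat) ** 2))
    return rmse / np.std(y)   # np.std uses the population (ddof=0) definition

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(nrmse(y, np.full_like(y, y.mean())))  # predicting the mean gives 1.0
```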
24,334 | Proof/Derivation of Residual Sum of Squares (Based on Introduction to Statistical Learning) | Simply expand the square ...
$$[f(X)- \hat{f}(X) + \epsilon ]^2=[f(X)- \hat{f}(X)]^2 +2 [f(X)- \hat{f}(X)]\epsilon+ \epsilon^2$$
... and use linearity of expectations:
$$E[(f(X)- \hat{f}(X) + \epsilon)^2]=E[(f(X)- \hat{f}(X))^2] +2 E[(f(X)- \hat{f}(X))\epsilon]+ E[\epsilon^2]$$
Can you do it from there? (What things remain to be shown?)
Hint in response to comments: Show $E(\epsilon^2)=\text{Var}(\epsilon)$
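The identity is easy to verify numerically at a fixed $x$: with $Y = f(x) + \epsilon$, the Monte Carlo estimate of $E[(Y-\hat f(x))^2]$ matches $(f(x)-\hat f(x))^2 + \text{Var}(\epsilon)$. The specific numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
f_x, fhat_x, sigma = 2.0, 1.7, 0.5      # true f(x), fitted value, noise sd
eps = rng.normal(0, sigma, 1_000_000)   # irreducible error with mean zero
y = f_x + eps

mse = np.mean((y - fhat_x) ** 2)                 # Monte Carlo E[(Y - fhat)^2]
identity = (f_x - fhat_x) ** 2 + sigma ** 2      # reducible part + Var(eps)
print(mse, identity)  # the two agree up to Monte Carlo error
```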
24,335 | Proof/Derivation of Residual Sum of Squares (Based on Introduction to Statistical Learning) | \begin{equation}
\ E[(Y-\hat{Y})^2] = E[(f(X)+\epsilon-\hat{f}(X))^2] \\
= E[(f(X)-\hat{f}(X))^2 + \epsilon^2 + 2\epsilon(f(X)-\hat{f}(X))] \\
= E[(f(X)-\hat{f}(X))^2] + E[\epsilon^2] + E[2\epsilon(f(X)-\hat{f}(X))] \\
= E[(f(X)-\hat{f}(X))^2] + E[\epsilon^2] + 2(f(X)-\hat{f}(X))\,E[\epsilon].......(1)\\
\end{equation}
The last term is zero, as the expected value of the irreducible error is zero. Now let's see where the variance comes from. In general:
\begin{equation}
\ Var(X) = E[(X-\bar{X})^2] = E[X^2 - 2X\bar{X} + \bar{X}^2] = E[X^2] - E[2X\bar{X}] + E[\bar{X}^2]\\
\end{equation}
The mean of X is a constant, and so is the square of the mean of X. Therefore the equation becomes
\begin{equation}
\ Var(X) = E[X^2] - 2\bar{X}*E[X] + \bar{X}^2 = E[X^2] - 2\bar{X}*\bar{X} + \bar{X}^2 = E[X^2] - 2\bar{X}^2 + \bar{X}^2 = E[X^2] - \bar{X}^2\\
Hence,\\Var(\epsilon) = E[\epsilon^2] - \bar{\epsilon}^2\\
\end{equation}
But mean of $\epsilon$ is zero. So,
\begin{equation}
\\Var(\epsilon) = E[\epsilon^2].....(2) \\
\end{equation}
Now, combining equation (1), whose last term is zero, with equation (2):
\begin{equation}
\ E[(Y−\hat{Y})^2] = E[(f(X)-\hat{f}(X))^2] + E[\epsilon^2] = E[(f(X)-\hat{f}(X))^2] + Var(\epsilon)
\end{equation}
Lmer model fails to converge
See this conversation for an alternative method of assessing convergence. Specifically, this comment from Ben Bolker:
thanks. An even simpler test would be to take a fitted example that gave you convergence warnings and take a look at the results of
relgrad <- with(fitted_model@optinfo$derivs,solve(Hessian,gradient))
max(abs(relgrad))
and see if it's reasonably small (e.g. <0.001?)
Alternatively, you could try Bolker's advice here, which is to try a different optimizer.
Likelihood Ratio Test and boundary parameters
My understanding of the phrase "the boundary of a parameter space" is that the possible values for a parameter in a model are restricted to lie between two values or are bounded at the lower/upper end.
One area where this crops up frequently is in a random or mixed effects model, where one or more of the parameters in the model is for the variance of a random effect term. The variance cannot be negative, hence if one is comparing a model with and without a particular random effect term, the model without the term assumes the value of the variance parameter is 0. But 0 is at the lower boundary of the possible values that the parameter could take, yet the default LRT assumes negative values are possible for the parameter. Hence the comment on the reliability of the use of the Chi-square distribution with $d$ degrees of freedom for the test statistic in such cases.
Compare that situation with a parameter for a fixed effect term in a linear (mixed) model. This parameter is the estimated mean of a Gaussian random variable and theoretically it could take on any value, and hence in an LRT where we might be comparing a model with and without this fixed effect term (and hence setting $\hat{\beta} = 0$ in the model without the term), the parameter for the simpler model ($\hat{\beta} = 0$) is not at the boundary of the set of allowed values.
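To make the boundary effect concrete, here is a small stdlib-only Python simulation (an illustrative sketch, not from the original answer): for $X_i \sim N(\theta, 1)$ with the constraint $\theta \ge 0$, the likelihood ratio statistic for $H_0: \theta = 0$ is exactly zero whenever $\bar{x} < 0$, so under the null it follows a 50:50 mixture of a point mass at 0 and $\chi^2_1$ rather than a plain $\chi^2_1$:

```python
import random
import statistics

random.seed(1)
n, reps = 50, 20_000
lrt_stats = []
for _ in range(reps):
    xbar = statistics.fmean([random.gauss(0.0, 1.0) for _ in range(n)])
    theta_hat = max(xbar, 0.0)            # MLE under the constraint theta >= 0
    lrt_stats.append(n * theta_hat ** 2)  # -2 log LR for H0: theta = 0

prop_zero = sum(s == 0.0 for s in lrt_stats) / reps
# prop_zero is close to 0.5: half of the null distribution's mass sits
# exactly at zero, which a plain chi^2_1 reference distribution ignores
```

The analogous mixture result for a single variance component at its zero boundary is why the default LRT p-values are conservative there.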
Forecasting irregular time series (with R)
State space models support missing data very well. Take a look at section 6.4, "Missing Data Modifications", in Time Series Analysis and Its Applications With R Examples, 3rd ed., by Shumway and Stoffer. They have examples at http://www.stat.pitt.edu/stoffer/tsa3/
Forecasting irregular time series (with R)
Since the interval between two observations is not constant, we are left with two options:
Treat the observations as regular time series with missing data. In this case, we need to impute missing values. There is a list of imputation techniques discussed here: https://towardsdatascience.com/6-different-ways-to-compensate-for-missing-values-data-imputation-with-examples-6022d9ca0779. Then use any regular time series forecasting method like ARIMA, Exponential Smoothing, LSTM, etc.
Treat the observations as irregular as they are and use techniques discussed here: https://www.sciencedirect.com/science/article/pii/0169207086990047, https://www.sciencedirect.com/science/article/pii/S2352340920306739
Sampling distribution of the radius of 2D normal distribution
As you mentioned in your post we know the distribution of the estimate $\widehat{r_{true}}$ if we are given $\mu$, so we know the distribution of the estimate $\widehat{r^2_{true}}$ of the true $r^2$.
We want to find the distribution of $$\widehat{r^2} = \frac{1}{N}\sum_{i=1}^N (x_i-\overline{x})^T(x_i-\overline{x})$$ where $x_i$ are expressed as column vectors.
We now do the standard trick
$$\begin{eqnarray*}
\widehat{r^2_{true}} &=& \frac{1}{N}\sum_{i=1}^N(x_i - \mu)^T(x_i-\mu)\\
&=& \frac{1}{N}\sum_{i=1}^N(x_i-\overline{x} + \overline{x} -\mu)^T(x_i-\overline{x} + \overline{x}-\mu)\\
&=&\left[\frac{1}{N}\sum_{i=1}^N(x_i - \overline{x})^T(x_i-\overline{x})\right] + (\overline{x} - \mu)^T(\overline{x}-\mu) \hspace{20pt}(1)\\
&=& \widehat{r^2} + (\overline{x}-\mu)^T(\overline{x}-\mu)
\end{eqnarray*}
$$
where $(1)$ arises from the equation
$$\frac{1}{N}\sum_{i=1}^N(x_i-\overline{x})^T(\overline{x}-\mu) = (\overline{x} - \overline{x})^T(\overline{x} - \mu) = 0$$
and its transpose.
Notice that $\widehat{r^2}$ is the trace of the sample covariance matrix $S$ and $(\overline{x}-\mu)^T(\overline{x}-\mu)$ depends only on the sample mean $\overline{x}$. Thus we have written
$$\widehat{r_{true}^2} = \widehat{r^2} + (\overline{x}-\mu)^T(\overline{x}-\mu)$$
as the sum of two independent random variables. We know the distributions of the $\widehat{r^2_{true}}$ and $(\overline{x} - \mu)^T(\overline{x}-\mu)$ and so we are done via the standard trick using that characteristic functions are multiplicative.
Edited to add:
$||x_i-\mu||$ is Hoyt distributed, so it has pdf
$$f(\rho) = \frac{1+q^2}{q\omega}\rho e^{-\frac{(1+q^2)^2}{4q^2\omega} \rho^2}I_0\left(\frac{1-q^4}{4q^2\omega} \rho^2\right)$$
where $I_0$ is the $0^{th}$ modified Bessel function of the first kind.
This means that the pdf of $||x_i-\mu||^2$ is
$$f(\rho) = \frac{1}{2}\frac{1+q^2}{q\omega}e^{-\frac{(1+q^2)^2}{4q^2\omega}\rho}I_0\left(\frac{1-q^4}{4q^2\omega}\rho\right).$$
To ease notation set $a = \frac{1-q^4}{4q^2\omega}$, $b=-\frac{(1+q^2)^2}{4q^2\omega}$ and $c=\frac{1}{2}\frac{1+q^2}{q\omega}$.
The moment generating function of $||x_i-\mu||^2$ is
$$\begin{cases}
\frac{c}{\sqrt{(s-b)^2-a^2}} & (s-b) > a\\
0 & \text{ else}\\
\end{cases}$$
Thus the moment generating function of $\widehat{r^2_{true}}$ is
$$\begin{cases}
\frac{c^N}{((s/N-b)^2-a^2)^{N/2}} & (s/N-b) > a\\
0 & \text{else}
\end{cases}$$
and the moment generating function of $||\overline{x} - \mu||^2$ is
$$\begin{cases}
\frac{Nc}{\sqrt{(s-Nb)^2-(Na)^2}} = \frac{c}{\sqrt{(s/N-b)^2-a^2}} & (s/N-b) > a\\
0 & \text{ else}
\end{cases}$$
This implies that the moment generating function of $\widehat{r^2}$ is
$$\begin{cases}
\frac{c^{N-1}}{((s/N-b)^2-a^2)^{(N-1)/2}} & (s/N-b) > a\\
0 & \text{ else}.
\end{cases}$$
Applying the inverse Laplace transform gives that $\widehat{r^2}$ has pdf
$$g(\rho) = \frac{\sqrt{\pi}Nc^{N-1}}{\Gamma(\frac{N-1}{2})}\left(\frac{2\mathrm{i} a}{N\rho}\right)^{(2 - N)/2} e^{b N \rho} J_{N/2-1}( \mathrm{i} a N \rho).$$
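The decomposition $\widehat{r^2_{true}} = \widehat{r^2} + (\overline{x}-\mu)^T(\overline{x}-\mu)$ is a purely algebraic identity, so it can be checked numerically for any sample. A stdlib-only Python sketch with arbitrary made-up parameters:

```python
import random

random.seed(2)
mu = (0.5, -1.0)                 # assumed known mean (illustrative values)
N = 500
pts = [(random.gauss(mu[0], 1.3), random.gauss(mu[1], 0.8)) for _ in range(N)]
xbar = (sum(p[0] for p in pts) / N, sum(p[1] for p in pts) / N)

def sq_dist(p, c):
    return (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2

r2_true_hat = sum(sq_dist(p, mu) for p in pts) / N    # known-mean estimate
r2_hat = sum(sq_dist(p, xbar) for p in pts) / N       # sample-mean estimate
# identity: r2_true_hat == r2_hat + ||xbar - mu||^2 (up to floating point)
diff = r2_true_hat - (r2_hat + sq_dist(xbar, mu))
```

The residual `diff` is zero to floating-point precision for any sample, which is the identity used above to split $\widehat{r^2_{true}}$ into two independent pieces.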
Is randomization reliable with small samples?
You are correct to point out the limitations of randomisation in dealing with unknown confounding variables for very small samples. However, the problem is not that the P-values are not reliable, but that their meaning varies with sample size and with the relationship between the assumptions of the method and the actual properties of the populations.
My take on your results is that the P-values performed quite well until the difference in the subgroup means was so large that any sensible experimenter would know that there was an issue prior to doing the experiment.
The idea that an experiment can be done and analysed without reference to a proper understanding of the nature of the data is mistaken. Before analysing a small dataset you must know enough about the data to be able to confidently defend the assumptions implicit in the analysis. Such knowledge commonly comes from prior studies using the same or similar system, studies that can be formal published works or informal 'preliminary' experiments.
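To put a number on how often randomisation fails to balance an unknown confounder at very small sample sizes, here is a stdlib-only Python simulation; the setup (8 subjects, 4 of whom carry a hidden confounder, randomised into two groups of 4) is hypothetical:

```python
import random
from math import comb

random.seed(3)
reps = 100_000
imbalanced = 0
for _ in range(reps):
    subjects = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = carries the hidden confounder
    random.shuffle(subjects)
    g1 = sum(subjects[:4])                # confounded subjects in group 1
    if g1 <= 1 or g1 >= 3:                # a 3-1 split or worse
        imbalanced += 1

# exact hypergeometric probability of an imbalanced split
exact = 1 - comb(4, 2) * comb(4, 2) / comb(8, 4)   # = 1 - 36/70, about 0.49
```

Roughly half of all random 4-vs-4 splits leave the confounder imbalanced, which is the small-sample weakness discussed above.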
Is randomization reliable with small samples?
In ecological research, nonrandom assignment of treatments to experimental units (subjects) is standard practice when sample sizes are small and there is evidence of one or more confounding variables. This nonrandom assignment "intersperses" the subjects across the spectrum of possibly confounding variables, which is exactly what random assignment is supposed to do. But at small sample sizes, randomization is more likely to perform poorly at this (as demonstrated above) and therefore it can be a bad idea to rely on it.
Because randomization is advocated so strongly in most fields (and rightfully so), it is easy to forget that the end goal is to reduce bias rather than to adhere to strict randomization. However, it is incumbent upon the researcher(s) to characterize the suite of confounding variables effectively and to carry out the nonrandom assignment in a defensible way that is blind to experimental outcomes and makes use of all available information and context.
For a summary, see pp. 192-198 in Hurlbert, Stuart H. 1984. Pseudoreplication and the design of field experiments. Ecological Monographs 54(2) pp. 187-211.
Goodness of fit test: question about Anderson–Darling test and Cramér–von Mises criterion
There can be no single state-of-the-art for goodness of fit (for example, no UMP test across general alternatives will exist, and really nothing even comes close -- even highly regarded omnibus tests have terrible power in some situations).
In general when selecting a test statistic you choose the kinds of deviation that it's most important to detect and use a test statistic that is good at that job. Some tests do very well at a wide variety of interesting alternatives, making them decent default choices, but that doesn't make them "state of the art".
The Anderson-Darling is still very popular, and with good reason. The Cramer-von Mises test is much less used these days (to my surprise, because it's usually better than the Kolmogorov-Smirnov, but simpler than the Anderson-Darling -- and often has better power than it on differences "in the middle" of the distribution).
All of these tests suffer from bias against some kinds of alternatives, and it's easy to find cases where the Anderson-Darling does much worse (terribly, really) than the other tests. (As I suggest, it's more 'horses for courses' than one test to rule them all). There's often little consideration given to this issue (what's best at picking up the deviations that matter the most to me?), unfortunately.
You may find some value in some of these posts:
Is Shapiro–Wilk the best normality test? Why might it be better than other tests like Anderson-Darling?
2 Sample Kolmogorov-Smirnov vs. Anderson-Darling vs Cramer-von-Mises (about two-sample tests, but many of the statements carry over)
Motivation for Kolmogorov distance between distributions (more theoretical discussion but there are several important points about practical implications)
I don't think you'll be able to form a confidence interval for the cdf in the Cramér-von Mises and Anderson-Darling statistics, because the criteria are based on all of the deviations rather than just the largest.
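Since the statistics themselves are easy to state, here is an illustrative stdlib-only Python implementation (a sketch, not from the answer above) of the Cramér-von Mises statistic $W^2 = \frac{1}{12n} + \sum_i \big(F(x_{(i)}) - \frac{2i-1}{2n}\big)^2$ for a fully specified null cdf:

```python
import math
import random

def cvm_statistic(sample, cdf):
    """Cramer-von Mises W^2 against a fully specified null cdf F."""
    n = len(sample)
    w2 = 1.0 / (12 * n)
    for i, x in enumerate(sorted(sample), start=1):
        w2 += (cdf(x) - (2 * i - 1) / (2 * n)) ** 2
    return w2

def std_normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(4)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
w2 = cvm_statistic(data, std_normal_cdf)   # small under the null
```

Note that, as with the Anderson-Darling test, the reference distribution changes if the null parameters are estimated from the same data rather than fully specified in advance.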
Goodness of fit test: question about Anderson–Darling test and Cramér–von Mises criterion
The Anderson-Darling test is not available on all distributions but has power that is good and close to the power of the Shapiro-Wilk test except for small numbers of samples, so that the two are equivalent at $n=400$ (Razali NM, Wah YB. Power comparisons of Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors and Anderson-Darling tests. Journal of Statistical Modeling and Analytics. 2011;2:21-33). However, the Shapiro-Wilk test is only for normal distribution testing. The Cramér–von Mises test and Pearson Chi-squared are general for all distribution fits to histograms, and I think that the Cramér–von Mises test has more power than Pearson Chi-squared. The Cramér–von Mises test is a more powerful cumulative density function goodness-of-fit test than the Kolmogorov-Smirnov test and can have power greater or less than t-testing. Chi-squared has difficulty with low cell counts, so range restrictions are used for fitting tails.
**Question 1:** ... are ... these two methods... still state-of-the-art? or replaced by some better approaches already? **Question 2:** What is the confidence interval for such tests?
Answer: They are state of the art. However, sometimes we want confidence intervals not probabilities. When comparing these methods to each other we speak of power rather than confidence intervals. Sometimes goodness-of-fit is analyzed using AIC, BIC and other criteria as contrasted to probabilities of good fitting, and sometimes the goodness-of-fit criterion is irrelevant, for example, when goodness-of-fit is not the criterion for fitting. In the latter case, our regression target may be a physical quantity not related to fitting, e.g., see Tk-GV.
Named entity recognition and class imbalance [duplicate]
Some things you can try:
Oversample your target classes. Insert duplicate records of your other three classes to augment your training dataset.
Undersample the negative responses. Instead of including all instances of other in your training data, only use a small portion.
Bootstrap undersample the negative responses. This is probably your most robust option of those I'm presenting. Start by seeding your training data with the non-other classified records. Then train each bootstrap iteration augmenting the seed training set with a different random sample (selected with replacement) from the other class. You can then either derive a confidence interval for your model's classifications from the bootstrapping procedure (as Kaushik suggested), or treat the models you generated as an ensemble and combine their scores using an average or majority vote to determine your classifications. You can even implement boosting here if you want.
Named entity recognition and class imbalance [duplicate]
One method that I have used with success is resampling the data. I run bootstraps by taking N samples from each class, where N is the size of the smallest class. The N samples are chosen randomly without replacement. Then I split each resampled class into a training and test set (say a 70-30 split) and run my classifier. For each bootstrap I get a score. I run about 1000 bootstraps to get confidence intervals on my score.
The resampling forces each class to yield the same number of train and test samples to get around the class imbalance problem, but then I do bootstraps to get a meaningful mean score and a confidence interval.
For what it's worth, I have a short post with some simple Python code discussing imbalanced classes.
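A rough pure-Python sketch of this resampling loop; `fit_score` stands in for whatever train-and-evaluate routine you use (a hypothetical callback, not a library function):

```python
import random

def balanced_bootstrap_scores(data_by_class, n_boot, fit_score, seed=0):
    """Each bootstrap draws N samples per class (N = size of the smallest
    class, without replacement), splits them 70-30, and records a score."""
    rng = random.Random(seed)
    n = min(len(v) for v in data_by_class.values())
    split = int(0.7 * n)
    scores = []
    for _ in range(n_boot):
        train, test = [], []
        for label, samples in data_by_class.items():
            picked = rng.sample(samples, n)  # without replacement
            train += [(x, label) for x in picked[:split]]
            test += [(x, label) for x in picked[split:]]
        scores.append(fit_score(train, test))
    return scores  # e.g. take percentiles for a confidence interval
```

Running this ~1000 times and taking the 2.5th and 97.5th percentiles of `scores` gives the kind of confidence interval described above.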
24,347 | Does this single value match that distribution? | In the unimodal case the Vysochanskij-Petunin inequality can give you a rough prediction interval. Here is the wikipedia site: http://en.wikipedia.org/wiki/Vysochanski%C3%AF%E2%80%93Petunin_inequality
Using $\lambda = 3$ will result in an approximate 95% prediction interval.
So you estimate the mean and standard deviation of your population and just use the sample mean $\bar x $ plus or minus $3s$ as your interval.
There are a couple of problems with this approach. You don't really know the mean or standard deviation; you are using estimates. And in general you won't have unimodal distributions meaning you will have to use specialized versions of Chebyshev's inequality. But at least you have a starting point.
For the general case, Konijn (The American Statistician, February 1987) states the order statistics may be used as a prediction interval. So $\left[ x_{(i)}, x_{(j)} \right]$ is a prediction interval for $X$ with what Konijn calls size $\frac{j-i}{n+1}$. Size is defined as "the greatest lower bound (with regard to the set of joint distributions that are admitted) of the probability that the interval will cover the value that $X$ is to take on." With this approach a 93.6% prediction interval would be $\left[ x_{(1)}, x_{(30)} \right]$.
He also gives an approach attributed to Saw, Yang, and Mo: $$\left[ \bar x -\lambda \left(1 + {1 \over n}\right)^{1/2}s \ , \ \bar x + \lambda \left(1 + {1 \over n}\right)^{1/2}s \right],$$ with details on the coverage given in the article.
For example with $n=30,$ using $\lambda = 3.2$ would give coverage exceeding 90%.
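The Saw, Yang, and Mo interval above is straightforward to compute; a small sketch using the sample standard deviation $s$ and, say, $\lambda = 3.2$:

```python
import statistics

def saw_yang_mo_interval(xs, lam=3.2):
    """Distribution-free prediction interval
    [xbar - lam*sqrt(1 + 1/n)*s, xbar + lam*sqrt(1 + 1/n)*s]."""
    n = len(xs)
    xbar = statistics.mean(xs)
    s = statistics.stdev(xs)
    half = lam * (1 + 1 / n) ** 0.5 * s
    return xbar - half, xbar + half
```

A new observation falling outside this interval would be evidence (at the stated coverage level) that it does not match the distribution of the original 30 measurements.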
24,348 | Does this single value match that distribution? | Some thoughts I've had:
This is similar to wanting to do a two-sample t-test - except that for the second sample I only have a single value, and the 30 values aren't necessarily normally distributed.
Correct. The idea is a bit like a t-test with a single value. Since the distribution is not known, and normality with only 30 data points may be a bit hard to swallow, this calls for some kind of non-parametric test.
If instead of 30 measurements I had 10000 measurements, the rank of the single measurement could provide some useful information.
Even with 30 measurements the rank can be informative.
As @whuber has pointed out, you want some kind of prediction interval. For the non-parametric case, what you are asking, essentially, is the following: what is the probability that a given data point would have by chance the rank we observe for your 31st measurement?
This can be addressed through a simple permutation test. Here's an example with 15 values and a novel (16th observation) that is actually larger than any of the previous:
932
915
865
998
521
462
688
1228
746
433
662
404
301
473
647
new value: 1374
We perform N permutations, where the order of the elements in the list is shuffled, then ask the question: what is the rank for the value of the first element in the (shuffled) list?
Performing N=1,000 permutations gives us 608 cases in which the rank of the first element in the list is equal or better to the rank of the new value (actually equal, since the new value is the best one). Running the simulation again for 1,000 permutations, we get 658 such cases, then 663...
If we perform N=1,000,000 permutations, we obtain 62825 cases in which the rank of the first element in the list is equal or better to the rank of the new value (further simulations give 62871 cases, then 62840...). If we take the ratio between cases in which the condition is satisfied and the total number of permutations, we get numbers like 0.062825, 0.062871, 0.06284...
You can see these values converge towards 1/16=0.0625 (6.25%), which as @whuber notes, is the probability that a given value (out of 16) drawn at random has the best possible rank among them.
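The permutation procedure above can be sketched in a few lines of Python (the exact counts will vary slightly with the random seed):

```python
import random

def rank_pvalue(values, new_value, n_perm=100_000, seed=1):
    """Fraction of shuffles in which the first element of the pooled list
    has a rank as good as (or better than) the rank of the new value."""
    pool = list(values) + [new_value]
    ranked = sorted(pool, reverse=True)       # order is shuffle-invariant
    target_rank = ranked.index(new_value) + 1  # 1 = largest
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pool)
        if ranked.index(pool[0]) + 1 <= target_rank:
            hits += 1
    return hits / n_perm
```

With the 15 values listed above and 1374 as the new value, the returned fraction converges toward 1/16 = 0.0625, in line with the counts reported.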
For a new dataset, where the new value is the second best value (i.e. rank 2):
6423
8552
6341
6410
6589
6134
6500
6746
8176
6264
6365
5930
6331
6012
5594
new value: 8202
we get (for N=1,000,000 permutations): 125235, 124883... favorable cases which, again, approximates the probability that a given value (out of 16) drawn at random has the second best possible rank among them: 2/16=0.125 (12.5%).
24,349 | Assigning class labels to k-means clusters | Yes. What you propose is entirely standard and it is the way that standard k-means software works automatically. In the case of k-means you compute the Euclidean distance between each observation (data point) and each cluster mean (centroid) and assign the observations to the most similar cluster. Then, the label of the cluster is determined by examining the average characteristics of the observations classified to the cluster relative to the averages of the other clusters.
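The final labelling step can be sketched with a simple majority vote over the labelled observations that land in each cluster (a minimal illustration, not part of any k-means library):

```python
from collections import Counter

def label_clusters(cluster_ids, true_labels):
    """Map each cluster id to the most common class label among the
    observations k-means assigned to that cluster."""
    by_cluster = {}
    for c, y in zip(cluster_ids, true_labels):
        by_cluster.setdefault(c, []).append(y)
    return {c: Counter(ys).most_common(1)[0][0] for c, ys in by_cluster.items()}
```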
24,350 | Assigning class labels to k-means clusters | If you look at the names in your kmeans object you will notice that there is a "cluster" object. This contains the class labels ordered the same as your input data. Here is a simple example that binds the cluster labels back to your data.
x <- data.frame(X=rnorm(100, sd=0.3), Y=rnorm(100, mean=1, sd=0.3))
k <- kmeans(x, 2)
names(k)
x <- data.frame(x, K=k$cluster)
# Alternatively (starting from the original data frame, not one that already
# contains K), you can attach the clusters directly:
x <- data.frame(x, K=kmeans(x, 2)$cluster)
24,351 | Assigning class labels to k-means clusters | The labels of the clusters may be based on the majority class of the samples within each cluster. But this is true only if the number of clusters is equal to the number of classes.
24,352 | When to use robust standard errors in Poisson regression? | In general if you have any suspicion that your errors are heteroskedastic, you should use robust standard errors. The fact that your estimates become non-significant when you don't use robust SEs suggests (but does not prove) the need for robust SEs! These SEs are "robust" to the bias that heteroskedasticity can cause in a generalized linear model.
This situation is a little different, though, in that you're layering them on top of Poisson regression.
Poisson has a well known property that it forces the dispersion to be equal to the mean, whether or not the data supports that. Before considering robust standard errors, I would try a Negative Binomial regression, which does not suffer from this problem. There is a test (see the comment) to help determine whether the resultant change in standard errors is significant.
I do not know for sure whether the change you're seeing (moving to robust SEs narrows the CI) implies under-dispersion, but it seems likely. Take a look at the appropriate model (I think negative binomial, but a quick googling also suggests quasi-Poisson for under-dispersion?) and see what you get in that setting.
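For intuition on what the robust SE does here, compare model-based and sandwich-style standard errors for the mean of a count sample (a toy sketch, not a full regression): the Poisson model-based SE plugs in $\bar y$ for the variance, while the robust SE uses the empirical variance.

```python
import statistics

def poisson_mean_ses(y):
    """Model-based vs robust SE for the mean of counts:
    Poisson assumes Var(y) = E(y); the sandwich version uses Var(y) itself."""
    n = len(y)
    model_se = (statistics.mean(y) / n) ** 0.5
    robust_se = (statistics.pvariance(y) / n) ** 0.5
    return model_se, robust_se
```

Over-dispersed data gives robust > model-based (wider CIs), while under-dispersion flips the inequality, which matches the narrowing described in the question.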
24,353 | When to use robust standard errors in Poisson regression? | I'll differentiate analyses using model based versus robust standard errors by referring to the latter as "GEEs" which is in fact an exchangeable definition. In addition to Scortchi's fantastic explanation:
GEEs can be "biased" in small samples, i.e. 10-50 subjects (Lipsitz, Laird, and Harrington, 1990; Emrich and Piedmonte, 1992; Sharples and Breslow, 1992; Lipsitz et al., 1994; Qu, Piedmonte, and Williams, 1994; Gunsolley, Getchell, and Chinchilli, 1995; Sherman and le Cessie, 1997). When I say that GEEs are biased, what I mean is that the standard error estimate can be either conservative or anticonservative due to small or zero cell counts, depending upon which fitted values exhibit this behavior and how consistent they are with the overall trend of the regression model.
In general, when the parametric model is correctly specified, you still get correct standard error estimates from the model based CIs, but the whole point of using GEE is to accommodate that very big "if". GEEs allow the statistician to merely specify a working probability model for the data, and the parameters (instead of being interpreted in the strictly parametric framework) are considered a type of "sieve" that can generate reproducible values regardless of the underlying, unknown data generating mechanism. This is the heart and soul of semi-parametric analysis, which a GEE is an example of.
GEEs also handle unmeasured sources of covariation in the data, even with specification of an independent correlation matrix. This is because of the use of an empirical rather than model based covariance matrix. In Poisson modeling, for instance, you might be interested in fertility rates of salmon sampled from various streams. The ova harvested from female fish might have an underlying Poisson distribution, but genetic variation comprising shared heritability and available resources in specific streams might make fish within those streams more similar than fish among other streams. The GEE will give correct population standard error estimates as long as the sampling rate is consistent with the population proportion (or is in other ways stratified).
24,354 | When to use robust standard errors in Poisson regression? | You do a test of the null of equidispersion. It's a simple auxiliary OLS regression. There's a description on page 670 of Cameron and Trivedi. With large overdispersion, the standard errors are very deflated, so I would be very wary of any results that hinge on a non-robust VCE when there's overdispersion. With underdispersion, the opposite will be true, which sounds like the scenario you're in.
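A self-contained sketch of that auxiliary regression for the simplest (intercept-only) case; the samplers and constants are illustrative, and the slope $\alpha$ estimates the NB2 dispersion parameter (zero under equidispersion):

```python
import math
import random

def knuth_poisson(rng, lam):
    """Simple Poisson sampler (fine for the small means used here)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

def ct_alpha(y):
    """Cameron-Trivedi auxiliary OLS for an intercept-only Poisson fit:
    regress ((y - mu)^2 - y)/mu on mu with no constant; with a common mu
    the no-constant OLS slope reduces to mean(z)/mu."""
    mu = sum(y) / len(y)
    z = [((yi - mu) ** 2 - yi) / mu for yi in y]
    return sum(zi * mu for zi in z) / (len(y) * mu * mu)

rng = random.Random(42)
n, mu, a = 5000, 5.0, 0.5
equi = [knuth_poisson(rng, mu) for _ in range(n)]
# gamma-Poisson mixture -> negative binomial counts with Var = mu + a*mu^2
over = [knuth_poisson(rng, rng.gammavariate(1 / a, mu * a)) for _ in range(n)]
```

For the equidispersed sample `ct_alpha` hovers near 0; for the over-dispersed sample it recovers something close to the dispersion parameter `a`, signalling that a Poisson VCE would understate the uncertainty.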
24,355 | Stratified classification with random forests (or another classifier) | is there a good way to use the a priori knowledge that all non-c objects likely form two distinct clusters
If you are using a tree based method I don't think it matters as these classifiers partition the feature space then look at the proportion of samples in each class. So all that matters is the relative occurrence of class c in each terminal node.
If, however, you were using something like a mixture of normals, LDA, etc., then combining two clusters would be a bad idea (assuming classes a and b form unique clusters). Here you need to preserve the class structure to accurately describe the feature space that maps to a, b and c. These models assume the features for each class have a different Normal distribution. If you combine a and b you will force a single Normal distribution to be fit to a mixture.
In summary for trees it shouldn't matter much if you:
I. Create three classifiers (1. a vs b, 2. a vs c and 3. b vs c) then predict with a voting based method.
II. Merge classes a and b to form a two-class problem.
III. Predict all three classes then map the prediction to a two class value (e.g. f(c) = c, f(a) = not c, f(b) = not c).
However, if you use a method that fits a distribution to each class, then avoid II. and test which of I. or III. works better for your problem.
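For strategy I., the three pairwise predictions can be combined with a simple vote (a sketch; a full a/b/c tie would need a tie-breaking rule such as falling back on class scores):

```python
from collections import Counter

def ovo_vote(pairwise_preds):
    """Majority vote over the predictions of the three pairwise
    classifiers (a vs b, a vs c, b vs c) for one observation."""
    return Counter(pairwise_preds).most_common(1)[0][0]
```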
If you are using a tree based method I don't think it matters as these classifiers partition | Stratified classification with random forests (or another classifier)
is there a good way to use the a priori knowledge that all non-c objects likely form two distinct clusters
If you are using a tree based method I don't think it matters as these classifiers partition the feature space then look at the proportion of samples in each class. So all that matters is the relative occurrence of class c in each terminal node.
If however you were using something like a mixture of normals, LDA, etc then combining two clusters would be a bad idea (assuming classes a and b form unique clusters). Here you need to preserve the class structure to accurately describe the feature space that maps to a,b and c. These models assume the features for each class have a different Normal distribution. If you combine a and b you will force a single Normal distribution to be fit to a mixture.
In summary for trees it shouldn't matter much if you:
I. Create three classifiers (1. a vs b, 2. a vs c and 3. b vs c) then predict with a voting based method.
II. Merge classes a and b to form a two-class problem.
III. Predict all three classes then map the prediction to a two class value (e.g. f(c) = c, f(a) = not c, f(b) = not c).
However if you use a method that is fitting a distribution to each class then avoid II. and test which of I. or III. works better for your problem | Stratified classification with random forests (or another classifier)
is there a good way to use the a priori knowledge that all non-c objects likely form two distinct clusters
If you are using a tree based method I don't think it matters as these classifiers partition |
24,356 | Why are random effects shrunk towards 0? | Generally speaking, most "random effects" occur in situations where there is also a "fixed effect" or some other part of the model. The general linear mixed model looks like this:
$$y_i=x_i^T\beta+z_i^Tu+\epsilon_i$$
Where $\beta$ is the "fixed effects" and $u$ is the "random effects". Clearly, the distinction can only be at the conceptual level, or in the method of estimation of $u$ and $\beta$. For if I define a new "fixed effect" $\tilde{x}_i=(x_i^T,z_i^T)^T$ and $\tilde{\beta}=(\beta^T,u^T)^T$ then I have an ordinary linear regression:
$$y_i=\tilde{x}_i^T\tilde{\beta}+\epsilon_i$$
This is often a real practical problem when it comes to fitting mixed models when the underlying conceptual goals are not clear. I think the fact that the random effects $u$ are shrunk toward zero, and that the fixed effects $\beta$ are not provides some help here. This means that we will tend to favour the model with only $\beta$ included (i.e. $u=0$) when the estimates of $u$ have low precision in the OLS formulation, and tend to favour the full OLS formulation when the estimates $u$ have high precision.
24,357 | Why are random effects shrunk towards 0? | Doesn't your question answer itself? If a value is expected then a technique that brings values closer to that would be best.
A simple answer comes from the law of large numbers. Let's say subjects are your random effect. If you run subjects A through D in 200 trials and subject E in 20 trials, which subject's measured mean performance do you think is more representative of mu? The law of large numbers would predict that subject E's performance will be more likely to deviate by a larger amount from mu than any of A through D. It may or may not, and any of the subjects could deviate, but we would be much more justified in shrinking subject E's effect toward subjects A through D than the other way around. So random effects that are larger and have smaller N's tend to be the ones that are shrunk the most.
From this description also comes why fixed effects are not shrunk. It's because they're fixed; there's only one in the model. You have no reference to shrink it toward. You could use a slope of 0 as a reference, but that's not what random effects are shrunk toward. They're shrunk toward an overall estimate such as mu. The fixed effect that you have from your model is that estimate.
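That size-dependent shrinkage can be written down directly for a random intercept: the BLUP for group $j$ is the raw mean residual scaled by $n_j/(n_j + \sigma^2/\tau^2)$, so groups with fewer observations are pulled harder toward 0. A minimal sketch, assuming the variance components are known:

```python
def blup_group_effect(group_resids, sigma2, tau2):
    """Shrunken (BLUP) random-intercept estimate for one group:
    the raw group mean residual times n/(n + sigma2/tau2)."""
    n = len(group_resids)
    raw = sum(group_resids) / n
    return (n / (n + sigma2 / tau2)) * raw
```

With the same raw deviation, a subject with 20 trials is shrunk more than one with 200, matching the law-of-large-numbers argument above.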
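The "more shrinkage for smaller N" point can be sketched numerically (Python for illustration; the variance components tau and sigma and the trial counts are made-up values, and the weight formula is the textbook partial-pooling weight, not output from any particular fitted model):

```python
import numpy as np

# Hypothetical between-subject SD (tau) and within-subject SD (sigma);
# subjects A-D get 200 trials each, subject E only 20.
tau, sigma = 5.0, 10.0
n_trials = {"A": 200, "B": 200, "C": 200, "D": 200, "E": 20}

def own_weight(n):
    """Weight a subject's own mean keeps; the rest is pulled toward mu."""
    return tau**2 / (tau**2 + sigma**2 / n)

for s, n in n_trials.items():
    print(s, round(own_weight(n), 3))
```

With these numbers, subjects A-D keep about 98% of their own mean while subject E keeps only about 83%, so E's estimate is pulled hardest toward the overall estimate.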
24,358 | Why are random effects shrunk towards 0?
I think it might be helpful to your intuition to think of a mixed model as a hierarchical or multilevel model. At least to me, it makes more sense when I think of nesting and how the model is working within and across categories in a hierarchical manner.
EDIT: Macro, I had left this a little open-ended because it does help me view it more intuitively, but I'm not sure it's correct. But to expand it in possibly incorrect directions...
I look at it as fixed effects averaging across categories and random effects distinguishing between categories. In some sense, the random effects are "clusters" that share some characteristics, and larger and more compact clusters will have greater influence over the average at the higher level.
With OLS doing the fitting (in phases, I believe), larger and more compact random effect "clusters" will thus pull the fit more strongly towards themselves, while smaller or more diffuse "clusters" will pull the fit less. Or perhaps the fit begins closer to larger and more compact "clusters" since the higher-level average is closer to begin with.
Sorry I can't be clearer, and may even be wrong. It makes sense to me intuitively, but as I try to write it I'm not sure if it's a top-down or bottom-up thing, or something different. Is it a matter of lower-level "clusters" pulling fits towards themselves more strongly, or of having greater influence over the higher-level averaging -- and thus "ending up" nearer to the higher-level average -- or neither?
In either case, I feel that it explains why smaller, more diffuse categories of random variables will be pulled farther towards the mean than larger, more compact categories.
24,359 | How to perform post-hoc comparison on interaction term with mixed-effects model?
Do you mean you want to do all pairwise comparisons for the three factors?
library(nlme)      # for lme()
library(multcomp)  # for glht()
mod1 <- lme(Variable ~ Sediment*Hydrology*Depth, data = mydata, random = ~1|Site/Hydrology/Depth)
# Combine the three factors into a single factor of all level combinations
mydata$SHD <- interaction(mydata$Sediment, mydata$Hydrology, mydata$Depth)
mod2 <- lme(Variable ~ -1 + SHD, data = mydata, random = ~1|Site/Hydrology/Depth)
# Tukey-style all-pairwise comparisons among the combined levels
summary(glht(mod2, linfct = mcp(SHD = "Tukey")))
24,360 | How to perform post-hoc comparison on interaction term with mixed-effects model?
I found the package "lsmeans" quite useful, especially when there is an x*z*v interaction.
However, the package is available only for newer versions of R.
http://cran.r-project.org/web/packages/lsmeans/vignettes/using-lsmeans.pdf
24,361 | How to make R's gamm work faster?
You are not going to be able to achieve a substantial speed-up here, as most of the computation is done inside compiled C code.
If you are fitting correlation structures in gamm(), then you can either simplify the correlation structure you want to fit (e.g. don't use corARMA(p=1, .....) when corAR1(....) would suffice), or nest the correlations within years if you have many observations per year, rather than fitting them over the whole time interval.
If you aren't fitting correlation structures, gam() can fit simple random effects; and if you need more complex random effects, consider the gamm4 package, which is by the same author as mgcv but uses the newer lme4 package (lmer()) instead of the slower/older nlme package (lme()).
You could try simpler bases for the smooth terms; bs = "cr" rather than the default thin-plate spline bases.
If all else fails, and you are just facing big-data issues, the best you can do is exploit multiple cores (manually split a job into ncores chunks and run them in BATCH mode overnight, or via one of the parallel-processing packages in R) and run models over the weekend. If you do this, make sure you wrap your gamm() calls in try() so that the whole job doesn't stop because you have a convergence problem part way through the run.
24,362 | How to make R's gamm work faster?
If gamm() is in R code rather than C, it might be worth using the byte-code compiler that is new in R 2.13. There is a new core package called compiler, and you can compile a function using the cmpfun() function.
More details can be found here:
http://www.r-bloggers.com/the-new-r-compiler-package-in-r-2-13-0-some-first-experiments/
24,363 | How to estimate parameters for a Kalman filter
Max Welling has a nice tutorial that describes all of the Kalman Filtering and Smoothing equations as well as parameter estimation. This may be a good place to start.
24,364 | How to estimate parameters for a Kalman filter
The usual method is to use Maximum Likelihood Estimation. Basically, you need a likelihood function and then run a standard optimizer (such as optim) to maximize your likelihood.
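To make the idea concrete, here is a minimal sketch (in Python with NumPy rather than R's optim; the local-level model and all parameter values are illustrative assumptions, not part of the original answer). It evaluates the Gaussian log-likelihood of a random-walk-plus-noise model via the Kalman recursions and picks the best (q, r) from a small grid, standing in for a real optimizer:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a local-level model: x_t = x_{t-1} + w_t, y_t = x_t + v_t,
# with w_t ~ N(0, q_true) and v_t ~ N(0, r_true).
q_true, r_true, T = 0.5, 1.0, 300
x = np.cumsum(rng.normal(0.0, np.sqrt(q_true), T))
y = x + rng.normal(0.0, np.sqrt(r_true), T)

def neg_log_lik(q, r):
    """Negative log-likelihood built from the Kalman filter's prediction errors."""
    m, p = 0.0, 10.0          # diffuse-ish initial state mean/variance
    nll = 0.0
    for obs in y:
        p_pred = p + q        # predict
        s = p_pred + r        # innovation variance
        e = obs - m           # innovation
        nll += 0.5 * (np.log(2 * np.pi * s) + e**2 / s)
        k = p_pred / s        # Kalman gain; update
        m = m + k * e
        p = (1 - k) * p_pred
    return nll

# Grid search standing in for a numerical optimizer such as R's optim
grid = [0.1, 0.25, 0.5, 1.0, 2.0]
best = min(((q, r) for q in grid for r in grid), key=lambda qr: neg_log_lik(*qr))
print("estimated (q, r):", best)
```

A real implementation would hand neg_log_lik to a proper optimizer and also estimate the initial state; the grid is just to keep the sketch self-contained.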
24,365 | GLM after model selection or regularization
You might check out David Freedman's paper, "A Note on Screening Regression Equations." (ungated)
Using completely uncorrelated data in a simulation, he shows that, if there are many predictors relative to the number of observations, then a standard screening procedure will produce a final regression that contains many (more than by chance) significant predictors and a highly significant F statistic. The final model suggests that it is effective at predicting the outcome, but this success is spurious. He also illustrates these results using asymptotic calculations. Suggested solutions include screening on a sample and assessing the model on the full data set and using at least an order of magnitude more observations than predictors.
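Freedman's screening simulation can be sketched as follows (Python with NumPy for illustration; the sample size, number of noise predictors, and screening cutoff are made-up values, and R² stands in for the full significance-test machinery of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

n, p, keep = 100, 50, 10
X = rng.normal(size=(n, p))   # pure noise predictors
y = rng.normal(size=n)        # outcome unrelated to X

def r_squared(Xs):
    """R^2 of an OLS fit of y on the columns of Xs (with intercept)."""
    A = np.column_stack([np.ones(n), Xs])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1 - resid.var() / y.var()

# Screening step: keep the 10 predictors most correlated with y
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
selected = np.argsort(corr)[-keep:]

print("R^2 after screening:", round(r_squared(X[:, selected]), 3))
print("R^2 of an arbitrary 10:", round(r_squared(X[:, :keep]), 3))
```

With noise-only data, the screened fit typically shows a much larger R² than an arbitrary subset of the same size — exactly the spurious effect Freedman warns about.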
24,366 | GLM after model selection or regularization
Regarding 1): Yes, you do lose this. See e.g. Harrell's Regression Modeling Strategies, a book published by Wiley, or a paper I presented with David Cassell called "Stopping Stepwise", available e.g. at www.nesug.org/proceedings/nesug07/sa/sa07.pdf
24,367 | Soft-thresholding vs. Lasso penalization
What I'll say holds for regression, but should be true for PLS also. It's not a bijection because, depending on how much you enforce the constraint in the $l_1$ formulation, you will have a variety of 'answers', while the second solution admits only $p$ possible answers (where $p$ is the number of variables) <-> there are more solutions in the $l_1$ formulation than in the 'truncation' formulation.
24,368 | Soft-thresholding vs. Lasso penalization
$L_1$ penalization is part of an optimization problem. Soft-thresholding is part of an algorithm. Sometimes $L_1$ penalization leads to soft-thresholding.
For regression, $L_1$ penalized least squares (Lasso) results in soft-thresholding when the columns of the $X$ matrix are orthogonal (assuming the rows correspond to different samples). It is really straightforward to derive when you consider the special case of mean estimation, where the $X$ matrix consists of a single $1$ in each row and zeroes everywhere else.
For the general $X$ matrix, computing the Lasso solution via cyclic coordinate descent results in essentially iterative soft-thresholding. See http://projecteuclid.org/euclid.aoas/1196438020 .
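To illustrate the connection, here is a small sketch (Python; the numbers are arbitrary) of the soft-thresholding operator solving the scalar $L_1$ penalized least-squares problem $\min_b \tfrac{1}{2}(z-b)^2 + \lambda|b|$:

```python
import numpy as np

def soft_threshold(z, lam):
    """Closed-form minimizer of 0.5*(z - b)^2 + lam*|b| over b."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def objective(b, z, lam):
    return 0.5 * (z - b) ** 2 + lam * np.abs(b)

z, lam = 1.3, 0.5
b_star = soft_threshold(z, lam)      # = 0.8: shrunk toward zero by lam

# The soft-thresholded value beats any candidate on a fine grid
candidates = np.linspace(-2, 2, 4001)
ok = objective(b_star, z, lam) <= objective(candidates, z, lam).min() + 1e-12
print(b_star, ok)
```

Small inputs (|z| below lam) are thresholded exactly to zero, which is where the Lasso's sparsity comes from.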
24,369 | How can I adapt ANOVA for binary data?
Contingency table (chi-square). Also Logistic Regression is your friend - use dummy variables.
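A minimal sketch of the chi-square contingency-table test mentioned above (Python; the 2x2 table of counts is invented for illustration):

```python
import numpy as np

# Hypothetical counts: rows = groups, columns = success/failure
table = np.array([[20, 30],
                  [30, 20]], dtype=float)

row = table.sum(axis=1, keepdims=True)
col = table.sum(axis=0, keepdims=True)
expected = row @ col / table.sum()          # expected counts under independence
chi2 = ((table - expected) ** 2 / expected).sum()
df = (table.shape[0] - 1) * (table.shape[1] - 1)
print("chi-square:", chi2, "df:", df)       # compare to a chi-square(df) reference
```

For this table every expected count is 25, giving a statistic of 4.0 on 1 degree of freedom.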
24,370 | Name for a distribution between exponential and gamma?
The density function becomes
$$
f(s) = \frac{\alpha}{1 - 2\, e^{\alpha}\, \mathrm{Ei}\left(3, \alpha\right)} \cdot \frac{s}{s+\alpha}\, e^{-s}, \quad s > 0
$$
where $\mathrm{Ei}$ is the exponential integral.
I cannot recognize that as something having a known name. Where did you encounter this?
24,371 | Generate random variable with given moments
We really need you to give some more information, as asked for in the comments.
There is a monograph Recovery of Distributions via Moments dedicated to your question.
Here: Constructing and Estimating Probability Distributions from Moments is another paper.
Some related posts on sister sites:
https://math.stackexchange.com/questions/386025/finding-a-probability-distribution-given-the-moment-generating-function
https://mathoverflow.net/questions/3525/when-are-probability-distributions-completely-determined-by-their-moments
Another paper is http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.106.6130 Its author lists some possible approaches, like maximum entropy methods (Jaynes 1994) and a method of obtaining upper and lower bounds on the cumulative distribution function (cdf) using the first $n$ moments (https://www.semanticscholar.org/paper/A-moments-based-distribution-bounding-method-R%C3%A1cz-Tari/cd28087b8ead5c4d5c4eebc2b91e2a4b8caef3f3), but then decided to assume a unimodal distribution and fit a flexible distribution family, like the Pearson family, Johnson family or Generalized Tukey Lambda family. Finally she implements a solution based on fitting the first four moments to the Generalized Lambda family.
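As the simplest instance of the moment-fitting idea, here is a sketch (Python; the target moments are made-up values) that matches the first two raw moments with a normal distribution — the flexible-family fits mentioned above generalize this to four moments:

```python
import numpy as np

# Suppose we are given the first two raw moments of an unknown distribution
m1, m2 = 2.0, 5.0                 # E[X] and E[X^2] (illustrative values)

mu = m1
sigma = np.sqrt(m2 - m1**2)       # variance = E[X^2] - E[X]^2

# Sample from the moment-matched normal and check the moments roughly agree
rng = np.random.default_rng(0)
x = rng.normal(mu, sigma, 200_000)
print(round(x.mean(), 2), round((x**2).mean(), 2))
```

Of course many distributions share these two moments; pinning the distribution down further is exactly what the higher-moment and maximum-entropy methods in the references are for.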
24,372 | Multivariate linear regression vs. several univariate regression models
In the setting of classical multivariate linear regression, we have the model:
$$Y = X \beta + \epsilon$$
where $X$ represents the independent variables, $Y$ represents multiple response variables, and $\epsilon$ is an i.i.d. Gaussian noise term. Noise has zero mean, and can be correlated across response variables. The maximum likelihood solution for the weights is equivalent to the least squares solution (regardless of noise correlations) [1][2]:
$$\hat{\beta} = (X^T X)^{-1} X^T Y$$
This is equivalent to independently solving a separate regression problem for each response variable. This can be seen from the fact that the $i$th column of $\hat{\beta}$ (containing weights for the $i$th output variable) can be obtained by multiplying $(X^T X)^{-1} X^T$ by the $i$th column of $Y$ (containing values of the $i$th response variable).
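This column-wise equivalence is easy to check numerically (a Python sketch with random data; nothing here is specific to any particular dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 50, 3, 4                      # samples, predictors, response variables
X = rng.normal(size=(n, p))
Y = rng.normal(size=(n, k))

# Multivariate least squares: all responses at once
B_multi = np.linalg.solve(X.T @ X, X.T @ Y)

# Separate univariate regressions, one response column at a time
B_sep = np.column_stack([np.linalg.solve(X.T @ X, X.T @ Y[:, i]) for i in range(k)])

print(np.allclose(B_multi, B_sep))      # the two estimates coincide
```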
However, multivariate linear regression differs from separately solving individual regression problems because statistical inference procedures account for correlations between the multiple response variables (e.g. see [2],[3],[4]). For example, the noise covariance matrix shows up in sampling distributions, test statistics, and interval estimates.
Another difference emerges if we allow each response variable to have its own set of covariates:
$$Y_i = X_i \beta_i + \epsilon_i$$
where $Y_i$ represents the $i$th response variable, and $X_i$ and $\epsilon_i$ represent its corresponding set of covariates and noise term. As above, the noise terms can be correlated across response variables. In this setting, there exist estimators that are more efficient than least squares, and cannot be reduced to solving separate regression problems for each response variable. For example, see [1].
References
Zellner (1962). An efficient method of estimating seemingly unrelated regressions and tests for aggregation bias.
Helwig (2017). Multivariate linear regression [Slides]
Fox and Weisberg (2011). Multivariate linear models in R. [Appendix to: An R Companion to Applied Regression]
Maitra (2013). Multivariate Linear Regression Models. [Slides]
24,373 | How to interpret the results when both ridge and lasso separately perform well but produce different coefficients
Ridge regression encourages all coefficients to become small. Lasso encourages many/most[**] coefficients to become zero, and a few non-zero. Both of them will reduce the accuracy on the training set, but improve prediction in some way:
ridge regression attempts to improve generalization to the testing set, by reducing overfit
lasso will reduce the number of non-zero coefficients, even if this penalizes performance on both training and test sets
You can get different choices of coefficients if your data is highly correlated. So, you might have 5 features that are correlated:
by assigning small but non-zero coefficients to all of these features, ridge regression can achieve low loss on training set, which might plausibly generalize to testing set
lasso might choose[*] only a single one of these, one that correlates well with the other four, and there's no reason why it should pick the feature with the highest coefficient in the ridge regression version
[*] for a definition of 'choose' meaning: assigns a non-zero coefficient, which is still a bit hand-waving, since ridge regression coefficients will tend to all be non-zero, but eg some might be like 1e-8, and others might be eg 0.01
[**] nuance: as Richard Hardy points out, for some use-cases, a value of $\lambda$ can be chosen which will result in all LASSO coefficients being non-zero, but with some shrinkage
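The correlated-features behaviour can be sketched numerically (Python with NumPy only; the toy data, penalty values, and the bare-bones coordinate-descent lasso are all illustrative assumptions, not anyone's production code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)     # nearly duplicate, highly correlated feature
X = np.column_stack([x1, x2])
y = x1 + 0.1 * rng.normal(size=n)

# Ridge (closed form): spreads weight across the correlated pair
alpha = 1.0
b_ridge = np.linalg.solve(X.T @ X + alpha * np.eye(2), X.T @ y)

# Lasso via a few sweeps of cyclic coordinate descent with soft-thresholding
lam = 0.15
b = np.zeros(2)
for _ in range(100):
    for j in range(2):
        r = y - X @ b + X[:, j] * b[j]              # partial residual
        rho = X[:, j] @ r / n
        b[j] = np.sign(rho) * max(abs(rho) - lam, 0) / (X[:, j] @ X[:, j] / n)
b_lasso = b

print("ridge:", np.round(b_ridge, 3))   # two similar, non-zero coefficients
print("lasso:", np.round(b_lasso, 3))   # concentrates on one of the pair
```

Ridge splits the signal roughly evenly between the near-duplicate columns, while the lasso path loads almost everything on whichever column it updates first — the arbitrariness described above.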
24,374 | How are SVMs = Template Matching? | How is SVM related to neural network? How is it a shallow network?
The SVM is a single-layer neural network with the hinge loss as its loss function and a purely linear activation. The concept has been alluded to in previous threads, such as this one: Single layer NeuralNetwork with RelU activation equal to SVM?
SVM solves an optimization problem with a well defined objective function, how is it doing template matching? What is the template here to which an input is matched?
The Gram Matrix (Kernel Matrix, if you prefer) is a measure of similarity. As the SVM allows sparse solutions, prediction becomes a matter of comparing your sample with the templates, i.e. the support vectors.
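To make the template-matching view concrete, here is a minimal Python sketch of a kernel SVM's decision function; the support vectors, $\alpha$ values and bias below are made up for illustration, not taken from a real fit:

```python
import math

def rbf(x, z, gamma=1.0):
    # RBF kernel: similarity between the input x and a template z
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, z)))

def svm_decision(x, support_vectors, alphas, labels, b=0.0):
    # f(x) = sum_i alpha_i * y_i * K(x, sv_i) + b
    # each support vector acts as a template; the kernel scores the match
    return sum(a * y * rbf(x, sv)
               for a, y, sv in zip(alphas, labels, support_vectors)) + b

# toy "trained" model: one positive and one negative template
svs = [(0.0, 0.0), (2.0, 2.0)]
alphas = [1.0, 1.0]
labels = [+1, -1]
print(svm_decision((0.1, 0.1), svs, alphas, labels))  # positive: closest to the +1 template
```

The sign of the decision function is driven by whichever support vector (template) the input most resembles under the kernel.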
24,375 | What does it mean for something to have good frequentist properties? | A tricky thing about good frequentist properties is that they are properties of a procedure rather than properties of a particular result or inference. A good frequentist procedure yields correct inferences on the specified proportion of cases in the long run, but a good Bayesian procedure is often the one that yields correct inferences in the individual case in question.
For example, consider a Bayesian procedure that is "good" in a general sense because it supplies a posterior probability distribution or credible interval that correctly represents the combination of the evidence (likelihood function) with the prior probability distribution. If the prior contains accurate information (say, rather than empty opinion or some form of uninformative prior), that posterior or interval might result in better inference than a frequentist result from the same data. Better in the sense of leading to more accurate inference about this particular case, or a narrower estimation interval, because the procedure utilises a customised prior containing accurate information. In the long run, the coverage percentage of the intervals and the correctness of inferences are influenced by the quality of each prior. Such a procedure will not have "good frequentist properties" because it is dependent on the quality of the prior, and the prior is not appropriately customised in the long run accounting.
Notice that the procedure does not specify how the prior is to be obtained and so the long run accounting of performance would, presumably, assume any-old prior rather than a custom-designed prior for each case.
A Bayesian procedure can have good frequentist properties. For example, in many cases a Bayesian procedure with a recipe-provided uninformative prior will have fairly good to excellent frequentist properties. Those good properties would be an accident rather than a design feature, and would be a straightforward consequence of such a procedure yielding similar intervals to the frequentist procedures.
Thus a Bayesian procedure can have superior inferential properties in an individual experiment while having poor frequentist properties in the long run. Equivalently, frequentist procedures with good long run frequentist properties often have poor performance in the case of individual experiments.
24,376 | What does it mean for something to have good frequentist properties? | I would answer that your analysis is correct.
To provide a few more insights, I would mention matching priors.
Matching priors are typically priors designed to build Bayesian models with a frequentist property. In particular, they are defined so that the resulting HPD intervals attain the frequentist coverage of confidence intervals (so 95% of the 95% HPD intervals contain the true value in the long run).
Notice that, in 1d, there are analytical solutions: Jeffreys priors are matching priors. In higher dimensions, this is not necessarily the case (to my knowledge, there is no result proving that this is never the case).
In practice, this matching principle is sometimes also applied to tune the value of some parameters of a model: ground truth data are used to optimize these parameters in the sense that their values maximise the frequentist coverage of the resulting credible intervals for the parameter of interest. From my own experience, this may be a very subtle task.
24,377 | What does it mean for something to have good frequentist properties? | If there is any contribution that I can make, let me add a clarification first, and then answer your question directly. There is a lot of confusion about the topic (frequentist properties of Bayesian procedures), and even disagreement among specialists. The first misconception is "Bayesian intervals are meant to contain the true value of the parameter with probability $p$." If you are a pure Bayesian (one that does not adopt any frequentist notion to evaluate the Bayesian procedure), there is no such thing as a "true parameter". The main quantities of interest that are fixed parameters in the frequentist world are random variables in the Bayesian world. As a Bayesian, you do not recover the true value of the parameters, but the distribution of the "parameters", or their moments.
Now, to answer your question: no, it does not imply any assessment of the Bayesian method. Skipping the nuances and focusing on estimation procedures to keep it simple: the frequentism in statistics is the idea of estimating an unknown fixed quantity, or testing a hypothesis, and evaluating such a procedure against a hypothetical repetition of it. You can adopt many criteria to evaluate a procedure. What makes it a frequentist criterion is that one cares about what happens if one adopts the same procedure over and over again. If you do so, you care about the frequentist properties. In other words: "what are the frequentist properties?" means "what happens if we repeat the procedure over and over?" Now, what makes such frequentist properties good is another layer of criteria. The most common frequentist properties that are considered good properties are consistency (in an estimation, if you keep sampling the estimator will converge to the fixed value you are estimating), efficiency (if you keep sampling, the variance of the estimator will go to zero, so you will be more and more accurate), and coverage probability (in many repetitions of the procedure, a 95% confidence interval will contain the true value 95% of the time). The first two are called large sample properties; the third is the genuinely frequentist (Neyman) property, in the sense that it does not necessarily need asymptotic results. So, in sum, in the frequentist framework, there is a true and unknown value. You estimate it and you are always (except in a rare lucky accident) wrong in the estimation, but you are trying to save yourself by requiring that, at least under a hypothetical, indefinite repetition of your estimation, you would be less and less wrong, or you know you would be right a certain proportion of the time. I won't discuss whether it makes sense or not, or the additional assumptions required to justify it, given that was not your question.
Conceptually, that is what frequentist properties refer to, and what good means in general in such a context.
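The coverage property mentioned above can be checked with a small Monte Carlo simulation; this Python sketch uses a normal model with known $\sigma$, and the parameter values are purely illustrative:

```python
import math
import random

random.seed(0)
mu, sigma, n, reps = 3.0, 2.0, 25, 2000
z = 1.96  # approximate 97.5% quantile of the standard normal
covered = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(sample) / n
    half = z * sigma / math.sqrt(n)  # known-sigma interval half-width
    # count how often the interval [xbar - half, xbar + half] captures mu
    if xbar - half <= mu <= xbar + half:
        covered += 1
print(covered / reps)  # close to 0.95
```

The long-run proportion of intervals containing the true value is a property of the repeated procedure, not of any single interval.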
I will close by pointing you to this paper, so that you can judge for yourself whether it makes sense and what it means for a Bayesian procedure to have good frequentist properties (you will find more references there):
Little, R. (2011). Calibrated Bayes, for statistics in general, and missing data in particular. Statistical Science, 26(2), 162–174.
24,378 | Random Forest with longitudinal data | There is a previous post that discussed including mixed-effects for clustered/longitudinal data.
How can I include random effects into a randomForest
Here is a good reference for decision tree implementations in R:
http://statistical-research.com/a-brief-tour-of-the-trees-and-forests/
Also, you may want to review these slides
http://www2.ims.nus.edu.sg/Programs/014swclass/files/denis.pdf
24,379 | Random Forest with longitudinal data | You could try the following packages in R:
REEMtree: not a random forest but a single-tree model where differences between objects are accounted for over time (so-called random or mixed effects); several such trees could possibly be ensembled, or
glmertree-like approaches that can account for segment-wise constant means, which could be adapted to account for individual-specific growth patterns (see here).
Or you simply put age as a variable in your model to account for at least that bit of the individual tree characteristic?
24,380 | Relation between learning rate and number of hidden layers? | This question has been answered here:
With neural networks, should the learning rate be in some way proportional to hidden layer sizes? Should they affect each other?
Short answer is yes, there is a relation, though the relation is not that trivial. All I can tell you is that what you see happens because the optimization surface becomes more complex as the number of hidden layers increases, therefore smaller learning rates are generally better. While getting stuck in local minima is a possibility with a low learning rate, that is much better than pairing a complex surface with a high learning rate.
24,381 | Definition and calculation of the log pointwise predictive density | Pointwise because you are calculating predictive density values for each point observation. Note that you could take 2 or 3 observations together instead. At the end of the day, it is just a name that Gelman invented for marketing purposes.
No, it is not the (log)marginal likelihood. In the marginal likelihood you integrate the full likelihood multiplied by the prior, with respect to the parameters (and the integral of the logarithm is not the logarithm of the integral). Note that the lppd is larger when the samples are close to the mode(s) or in high density regions, which is what you would like to achieve typically with a statistical model.
If you have a posterior sample, calculating the lppd is straightforward since you just need to plug the samples into a Monte Carlo integration:
$$ \int p(y_i \vert \theta)\, p_\text{post}(\theta)\, d\theta \approx \dfrac{1}{S}\sum_{s=1}^S p(y_i\vert \theta^{(s)})$$
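As a concrete sketch, the Monte Carlo approximation above can be coded directly. This Python example assumes a simple normal likelihood with unit variance, and the "posterior draws" are fabricated for illustration rather than coming from a real sampler:

```python
import math
import random

def normal_pdf(y, mu, sigma=1.0):
    # density of N(mu, sigma^2) at y
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def lppd(ys, mu_draws, sigma=1.0):
    # lppd = sum_i log( (1/S) * sum_s p(y_i | theta^(s)) )
    # note: average the densities first, then take the log
    total = 0.0
    for yi in ys:
        mean_density = sum(normal_pdf(yi, m, sigma) for m in mu_draws) / len(mu_draws)
        total += math.log(mean_density)
    return total

random.seed(1)
ys = [0.1, -0.3, 0.5]
mu_draws = [random.gauss(0.0, 0.2) for _ in range(1000)]  # stand-in posterior draws
print(lppd(ys, mu_draws))
```

The key point is the order of operations: the densities are averaged over the draws inside the log, per observation, and only then summed.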
24,382 | xgboost - what is the difference between the tree booster and the linear booster? | I just recently started using gradient boosted trees, please correct me if I'm wrong. I found this wiki page https://en.wikipedia.org/wiki/Gradient_boosting informative. Check out the algorithm and gradient tree boosting section.
As far as I understand, gradient boosting will of course work with most learners. Gradient boosting will, in each iteration $m$, train one new learner $h_m$ on the ensemble residuals of the previous iteration.
The ensemble $F_m$ is updated with
$F_m \leftarrow F_{m-1} + \gamma_m h_m$
where $F_{m-1}$ was the previous ensemble and $\gamma_m$ is a coefficient such that,
$ \gamma_m = \underset{\gamma}{\operatorname{arg\,min}} \sum_{i=1}^n L\left(y_i, F_{m-1}(x_i) + \gamma h_m(x_i)\right).$
The new learner is hereby fused with the old ensemble via the coefficient $\gamma_m$, such that the new ensemble explains the target $y$ most accurately (as defined by the loss function $L$).
As explained on the wiki page, Friedman proposed a special modification for decision trees, where each terminal node $j$*** of the new learner $h_m$ has its own separate $\gamma_{jm}$ value. This modification would not be transferable to most other learners, such as gblinear.
*** (the wiki article describes each $\gamma_{jm}$ as covering a disjoint region $R$ of the feature space. I prefer to think of it as the terminal nodes, which each happen to cover a disjoint region)
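The generic update above can be sketched in a few lines. This Python toy implementation uses one-split regression stumps as base learners and squared loss, with the $\gamma$ line search replaced by a fixed learning rate (a common shrinkage simplification); it is an illustration, not xgboost's actual algorithm:

```python
def fit_stump(xs, residuals):
    # one-split regression stump minimizing squared error on the residuals
    best = None
    for split in sorted(set(xs))[:-1]:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or err < best[0]:
            best = (err, split, lm, rm)
    _, split, lm, rm = best
    return lambda x: lm if x <= split else rm

def boost(xs, ys, rounds=60, lr=0.3):
    base = sum(ys) / len(ys)            # F_0: constant prediction
    stumps, pred = [], [base] * len(xs)
    for _ in range(rounds):
        # residuals are the negative gradient of squared loss
        resid = [y - p for y, p in zip(ys, pred)]
        h = fit_stump(xs, resid)        # h_m fit to the current residuals
        stumps.append(h)
        pred = [p + lr * h(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(h(x) for h in stumps)

xs = [i / 10 for i in range(-20, 21)]
ys = [x * x for x in xs]                # nonlinear target
F = boost(xs, ys)
mse = sum((F(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(mse)
```

Because the stump base learner is nonlinear, the boosted ensemble drives the training error on the quadratic target close to zero; a strictly linear base learner could not.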
Also worth mentioning: if you pick a strictly additive linear regression as base learner, I think the model will fail to fit interactions and non-linearities. In the example below, xgboost cannot fit $y=x_1 x_2$:
library(xgboost)
set.seed(42)                   # for reproducibility
X = replicate(2, rnorm(5000))  # two independent standard-normal features
y = apply(X, 1, prod)          # target is the pure interaction y = x1 * x2
test = sample(5000, 2000)      # hold out 2000 rows for testing
Data = cbind(X = X)
# linear booster: an additive linear model cannot represent x1 * x2
xbm = xgboost(Data[-test, ], label = y[-test], params = list(booster = "gblinear"), nrounds = 500)
ytest = predict(xbm, Data[test, ])
plot(y[test], ytest)
24,383 | Modelling auto-correlated binary time series | The class of score-driven models might be of interest to you:
Creal, D. D., S. J. Koopman, and A. Lucas (2013). Generalized autoregressive score models with applications. Journal of Applied Econometrics 28(5), 777--795
Score-driven models were applied to a binary time series of outcomes of The Boat Race between Oxford and Cambridge,
https://timeserieslab.com/articles.html#boatrace
In that paper, the time-varying probability was obtained with the score-driven methodology by using the (free) Time Series Lab software package.
The score-driven model for binary observations in short:
Our observations can take on one of two values: 0 and 1.
We therefore assume that these observations come from the Bernoulli (binary) distribution with probability density function (pdf)
\begin{equation}\label{eq:pdf}
p(y_t | \pi_t) = \pi_t^{y_t} (1-\pi_t)^{1-y_t}
\end{equation}
where $\pi_t$ is a time-varying probability and $y_t \in \{0,1\}$ for $t = 1,\ldots,T$, where $T$ is the length of the time series.
We can specify the time-varying probability $\pi_t$ as a function of a dynamic process $\alpha _t$, that is
\begin{equation}\label{eq:pi}
\begin{aligned}
\pi_{t+1} &= f(\alpha_t),\\
\alpha_{t+1} &= \omega + \phi \alpha_t + \kappa s_t ,
\end{aligned}
\end{equation}
where the link function $f(\cdot)$ is the logit link function so that $\pi_t$ takes values between 0 and 1.
You can easily include more lags of $\alpha_t$ or $s_t$ into the equation above.
The unknown coefficients include the constant $\omega$, the autoregressive parameter $\phi$, and the score updating parameter $\kappa$ which are estimated by maximum likelihood.
The driving force behind the updating equation of $\alpha_t$ is the scaled score innovation $s_t$
as given by
\begin{equation}\label{eq:score}
\qquad s_t = S_t \cdot \nabla_t, \qquad \nabla_t = \frac{\partial \, \log \, p(y_t | \pi_t)}{\partial \alpha_t},
\end{equation}
for $t = 1,\ldots,T$ and where $\nabla_{t}$ is the score of the density $p(y_t | \pi_t)$ and $S_t$ a scaling factor which is often the inverse of the Fisher information.
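A minimal Python sketch of the recursion above, assuming identity scaling $S_t = 1$ (so the scaled score reduces to $s_t = y_t - \pi_t$, the score of the Bernoulli log-density with respect to $\alpha_t$ under the logit link); the parameter values are illustrative, not estimates:

```python
import math
import random

def sigmoid(a):
    return 1.0 / (1.0 + math.exp(-a))

def gas_binary_filter(y, omega, phi, kappa):
    # score-driven recursion: alpha_{t+1} = omega + phi*alpha_t + kappa*s_t
    # with identity scaling, s_t = y_t - pi_t
    alpha = omega / (1.0 - phi)  # start at the unconditional mean
    pis = []
    for yt in y:
        pi = sigmoid(alpha)
        pis.append(pi)
        alpha = omega + phi * alpha + kappa * (yt - pi)
    return pis

# simulate a persistent binary series, then filter it with the same values
random.seed(1)
omega, phi, kappa = 0.05, 0.9, 0.5
alpha, y = omega / (1.0 - phi), []
for _ in range(500):
    p = sigmoid(alpha)
    yt = 1 if random.random() < p else 0
    y.append(yt)
    alpha = omega + phi * alpha + kappa * (yt - p)

pis = gas_binary_filter(y, omega, phi, kappa)
print(sum(pis) / len(pis))
```

In practice $\omega$, $\phi$ and $\kappa$ would be estimated by maximizing the Bernoulli log-likelihood evaluated at the filtered $\pi_t$.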
[Disclaimer] I am one of the developers of Time Series Lab.
Creal, D. D., S. J. Koopman, and A. Lucas (2013). Generalized autoregressive score models with applications. Journal of Applied Econometri | Modelling auto-correlated binary time series
The class of score-driven models might be of interest to you:
Creal, D. D., S. J. Koopman, and A. Lucas (2013). Generalized autoregressive score models with applications. Journal of Applied Econometrics 28(5), 777--795
24,384 | Modelling auto-correlated binary time series | If I understand your question correctly, the "usual approach" would be a dynamic probit approach, cf. "Predicting U.S. Recessions with Dynamic Binary Response Models", Heikki Kauppi and Pentti Saikkonen, The Review of Economics and Statistics Vol. 90, No. 4 (Nov., 2008), pp. 777-791, The MIT Press, Stable URL: http://www.jstor.org/stable/40043114
Whether that model class directly reflects your underlying example process might depend on what epsilon_t is like exactly, but I think the model fits your statement "all I know is that there is significant autocorrelation".
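To make the idea concrete, here is a small simulation sketch of a dynamic probit recursion in the spirit of that paper (my own illustration; the coefficient values are invented): $\pi_t = \omega + \alpha \pi_{t-1} + \delta y_{t-1}$ with $y_t \sim \text{Bernoulli}(\Phi(\pi_t))$, which produces a positively autocorrelated binary series.

```python
import math
import random

def std_normal_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def simulate_dynamic_probit(T, omega=-0.5, alpha=0.5, delta=1.5, seed=7):
    """Simulate y_t ~ Bernoulli(Phi(pi_t)), pi_t = omega + alpha*pi_{t-1} + delta*y_{t-1}."""
    rng = random.Random(seed)
    pi, y_prev, ys = 0.0, 0, []
    for _ in range(T):
        pi = omega + alpha * pi + delta * y_prev
        y_prev = 1 if rng.random() < std_normal_cdf(pi) else 0
        ys.append(y_prev)
    return ys

def lag1_autocorr(series):
    n = len(series)
    m = sum(series) / n
    num = sum((series[t] - m) * (series[t - 1] - m) for t in range(1, n))
    den = sum((v - m) ** 2 for v in series)
    return num / den

y = simulate_dynamic_probit(5000)
```

With the lagged response entering the index, the simulated series shows clear lag-1 autocorrelation, matching the kind of data the question describes.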
24,385 | Incremental Gaussian Process Regression | There have been several recursive algorithms for doing this. You should take a look at kernel recursive least squares (KRLS) algorithm, and related online GP algorithms.
Van Vaerenbergh, S., Santamaria, I., Liu, W., and Principe, J. C. (2010). Fixed-budget kernel recursive least-squares. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 1882-1885. IEEE.
Lazaro-Gredilla, M., Van Vaerenbergh, S., and Santamaria, I. (2011). A Bayesian approach to tracking with kernel recursive least-squares. In Machine Learning for Signal Processing (MLSP), 2011 IEEE International Workshop on, pages 1-6. IEEE.
Perez-Cruz, F., Van Vaerenbergh, S., Murillo-Fuentes, J. J., Lazaro-Gredilla, M., and Santamaria, I. (2013). Gaussian processes for nonlinear signal processing: An overview of recent advances. IEEE Signal Processing Magazine, 30(4):40-50.
Van Vaerenbergh, S., Santamar | Incremental Gaussian Process Regression
There have been several recursive algorithms for doing this. You should take a look at kernel recursive least squares (KRLS) algorithm, and related online GP algorithms.
Van Vaerenbergh, S., Santamaria, I., Liu, W., and Principe, J. C. (2010). Fixed-budget kernel recursive least-squares. In Acoustics Speech and Signal Processing (ICASSP), 2010 IEEE International Conference on, pages 1882-1885. IEEE.
Lazaro-Gredilla, M., Van Vaerenbergh, S., and Santamaria, I. (2011). A Bayesian approach to tracking with kernel recursive least-squares. In Machine Learning for Signal Processing (MLSP), 2011 IEEE International Workshop on, pages 1-6. IEEE.
Perez-Cruz, F., Van Vaerenbergh, S., Murillo-Fuentes, J. J., Lazaro-Gredilla, M., and Santamaria, I. (2013). Gaussian processes for nonlinear signal processing: An overview of recent advances. IEEE Signal Processing Magazine, 30(4):40-50. | Incremental Gaussian Process Regression
There have been several recursive algorithms for doing this. You should take a look at kernel recursive least squares (KRLS) algorithm, and related online GP algorithms.
Van Vaerenbergh, S., Santamar |
24,386 | Incremental Gaussian Process Regression | Stepwise estimation of GP models is well studied in the literature.
The underlying idea is that, instead of conditioning on all the new observations you want to predict at once, you condition on the one-step-ahead point and repeat this. This becomes close in spirit to Kalman filtering.
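A toy numerical sketch of that one-observation-at-a-time idea (my own illustration, not taken from the literature): keep the inverse of the noisy kernel matrix and grow it with the block-inverse (Schur complement) formula each time a new point arrives, so no full re-inversion is ever performed.

```python
import math

def rbf(x1, x2, ell=1.0):
    """Squared-exponential kernel."""
    return math.exp(-0.5 * ((x1 - x2) / ell) ** 2)

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def gp_update(K_inv, X, x_new, noise=1e-2):
    """Extend K_inv = (K + noise*I)^{-1} when one new input x_new arrives."""
    b = [rbf(x, x_new) for x in X]                 # covariances with old inputs
    c = rbf(x_new, x_new) + noise                  # new diagonal entry
    u = matvec(K_inv, b)                           # A^{-1} b
    s = c - sum(bi * ui for bi, ui in zip(b, u))   # Schur complement (a scalar)
    n = len(X)
    new = [[K_inv[i][j] + u[i] * u[j] / s for j in range(n)] + [-u[i] / s]
           for i in range(n)]
    new.append([-u[j] / s for j in range(n)] + [1.0 / s])
    return new

def gp_mean(K_inv, X, y, x_star):
    """Posterior mean at x_star: k_*^T (K + noise*I)^{-1} y."""
    alpha = matvec(K_inv, y)
    return sum(rbf(x, x_star) * a for x, a in zip(X, alpha))

# incorporate the observations one step at a time
X, y, K_inv = [], [], []
for x_t, y_t in [(0.0, 0.1), (0.5, 0.4), (1.0, 0.8), (1.5, 1.1)]:
    K_inv = gp_update(K_inv, X, x_t)
    X.append(x_t)
    y.append(y_t)
```

Each update costs $O(n^2)$ instead of the $O(n^3)$ of refitting from scratch, which is the point of the recursive KRLS-style algorithms cited above.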
The underlying idea is instead of conditioning on all new observations you want to predict, condition on the one-step ahead point and d | Incremental Gaussian Process Regression
Stepwise estimation of GP models is well studied in literature.
The underlying idea is instead of conditioning on all new observations you want to predict, condition on the one-step ahead point and do this repeatedly. This becomes somehow close to kalman filtering. | Incremental Gaussian Process Regression
Stepwise estimation of GP models is well studied in literature.
The underlying idea is instead of conditioning on all new observations you want to predict, condition on the one-step ahead point and d |
24,387 | Generalized Linear Models vs Timseries models for forecasting | Not really an expert, but this question has been unanswered for a while, so I will try an answer: I can think of 3 differences between GLMs and time series models à la Box and Jenkins:
1) GLMs are rather used to model variable Y as a function of some other variable X ( Y = f(X) ). In time series models you are (mostly?) modeling variable Y as a function of itself, but from previous time steps ( Y(t) = f(Y(t-1), Y(t-2),...) );
2) Related to the previous point: GLMs do not per se consider autocorrelation of the input covariate, while time series models like ARIMA are auto-correlative in nature;
3) I think the auto-regressive models are based on the assumption that residuals are normal with zero mean, whereas GLMs accept a more complex data structure for the response variable, possibly having a non-normal distribution (Gamma, Poisson, etc).
Are there any rules when to use GLM and when to use time series? Unless you are considering time as a random effect in your model, I think GLMs are simply the wrong approach to model time series.
1) GLMs are rath | Generalized Linear Models vs Timseries models for forecasting
Not really an expert but this question has been unanswered for a while, so I will try an answer: I can think of 3 differences between GLMs and Time series models a là Box and Jenkins:
1) GLMs are rather to model variable Y as function of some other variable X (Y = f(X) ). In the time series models you are (mostly?) modeling variable Y as function of itself, but from previous time steps ( Y(t) = f(Y(t-1), Y(t-2),...) );
2) Related to the previous point: GLMs do not consider per se autocorrelation of the input covariate, while the time series models like ARIMA are auto-correlative in nature;
3) I think the auto-regressive models base on the assumption that residuals are normal with zero mean, whereas GLMs accept more complex data structure of the response variable, possibly having a non-normal distribution (Gamma, Poisson, etc).
Are there any rules when to use GLM and when to use Time Series? Unless you are considering in your model time as a random effect, I think GLMs are simply the wrong approach to model time series. | Generalized Linear Models vs Timseries models for forecasting
Not really an expert but this question has been unanswered for a while, so I will try an answer: I can think of 3 differences between GLMs and Time series models a là Box and Jenkins:
1) GLMs are rath |
24,388 | Generalized Linear Models vs Timseries models for forecasting | I myself studied neural behavior for a long time and I must say that GLMs did a really good job in predicting complex neural behavior based on external factors, but also on the activity of other neurons or the past of that specific neuron (i.e., refractory period, rhythmic modulations, etc.). I also used them for extensive surrogate modelling, studying zero-inflation, response behavior, noise correlations and more.
Today, I am often surprised that people usually do not really know what GLMs are, while everyone knows AR models. I still have not encountered many situations in which I'd prefer an AR model.
I think there is a common misconception that GLMs do not account for temporal correlations (i.e. auto-correlation or cross-correlations) but that is not true.
My rule of thumb for when to use an autoregressive model was: only when I have to, or when the nature of the problem suggests it! Whenever the main predictor is the past of the predicted quantity itself, AR models should be considered. They are based on the past and error terms. For example, a pendulum would lend itself to AR models - things that may be described by a differential equation.
I.e. if there are no independent variables that carry information about the future.
If I don't have to I would always prefer GLM models over AR models, because in my perspective they are clearer, more interpretable - at least for me.
However, I think the best way to decide is a) really understand your problem and b) try to model it yourself on paper and in math. Eventually you may naturally arrive at one or the other model. For proper modelling, both variants may need adaptations and modifications. The more complex you model things, the closer both descriptions of your data get.
Often the question is whether you have an appropriate solver for your approach. If you tweak a GLM too much, it may no longer be a convex problem and the likelihood function may have local maxima. Sometimes it makes more sense to switch to an EM algorithm or evolutionary optimization.
Generally, I believe that the more you get into it, the more confusing the different terminologies become - especially when it comes to crazy variations of GLMs.
24,389 | Test if two samples of binomial distributions comply with the same p | The test statistic $p(k_2)$ is that of Fisher’s Exact Test.
Since, holding the total number of successes $s = k_1 + k_2$ fixed (so that $k_1 = s - k_2$ varies along with $k_2$), Vandermonde's identity gives $$\sum_{k_2} \frac{1}{n_1+n_2+1}\binom{n_1}{k_1}\binom{n_2}{k_2}\binom{n_1+n_2}{k_1+k_2}^{-1} = \frac{1}{n_1+n_2+1},$$ normalisation can be obtained by multiplying by $n_1+n_2+1$, and thus:
$$p(k_2) = \binom{n_1}{k_1}\binom{n_2}{k_2}\binom{n_1+n_2}{k_1+k_2}^{-1}.$$
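As a quick numerical sanity check (my own addition, with arbitrary example numbers): holding the total $k_1 + k_2 = s$ fixed, $p(k_2)$ is exactly the hypergeometric probability used by Fisher's exact test, so it sums to one over the admissible splits of $s$.

```python
from math import comb

def p_k2(n1, n2, k1, k2):
    """p(k_2) from the formula above."""
    return comb(n1, k1) * comb(n2, k2) / comb(n1 + n2, k1 + k2)

n1, n2, s = 8, 11, 7     # arbitrary sample sizes and total number of successes
# sum over all splits (k1, k2) with k1 + k2 = s
total = sum(p_k2(n1, n2, s - k2, k2)
            for k2 in range(max(0, s - n1), min(n2, s) + 1))
```

The sum `total` equals 1 up to floating-point error, confirming that $p(k_2)$ is already normalised once the total is conditioned on.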
24,390 | Test if two samples of binomial distributions comply with the same p | I had a similar idea independently and explored it a little further. The result you provide is related to Fisher's exact test, but Fisher's test is conditional. The classic example for this is the lady tasting tea experiment. Eight cups of tea are prepared; in four of them the milk is added first and in the other four the tea is added first. The lady must guess in which cups the milk was added first. The key here is that, whatever her answer, she will choose exactly four cups.
It turns out that the idea of integrating the binomial distribution can be easily extended to an unconditional test (tentatively called m-test). This means that $p\left(k_2\right)$ is compared to every possible result with $n_1$ and $n_2$ independent trials, with any allowed value for $k_1$ and $k_2$. The m-test can be extended to more experiments and more outcomes (not only success and failure). It is relatively easy to test a one-sided hypothesis ($p$ value if $p_1 > p_2$). In case you find it interesting, I uploaded the details to arxiv here, and an R package to apply the test here. My collaborator and I found that the m-test seems to be more powerful than Fisher's exact test and Barnard's test when $n_1$ and $n_2$ are low.
24,391 | Good PCA examples for teaching? | There are some step-by-step guides in Shalizi's notes here: http://www.stat.cmu.edu/~cshalizi/uADA/12/lectures/ch18.pdf,
one being the cars data set from R and another being art and music articles from the New York Times. (Inferring the topic of an article from the words contained in it is a very active research area.) If you don't know/don't want to learn R then you could still use his notes and graphics.
Edit: forgot to say that there are also several good examples in a book by Everitt and Hothorn, which is available on SpringerLink. As I recall, one data set is jet fighters and there is also Roman pottery.
24,392 | Good PCA examples for teaching? | I know it's too late for your lecture, but here's an example using olympic decathlon data that I found very helpful when learning PCA.
A couple R-based write-ups of it:
http://factominer.free.fr/classical-methods/principal-components-analysis.html
http://www.math.vu.nl/sto/onderwijs/multivar/College2.pdf
24,393 | Statistical learning theory VS computational learning theory? | Computational learning, more concretely the probably approximately correct (PAC) framework, answers questions like: how many training examples are needed for a learner to learn, with high probability, a good hypothesis? How much computational effort do I need to learn such a hypothesis with high probability? It does not deal with the concrete classifier you are working with. It is about what you can and cannot learn with the samples at hand.
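For a concrete flavour of such guarantees (my own addition, not part of the original answer): for a finite hypothesis class $H$ and a learner that outputs a hypothesis consistent with the sample, a classical PAC bound says that $m \ge \frac{1}{\epsilon}\left(\ln|H| + \ln\frac{1}{\delta}\right)$ examples suffice for true error at most $\epsilon$ with probability at least $1-\delta$.

```python
import math

def pac_sample_bound(h_size, eps, delta):
    """Smallest integer m with m >= (ln|H| + ln(1/delta)) / eps."""
    return math.ceil((math.log(h_size) + math.log(1.0 / delta)) / eps)

# made-up example: |H| = 1000 hypotheses, 10% error, 95% confidence
m = pac_sample_bound(h_size=1000, eps=0.1, delta=0.05)
```

Note how the bound grows only logarithmically in $|H|$ and $1/\delta$ but linearly in $1/\epsilon$, which is the kind of sample-complexity statement the PAC framework is about.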
In statistical learning theory you rather answer questions of the sort: how many training samples will the classifier misclassify before it has converged to a good hypothesis? I.e., how hard is it to train a classifier, and what guarantees do I have on its performance?
Regretfully, I do not know a source where these two areas are described and compared in a unified manner. Still, though it is not much, I hope that helps.
24,394 | Statistical learning theory VS computational learning theory? | Supplementing the answer by @jpmuc, the distinction between computational and statistical learning seems to be a historical accident, and the theories are slowly merging (and sometimes being taught) as a single unified 'learning theory'. Computational and statistical learning theory are increasingly used as synonyms.
The main idea to come out of computational learning theory thus far is PAC learning, whose formulations often make use of the main contribution of statistical learning theory, the VC dimension.
For more detail and references: https://machinelearningmastery.com/introduction-to-computational-learning-theory/
24,395 | Multi armed bandit for general reward distribution | The research into MAB algorithms is closely tied to theoretical performance guarantees. Indeed, the resurgence of interest into these algorithms (recall Thompson sampling was proposed in the 30s) only really happened since Auer's 2002 paper proving $\mathcal{O}(\log(T))$ regret bounds for the various UCB and $\epsilon$-greedy algorithms. As such, there is little interest in problems where the reward distribution has no known bound since there is almost nothing that can be said theoretically.
Even the simple Thompson sampling algorithm you mention requires Bernoulli distributed rewards, and even that took 80 years to prove a logarithmic regret bound!
In practice, however, in cases where you do not know the reward distribution's bound for certain, you may simply scale rewards to $[0,1]$ by dividing by a large number $S$, and if you observe a reward above $S$, just double the value, $S := 2S$. There are no regret guarantees using this approach, though, but it typically works quite well.
Also, the Thompson sampling algorithm you mention needs Bernoulli trials, so you can't use arbitrary continuous rewards. You could fit a Gaussian posterior distribution instead of a Beta, but this is a bit sensitive to your choice of prior, so you may want to set it to be very flat. If you're not looking to prove anything about your implementation this will probably work quite well.
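Putting those two pragmatic tricks together, here is a rough sketch (my own illustration, with invented reward means and, as stressed above, no regret guarantees): Thompson sampling with a Gaussian posterior per arm, a deliberately flat prior, and a running scale $S$ that doubles whenever an observed reward exceeds it.

```python
import random

random.seed(0)
true_means = [1.0, 3.0, 2.0]     # unknown mean rewards, not bounded a priori
S = 1.0                          # current guess at the reward scale
mu = [0.0, 0.0, 0.0]             # posterior means of the scaled rewards
tau = [1e-3, 1e-3, 1e-3]         # posterior precisions (very flat prior)
pulls = [0, 0, 0]

for t in range(3000):
    # draw one sample from each arm's posterior and play the argmax
    draws = [random.gauss(mu[i], tau[i] ** -0.5) for i in range(3)]
    arm = max(range(3), key=lambda i: draws[i])
    r = random.gauss(true_means[arm], 1.0)
    while abs(r) > S:            # crude doubling trick for the unknown bound
        S *= 2.0                 # (earlier updates used the old scale -- a known flaw)
    x = r / S
    # conjugate normal update with assumed unit observation precision
    mu[arm] = (tau[arm] * mu[arm] + x) / (tau[arm] + 1.0)
    tau[arm] += 1.0
    pulls[arm] += 1
```

After enough rounds the arm with the highest true mean accumulates most of the pulls; the rescaling of past observations when $S$ doubles is one of the rough edges this approach simply lives with.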
24,396 | Logistic regression model manipulation | What the function does:
In essence, the function generates new pseudorandom response (i.e., $Y$) data from a model of your data. The model being used is a standard frequentist model. As is customary, it assumes that your $X$* data are known constants--they are not sampled in any way. What I see as the important feature of this function is that it incorporates uncertainty about the estimated parameters.
* Note that you have to manually add a vector of $1$'s as the leftmost column of your $X$ matrix before inputting it to the function, unless you want to suppress the intercept (which is generally not a good idea).
What was the point of this function:
I don't honestly know. It could have been part of a Bayesian MCMC routine, but it would only have been one piece--you would need more code elsewhere to actually run a Bayesian analysis. I don't feel sufficiently expert on Bayesian methods to comment definitively on this, but the function doesn't 'feel' to me like what would typically be used.
It could also have been used in simulation-based power analyses. (See my answer here: Simulation of logistic regression power analysis - designed experiments, for information on this type of thing.) It is worth noting that power analyses based on prior data that do not take the uncertainty of the parameter estimates into account are often optimistic. (I discuss that point here: Desired effect size vs. expected effect size.)
If you want to use this function:
As @whuber notes in the comments, this function will be inefficient. If you want to use this to do (for example) power analyses, I would split the function into two new functions. The first would read in your data and output the parameters and the uncertainties. The second new function would generate the new pseudorandom $Y$ data. The following is an example (although it may be possible to improve it further):
simulationParameters <- function(Y,X) {
# Y is a vector of binary responses
# X is a design matrix, you don't have to add a vector of 1's
# for the intercept
X <- cbind(1, X) # this adds the intercept for you
fit <- glm.fit(X,Y, family = binomial(link = logit))
beta <- coef(fit)
fs <- summary.glm(fit)
M <- t(chol(fs$cov.unscaled))
return(list(betas=beta, uncertainties=M))
}
simulateY <- function(X, betas, uncertainties, ncolM, N){
# X <- cbind(1, X) # it will be slightly faster if you input w/ 1's
# ncolM <- ncol(uncertainties) # faster if you input this
betastar <- betas + uncertainties %*% rnorm(ncolM)
p <- 1/(1 + exp(-(X %*% betastar)))
return(rbinom(N, size=1, prob=p))
}
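For readers outside R, the second step (draw a coefficient vector from its approximate sampling distribution, then simulate binary responses) can be sketched in pure Python. Everything here — the design matrix, the point estimates `betas`, and the Cholesky factor `L` of their covariance — is made up for illustration; in practice these would come from a fitted model, as in the R code above.

```python
import math
import random

def simulate_y(X, betas, chol_cov, rng):
    """Draw one coefficient vector from N(betas, chol_cov @ chol_cov^T),
    then simulate binary responses through the logistic link."""
    k = len(betas)
    z = [rng.gauss(0.0, 1.0) for _ in range(k)]
    # betastar = betas + L z, where L is a lower-triangular Cholesky factor
    betastar = [betas[j] + sum(chol_cov[j][m] * z[m] for m in range(j + 1))
                for j in range(k)]
    ys = []
    for row in X:
        eta = sum(b * x for b, x in zip(betastar, row))
        p = 1.0 / (1.0 + math.exp(-eta))
        ys.append(1 if rng.random() < p else 0)
    return ys

rng = random.Random(42)
X = [[1.0, x / 10.0] for x in range(100)]   # intercept column + one predictor
betas = [-2.0, 0.5]                          # assumed point estimates
L = [[0.3, 0.0], [0.1, 0.2]]                 # assumed Cholesky factor of their covariance
y = simulate_y(X, betas, L, rng)
```

Calling `simulate_y` repeatedly with fresh draws gives the replicated pseudo-response vectors a power simulation would consume.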
24,397 | Meaning of partial correlation | Note that correlation conditional on $Z$ is a variable that depends on $Z$, whereas partial correlation is a single number.
Furthermore, partial correlation is defined based on the residuals from linear regression. Thus, if the actual relationship is nonlinear, the partial correlation may take a different value than the conditional correlation, even if the correlation conditional on $Z$ is a constant independent of $Z$. On the other hand, if $X,Y,Z$ are multivariate Gaussian, the partial correlation equals the conditional correlation.
For an example where constant conditional correlation $\neq$ partial correlation: $$Z\sim U(-1,1),~X=Z^2+e,~Y=Z^2-e,~e\sim N(0,1),e\perp Z.$$ No matter which value $Z$ takes, the conditional correlation will be $-1$. However, since $\operatorname{Cov}(X,Z)=\operatorname{Cov}(Y,Z)=0$, the linear regressions of $X$ and $Y$ on $Z$ have slope 0, so the residuals are just $X$ and $Y$ minus their means. Thus, the partial correlation equals the correlation between $X$ and $Y$, which does not equal $-1$, as clearly the variables are not perfectly correlated if $Z$ is not known.
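This can be checked by simulation. The following pure-Python sketch (sample size, slice width, and seed are all arbitrary) estimates the conditional correlation within a thin slice of $Z$ and the partial correlation from regression residuals:

```python
import random

def corr(xs, ys):
    """Sample Pearson correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / (sxx * syy) ** 0.5

def residuals(v, z):
    """Residuals from the least-squares line of v on z."""
    mz, mv = sum(z) / len(z), sum(v) / len(v)
    szz = sum((a - mz) ** 2 for a in z)
    b = sum((a - mz) * (c - mv) for a, c in zip(z, v)) / szz
    return [c - mv - b * (a - mz) for a, c in zip(z, v)]

rng = random.Random(0)
n = 20000
z = [rng.uniform(-1, 1) for _ in range(n)]
e = [rng.gauss(0, 1) for _ in range(n)]
x = [zi * zi + ei for zi, ei in zip(z, e)]
y = [zi * zi - ei for zi, ei in zip(z, e)]

# Conditional correlation: within a thin slice of Z, X and Y are
# almost perfectly negatively correlated.
idx = [i for i in range(n) if abs(z[i]) < 0.05]
cond = corr([x[i] for i in idx], [y[i] for i in idx])   # close to -1

# Partial correlation: correlate the residuals from regressing on Z.
partial = corr(residuals(x, z), residuals(y, z))        # roughly -0.84, not -1
```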
Apparently, Baba and Sibuya (2005) show the equivalence of partial correlation and conditional correlation for some other distributions besides the multivariate Gaussian, though I have not read the paper.
The answer to your question 2 seems to exist in the Wikipedia article, the second equation under Using recursive formula.
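For a single conditioning variable, that recursive formula reduces to $\rho_{XY\cdot Z} = (\rho_{XY} - \rho_{XZ}\rho_{YZ})/\sqrt{(1-\rho_{XZ}^2)(1-\rho_{YZ}^2)}$, which is easy to sketch (illustrative numbers only):

```python
def partial_corr(r_xy, r_xz, r_yz):
    """First-order partial correlation of X and Y given a single Z,
    computed from the three pairwise correlations."""
    return (r_xy - r_xz * r_yz) / (((1 - r_xz ** 2) * (1 - r_yz ** 2)) ** 0.5)

# If X and Y each correlate 0.8 with Z, and 0.64 = 0.8 * 0.8 with each other,
# then Z accounts for essentially all of their association:
p = partial_corr(0.64, 0.8, 0.8)   # ~0, up to floating-point noise
```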
24,398 | Kernel Ridge Regression Algorithmic Efficiency | (a) The purpose of using a kernel is to solve a nonlinear regression problem in this case. A good kernel will allow you to solve problems in a possibly infinite-dimensional feature space.
But using a linear kernel $K(\mathbf{x,y}) = \mathbf{x}^\top \mathbf{y}$ and doing the kernel ridge regression in the dual space is the same as solving the problem in the primal space, i.e., it brings no advantage (it is just much slower as the number of samples grows, as you observed).
(b) One of the most popular choices is the squared exponential kernel $K(\mathbf{x},\mathbf{y}) = \exp(-\frac{\tau}{2} ||\mathbf{x}-\mathbf{y}||^2)$, which is universal (see ref below). There are many, many kernels, and each of them induces a different inner product (and hence metric) on your feature space.
(c) Straightforward implementation requires solving a linear equation of size $n$, so it's $O(n^3)$. There are many faster approximation methods such as Nyström approximation. This is an area of active research.
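A minimal pure-Python sketch of the straightforward approach (toy one-dimensional data; kernel width and ridge penalty are arbitrary): solving $(K+\lambda I)\alpha = \mathbf{y}$ is the $O(n^3)$ step.

```python
import math

TAU = 20.0  # arbitrary kernel width for this toy example

def rbf(x, y):
    return math.exp(-0.5 * TAU * (x - y) ** 2)

def solve(A, b):
    """Gaussian elimination with partial pivoting -- the O(n^3) step."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(xs, ys, lam=1e-6):
    """Dual coefficients: alpha = (K + lam * I)^{-1} y."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (lam if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    return solve(K, ys)

def predict(xs, alpha, x):
    return sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [v * v for v in xs]                 # a nonlinear target
alpha = kernel_ridge_fit(xs, ys)
yhat = predict(xs, alpha, 0.5)           # near 0.25: tiny lam nearly interpolates
```

Nyström-type methods replace the full $n \times n$ solve with a low-rank approximation of $K$, which is what buys the speedup mentioned above.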
References:
Bharath Sriperumbudur, Kenji Fukumizu, and Gert Lanckriet. On the relation between universality, characteristic kernels and RKHS embedding of measures. Journal of Machine Learning Research, 9:773–780, 2010.
Bernhard Schölkopf, Alexander J. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
24,399 | Steps to figure out a posterior distribution when it might be simple enough to have an analytic form? | The clue that was in my answer to the previous question is to look at how I integrated out the parameters - because you will do exactly the same integrals here. Your question assumes the variance parameters are known, so they are constants. You only need to look at the $\alpha,\mu$ dependence of the numerator. To see this, note that we can write:
$$p(\mu,\alpha|Y)=\frac{p(\mu,\alpha)p(Y|\mu,\alpha)}{\int\int p(\mu,\alpha)p(Y|\mu,\alpha)d\mu d\alpha}$$
$$=\frac{\frac{1}{(2\pi\sigma_{e}^{2})^{5}\cdot{}2\pi\sigma_{p}^{2}} \exp{\biggl [ -\frac{1}{2\sigma_{e}^{2}}\sum_{i=2}^{11}(Y_{i} - \mu - \alpha\cdot{}Y_{i-1})^{2} - \frac{\mu^{2}}{2\sigma_{p}^{2}} - \frac{\alpha^{2}}{2\sigma_{p}^{2}} \biggr ] }}{\int\int \frac{1}{(2\pi\sigma_{e}^{2})^{5}\cdot{}2\pi\sigma_{p}^{2}} \exp{\biggl [ -\frac{1}{2\sigma_{e}^{2}}\sum_{i=2}^{11}(Y_{i} - \mu - \alpha\cdot{}Y_{i-1})^{2} - \frac{\mu^{2}}{2\sigma_{p}^{2}} - \frac{\alpha^{2}}{2\sigma_{p}^{2}} \biggr ] }d\mu d\alpha}$$
Notice how we can pull the first factor $\frac{1}{(2\pi\sigma_{e}^{2})^{5}\cdot{}2\pi\sigma_{p}^{2}}$ out of the double integral on the denominator, and it cancels with the numerator. We can also pull out the sum of squares $\exp{\biggl [ -\frac{1}{2\sigma_{e}^{2}}\sum_{i=2}^{11}Y_{i}^{2} \biggr ]}$ and it will also cancel. The integral we are left with is now (after expanding the squared term):
$$=\frac{\exp{\biggl [ -\frac{10\mu^2+\alpha^2\sum_{i=1}^{10}Y_{i}^{2}-2\mu\sum_{i=2}^{11}Y_i-2\alpha\sum_{i=2}^{11}Y_{i}Y_{i-1}+2\mu\alpha\sum_{i=1}^{10}Y_i}{2\sigma_{e}^{2}} - \frac{\mu^{2}}{2\sigma_{p}^{2}} - \frac{\alpha^{2}}{2\sigma_{p}^{2}} \biggr ] }}{\int\int \exp{\biggl [ -\frac{10\mu^2+\alpha^2\sum_{i=1}^{10}Y_{i}^{2}-2\mu\sum_{i=2}^{11}Y_i-2\alpha\sum_{i=2}^{11}Y_{i}Y_{i-1}+2\mu\alpha\sum_{i=1}^{10}Y_i}{2\sigma_{e}^{2}} - \frac{\mu^{2}}{2\sigma_{p}^{2}} - \frac{\alpha^{2}}{2\sigma_{p}^{2}} \biggr ] }d\mu d\alpha}$$
Now we can use a general result from the normal pdf.
$$\int \exp\left(-az^2+bz-c\right)dz=\sqrt{\frac{\pi}{a}}\exp\left(\frac{b^2}{4a}-c\right)$$
This follows from completing the square on $-az^2+bz$ and noting that $c$ does not depend on $z$. Note that the inner integral over $\mu$ is of this form with $a=\frac{10}{2\sigma^2_e}+\frac{1}{2\sigma^2_p}$ and $b=\frac{\sum_{i=2}^{11}Y_i-\alpha\sum_{i=1}^{10}Y_i}{\sigma_{e}^{2}}$ and $c=\frac{\alpha^2\sum_{i=1}^{10}Y_{i}^{2}-2\alpha\sum_{i=2}^{11}Y_{i}Y_{i-1}}{2\sigma_{e}^{2}}+ \frac{\alpha^{2}}{2\sigma_{p}^{2}}$. After doing this integral, you will find that the remaining integral over $\alpha$ is also of this form, so you can use this formula again, with a different $a,b,c$. Then you should be able to write your posterior in the form $\frac{1}{2\pi|V|^{\frac{1}{2}}}\exp\left[-\frac{1}{2}(\mu-\hat{\mu},\alpha-\hat{\alpha})V^{-1}(\mu-\hat{\mu},\alpha-\hat{\alpha})^T\right]$ where $V$ is a $2\times 2$ matrix
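This identity is easy to sanity-check numerically; a pure-Python Riemann sum with arbitrary illustrative values $a=2$, $b=1$, $c=1/2$:

```python
import math

# Check:  integral of exp(-a z^2 + b z - c) dz  =  sqrt(pi/a) * exp(b^2/(4a) - c)
a, b, c = 2.0, 1.0, 0.5

dz = 1e-3
# The integrand is negligible well before |z| = 20, so this truncation is safe.
numeric = sum(math.exp(-a * z * z + b * z - c) * dz
              for z in (k * dz for k in range(-20000, 20001)))
closed_form = math.sqrt(math.pi / a) * math.exp(b * b / (4 * a) - c)
```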
Let me know if you need more clues.
update
(note: correct formula, should be $10\mu^2$ instead of $\mu^2$)
if we look at the quadratic form you've written in the update, we notice there is $5$ coefficients ($L$ is irrelevant for posterior as we can always add any constant which will cancel in the denominator). We also have $5$ unknowns $\hat{\mu},\hat{\alpha},Q_{11},Q_{12}=Q_{21},Q_{22}$. Hence this is a "well posed" problem so long as the equations are linearly independent. If we expand the quadratic $(\mu-\hat{\mu},\alpha-\hat{\alpha})Q(\mu-\hat{\mu},\alpha-\hat{\alpha})^{t}$ we get:
$$Q_{11}(\mu-\hat{\mu})^2+Q_{22}(\alpha-\hat{\alpha})^2+2Q_{12}(\mu-\hat{\mu})(\alpha-\hat{\alpha})$$
$$=Q_{11}\mu^{2} + 2Q_{21}\mu\alpha + Q_{22}\alpha^{2} - (2Q_{11}\hat{\mu}+2Q_{12}\hat{\alpha})\mu - (2Q_{22}\hat{\alpha}+2Q_{12}\hat{\mu})\alpha +$$
$$+Q_{11}\hat{\mu}^2+Q_{22}\hat{\alpha}^2+2Q_{12}\hat{\mu}\hat{\alpha}$$
Comparing second-order coefficients we get $A=Q_{11},B=2Q_{12},C=Q_{22}$, which tells us what the (inverse) covariance matrix looks like. We also have two slightly more complicated equations for $\hat{\alpha},\hat{\mu}$ after substituting for $Q$. These can be written in matrix form as:
$$
-\begin{pmatrix}2A & B \\ B & 2C\end{pmatrix}
\begin{pmatrix}\hat{\mu} \\ \hat{\alpha}\end{pmatrix} = \begin{pmatrix}J \\ K\end{pmatrix}
$$
Thus the estimates are given by:
$$
\begin{pmatrix}\hat{\mu} \\ \hat{\alpha}\end{pmatrix} = -\begin{pmatrix}2A & B \\ B & 2C\end{pmatrix}^{-1}\begin{pmatrix}J \\ K\end{pmatrix}=\frac{1}{4AC-B^2}\begin{pmatrix}BK-2JC \\ BJ-2KA\end{pmatrix}
$$
This shows that the estimates are unique only when $4AC\neq B^2$. Now we have:
$$\begin{array}{c c}
A=\frac{10}{2\sigma^2_e}+\frac{1}{2\sigma^2_p} &
B=\frac{\sum_{i=1}^{10}Y_i}{\sigma_{e}^{2}} &
C=\frac{\sum_{i=1}^{10}Y_{i}^{2}}{2\sigma^2_e}+\frac{1}{2\sigma^2_p} \\
J=-\frac{\sum_{i=2}^{11}Y_i}{\sigma_{e}^{2}} &
K=-\frac{\sum_{i=2}^{11}Y_{i}Y_{i-1}}{\sigma_{e}^{2}}
\end{array}$$
Note that if we define $X_i=Y_{i-1}$ for $i=2,\dots,11$ and take the limit $\sigma^2_p\to\infty$ then the estimates for $\mu,\alpha$ are given by the usual least squares estimate $\hat{\alpha}=\frac{\sum_{i=2}^{11}(Y_{i}-\overline{Y})(X_{i}-\overline{X})}{\sum_{i=2}^{11}(X_{i}-\overline{X})^2}$ and $\hat{\mu}=\overline{Y}-\hat{\alpha}\overline{X}$ where $\overline{Y}=\frac{1}{10}\sum_{i=2}^{11}Y_i$ and $\overline{X}=\frac{1}{10}\sum_{i=2}^{11}X_i=\frac{1}{10}\sum_{i=1}^{10}Y_i$. So the posterior estimates are a weighted average between the OLS estimates and the prior estimate $(0,0)$.
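As a numerical check of the $A,B,C,J,K$ formulas (simulated $Y$, variances assumed known), the posterior estimates should approach the OLS estimates when $\sigma^2_p$ is made very large:

```python
import random

def posterior_estimates(Y, s2e, s2p):
    """Posterior (mu_hat, alpha_hat) via the A, B, C, J, K formulas,
    for Y = (Y_1, ..., Y_11)."""
    past = Y[:-1]    # Y_1 .. Y_10
    pres = Y[1:]     # Y_2 .. Y_11
    A = 10 / (2 * s2e) + 1 / (2 * s2p)
    B = sum(past) / s2e
    C = sum(v * v for v in past) / (2 * s2e) + 1 / (2 * s2p)
    J = -sum(pres) / s2e
    K = -sum(a * b for a, b in zip(pres, past)) / s2e
    det = 4 * A * C - B * B
    return ((B * K - 2 * J * C) / det, (B * J - 2 * K * A) / det)

rng = random.Random(1)
Y = [rng.gauss(0, 1) for _ in range(11)]

# With a very diffuse prior the posterior estimates approach OLS:
mu_hat, alpha_hat = posterior_estimates(Y, s2e=1.0, s2p=1e8)

x, y = Y[:-1], Y[1:]
mx, my = sum(x) / 10, sum(y) / 10
alpha_ols = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
mu_ols = my - alpha_ols * mx
```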
24,400 | Objections to randomization | The papers from Koch, Abel, and Urbach do not reject randomization summarily as a means to achieve 1-4, rather they claim it is neither sufficient nor necessary to achieve those criteria. The take-home message is a) An RCT must not necessarily be done to answer every scientific question and b) Any published RCT may not be gold-standard evidence of efficacy.
As an alternative to the (blinded) RCT, the open label trial (OLT) is an obvious choice, since the presumptive purpose of said trial is to evaluate a novel therapy not readily accessible to the patient population. Not every question is answered in analyses of randomized sets in RCTs, so principles similar to those for analyzing observational studies apply: control of causal factors, block randomization, and so on improve the efficiency and reduce the bias of such studies.
means to validate certain statistical tests
(are randomized participants "independent" and "identically distributed" per assumptions of t-test, log-rank test, and so on?)
RCT pros: Clusters of correlated participants - so called "contamination" - are likely to be "broken up" in study randomization so that, without contamination, the dependence structure is similar within treatment assignment and methods for independent data estimate the correct standard errors anyway. Similarly, prognostic factors are likely to be balanced between study groups at the time of randomization.
RCT cons: Randomization does not completely address contamination: participants as a consequence of their indication and even participation in the study are likely to relate to one another and influence participation and outcomes as a result. Even with blocking, the distribution of prognostic factors is heterogeneous between arms. Those receiving the higher risk treatment and who are at higher risk at baseline are more likely to "die off" sooner, leading to a healthy risk set at future event times (survivor bias). This can lead to crossing hazards which is inefficient for log-rank tests.
basis for causal inference,
is the estimated effect the same as a "rewind-time" instance of assigning all treated participants to control, and subtracting those differences?
RCT+: assignment of treatment is completely at random, no confounding by indication, blinding (when possible) may reduce risk of differential treatment discontinuation.
RCT-: Differential and non-differential follow-up due to attrition will contribute to imbalanced participants upon study completion. Non-blinded studies introduce risk of differential treatment discontinuation. Study parameters around randomization, blinding, and invasive therapies necessarily restrict the eligible study pool to a smaller subset which will consent to those parameters (healthy participant bias).
facilitation of masking:
when treatment is randomly assigned, is it possible to administer both treatments in a way that participants do not know what arm they have been randomized to?
RCT+: When an appropriate placebo is available, it can be done. It should be noted that the appropriate use of "placebo" is such that a participant receives standard of care (SOC). For instance, suppose an IND is administered by injection and SOC is a pill. Control participants receive SOC in an (unlabeled) pill form and a saline injection, while active arm participants receive the IND injection and an identical sugar pill.
RCT-: A placebo may not be available. For instance, Provenge (sipuleucel-T) is a cellular immunotherapy for high grade prostate cancer. Administration of this treatment requires an invasive procedure called leukapheresis. Leukapheresis is too invasive and costly to ethically be performed in the control arm, so Provenge-assigned participants will know they are receiving the IND.
method to balance comparison groups.
is the expected distribution of "covariates" in the analysis sample equal in distribution between IND-treated and control participants?
RCT+: at time of randomization a 50/50 sample balance of treatment and control groups is noted, as well as an expected probabilistic balance of possible prognostic factors. Re-randomization is possible for batch-entry designs although they are far less prevalent these days.
RCT-: Efficient design still requires control of prognostic factors; the optimal design in the presence of a treatment effect is not 50/50 balance for most analyses; and attrition and unequal cluster size due to loss-to-follow-up commonly mean that a balanced design is not guaranteed. Randomization does not guarantee balance of prognostic factors.
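The last point is easy to illustrate with a quick simulation (all parameters arbitrary): in small two-arm trials, a binary prognostic factor is frequently imbalanced between arms by a clinically meaningful margin.

```python
import random

# Simulate many small two-arm trials (10 vs 10) with one binary prognostic
# factor present in half the population, and count how often the arms end
# up imbalanced on that factor by 20+ percentage points.
rng = random.Random(7)
trials, imbalanced = 10000, 0
for _ in range(trials):
    factors = [rng.random() < 0.5 for _ in range(20)]
    order = list(range(20))
    rng.shuffle(order)
    arm_a = order[:10]                        # simple 10/10 randomization
    n_a = sum(factors[i] for i in arm_a)
    n_b = sum(factors) - n_a
    if abs(n_a - n_b) >= 2:                   # >= 20 points between arms of 10
        imbalanced += 1

frac = imbalanced / trials
```

With these (arbitrary) settings roughly half of the simulated trials show a 20-percentage-point imbalance on the factor, which is one reason blocking or stratifying on known prognostic factors remains standard practice even in randomized designs.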