39,401
What is the difference between t-SNE and plain SNE?
Actually, I find that because the t-distribution is a heavy-tailed distribution, it prevents the crowding problem (which is one of the disadvantages of plain SNE).
39,402
What is the difference between t-SNE and plain SNE?
You can also watch this lecture from 17:48 to 20:12 to hear the reasoning, with a great example, from the author of t-SNE.
39,403
What is the difference between t-SNE and plain SNE?
The cluster structures produced by t-SNE tend to be more separated, to have more stable shapes, and to be more repeatable.
39,404
What is the difference between t-SNE and plain SNE?
We are learning a topological structure here, so mapping the neighbors into the lower dimension is the necessary and fundamental objective of SNE. Note that in the lower dimension we do not have much space to accommodate all the neighbors.
For motivation, note that we can accommodate at most $n+1$ equidistant points in an $n$-dimensional space. So what a basic SNE algorithm does is collapse all the equidistant points onto one point in the lower dimension. This phenomenon is called the crowding problem.
To mitigate this problem the t-distribution was suggested. As it has a heavy tail, it allows points suffering from the crowding problem to be placed somewhat farther away (but not too far).
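The effect of the heavy tail can be seen by comparing the two low-dimensional similarity kernels directly. A minimal sketch (unnormalised kernels; the distances are arbitrary values chosen for illustration):

```python
import numpy as np

# Pairwise distances between points in the low-dimensional map
d = np.array([0.5, 1.0, 2.0, 4.0])

gauss = np.exp(-d**2)          # plain SNE: Gaussian kernel (unnormalised)
student = 1.0 / (1.0 + d**2)   # t-SNE: Student-t kernel with one degree of freedom

# The ratio grows with distance: the heavy tail keeps a non-negligible
# similarity for far-apart map points, so moderately dissimilar points
# can be placed farther out without a large penalty -- this is what
# relieves the crowding problem.
ratio = student / gauss
```

The ratio is strictly increasing in the distance, which is exactly the "somewhat distant place (but not too much)" behaviour described above.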
39,405
Neural network softmax activation
The internet has told me that when using softmax combined with cross-entropy, Step 1 simply becomes $\frac{\partial E} {\partial z_j} = o_j - t_j$, where $t$ is a one-hot encoded target output vector. Is this correct?
Yes. Before going through the proof, let me change the notation to avoid careless mistakes in translation:
Notation:
I'll follow the notation in this made-up example of color classification:
whereby $j$ is the index denoting any of the $K$ output neurons, not necessarily the one corresponding to the true value, $t$. Now,
$$\begin{align} o_j&=\sigma(j)=\sigma(z_j)=\text{softmax}(j)=\text{softmax (neuron }j)=\frac{e^{z_j}}{\displaystyle\sum_K e^{z_k}}\\[3ex]
z_j &= \mathbf w_j^\top \mathbf x = \text{preactivation (neuron }j)
\end{align}$$
The loss function is the negative log likelihood:
$$E = -\log \sigma(t) = -\log \left(\text{softmax}(t)\right)$$
The negative log likelihood is also known as the multiclass cross-entropy (ref: Pattern Recognition and Machine Learning Section 4.3.4), as they are in fact two different interpretations of the same formula.
Gradient of the loss function with respect to the pre-activation of an output neuron:
$$\begin{align}
\frac{\partial E}{\partial z_j}&=\frac{\partial}{\partial z_j}\,-\log\left( \sigma(t)\right)\\[2ex]
&=
\frac{-1}{\sigma(t)}\quad\frac{\partial}{\partial z_j}\sigma(t)\\[2ex]
&=
\frac{-1}{\sigma(t)}\quad\frac{\partial}{\partial z_j}\sigma(z_t)\\[2ex]
&=
\frac{-1}{\sigma(t)}\quad\frac{\partial}{\partial z_j}\frac{e^{z_t}}{\displaystyle\sum_K e^{z_k}}\\[2ex]
&= \frac{-1}{\sigma(t)}\quad\left[ \frac{\frac{\partial }{\partial z_j }e^{z_t}}{\displaystyle \sum_K e^{z_k}}
\quad - \quad
\frac{e^{z_t}\quad \frac{\partial}{\partial z_j}\displaystyle \sum_K e^{z_k}}{\left[\displaystyle\sum_K e^{z_k}\right]^2}\right]\\[2ex]
&= \frac{-1}{\sigma(t)}\quad\left[ \frac{\delta_{jt}\;e^{z_t}}{\displaystyle \sum_K e^{z_k}}
\quad - \quad \frac{e^{z_t}}{\displaystyle\sum_K e^{z_k}}
\frac{e^{z_j}}{\displaystyle\sum_K e^{z_k}}\right]\\[2ex]
&= \frac{-1}{\sigma(t)}\quad\left(\delta_{jt}\sigma(t) - \sigma(t)\sigma(j) \right)\\[2ex]
&= - (\delta_{jt} - \sigma(j))\\[2ex]
&= \sigma(j) - \delta_{jt}
\end{align}$$
This is practically identical to $\frac{\partial E} {\partial z_j} = o_j - t_j$, and it does become identical if instead of focusing on $j$ as an individual output neuron, we transition to vectorial notation (as indicated in your question), and $t_j$ becomes the one-hot encoded vector of true values, which in my notation would be $\small \begin{bmatrix}0&0&0&\cdots&1&0&0&0_K\end{bmatrix}^\top$.
Then, with $\frac{\partial E} {\partial z_j} = o_j - t_j$ we are really calculating the gradient of the loss function with respect to the preactivation of all output neurons: the vector $t_j$ will contain a $1$ only in the neuron corresponding to the correct category, which is equivalent to the delta function $\delta_{jt}$, which is $1$ only when differentiating with respect to the pre-activation of the output neuron of the correct category.
In Geoffrey Hinton's Coursera course, the following chunk of code illustrates the implementation in Octave:
%% Compute derivative of cross-entropy loss function.
error_deriv = output_layer_state - expanded_target_batch;
The expanded_target_batch corresponds to the one-hot encoded sparse matrix of targets of the training set. Hence, for the majority of the output neurons, error_deriv = output_layer_state $(\sigma(j))$, because $\delta_{jt}$ is $0$; for the neuron corresponding to the correct classification, a $1$ is subtracted from $\sigma(j).$
The actual measurement of the cost is carried out with...
% MEASURE LOSS FUNCTION.
CE = -sum(sum(...
expanded_target_batch .* log(output_layer_state + tiny))) / batchsize;
We see again the $\frac{\partial E}{\partial z_j}$ in the beginning of the backpropagation algorithm:
$$\small\frac{\partial E}{\partial W_{hidd-2-out}}=\frac{\partial \text{outer}_{input}}{\partial W_{hidd-2-out}}\, \frac{\partial E}{\partial \text{outer}_{input}}=\frac{\partial z_j}{\partial W_{hidd-2-out}}\, \frac{\partial E}{\partial z_j}$$
in
hid_to_output_weights_gradient = hidden_layer_state * error_deriv';
output_bias_gradient = sum(error_deriv, 2);
since $z_j = \text{outer}_{in}= W_{hidd-2-out} \times \text{hidden}_{out}$
Observation re: OP additional questions:
The splitting of partials in the OP, $\frac{\partial E} {\partial z_j} = {\frac{\partial E} {\partial o_j}}{\frac{\partial o_j} {\partial z_j}}$, seems unwarranted.
The updating of the weights from hidden to output proceeds as...
hid_to_output_weights_delta = ...
momentum .* hid_to_output_weights_delta + ...
hid_to_output_weights_gradient ./ batchsize;
hid_to_output_weights = hid_to_output_weights...
- learning_rate * hid_to_output_weights_delta;
which does not reduce to the OP formula: $w_{ij} = w'_{ij} - r{\frac{\partial E} {\partial z_j}}\, {o_i}.$
The update would be more along the lines of
$$W_{hidd-2-out}:=W_{hidd-2-out}-r\,\Delta_{hidd-2-out}$$
with $\Delta_{hidd-2-out}:= m\,\Delta_{hidd-2-out}+\small\frac{1}{\text{batchsize}}\frac{\partial E}{\partial W_{hidd-2-out}}$, where $m$ is the momentum.
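The result $\frac{\partial E}{\partial z_j}=\sigma(j)-\delta_{jt}$ is easy to verify numerically. A minimal sketch in Python/NumPy (not the course's Octave code), checking the analytic gradient against central finite differences on made-up pre-activations:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def loss(z, t):
    # Negative log likelihood of the true class t: E = -log softmax(t)
    return -np.log(softmax(z)[t])

rng = np.random.default_rng(1)
K = 5
z = rng.normal(size=K)               # pre-activations of the K output neurons
t = 2                                # index of the correct class

# Analytic gradient: sigma(j) - delta_{jt}
grad = softmax(z).copy()
grad[t] -= 1.0

# Central finite differences of dE/dz_j
eps = 1e-6
num = np.array([(loss(z + eps * np.eye(K)[j], t)
                 - loss(z - eps * np.eye(K)[j], t)) / (2 * eps)
                for j in range(K)])
```

The two gradients agree to numerical precision; note also that the analytic gradient sums to zero, since the softmax outputs sum to one.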
39,406
Is a minimal sufficient statistic also a complete statistic?
Examples of minimal sufficient statistics that are not complete are plentiful.
A simple instance is $X\sim U (\theta,\theta+1)$ where $\theta\in \mathbb R$.
It is not difficult to show $X$ is a minimal sufficient statistic for $\theta$. However, $$E_{\theta}(\sin 2\pi X)=\int_{\theta}^{\theta+1} \sin (2\pi x)\,\mathrm{d}x=0\quad,\forall\,\theta$$
Yet $\sin 2\pi X$ is not zero almost everywhere, so $X$ is not a complete statistic.
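A quick numerical confirmation of this expectation (a sketch using SciPy quadrature; the values of $\theta$ are arbitrary):

```python
import numpy as np
from scipy.integrate import quad

# For X ~ U(theta, theta+1), E[sin(2 pi X)] integrates sin(2 pi x)
# over exactly one period, so it vanishes for every theta.
for theta in [-2.3, 0.0, 0.7, 5.1]:
    val, _ = quad(lambda x: np.sin(2 * np.pi * x), theta, theta + 1)
    assert abs(val) < 1e-10
```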
Another example, for a discrete distribution, can be found in textbooks as an exercise:
Let $X$ have the mass function
$$f_{\theta}(x)=\begin{cases}\theta&,\text{ if }x=-1\\\theta^x(1-\theta)^2&,\text{ if }x=0,1,2,\ldots\end{cases}\quad,\,\theta\in (0,1)$$
It can be verified that $X$ is minimal sufficient for $\theta$.
Suppose $\psi$ is any measurable function of $X$. Then
\begin{align}
&\qquad\quad E_{\theta}(\psi(X))=0\quad,\forall\,\theta
\\&\implies \theta\psi(-1)+\sum_{x=0}^\infty \psi(x)\theta^x(1-\theta)^2=0\quad,\forall\,\theta
\\&\implies \sum_{x=0}^\infty \psi(x)\theta^x=\frac{-\theta\psi(-1)}{(1-\theta)^2}=-\sum_{x=0}^\infty\psi(-1)x\theta^x\quad,\forall\,\theta
\end{align}
Comparing the coefficients of $\theta^x$ for $x=0,1,2,\ldots$ we have $$\psi(x)=-x\psi(-1)\quad,\, x=0,1,2,\ldots$$
If $\psi(-1)=c\ne 0$, then $$\psi(x)=-cx\quad,\, x=0,1,2,\ldots$$
That is, $\psi$ is non-zero with positive probability. Hence $X$ is not complete for $\theta$.
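We can check numerically that any such $\psi$ (with $\psi(-1)=c$ and $\psi(x)=-cx$ for $x\ge 0$) has expectation zero under $f_\theta$. A sketch with a truncated series; the truncation point and the value of $c$ are arbitrary:

```python
import numpy as np

c = 3.0                                   # arbitrary non-zero value of psi(-1)
xs = np.arange(0, 2000)                   # truncate the infinite sum

for theta in [0.1, 0.4, 0.9]:
    # E_theta[psi(X)] with psi(-1) = c and psi(x) = -c x for x >= 0
    e = theta * c + np.sum((-c * xs) * theta**xs * (1 - theta)**2)
    assert abs(e) < 1e-8
```

The positive mass at $x=-1$ is exactly cancelled by the negative contributions on $x=0,1,2,\ldots$, for every $\theta$.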
39,407
Is a minimal sufficient statistic also a complete statistic?
Consider $N(\theta,\theta^2)$ where $\theta>0$. The pair $\big(\overline{X},\sum_{i=1}^n (X_i-\overline{X})^2\big)$ is minimal sufficient but not complete. To see why it is not complete, find $a$ and $b$ such that:
$$E\Big(a\sum_{i=1}^n (X_i-\overline{X})^2 \Big)=E\Big(b\sum_{i=1}^nX_i^2\Big)=\theta^2$$
(here $a=\frac{1}{n-1}$ and $b=\frac{1}{2n}$ work, since $E(X_i^2)=\theta^2+\theta^2=2\theta^2$), and therefore $E\Big(a\sum_{i=1}^n (X_i-\overline{X})^2-b\sum_{i=1}^nX_i^2\Big)=0$ for all $\theta$, even though the statistic inside the expectation is not identically zero.
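A Monte Carlo sanity check, here with $N(\theta,\theta^2)$, for which $a=\frac{1}{n-1}$ and $b=\frac{1}{2n}$ make both rescaled statistics unbiased for $\theta^2$; the sample sizes and values of $\theta$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10
a, b = 1.0 / (n - 1), 1.0 / (2 * n)

for theta in [0.5, 2.0, 7.0]:
    # Many samples of size n from N(theta, theta^2)
    x = rng.normal(theta, theta, size=(200_000, n))
    s = a * ((x - x.mean(axis=1, keepdims=True))**2).sum(axis=1)
    m = b * (x**2).sum(axis=1)
    # Both estimate theta^2, so E[s - m] = 0 although s - m is not
    # identically zero: completeness fails.
    assert abs((s - m).mean()) < 0.05 * theta**2
```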
39,408
Is a minimal sufficient statistic also a complete statistic?
In the Cauchy distribution with unknown location,
$$f(x;\mu) = \frac{1}{\pi} \, \frac{1}{1+(x-\mu)^2}\,,$$
for a sample $(X_1,\ldots,X_n)$ the order statistic $(X_{(1)},\ldots,X_{(n)})$ is minimal sufficient, but it is incomplete, since $$\mathbb{E}_\mu[\phi(X_{(i)} - X_{(j)})]\,,\qquad i\ne j\,,$$ is constant in $\mu$ for bounded functions $\phi$. Or since
$$\mathbb{E}_\mu[X_{(i)} - X_{(j)}]\,,\qquad 1< i\ne j <n\,,$$ is (well-defined and) constant in $\mu$.
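The location invariance of order-statistic spacings is easy to see by simulation. A sketch with $\phi=\tanh$ as the bounded function; the sample size and the locations $\mu$ are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 400_000

means = []
for mu in [-3.0, 0.0, 4.0]:
    x = np.sort(mu + rng.standard_cauchy(size=(reps, n)), axis=1)
    # phi(X_(2) - X_(4)) with phi = tanh: spacings of order statistics
    # are invariant to the location mu, so this expectation is the
    # same (nonzero) constant for every mu.
    means.append(np.tanh(x[:, 1] - x[:, 3]).mean())
```

Up to Monte Carlo noise, the three estimated expectations coincide, which exhibits a non-trivial function of the order statistic with $\mu$-free expectation.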
39,409
what is the “learning” that takes place in Naive Bayes?
Unlike the nearest-neighbor algorithm, Naive Bayes is not a lazy method; real learning takes place. The parameters learned in Naive Bayes are the prior probabilities of the different classes, as well as the likelihoods of the different features for each class. In the test phase, these learned parameters are used to estimate the probability of each class for a given sample.
In other words, in Naive Bayes, for each sample in the test set, the parameters determined during training are used to estimate the probability of that sample belonging to the different classes. For example, $P(c|x)\propto P(c)P(x_1|c)P(x_2|c)\cdots P(x_n|c)$, where $c$ is a class and $x$ is a test sample. All the quantities $P(c)$ and $P(x_i|c)$ are parameters determined during training and used during testing. This is similar to the nearest-neighbor method, but the kind of learning and the way the learned model is applied are different.
As an example, take a look at the Naive Bayes implementation in nltk. See the train and prob_classify methods. In the train method, label_probdist and feature_probdist are computed, and in the prob_classify method these parameters are used to estimate the probabilities of the different classes for a test sample. Just note that _label_probdist and _feature_probdist are respectively initialized to label_probdist and feature_probdist in the constructor.
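To make the learned parameters concrete, here is a minimal sketch (not the nltk implementation) of a Bernoulli-style Naive Bayes on a tiny hypothetical dataset: training only counts class priors and per-class feature likelihoods (with Laplace smoothing), and the test phase merely reuses them:

```python
import numpy as np

# Hypothetical training data: binary features, two classes
X = np.array([[1, 0, 1],
              [1, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
y = np.array([0, 0, 1, 1])

classes = np.unique(y)
# Learned parameters: P(c) and P(x_i = 1 | c), with Laplace smoothing
priors = np.array([(y == c).mean() for c in classes])
likes = np.array([(X[y == c].sum(axis=0) + 1.0) /
                  ((y == c).sum() + 2.0) for c in classes])

def predict(x):
    # Test phase: only the stored parameters are used, nothing is re-fit
    logp = np.log(priors) + (x * np.log(likes) +
                             (1 - x) * np.log(1 - likes)).sum(axis=1)
    return classes[np.argmax(logp)]
```

For instance, `predict(np.array([1, 1, 0]))` scores both classes using the stored priors and likelihoods and returns the more probable one.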
Regarding your second question (the final paragraph): even for lazy methods such as nearest neighbor, we need to split the data into train/test sets. This is because we want to evaluate the model built from the training data on samples not seen during training, in order to obtain a reasonable measure of the model's generalization.
39,410
what is the “learning” that takes place in Naive Bayes?
I did not see any parameters in the Naive Bayes classifier.
I think we do not need to learn parameters the way we do in neural networks; instead, we learn (or calculate) the prior probabilities from the training data. Then we apply the priors calculated from the training data to make predictions on the test data.
39,411
Boruta 'all-relevant' feature selection vs Random Forest 'variables of importance'
Boruta and random forest differences
The Boruta algorithm uses randomization on top of the variable importances obtained from a random forest to determine which results are truly important and statistically valid. For the details of the difference, please refer to Section 2 of the article:
Kursa, Miron B., and Witold R. Rudnicki. "Feature selection with the Boruta package." (2010).
Is one method preferred over the other? If so, why?
This is a classic case of the "No Free Lunch" theorem. Without data and assumptions, it is impossible to decide which one is better. However, please note that Boruta was produced as an improvement over random forest variable importance, so it should perform better in more situations than not (I am biased because I like randomization techniques myself). Nevertheless, the data and the computational time could make variable importance from random forest the better choice.
39,412
Boruta 'all-relevant' feature selection vs Random Forest 'variables of importance'
Basic Idea of the Boruta Algorithm
Shuffle the predictors' values, join the shuffled copies with the original predictors, and then build a random forest on the merged dataset. Then compare the original variables with the randomised variables to measure variable importance. Only variables with higher importance than that of the randomised variables are considered important.
Difference between Boruta and the Random Forest Importance Measure
In a random forest, the Z score is computed by dividing the average accuracy loss by its standard deviation, and it is used as the importance measure for all the variables. But we cannot use the Z score calculated by the random forest as a measure of variable importance, because this Z score is not directly related to the statistical significance of the variable importance. To work around this problem, the boruta package runs a random forest on both the original and the random attributes and computes the importance of all variables. Since the whole process depends on the permuted copies, the random permutation procedure is repeated to get statistically robust results.
Is Boruta a solution for all?
The answer is NO. You need to test other algorithms; it is not possible to judge the best algorithm without knowing the data and assumptions. Since it is an improvement on the random forest variable importance measure, it should work well most of the time.
Check out the original article - Feature selection with Boruta in R - to see an implementation of the Boruta algorithm in R and its comparison with other feature selection algorithms.
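A bare-bones sketch of the shadow-attribute idea in Python with scikit-learn (a single round only; the real Boruta package iterates this with a statistical test, and the data here is simulated purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n, p = 500, 5
X = rng.normal(size=(n, p))
y = 3 * X[:, 0] + 2 * X[:, 1] + rng.normal(size=n)  # only features 0 and 1 matter

# Shadow attributes: each column shuffled independently, which destroys
# any association with y while keeping the marginal distributions
shadows = rng.permuted(X, axis=0)

rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(np.hstack([X, shadows]), y)

imp = rf.feature_importances_
threshold = imp[p:].max()                 # best importance among the shadows
selected = np.where(imp[:p] > threshold)[0]
```

On this simulated data, `selected` should recover the informative features 0 and 1: only originals that beat the best shadow are kept.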
39,413
|
Regression Bounded Between -1 and 1
|
You can always use beta regression (Ferrari and Cribari-Neto, 2004). It's a model for a response variable bounded in $(0, 1)$, but you can easily transform your variable by taking $\frac{Y+1}{2}$ (I know you said you do not want to transform, but it's a really basic transformation).
Moreover, such a model still makes perfect sense, since what you are estimating is the mean of a non-standard beta distribution parametrized by mean $\mu_i$ and precision $\phi$. The standard beta regression model can be used for a variable with any $(a,b)$ bounds by using the above transformation
$$
g(\mu_i) = x_i^{T}\beta \\
\tfrac{y_i-a}{b-a} \sim \mathcal{B}(\mu_i, \phi)
$$
where $g$ is a link function (e.g. logistic function). Such model is equivalent to
$$
h(\mu_i) = x_i^{T}\beta \\
y_i \sim \mathcal{B}_{a,b}(\mu_i, \phi)
$$
where $\mathcal{B}_{a,b}$ is the beta distribution bounded in $(a,b)$ and $h$ is a link function mapping onto the given range (e.g. the logistic or hyperbolic tangent functions, together with re-scaling and shifting if needed).
Ferrari, S., & Cribari-Neto, F. (2004). Beta regression for modelling rates and proportions. Journal of Applied Statistics, 31(7), 799-815.
|
39,414
|
Regression Bounded Between -1 and 1
|
There now exists a proposed extension of classical beta regression, referred to as 'Boosted Beta Regression', that may be of value in this meta-analysis with locational data.
Per this 2013 research article, available online, quoting in part:
In the statistical literature, beta regression has been established as a powerful technique to model percentages and proportions [10]. Also, the method has been used in a variety of research fields [3], [8], [12]. There are applications, however, where classical beta regression methodology still has a number of limitations:
Scientific databases often involve large numbers of potential predictor variables that could be included in a regression model. Consequently, if maximum likelihood estimation is used to fit a beta regression model, the model may become too complex and may thus overfit the data. This usually leads to a large variance and to a high uncertainty about the predictor-response relationships. As a consequence, techniques for variable selection in beta regression models are needed.
Statistical models often suffer from multicollinearity problems, meaning that predictor variables are highly correlated. Also, observations of the response variable may be affected by spatial correlation, which is, for example, a common problem in ecology [13], [14]. To date, these issues have not been incorporated into beta regression methodology.
In many applications, predictor-response relationships are nonlinear in nature [15], [16]. This means that the linear predictor of the classical beta regression model needs to be replaced by a more flexible function that allows for an appropriate quantification of nonlinear predictor effects. Although Simas et al. [17] have recently suggested an approach to incorporate nonlinear effects into beta regression models, this approach requires the functional form of the predictor-response relationships (e.g., quadratic or exponential) to be specified in advance. In cases where the functional forms of predictor effects are unknown, a more flexible approach based on smooth nonlinear effects is desirable.
Percentage outcomes that are based on the binomial model are often overdispersed, meaning that they show a larger variability than expected by the binomial distribution. Classical beta regression models conveniently account for overdispersion by including a precision parameter $\phi$ to adjust the conditional variance of the percentage outcome (see the next section for details). On the other hand, it is often observed that overdispersion depends on the values of one or more predictor variables [17]. In the context of a beta regression model, this implies that $\phi$ is not constant but needs to be regressed on the predictor variables. This issue makes variable selection even more complicated, because analysts need to identify the predictor variables that affect $\phi$.
I welcome opinions of the article claims.
DISCLOSURE: I do not own or have any relationship with said authors or vendors selling this or other competing software.
|
39,415
|
Why is Gaussian Copula's Tail Dependence Zero?
|
Consider a bivariate Gaussian copula $C(\cdot)$.
Because of the radial symmetry of a Gaussian copula we can consider just the lower tail dependence, which is defined as:
$$\lambda=\lim_{\,\,q\to 0^{+}} \frac{C(q,q)}{q}$$
By L'Hôpital's rule,
$$\begin{align}
\lambda&=\lim_{\,\,q\to 0^{+}} \frac{d\,C(q,q)}{d q}\\
&=\lim_{\,\,q\to 0^{+}} \text{Pr}(U_{2}\leq q\,|\,U_{1}=q)+ \lim_{\,\,q\to 0^{+}} \text{Pr}(U_{1}\leq q\,|\,U_{2}=q)
\end{align}$$
Since a Gaussian copula is exchangeable, it follows that:
$$\lambda=2\lim_{\,\,q\to 0^{+}}\text{Pr}(U_{2}\leq q\,|\,U_{1}=q)$$
Now, let:
$$(X_{1},X_{2}):=\Big(\Phi^{-1}(U_{1}),\,\Phi^{-1}(U_{2})\Big)$$
This means that $(X_{1},X_{2})$ has a bivariate normal distribution with standard marginals and correlation $\rho$. Now:
$$\begin{align}
\lambda&=2\lim_{\,\,q\to 0^{+}}\text{Pr}(\Phi^{-1}(U_{2})\leq \Phi^{-1}(q)\,|\,\Phi^{-1}(U_{1})=\Phi^{-1}(q))\\
&=2\lim_{x\to -\infty}\text{Pr}(X_{2}\leq x\,|\, X_{1}=x)
\end{align}$$
Finally, we know that $X_{2}\,|\,X_{1}=x\sim N(\rho x,\,1-\rho^{2})$, so for $\rho<1$:
$$\lambda=2\lim_{x\to -\infty}\Phi\Bigg(x\sqrt{\frac{1-\rho}{1+\rho}}\Bigg)=0$$
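As a numeric sanity check of that last limit (a sketch under the setup above, with $\rho=0.96$ chosen arbitrarily), evaluating $2\,\Phi\!\big(x\sqrt{(1-\rho)/(1+\rho)}\big)$ at increasingly negative $x$ shows it shrinking towards $0$; only $\rho=1$ would make the argument $0$ and give $2\,\Phi(0)=1$.

```python
import math

def Phi(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rho = 0.96
factor = math.sqrt((1 - rho) / (1 + rho))

# Evaluate 2 * Phi(x * factor) as x -> -infinity.
vals = [2 * Phi(x * factor) for x in (-10, -20, -40)]
```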
|
39,416
|
Why is Gaussian Copula's Tail Dependence Zero?
|
For a non-technical, intuitive view of what the tail index is telling you, we can look at simulation and compute sample estimates of the quantity $P[F(Y) > q | F(X) > q]$ as $q$ increases.
In such a simulation the original correlation is $0.96$, but as we get further into the upper tail of $X$ the association weakens, and the proportion of points with $F(Y) > q$ given $F(X) > q$ decreases towards zero.
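The simulation described can be reproduced along these lines (a sketch: the sample size, seed, and quantile grid are my own choices, not taken from the original answer).

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.96, 200_000

# Bivariate normal with standard marginals and correlation rho.
z1, z2 = rng.normal(size=(2, n))
x = z1
y = rho * z1 + np.sqrt(1 - rho**2) * z2

# Since the marginals are standard normal, F(X) > q is just X > Phi^{-1}(q).
# Hard-coded standard normal quantiles for q = 0.90, 0.99, 0.999:
thresholds = [(0.90, 1.2816), (0.99, 2.3263), (0.999, 3.0902)]

cond = []
for q, t in thresholds:
    tail = x > t
    cond.append(np.mean(y[tail] > t))   # estimate of P[F(Y) > q | F(X) > q]
```

Despite the strong correlation, the conditional exceedance probability drifts downward as `q` increases, which is the zero-tail-dependence behaviour derived in the other answer.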
|
39,417
|
What is the equivalent in R of scikit-learn's `LogisticRegression` with `penalty="l2"`
|
Your question is how to run L2-regularized logistic regression in R.
Another detailed answer of mine can be found here: Regularization methods for logistic regression.
For implementation, there is more than one way to do this.
Method 1: use glmnet(data, label, family="binomial", alpha=0, lambda=1), where alpha=0 selects the ridge (L2) penalty. Details can be found in the glmnet manual, page 9. (Note that scikit-learn's LogisticRegression uses an inverse regularization strength C rather than lambda, so the penalty values are not directly interchangeable.)
Method 2: use LiblineaR(data, label, type=0) or LiblineaR(data, label, type=7). Details can be found in the LiblineaR manual, page 4. Both are L2-regularized logistic regression, one solving the primal problem and one the dual.
Method 3: manual implementation.
Here is code for the regularized logistic loss and its gradient; we can then use an optimization routine (such as BFGS) to minimize it.
rm(list=ls())
set.seed(0)
library(mlbench)

# Toy data: two 2-d Gaussian classes, recoded to y in {0, 1}
d = mlbench.2dnormals(100, 2)
x = d$x
y = ifelse(d$classes == 1, 1, 0)

lambda = 1  # L2 penalty strength

# Penalized negative log-likelihood: sum of logistic losses + lambda * w'w
logistic_loss <- function(w){
  p = plogis(x %*% w)                     # fitted probabilities
  L = -y*log(p) - (1-y)*log(1-p)          # per-observation log loss
  LwR2 = sum(L) + lambda * t(w) %*% w
  return(c(LwR2))
}

# Gradient: X'(p - y) + 2*lambda*w
logistic_loss_gr <- function(w){
  p = plogis(x %*% w)
  v = t(x) %*% (p - y)
  return(c(v) + 2*lambda*w)
}

optim(runif(2), logistic_loss, logistic_loss_gr, method="BFGS")
|
39,418
|
What is an example use of Auto differentiation such as implemented in Tensorflow and why is it important?
|
In automatic differentiation systems, each operator (e.g. addition, subtraction) is defined together with its derivative.
So after you write a function by stacking a series of operators, the system can figure out by itself how the corresponding derivatives should be computed, usually by using computation graphs and the chain rule.
Auto differentiation is beneficial for gradient based optimization (e.g. training a neural network using gradient descent), as it saves us from working out the math, implementing the code and verifying the derivatives numerically case by case.
Here's how to define an operator (op) in Theano and Tensorflow.
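A minimal illustration of the idea (forward-mode automatic differentiation with dual numbers; this toy `Dual` class is my own sketch, not Theano or TensorFlow API): each operator propagates a value together with its derivative, so stacking operators applies the chain and product rules automatically.

```python
class Dual:
    """A number carrying its value and its derivative w.r.t. the input."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Sum rule: (u + v)' = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # Product rule: (u * v)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def f(x):
    return x * x + 3 * x    # analytically, f'(x) = 2x + 3

x = Dual(2.0, 1.0)          # seed derivative dx/dx = 1
y = f(x)                    # y.val = f(2), y.dot = f'(2)
```

Writing `f` once gives both the value and the exact derivative, with no hand-coded gradient and no finite-difference verification.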
|
39,419
|
sampling/importance resampling - why resample?
|
SIR uses two ideas. The first idea is importance sampling. The main idea is that you draw from one probability distribution (in your case, it's the uniform), in order to get information about another. You do this by drawing from one distribution, then weighting the samples. Generally you are trying to get information about a tricky distribution, but since this is a tutorial, you're trying to get information about a normal distribution with samples from a uniform.
Say you sample $X_1, \ldots, X_N \overset{iid}{\sim} \text{Uniform}(-10,10)$. In your code these are called draw1. Then you can weight these with the unnormalized weights $w_i = p_{\text{normal}}(X_i)/p_{\text{uniform}}(X_i)$. You can use these particles/weighted-samples to approximate things like expectations:
$$
E_{\text{normal}}[h(X)] \approx \sum_{i=1}^N \tilde{w}_i h(x_i),
$$
where $\tilde{w}_i = w_i/\sum_j w_j$ are the normalized weights.
However these samples (without the weights) are not distributed according to the normal distribution. If you want samples that are (asymptotically) normally distributed, you need to resample from your weighted samples. Samples with higher weights are more likely to be picked. But at the end, all resampled things will have equal weight, as you are sampling with replacement.
So say you draw indexes
$$
I_1, \ldots, I_m \overset{iid}{\sim} \text{Multinomial}(1, \tilde{w}_1, \ldots, \tilde{w}_N).
$$
Your code calls these ind. The samples they select are distributed approximately normally. You can verify this empirically with a command like hist(draws1); it will look like a bell curve. Mathematically you would write these samples as
$$
X_{I_1}, X_{I_2}, \ldots, X_{I_N}
$$
(instead of $X_1, \ldots, X_N$.)
Note that you may have drawn duplicates here. Even if you end up with, say, two copies of the same $3.6$, they are all treated equally: every resampled draw has the same weight, $1/m$.
Lastly, as I mentioned in a comment above, this example is technically incorrect. One of the requirements of a proposal/instrumental/importance distribution is that it should be able to produce samples everywhere in the support of your target distribution; in other words, the proposal should "dominate" your target. This proposal does not satisfy that criterion: even if you choose a uniform that covers most of the support of the normal target (say, centered at the normal mode and very wide), it still leaves some tail area uncovered. It might be true that this is negligible in practice, or that it is not required for approximating certain specific expectations, but it's worth mentioning.
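The whole procedure can be sketched as follows (my own minimal version of the tutorial's setup; the variable names differ from the tutorial's draw1/ind):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# 1. Importance sampling: draw from Uniform(-10, 10), weight by target/proposal.
x = rng.uniform(-10, 10, size=N)
target = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # N(0, 1) density
proposal = np.full(N, 1 / 20)                     # Uniform(-10, 10) density
w = target / proposal
w_tilde = w / w.sum()                             # normalized weights

# Weighted samples approximate expectations under the target:
mean_is = np.sum(w_tilde * x)                     # ~ E[X] = 0

# 2. Resampling: draw indices with probability proportional to the weights.
idx = rng.choice(N, size=N, replace=True, p=w_tilde)
resampled = x[idx]          # ~ N(0, 1) draws, each with equal weight 1/N
```

A histogram of `resampled` looks like a bell curve, while a histogram of the raw `x` stays flat: the weights, not the raw draws, carry the shape of the target.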
|
39,420
|
If $\text{Var}(X) < \infty$, is $\text{Var}(XY) < \infty$ for $0 \le Y \le 1$?
|
I've unaccepted kjetil's answer since, as was pointed out in the comments, it assumes $X$ and $Y$ are independent.
The following answer should work when $X$ and $Y$ are dependent, by using whuber's suggestion:
\begin{align}
\text{Var}(XY) &= E((XY)^2) - E(XY)^2 \\
&\le E(X^2Y^2) \\
&\le E(X^2)\sup(Y^2) \\
&= E(X^2) \\
&= \text{Var}(X) + E(X)^2 \\
&< \infty
\end{align}
Note that the result also holds for any bounded $Y$ (since $\sup(Y^2)$ will be finite).
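A quick Monte Carlo sanity check of the chain of inequalities, using a deliberately dependent pair (my own construction: $Y = 1/(1+X^2) \in (0, 1]$, which is clearly a function of $X$):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
y = 1.0 / (1.0 + x**2)      # bounded in (0, 1], dependent on x

var_xy = np.var(x * y)      # sample Var(XY)
bound = np.mean(x**2)       # sample E[X^2] = Var(X) + E[X]^2
```

The bound `var_xy <= bound` holds for the sample identically, mirroring the derivation: $\text{Var}(XY)\le E(X^2 Y^2)\le E(X^2)\sup(Y^2)=E(X^2)$.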
|
39,421
|
If $\text{Var}(X) < \infty$, is $\text{Var}(XY) < \infty$ for $0 \le Y \le 1$?
|
You need to use the formula
$$ \DeclareMathOperator{\Var}{\mathbb{V}} \DeclareMathOperator{\E}{\mathbb{E}}
\Var (XY) = \E (\Var (XY | Y)) + \Var (\E (XY | Y))
$$
where $\Var$ is the variance operator. Take it term by term: write $\mu=\E X$, $\sigma^2=\Var X$. Assuming $X$ and $Y$ are independent (as noted in the other answer, this argument needs that assumption), $\E (XY \,|\, Y= y) = y \E (X) =\mu y$, with variance (over $Y$) $\Var (\mu Y)$, which is finite since $Y$ is bounded.
Then the other term: $\Var (XY \,|\, Y=y) = \Var (yX) = y^2 \Var (X) = \sigma^2 y^2$, which again has a finite expectation since $Y$ is bounded. So the answer is yes.
|
39,422
|
Forecasting Time Series: Stationary vs Non-Stationary
|
[W]hat is the difference between forecasting using the original non-stationary series and the forecasting using the now stationary differenced series?
(Here I deliberately left out the qualification that the series can be transformed to a stationary series using first differencing and that the OP is interested in forecasting using ARIMA in particular.)
The problem with nonstationary data is that for most of the time series models, the model assumptions are violated when nonstationary data is used. This leads to the estimators no longer having the nice properties such as asymptotic normality and sometimes even consistency. So if you apply a model that requires a stationary series to a nonstationary series, you will likely get poor estimates of the model parameters and hence poor forecasts.
(Now let me add the qualification back.)
For an integrated series $x_t$ that can be made stationary using first differencing, $\Delta x_t$, and that can be approximated by an ARIMA model reasonably well, there are three ways to go:
Force stationarity and estimate an ARIMA($p,0,q$) model for the original series $x_t$.
Force, or allow for, first differencing so that you end up with ARIMA($p,1,q$) model for the original data $x_t$.
Difference the series manually and then apply ARIMA($p,0,q$) model for the differenced series $\Delta x_t$.
Option 1. is the only one that is clearly asking for trouble as it forces stationarity in presence of nonstationary data. Options 2. and 3. are essentially the same, the difference being in whether you difference $x_t$ manually outside the model or as an initial step within the model.
[C]an I expect the forecast for the stationary series to be more accurate than the forecast for non-stationary series?
If you have in mind an integrated series $x_t$ and its first-differenced stationary version $\Delta x_t$, you will have greater accuracy when forecasting $\Delta x_t$, but does that matter? It could be misleading to think that you can get more accurate forecasts by focusing on $\Delta x_t$ rather than $x_t$. It is perhaps the most natural to think about gains in accuracy when the underlying process of interest is kept the same, e.g. a gain in accuracy due to using a better approximation to the same process. Meanwhile, if you change the underlying object (go from $x_t$ to $\Delta x_t$), the gain is not really a gain, in the following sense. It is a bit like shooting at a target from 100m and from 10m. You will be more accurate from 10m, but isn't that obvious and irrelevant?
If you have in mind two unrelated series $x_{1,t}$ and $\Delta x_{2,t}$ where the first one is integrated while the second one is stationary, you may expect that in the long run you will have greater forecast accuracy for $\Delta x_{2,t}$. In the short run this might not hold if the variance of $\Delta x_{1,t}$ (the increments of the first process) is small compared to the variance of $\Delta x_{2,t}$.
I am aware that one advantage of forecasting with the stationary series will have the advantage of also producing forecast intervals (which are dependent upon the assumption of a stationary series).
Actually, you can get forecast intervals regardless of whether the series is integrated or stationary. If you model an integrated time series using its first differences, you obtain the forecast intervals and cumulatively add them when forming the forecast interval for the integrated series. That is why forecast intervals for an integrated series expand linearly while those of a stationary series expand slower than linearly (illustrations can be found in time series textbooks).
|
Forecasting Time Series: Stationary vs Non-Stationary
|
39,423
|
Forecasting Time Series: Stationary vs Non-Stationary
|
In your case there's no difference. ARIMA(p,1,q) is the same as ARMA(p,q) on the differenced series. ARIMA can model non-stationary series, ARMA cannot. So, for ARMA you do differencing before feeding the series into it.
See how ARIMA(p,D,q) models are defined on the MATLAB web site, for instance:
$$\phi(L)(1-L)^Dy_t=c+\theta(L)\varepsilon_t,$$
where $L$ is the lag operator, $L^Dy_t=y_{t-D}$, so that $(1-L)^D$ performs $D$-fold differencing, and $\phi(L),\theta(L)$ are lag polynomials.
Here's what ARIMA(1,0,1) looks like under this definition:
$$y_t-\phi_1y_{t-1}=c+\varepsilon_t+\theta_1\varepsilon_{t-1}$$
and ARIMA(1,1,1):
$$(y_t-y_{t-1})-\phi_1(y_{t-1}-y_{t-2})=c+\varepsilon_t+\theta_1\varepsilon_{t-1}$$
which is the same as ARMA(1,1) on the differenced series $\Delta y_t=y_t-y_{t-1}$. It's even easier to see when you know that $(1-L)y_t=\Delta y_t$, the difference operator.
If you're wondering why ARIMA can model non-stationary series, it's easiest to see with the simplest case, ARIMA(0,1,0): $y_t=y_{t-1}+c+\varepsilon_t$. Take a look at the expectation:
$$E[y_t]=E[y_{t-1}]+c=E[y_0]+ct.$$
The expectation of the series has a time trend, so the series is non-stationary. (Note that this random walk with drift is difference-stationary rather than trend-stationary: its variance also grows with $t$, so detrending alone would not make it stationary.)
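A small simulation (a pure-Python sketch with arbitrarily chosen parameters) confirms this: averaging many simulated ARIMA(0,1,0) paths $y_t=y_{t-1}+c+\varepsilon_t$ recovers $E[y_t]=E[y_0]+ct$, while the variance across paths also grows with $t$.

```python
import random
import statistics

random.seed(0)
c, T, n_paths = 0.5, 50, 4000

# simulate n_paths realisations of y_t = y_{t-1} + c + eps_t, y_0 = 0
finals = []
for _ in range(n_paths):
    y = 0.0
    for _ in range(T):
        y += c + random.gauss(0.0, 1.0)
    finals.append(y)

mean_T = statistics.fmean(finals)    # should be near y_0 + c*T = 25
var_T = statistics.variance(finals)  # should be near T = 50, not constant
print(mean_T, var_T)
```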
|
39,424
|
Meta analysis across studies with multiple response measures
|
I'll try to address a few different components of your question.
You seem to be under the impression that you want to meta-analyze means of some bird-related variable, but after reading...
I want to look at if, across birds, these response measures increase along with another variable that I have for each bird
it instead appears as though what you're really interested in meta-analyzing are the correlations between that bird-related variable, and some other variable that you have identified (potentially looking at differences across birds). Meta-analyzing correlations is certainly more typical than meta-analyzing means, though the latter is certainly possible--you can meta-analyze virtually any statistic with a corresponding variance or standard error. If it is, in fact, the case that you want to meta-analyze correlations between variables, then recording the means and standard errors of the one variable within each study won't help much. Instead, you will want to collect the correlations between the variables you are interested in, as well as the size of the sample for each correlation, as you'll use sample size to calculate the standard error and/or variance of each correlation.
Once you have this information, it appears as though you'll run into another issue: dependency of effect sizes. Meta-analysis, in many ways, is just a fancy weighted regression, and so many of the same assumptions apply--including assuming that all observations are independent. As you have indicated that...
Some studies use 10 response measures while others use one
...it seems likely that you will have some studies contributing multiple correlations, and that you will therefore be violating this assumption. Though it sounds scary (especially if you haven't done meta-analysis before), you can use a method of 3-level meta-analysis (using multilevel structural equation models) to account for this dependency; this is easily accomplished using Cheung's metaSEM package for R (its syntax is very similar to metafor and other meta-analysis packages). In effect, metaSEM allows you to specify a clustering variable that corresponds to how effect sizes (your correlations) are nested, so you could assign each study an ID number and just use that as your clustering variable.
Bringing it all together, based on your description, it seems as though you would be interested in at least two models:
Model 1: An "intercept-only" model, whereby you estimate the meta-analytic average correlation between your two bird-related variables of interest.
Model 2: A model whereby you test if/how this correlation is moderated by some other variable(s) (e.g., type of bird, type of outcome measure used, etc.,).
The corresponding metaSEM code for each model is below. In the hypothetical example, you'd be using a data frame called "mydata", with columns called "ID", "corrs", "corrs_v", "bird_type", and "measure_type", corresponding to the ID # you assigned each study, the correlation(s) from the study, the variance of each correlation (metaSEM also allows you to specify standard errors instead), the type of bird, and the type of outcome measure respectively.
#Install and call metaSEM package
install.packages("metaSEM")
library(metaSEM)
#Fit Model 1-Intercept-only
model.1=meta3(y = corrs, v = corrs_v, cluster = ID, data = mydata, model.name = "Intercept-only")
summary(model.1)
#Fit Model 2a-Moderation by Bird Type
model.2a=meta3(y = corrs, v = corrs_v, cluster = ID, data = mydata, x = cbind(bird_type), model.name = "Moderation by Bird Type")
summary(model.2a)
#Fit Model 2b-Moderation by Outcome Measure Type
model.2b=meta3(y = corrs, v = corrs_v, cluster = ID, data = mydata, x = cbind(measure_type), model.name = "Moderation by Outcome Measure Type")
summary(model.2b)
What's nice about the 3-level meta-analysis approach (and metaSEM) is that you will get the descriptive statistics of effect size variability that you normally would from a random-effects model (e.g., $\tau^2$, and $I^2$), except broken down for each level of clustering (i.e., one of each for within-study variability, and one for between study variability). Then, for your moderation models, you also get the benefit of having $R^2$ for each level of clustering, so you have an idea of how much variance in effect sizes your moderator(s) are explaining at each level.
So, to summarize, collect the effect sizes you are actually interested in meta-analyzing (it doesn't sound like you're interested in means, but rather, correlations of some sort) and their corresponding sample sizes (so you can calculate standard error or variance of the correlation; any introductory meta-analysis text will have these formulas). Then, once you have all this information entered, you can code an ID variable corresponding to which correlation(s) came from which study, and use metaSEM to estimate your meta-analytic model while appropriately accounting for the dependency among the effect sizes that come from the same study.
That should be plenty to get you started; if you have remaining questions, or if I misunderstood something, just comment this response and I can edit it as needed.
|
39,425
|
Meta analysis across studies with multiple response measures
|
I think you need to use the metafor package and look at the function rma.mv and its documentation. It would also be worth looking at his website which has many examples. Perhaps you would like to try and then ask again? The author does post on Cross Validated but it may also be worth your while trying to post on the R-help mailing list. Sorry I cannot be too specific here but this is at the limits of my expertise.
There seems to have been a pause here so I will try to expand my answer with the benefit of the extra information which you have provided. I am going to suggest how to do this using metafor and the rma.mv function.
I assume you have all your variables in a data.frame and I have given them what I hope are obvious names below.
You start by specifying fit <- rma.mv(yi = outcome, V = se^2,
Then you need to specify the random effects, which I think from your description will be random = ~ responsetype | study. You need to make sure that responsetype is a factor or character variable.
You now need to specify your moderator variables with mods = ~ beaklength + responsetype. This supposes that there is enough overlap between studies in the response types; otherwise you can only use beaklength here.
I would recommend setting slab = paste(study, responsetype) to get good labels in your forest plot. I would also strongly recommend using the profile function on the fitted object to check the fit. With this sort of model it is sometimes the case that you do not have enough information in your data-set to identify the parameters. You may, as a biologist, be interested in this article by Nakagawa and Santos entitled "Methodological issues and advances in biological meta-analysis", which gives some of the theory. I think I downloaded my copy from the site of one of the authors in case you do not have access to that journal.
|
39,426
|
Meta analysis across studies with multiple response measures
|
You can use the standardised mean difference for pooling different scales. For further reading, check Meta-Analysis with R, a book written by the author of the meta package, p. 25:
However, in many settings different studies use different outcome
scales, e.g. different depression scales or quality of life scales. In
such cases we cannot pool the effect estimates (mean differences)
directly. Instead, we calculate a dimensionless effect measure from
every study and use this for pooling. A very popular dimensionless
effect measure is the standardised mean difference which is the
study’s mean difference divided by a standard deviation based either
on a single treatment group or both treatment groups.
He then used this command:
# Ne, Me, Se: sample size, mean, and SD of the experimental group;
# Nc, Mc, Sc: the same for the control group;
# sm = "SMD" requests the standardised mean difference
mc2 <- metacont(Ne, Me, Se, Nc, Mc, Sc, sm = "SMD", data = data2)
To get the results, use summary(mc2) and forest(mc2).
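For reference, the standardised mean difference in the quote can also be computed by hand from the group summaries. Here is a minimal sketch (the pooled-standard-deviation variant, i.e. Cohen's d, with made-up numbers):

```python
import math

def smd(ne, me, se, nc, mc, sc):
    # standardised mean difference: the mean difference divided by
    # the pooled standard deviation of the two groups
    sd_pooled = math.sqrt(((ne - 1) * se ** 2 + (nc - 1) * sc ** 2)
                          / (ne + nc - 2))
    return (me - mc) / sd_pooled

# two groups of 30, means 12 vs 10, both SDs equal to 4
print(smd(30, 12.0, 4.0, 30, 10.0, 4.0))  # 0.5
```

This is the per-study effect size that metacont pools when sm="SMD" is requested (meta applies a small-sample correction on top of it).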
|
39,427
|
Which parameter should be considered as "scale" parameter for Gamma distribution?
|
A scale parameter is defined as a parameter $\sigma$ such that,
if $X\sim F(x;\mu,\sigma)$, then $\sigma^{-1}X \sim F(x;\mu,1)$, that
is, such that the distribution of $\sigma^{-1}X$ does not depend on $\sigma$.
This means $\theta$ is a scale parameter for the Gamma $\text{Ga}(\kappa,\theta)$ distribution.
When looking at the exponential family representation of the Gamma $\text{Ga}(\kappa,\theta)$ distribution with density
$$f(x; \kappa, \theta) = \dfrac{1}{\Gamma(\kappa)\theta^\kappa}x^{\kappa - 1}e^{-\frac{x}{\theta}}$$
we get
$$f(x; \kappa, \theta) = \exp\left\{ -\log \Gamma(\kappa)-\kappa\log\theta+(\kappa - 1)\log x - \theta^{-1}x\right\}$$
or
$$f(x; \kappa, \theta) = \exp\left\{
(\log x\ \ x)'(\kappa \ \ {-\theta^{-1}}) -\log \Gamma(\kappa)+\kappa\log\theta^{-1} -\log x
\right\}$$
it has no extra scale (dispersion) parameter: were you to write
$$f(x; \kappa, \theta,\varphi) = \exp\left\{
\varphi^{-1}(\log x\ \ x)'(\kappa \ \ {-\theta^{-1}}) -\Psi(\kappa,\theta,\varphi) -\log x
\right\}$$
the parameter $\varphi$ would be superfluous and not identifiable, that is, only $(\kappa \ \ {-\theta^{-1}})/\varphi$ would be identifiable.
As a note, Ferguson published a famous paper in 1962 where he proves that the only exponential family with location-scale parameterisation is the Normal family.
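The scale property can be checked by simulation. Below is a sketch using Python's standard library (random.gammavariate(alpha, beta) takes beta as the scale $\theta$): rescaling $X\sim\text{Ga}(\kappa,\theta)$ by $\theta^{-1}$ should reproduce the moments of $\text{Ga}(\kappa,1)$, namely mean $\kappa$ and variance $\kappa$.

```python
import random
import statistics

random.seed(1)
kappa, theta = 2.0, 3.0

# draw X ~ Ga(kappa, theta) and rescale by 1/theta
scaled = [random.gammavariate(kappa, theta) / theta
          for _ in range(50_000)]

# Ga(kappa, 1) has mean kappa and variance kappa
print(statistics.fmean(scaled), statistics.variance(scaled))
```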
|
39,428
|
Which parameter should be considered as "scale" parameter for Gamma distribution?
|
Comparing (2) and (3), it is that φ=1/k should be called as scale
parameter!
According to the equation you wrote, it seems that both of your phis (you really need to use different letters for parameters) are part of the scale parameter. Why don't you write it in standard form, so that the density is
$$f(x) \propto \exp(ax + b\log x)$$
Then it will be clear that $\frac{1}{a}$ is your scale parameter. I bet it will be a function of both of your phis.
You have:
$$f(x; \phi, \varphi) \propto \exp\left[\frac{x\phi}{\varphi} + \left(\frac{1}{\varphi} - 1\right)\log x\right] \tag{from 3}$$
so your scale parameter is $\frac{\varphi}{\phi} = -\theta$, as desired.
|
39,429
|
Which parameter should be considered as "scale" parameter for Gamma distribution?
|
Both alternatives are (as mentioned prior) given here: one with $\frac{x}{\theta}$, where $\theta$ is indeed a scale parameter, and one with $\beta x$, where $\beta$ is a rate parameter, the reciprocal of $\theta$. Here $\theta$ is the scale factor. Similarly for exponential distributions: $\frac{x}{\theta}$, where $\theta$ is the scale factor.
However, $\beta$ is often used for practical reasons. As mentioned, it is not a scale factor; it is a rate scaling factor. As mentioned, the gamma distribution (GD) becomes an exponential distribution (ED) when the GD shape parameter $\alpha$ is 1, i.e., $\frac{\beta^{\alpha}}{\Gamma(\alpha)}x^{\alpha-1} e^{-\beta x}\to \beta e^{-\beta x}$. For time series, both the gamma and exponential distributions often use $\beta$ because it is more normally distributed than $\theta$, and using $\frac{x}{\theta}$ would introduce a discontinuity at $\beta = 0$. Thus, for time series $\theta$ is problematic for regression analysis, and $\beta$ is far more practical a measure, even if it is not "statistically" appealing from an abstract theoretical point of view.
|
Which parameter should be considered as "scale" parameter for Gamma distribution?
|
Both alternatives are (as mentioned prior) given here, one with $\frac{x}{\theta }$, where $\theta$ is indeed a scale parameter, and $\beta x$, where $\beta$ is a rate scale parameter, the reciprocal
|
Which parameter should be considered as "scale" parameter for Gamma distribution?
Both alternatives are (as mentioned prior) given here, one with $\frac{x}{\theta }$, where $\theta$ is indeed a scale parameter, and $\beta x$, where $\beta$ is a rate scale parameter, the reciprocal of $\theta$. $\theta$ is the scale factor. Similarly for exponential distributions, $\frac{x}{\theta }$, where $\theta$ is the scale factor.
However, $\beta$ is often used for practical reasons. As mentioned, it is not a scale factor, it is a rate scaling factor. As mentioned, the gamma distribution (GD) becomes an exponential distribution (ED) when the GD shape parameter ${\alpha }$ is 1, i.e., $\frac{\beta ^{\alpha }}{\Gamma (\alpha )}x^{\alpha -1} e^{-\beta x}\to \beta e^{-\beta x}$. For time series, both the gamma and exponential distributions often use $\beta$ because it is more normally distributed than ${\theta }$, and using $\frac{x}{\theta }$ would introduce a discontinuity at $\beta = 0$. Thus, for time series ${\theta }$ is problematic for regression analysis, and $\beta$ is far more practical a measure, even if it is not "statistically" appealing from an abstract theoretical point of view.
|
Which parameter should be considered as "scale" parameter for Gamma distribution?
Both alternatives are (as mentioned prior) given here, one with $\frac{x}{\theta }$, where $\theta$ is indeed a scale parameter, and $\beta x$, where $\beta$ is a rate scale parameter, the reciprocal
|
39,430
|
Multiple regression - how to calculate the predicted value after feature normalization?
|
You should perform feature normalization only on features - so only on your input vector $x$. Not on output $y$ or $\theta$. When you trained a model using feature normalization, then you should apply that normalization every time you make a prediction. Also it is expected that you have different $\theta$ and cost function $J(\theta)$ with and without normalization. There is no need to ever undo feature scaling.
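A minimal sketch of this idea (hypothetical toy data, and closed-form simple regression rather than gradient descent): the scaler statistics are computed once from the training features and then reused, unchanged, at prediction time.

```python
import statistics

# Hypothetical toy data: y = 2x + 1 with a single feature
X = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]

# Fit the scaler on the training features only -- never on y
mu, sigma = statistics.mean(X), statistics.pstdev(X)
Xs = [(v - mu) / sigma for v in X]

# Closed-form simple regression on the scaled feature
ybar = statistics.mean(y)
theta1 = sum(xs * (yi - ybar) for xs, yi in zip(Xs, y)) / sum(xs * xs for xs in Xs)
theta0 = ybar  # the scaled feature has zero mean, so the intercept is ybar

def predict(x_new):
    # Reuse the SAME mu and sigma that were computed on the training set
    return theta0 + theta1 * (x_new - mu) / sigma

print(predict(5.0))  # 11.0 (approximately), matching y = 2x + 1
```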
|
Multiple regression - how to calculate the predicted value after feature normalization?
|
You should perform feature normalization only on features - so only on your input vector $x$. Not on output $y$ or $\theta$. When you trained a model using feature normalization, then you should apply
|
Multiple regression - how to calculate the predicted value after feature normalization?
You should perform feature normalization only on features - so only on your input vector $x$. Not on output $y$ or $\theta$. When you trained a model using feature normalization, then you should apply that normalization every time you make a prediction. Also it is expected that you have different $\theta$ and cost function $J(\theta)$ with and without normalization. There is no need to ever undo feature scaling.
|
Multiple regression - how to calculate the predicted value after feature normalization?
You should perform feature normalization only on features - so only on your input vector $x$. Not on output $y$ or $\theta$. When you trained a model using feature normalization, then you should apply
|
39,431
|
Multiple regression - how to calculate the predicted value after feature normalization?
|
EDIT: I've made a few changes to the code based on @Yurii's answer.
Okay it seems that after a little fiddling about and checking out this answer, I got it to work:
As mentioned, I used
[scaledX,avgX,stdX]=feature_scale(X)
to scale the features. I then used gradient descent to get the vector theta, which is used to make predictions.
Concretely, I had:
>> Y % y=(a^2 -14*a + 10) as the equation of my data
Y =
-38.4218
-31.8576
74.2568
38.2865
-10.7453
36.6208
-35.3849
-9.2554
137.4463
3.0049
>> X %[ 1, a, a^2 ] as my features
X =
1.00000 6.23961 38.93277
1.00000 4.32748 18.72707
1.00000 17.64222 311.24782
1.00000 7.84469 61.53913
1.00000 1.68448 2.83749
1.00000 5.45754 29.78479
1.00000 5.09865 25.99620
1.00000 1.54614 2.39056
1.00000 20.28331 411.41264
1.00000 0.51888 0.26924
>> [scaledX, avgX, stdX]=feature_scale(X)
scaledX =
1.00000 -0.12307 -0.35196
1.00000 -0.40844 -0.49038
1.00000 1.57863 1.51342
1.00000 0.11646 -0.19711
1.00000 -0.80287 -0.59922
1.00000 -0.23979 -0.41463
1.00000 -0.29335 -0.44058
1.00000 -0.82352 -0.60228
1.00000 1.97278 2.19956
1.00000 -0.97683 -0.61681
avgX =
7.0643 90.3138
stdX =
6.7007 145.9833
>> %No need to scale Y
>> [theta,costs_vector]=LinearRegression_GradientDescent(scaledX, Y, alpha=1, number_of_iterations=2000);
FINAL THETA:
theta =
-16.390
-70.020
75.617
FINAL COST: 3.41577e-28
Now, the model has been trained.
When I did not use feature scaling, the theta matrix would come to approximately: theta=[10, -14, 1], which reflected the function y=x^2 -14x + 10 which we are trying to predict.
With feature scaling, as you can see, the theta matrix is completely different. However, we still use it to make predictions, as follows:
>> test_input = 15;
>> testX=[1, test_input, test_input^2]
testX =
1 15 225
>> scaledTestX=testX;
>> scaledTestX(2)=(scaledTestX(2)-avgX(1))/stdX(1);
>> scaledTestX(3)=(scaledTestX(3)-avgX(2))/stdX(2);
>> scaledTestX
scaledTestX =
1.00000 1.18431 0.92261
>>
>> final_predicted=(theta')*(scaledTestX')
final_predicted = 25.000
>> % 25 is the correct value:
>> % f(a)=a^2-14a+10, at a=15 (our input value) is 25
|
Multiple regression - how to calculate the predicted value after feature normalization?
|
EDIT: I've made a few changes to the code based on @Yurii's answer.
Okay it seems that after a little fiddling about and checking out this answer, I got it to work:
As mentioned, I used
[scaledX,avgX,
|
Multiple regression - how to calculate the predicted value after feature normalization?
EDIT: I've made a few changes to the code based on @Yurii's answer.
Okay it seems that after a little fiddling about and checking out this answer, I got it to work:
As mentioned, I used
[scaledX,avgX,stdX]=feature_scale(X)
to scale the features. I then used gradient descent to get the vector theta, which is used to make predictions.
Concretely, I had:
>> Y % y=(a^2 -14*a + 10) as the equation of my data
Y =
-38.4218
-31.8576
74.2568
38.2865
-10.7453
36.6208
-35.3849
-9.2554
137.4463
3.0049
>> X %[ 1, a, a^2 ] as my features
X =
1.00000 6.23961 38.93277
1.00000 4.32748 18.72707
1.00000 17.64222 311.24782
1.00000 7.84469 61.53913
1.00000 1.68448 2.83749
1.00000 5.45754 29.78479
1.00000 5.09865 25.99620
1.00000 1.54614 2.39056
1.00000 20.28331 411.41264
1.00000 0.51888 0.26924
>> [scaledX, avgX, stdX]=feature_scale(X)
scaledX =
1.00000 -0.12307 -0.35196
1.00000 -0.40844 -0.49038
1.00000 1.57863 1.51342
1.00000 0.11646 -0.19711
1.00000 -0.80287 -0.59922
1.00000 -0.23979 -0.41463
1.00000 -0.29335 -0.44058
1.00000 -0.82352 -0.60228
1.00000 1.97278 2.19956
1.00000 -0.97683 -0.61681
avgX =
7.0643 90.3138
stdX =
6.7007 145.9833
>> %No need to scale Y
>> [theta,costs_vector]=LinearRegression_GradientDescent(scaledX, Y, alpha=1, number_of_iterations=2000);
FINAL THETA:
theta =
-16.390
-70.020
75.617
FINAL COST: 3.41577e-28
Now, the model has been trained.
When I did not use feature scaling, the theta matrix would come to approximately: theta=[10, -14, 1], which reflected the function y=x^2 -14x + 10 which we are trying to predict.
With feature scaling, as you can see, the theta matrix is completely different. However, we still use it to make predictions, as follows:
>> test_input = 15;
>> testX=[1, test_input, test_input^2]
testX =
1 15 225
>> scaledTestX=testX;
>> scaledTestX(2)=(scaledTestX(2)-avgX(1))/stdX(1);
>> scaledTestX(3)=(scaledTestX(3)-avgX(2))/stdX(2);
>> scaledTestX
scaledTestX =
1.00000 1.18431 0.92261
>>
>> final_predicted=(theta')*(scaledTestX')
final_predicted = 25.000
>> % 25 is the correct value:
>> % f(a)=a^2-14a+10, at a=15 (our input value) is 25
|
Multiple regression - how to calculate the predicted value after feature normalization?
EDIT: I've made a few changes to the code based on @Yurii's answer.
Okay it seems that after a little fiddling about and checking out this answer, I got it to work:
As mentioned, I used
[scaledX,avgX,
|
39,432
|
Multiple regression - how to calculate the predicted value after feature normalization?
|
I understand the concepts explained in the previous 2 answers i.e. after we do feature scaling and calculate the intercept (θ0) and the slope (θ1), we get a hypothesis function (h(x)) which uses the scaled down features (Assuming univariate/single variable linear regression)
h(x) = θ0 + θ1x' -- (1)
where
x' = (x-μ)/σ -- (2)
(μ = mean of the feature set x; σ = standard deviation of feature set x)
As Yurii said above, we don't scale the target i.e. y when doing feature scaling. So to predict y for some xm, we simply scale the new input value and feed it into the new hypothesis function (1) using (2)
x'm = (xm-μ)/σ -- (3)
And use this in (1) to get the estimated y. And I think this should work perfectly fine in practice.
But I wanted to plot the regression line against the original i.e. unscaled features and target values. So I needed a way to scale it back. Coordinate geometry to the rescue! :)
Equation (1) gives us the hypothesis with the scaled feature x'. And we know (2) is the relation between the scaled feature x' and the original feature x. So we substitute (2) in (1) and we get (after simplification):
h(x) = (θ0 - θ1*μ/σ) + (θ1/σ)x -- (4)
So to plot a line with the original i.e. unscaled features, we just use the intercept as (θ0 - θ1*μ/σ) and the slope as (θ1/σ).
Here is my complete R code which does the same and plots the regression line:
if(exists("dev")) {dev.off()} # Close the plot
rm(list=c("f", "X", "Y", "m", "alpha", "theta0", "theta1", "i")) # clear variables
f<-read.csv("slr12.csv") # read in source data (data from here: http://goo.gl/fuOV8m)
mu<-mean(f$X) # mean
sig<-sd(f$X) # standard deviation
X<-(f$X-mu)/sig # feature scaled
Y<-f$Y # No scaling of target
m<-length(X)
alpha<-0.05
theta0<-0.5
theta1<-0.5
for(i in 1:350) {
theta0<-theta0 - alpha/m*sum(theta0+theta1*X-Y)
theta1<-theta1 - alpha/m*sum((theta0+theta1*X-Y)*X)
print(c(theta0, theta1))
}
plot(f$X,f$Y) # Plot original data
theta0p<-(theta0-theta1*mu/sig) # "Unscale" the intercept
theta1p<-theta1/sig # "Unscale" the slope
abline(theta0p, theta1p, col="green") # Plot regression line
You can test that theta0p and theta1p are correct above by running lm(f$Y~f$X) to use the inbuilt linear regression function. The values are the same!
> print(c(theta0p, theta1p))
[1] 867.6042128 0.3731579
> lm(f$Y~f$X)
Call:
lm(formula = f$Y ~ f$X)
Coefficients:
(Intercept) f$X
867.6042 0.3732
|
Multiple regression - how to calculate the predicted value after feature normalization?
|
I understand the concepts explained in the previous 2 answers i.e. after we do feature scaling and calculate the intercept (θ0) and the slope (θ1), we get a hypothesis function (h(x)) which uses the s
|
Multiple regression - how to calculate the predicted value after feature normalization?
I understand the concepts explained in the previous 2 answers i.e. after we do feature scaling and calculate the intercept (θ0) and the slope (θ1), we get a hypothesis function (h(x)) which uses the scaled down features (Assuming univariate/single variable linear regression)
h(x) = θ0 + θ1x' -- (1)
where
x' = (x-μ)/σ -- (2)
(μ = mean of the feature set x; σ = standard deviation of feature set x)
As Yurii said above, we don't scale the target i.e. y when doing feature scaling. So to predict y for some xm, we simply scale the new input value and feed it into the new hypothesis function (1) using (2)
x'm = (xm-μ)/σ -- (3)
And use this in (1) to get the estimated y. And I think this should work perfectly fine in practice.
But I wanted to plot the regression line against the original i.e. unscaled features and target values. So I needed a way to scale it back. Coordinate geometry to the rescue! :)
Equation (1) gives us the hypothesis with the scaled feature x'. And we know (2) is the relation between the scaled feature x' and the original feature x. So we substitute (2) in (1) and we get (after simplification):
h(x) = (θ0 - θ1*μ/σ) + (θ1/σ)x -- (4)
So to plot a line with the original i.e. unscaled features, we just use the intercept as (θ0 - θ1*μ/σ) and the slope as (θ1/σ).
Here is my complete R code which does the same and plots the regression line:
if(exists("dev")) {dev.off()} # Close the plot
rm(list=c("f", "X", "Y", "m", "alpha", "theta0", "theta1", "i")) # clear variables
f<-read.csv("slr12.csv") # read in source data (data from here: http://goo.gl/fuOV8m)
mu<-mean(f$X) # mean
sig<-sd(f$X) # standard deviation
X<-(f$X-mu)/sig # feature scaled
Y<-f$Y # No scaling of target
m<-length(X)
alpha<-0.05
theta0<-0.5
theta1<-0.5
for(i in 1:350) {
theta0<-theta0 - alpha/m*sum(theta0+theta1*X-Y)
theta1<-theta1 - alpha/m*sum((theta0+theta1*X-Y)*X)
print(c(theta0, theta1))
}
plot(f$X,f$Y) # Plot original data
theta0p<-(theta0-theta1*mu/sig) # "Unscale" the intercept
theta1p<-theta1/sig # "Unscale" the slope
abline(theta0p, theta1p, col="green") # Plot regression line
You can test that theta0p and theta1p are correct above by running lm(f$Y~f$X) to use the inbuilt linear regression function. The values are the same!
> print(c(theta0p, theta1p))
[1] 867.6042128 0.3731579
> lm(f$Y~f$X)
Call:
lm(formula = f$Y ~ f$X)
Coefficients:
(Intercept) f$X
867.6042 0.3732
|
Multiple regression - how to calculate the predicted value after feature normalization?
I understand the concepts explained in the previous 2 answers i.e. after we do feature scaling and calculate the intercept (θ0) and the slope (θ1), we get a hypothesis function (h(x)) which uses the s
|
39,433
|
When is a Naive Bayes Model not Bayesian?
|
Informally, to be 'Bayesian' about a model (Naive Bayes just names a class of discrete mixture models) is to use Bayes theorem to infer the values of its parameters or other quantities of interest. To be 'Frequentist' about the same model is, roughly, and among other things, to use the sampling distribution of estimators that depend on those quantities to infer what those values might be.
Turning to your Naive Bayes / mixture model. For exposition, let's assume all the component parameters and functional forms are known and there are two components (classes, whatever).
What is described as the 'prior' in a mixture model is a mixing parameter in the early stages of a hierarchically structured generative model. If you estimate this mixing parameter in the usual (ML, i.e. Frequentist) way, via an EM algorithm, then you have taken a convenient route up the model likelihood to find a maximum, and used that as a point estimate of the true value of the mixing parameter. Maybe you use the curvature of the likelihood at that point to give yourself a measure of uncertainty. (But probably not). Typically you'd then use it to get membership probabilities for individual observations by assuming that value and applying Bayes theorem.
This seems Bayesian because it uses Bayes theorem. However, it is unBayesian in two ways: First, you used the same data to determine the 'prior' (the mixing parameter) and some relevant 'posteriors' (membership probabilities for individual observations). So the 'prior' isn't really prior because it's conditioned on the data already. In the second, more general way, of which the first is an instance: Bayes theorem is being used to infer some unknowns (membership probabilities) but not others (the mixing coefficient).
That's why if you decide to do this in a Bayesian fashion then, since you don't know what the mixing parameter value is in advance, you give it some prior distribution. Maybe that's a Dirichlet (hence a Beta in this stripped down exposition) with some parameters or other, set to reflect your uncertainty. Then you figure out how to condition on the data to get a posterior distribution over it and all the other stuff you care about but don't know, such as component memberships for each observation. To infer any subset of these, marginalize out the rest.
In Frequentist terms, there are known and unknown parts of the model, but no uncertain parts, so nothing needs a prior: you either know them e.g. the components are Gaussian, or you don't know them, e.g. the means of each component. Even when there are distributions involved in generating the data, as there are in the mixture model, none of them is a Bayesian prior, regardless of whether you use Bayes theorem on them. Rather they represent actual or hypothetical randomizing mechanisms of some sort. Specifically, the mixture model provides a hypothetical randomization scheme for generating data: Toss a coin weighted according to the value of the mixing parameter to decide on a component, then draw from that component's distribution to generate an observation. This whole process has parameters, and you have to estimate them from the data.
So what looks like 'posterior inference', with a 'prior', is actually regular inference where the data generating process has some distributional machinery in the middle.
This is rather like the Frequentist take on mixed models, and unlike Frequentist inference for, say, a regression coefficient, where there is no such intermediate structure to make anybody think of priors or posteriors.
It might be worth noting that Fisher, the arch anti-Bayesian, was happy to use Bayes theorem when he thought there was a real randomization mechanism embedded in the data generation process, e.g. in theoretical biology problems involving gene frequencies. This is a consistent position. Just not a Bayesian one.
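If it helps to see the contrast numerically, here is a minimal sketch (hypothetical data, two known unit-variance Gaussian components at 0 and 4): EM produces a Frequentist point estimate of the mixing parameter, with Bayes theorem used only for the membership probabilities.

```python
import math

# Hypothetical toy data near two known, unit-variance components N(0,1), N(4,1);
# only the mixing weight pi is treated as unknown.
data = [-0.2, 0.1, 0.4, 3.8, 4.1, 4.3]

def norm_pdf(x, mu):
    return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)

pi = 0.5  # initial guess for the mixing parameter
for _ in range(50):  # EM: a Frequentist point estimate of pi
    # E-step: membership probabilities via Bayes theorem, conditioning on pi
    r = [pi * norm_pdf(x, 0) / (pi * norm_pdf(x, 0) + (1 - pi) * norm_pdf(x, 4))
         for x in data]
    # M-step: the updated mixing weight is the mean responsibility
    pi = sum(r) / len(r)

print(round(pi, 2))  # 0.5: half the observations sit near each component
```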
Hope that helps.
|
When is a Naive Bayes Model not Bayesian?
|
Informally, to be 'Bayesian' about a model (Naive Bayes just names a class of discrete mixture models) is to use Bayes theorem to infer the values of its parameters or other quantities of interest.
|
When is a Naive Bayes Model not Bayesian?
Informally, to be 'Bayesian' about a model (Naive Bayes just names a class of discrete mixture models) is to use Bayes theorem to infer the values of its parameters or other quantities of interest. To be 'Frequentist' about the same model is, roughly, and among other things, to use the sampling distribution of estimators that depend on those quantities to infer what those values might be.
Turning to your Naive Bayes / mixture model. For exposition, let's assume all the component parameters and functional forms are known and there are two components (classes, whatever).
What is described as the 'prior' in a mixture model is a mixing parameter in the early stages of a hierarchically structured generative model. If you estimate this mixing parameter in the usual (ML, i.e. Frequentist) way, via an EM algorithm, then you have taken a convenient route up the model likelihood to find a maximum, and used that as a point estimate of the true value of the mixing parameter. Maybe you use the curvature of the likelihood at that point to give yourself a measure of uncertainty. (But probably not). Typically you'd then use it to get membership probabilities for individual observations by assuming that value and applying Bayes theorem.
This seems Bayesian because it uses Bayes theorem. However, it is unBayesian in two ways: First, you used the same data to determine the 'prior' (the mixing parameter) and some relevant 'posteriors' (membership probabilities for individual observations). So the 'prior' isn't really prior because it's conditioned on the data already. In the second, more general way, of which the first is an instance: Bayes theorem is being used to infer some unknowns (membership probabilities) but not others (the mixing coefficient).
That's why if you decide to do this in a Bayesian fashion then, since you don't know what the mixing parameter value is in advance, you give it some prior distribution. Maybe that's a Dirichlet (hence a Beta in this stripped down exposition) with some parameters or other, set to reflect your uncertainty. Then you figure out how to condition on the data to get a posterior distribution over it and all the other stuff you care about but don't know, such as component memberships for each observation. To infer any subset of these, marginalize out the rest.
In Frequentist terms, there are known and unknown parts of the model, but no uncertain parts, so nothing needs a prior: you either know them e.g. the components are Gaussian, or you don't know them, e.g. the means of each component. Even when there are distributions involved in generating the data, as there are in the mixture model, none of them is a Bayesian prior, regardless of whether you use Bayes theorem on them. Rather they represent actual or hypothetical randomizing mechanisms of some sort. Specifically, the mixture model provides a hypothetical randomization scheme for generating data: Toss a coin weighted according to the value of the mixing parameter to decide on a component, then draw from that component's distribution to generate an observation. This whole process has parameters, and you have to estimate them from the data.
So what looks like 'posterior inference', with a 'prior', is actually regular inference where the data generating process has some distributional machinery in the middle.
This is rather like the Frequentist take on mixed models, and unlike Frequentist inference for, say, a regression coefficient, where there is no such intermediate structure to make anybody think of priors or posteriors.
It might be worth noting that Fisher, the arch anti-Bayesian, was happy to use Bayes theorem when he thought there was a real randomization mechanism embedded in the data generation process, e.g. in theoretical biology problems involving gene frequencies. This is a consistent position. Just not a Bayesian one.
Hope that helps.
|
When is a Naive Bayes Model not Bayesian?
Informally, to be 'Bayesian' about a model (Naive Bayes just names a class of discrete mixture models) is to use Bayes theorem to infer the values of its parameters or other quantities of interest.
|
39,434
|
Help with Taylor expansion of log likelihood function
|
You should convince yourself that $f(x)-f(y)\approx f'(x)(x-y)$ is just another way to express Taylor expansion (under appropriate regularity assumptions).
Then, using the linearity of differentiation, you can generalize it to the sum.
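A quick numeric illustration of the first-order approximation (using $f = \log$ as an arbitrary example):

```python
import math

# Numeric check that f(x) - f(y) ≈ f'(x)(x - y) for nearby x, y.
f = math.log
x, y = 2.0, 2.01

exact  = f(x) - f(y)
approx = (1.0 / x) * (x - y)   # f'(x) = 1/x for the natural log

print(abs(exact - approx) < 1e-4)  # True: the error is O((x - y)^2)
```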
|
Help with Taylor expansion of log likelihood function
|
You should convince yourself that $f(x)-f(y)\approx f'(x)(x-y)$ is just another way to express Taylor expansion (under appropriate regularity assumptions).
Then, using linearity of the derivation, you
|
Help with Taylor expansion of log likelihood function
You should convince yourself that $f(x)-f(y)\approx f'(x)(x-y)$ is just another way to express Taylor expansion (under appropriate regularity assumptions).
Then, using the linearity of differentiation, you can generalize it to the sum.
|
Help with Taylor expansion of log likelihood function
You should convince yourself that $f(x)-f(y)\approx f'(x)(x-y)$ is just another way to express Taylor expansion (under appropriate regularity assumptions).
Then, using linearity of the derivation, you
|
39,435
|
Help with Taylor expansion of log likelihood function
|
I think your problem is just a notation problem.
Let us start from the beginning.
Suppose $X_1, X_2,...,X_n$ are i.i.d random variables with probability density function $f(x;\gamma)$.
Usually, people will use $\theta$ here, I will keep it for later use.
The likelihood function is $L(\gamma;x)=f(x_1;\gamma)f(x_2;\gamma)...f(x_n;\gamma)$
The log likelihood function $l(\gamma;x)=log[f(x_1;\gamma)f(x_2;\gamma)...f(x_n;\gamma)]=\sum_{i=1}^nlogf(x_i;\gamma)$
As usual, we will take derivative of the log likelihood and set it to zero i.e $l'(\gamma;x)=0 $
or $\sum_{i=1}^nlog'f(x_i;\gamma)=0$
Here, you can see $U_i(\gamma)=log'f(x_i;\gamma)$
$\therefore l'(\gamma)=\sum_{i=1}^nU_i(\gamma)\tag{1}$ we omit $x$ here since the log likelihood function is a function of $\gamma $
Next we expand the function $l'(\gamma)$ into a Taylor series of order two about $\theta$.
$l'(\gamma)=l'(\theta)+\frac{l''(\theta)}{1!}(\gamma-\theta)^1+\frac{l'''(\theta)}{2!}(\gamma-\theta)^2$. This is the Taylor expansion of $l'(\gamma)$.
Next we evaluate the equation at $\phi$
$l'(\phi)=l'(\theta)+\frac{l''(\theta)}{1!}(\phi-\theta)^1+\frac{l'''(\theta)}{2!}(\phi-\theta)^2 \tag{2}$
Here you should see that $l'(\theta)=\sum_{i=1}^nU_i(\theta)$
and $l'(\phi)=\sum_{i=1}^nU_i(\phi)$
and $l''(\phi)=\sum_{i=1}^nU_i'(\phi)$
Ref (1)
If we ignore the third derivative term in (2) then your question will be answered here
We get that $\sum_{i=1}^n U_i(\phi )-\sum_{i=1}^n U_i(\theta)\approx \left( \sum_{i=1}^n U_i'(\theta) \right)(\phi-\theta)$
By the way, I think $i$ usually starts from $1$, not $0$; anyway, it is just an index.
Let us not stop here; we can go further and prove the theorem.
We know that $l'(\phi)=0$
$\therefore l'(\theta)+\frac{l''(\theta)}{1!}(\phi-\theta)^1+\frac{l'''(\theta)}{2!}(\phi-\theta)^2=0$
i.e.
$l'(\theta)+(\phi-\theta)*[l''(\theta)+\frac{l'''(\theta)}{2}(\phi-\theta)]=0$
Next we rearrange the above terms:
$$(\phi-\theta)=\frac{l'(\theta)}{-l''(\theta)-\frac{l'''(\theta)}{2}(\phi-\theta)}$$
We multiply $\sqrt{n}$ for both side:
$$\sqrt{n}(\phi-\theta)=\frac{\sqrt{n}*l'(\theta)}{-l''(\theta)-\frac{l'''(\theta)}{2}(\phi-\theta)}\\=\frac{\frac{1}{\sqrt{n}}*l'(\theta)}{\frac{-l''(\theta)}{n}-\frac{l'''(\theta)}{2n}(\phi-\theta)} \tag{3}$$
(divide the numerator and denominator of the right-hand side by $n$ at the same time)
Then let us see what the numerator of the right side of (3) is:
$$\frac{1}{\sqrt{n}}l'(\theta)=\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{\partial logf(x_i;\theta)}{\partial \theta}$$
And note $$\frac{\partial logf(x_i;\theta)}{\partial \theta}$$ are i.i.d with variance $I(\theta)$ and $$E(\frac{\partial logf(x_i;\theta)}{\partial \theta})=0$$
$\therefore$ by CLT
$$\frac{1}{\sqrt{n}}l'(\theta)\sim \frac{1}{\sqrt{n}}N(0,nI(\theta))=N(0,I(\theta))$$
Next we will see what are in the denominator of (3):
$$-\frac{l''(\theta)}{n}=-\frac{1}{n}\sum_{i=1}^n\frac{\partial^2 \log f(x_i;\theta)}{\partial \theta^2}\overset{P}{\rightarrow} I(\theta)$$ by the Law of Large Numbers.
For the term $$\frac{l'''(\theta)}{2n}(\phi-\theta)$$ in the denominator of (3), we can prove that it converges in probability to zero.
Finally, let us wrap up everything:
$$\sqrt{n}(\phi-\theta)\sim \frac{N(0,I(\theta))}{I(\theta)}=N(0,\frac{1}{I(\theta)})$$
This proves the theorem.
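As a sanity check on the conclusion, here is a small simulation sketch (hypothetical setup: exponential data with rate $\theta$, for which $I(\theta)=1/\theta^2$, so $\sqrt{n}(\hat\theta-\theta)$ should have variance near $\theta^2$):

```python
import random, statistics

random.seed(0)
theta = 2.0            # true exponential rate; I(theta) = 1/theta^2
n, reps = 500, 2000

scaled = []
for _ in range(reps):
    xs = [random.expovariate(theta) for _ in range(n)]
    mle = n / sum(xs)                       # MLE of the rate is 1/xbar
    scaled.append(n ** 0.5 * (mle - theta))

v = statistics.pvariance(scaled)
print(v)   # should be close to 1/I(theta) = theta^2 = 4
```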
|
Help with Taylor expansion of log likelihood function
|
I think your problem is just the notation problem
Let us start from the beginning;
Suppose$X_1, X_2,...,X_n$ are i.i.d random variables with probability density function $f(x;\gamma)$
Usually, people
|
Help with Taylor expansion of log likelihood function
I think your problem is just a notation problem.
Let us start from the beginning.
Suppose $X_1, X_2,...,X_n$ are i.i.d random variables with probability density function $f(x;\gamma)$.
Usually, people will use $\theta$ here, I will keep it for later use.
The likelihood function is $L(\gamma;x)=f(x_1;\gamma)f(x_2;\gamma)...f(x_n;\gamma)$
The log likelihood function $l(\gamma;x)=log[f(x_1;\gamma)f(x_2;\gamma)...f(x_n;\gamma)]=\sum_{i=1}^nlogf(x_i;\gamma)$
As usual, we will take derivative of the log likelihood and set it to zero i.e $l'(\gamma;x)=0 $
or $\sum_{i=1}^nlog'f(x_i;\gamma)=0$
Here, you can see $U_i(\gamma)=log'f(x_i;\gamma)$
$\therefore l'(\gamma)=\sum_{i=1}^nU_i(\gamma)\tag{1}$ we omit $x$ here since the log likelihood function is a function of $\gamma $
Next we expand the function $l'(\gamma)$ into a Taylor series of order two about $\theta$.
$l'(\gamma)=l'(\theta)+\frac{l''(\theta)}{1!}(\gamma-\theta)^1+\frac{l'''(\theta)}{2!}(\gamma-\theta)^2$. This is the Taylor expansion of $l'(\gamma)$.
Next we evaluate the equation at $\phi$
$l'(\phi)=l'(\theta)+\frac{l''(\theta)}{1!}(\phi-\theta)^1+\frac{l'''(\theta)}{2!}(\phi-\theta)^2 \tag{2}$
Here you should see that $l'(\theta)=\sum_{i=1}^nU_i(\theta)$
and $l'(\phi)=\sum_{i=1}^nU_i(\phi)$
and $l''(\phi)=\sum_{i=1}^nU_i'(\phi)$
Ref (1)
If we ignore the third derivative term in (2) then your question will be answered here
We get that $\sum_{i=1}^n U_i(\phi )-\sum_{i=1}^n U_i(\theta)\approx \left( \sum_{i=1}^n U_i'(\theta) \right)(\phi-\theta)$
By the way, I think $i$ usually starts from $1$, not $0$; anyway, it is just an index.
Let us not stop here; we can go further and prove the theorem.
We know that $l'(\phi)=0$
$\therefore l'(\theta)+\frac{l''(\theta)}{1!}(\phi-\theta)^1+\frac{l'''(\theta)}{2!}(\phi-\theta)^2=0$
i.e.
$l'(\theta)+(\phi-\theta)*[l''(\theta)+\frac{l'''(\theta)}{2}(\phi-\theta)]=0$
Next we rearrange the above terms:
$$(\phi-\theta)=\frac{l'(\theta)}{-l''(\theta)-\frac{l'''(\theta)}{2}(\phi-\theta)}$$
We multiply $\sqrt{n}$ for both side:
$$\sqrt{n}(\phi-\theta)=\frac{\sqrt{n}*l'(\theta)}{-l''(\theta)-\frac{l'''(\theta)}{2}(\phi-\theta)}\\=\frac{\frac{1}{\sqrt{n}}*l'(\theta)}{\frac{-l''(\theta)}{n}-\frac{l'''(\theta)}{2n}(\phi-\theta)} \tag{3}$$
(divide the numerator and denominator of the right-hand side by $n$ at the same time)
Then let us see what the numerator of the right side of (3) is:
$$\frac{1}{\sqrt{n}}l'(\theta)=\frac{1}{\sqrt{n}}\sum_{i=1}^n\frac{\partial logf(x_i;\theta)}{\partial \theta}$$
And note $$\frac{\partial logf(x_i;\theta)}{\partial \theta}$$ are i.i.d with variance $I(\theta)$ and $$E(\frac{\partial logf(x_i;\theta)}{\partial \theta})=0$$
$\therefore$ by CLT
$$\frac{1}{\sqrt{n}}l'(\theta)\sim \frac{1}{\sqrt{n}}N(0,nI(\theta))=N(0,I(\theta))$$
Next we will see what are in the denominator of (3):
$$-\frac{l''(\theta)}{n}=-\frac{1}{n}\sum_{i=1}^n\frac{\partial^2 \log f(x_i;\theta)}{\partial \theta^2}\overset{P}{\rightarrow} I(\theta)$$ by the Law of Large Numbers.
For the term $$\frac{l'''(\theta)}{2n}(\phi-\theta)$$ in the denominator of (3), we can prove that it converges in probability to zero.
Finally, let us wrap up everything:
$$\sqrt{n}(\phi-\theta)\sim \frac{N(0,I(\theta))}{I(\theta)}=N(0,\frac{1}{I(\theta)})$$
This proves the theorem.
|
Help with Taylor expansion of log likelihood function
I think your problem is just the notation problem
Let us start from the beginning;
Suppose$X_1, X_2,...,X_n$ are i.i.d random variables with probability density function $f(x;\gamma)$
Usually, people
|
39,436
|
What exactly is the standard error of the intercept in multiple regression analysis?
|
The standard error of the intercept allows you to test whether or not the estimated intercept is statistically significantly different from a specified (hypothesized) value, normally 0.0. If you test against 0.0 and fail to reject, then you can re-estimate your model without the intercept term being present.
|
What exactly is the standard error of the intercept in multiple regression analysis?
|
The standard error of the the intercept allows you to test whether or not the estimated intercept is statistically significant from a specified(hypothesized) value ...normally 0.0 . If you test agains
|
What exactly is the standard error of the intercept in multiple regression analysis?
The standard error of the intercept allows you to test whether or not the estimated intercept is statistically significantly different from a specified (hypothesized) value, normally 0.0. If you test against 0.0 and fail to reject, then you can re-estimate your model without the intercept term being present.
|
What exactly is the standard error of the intercept in multiple regression analysis?
The standard error of the the intercept allows you to test whether or not the estimated intercept is statistically significant from a specified(hypothesized) value ...normally 0.0 . If you test agains
|
39,437
|
What exactly is the standard error of the intercept in multiple regression analysis?
|
Your characterization of how multiple regression works is inaccurate. Your version implies fitting a simple linear regression for each variable in turn (and presumably using each of those slopes as the coefficient for that variable in the multiple regression model). This notion leaves you with the problem of how to deal with the fact that the intercepts from each simple regression are quite likely to differ.
However, that approach is not how multiple regression works / estimates the parameters. Instead, all coefficients (including the intercept) are fitted simultaneously. Using Ordinary Least Squares (OLS), we find coefficient estimates that minimize the sum of the squared errors in the dependent variable. That is, we minimize the vertical distance between the model's predicted Y value at a given location in X and the observed Y value there. To find a vector of beta estimates, we use the following matrix equation:
$$
\boldsymbol{\hat\beta} = \bf (X^\top X)^{-1}X^\top Y
$$
It is worth noting explicitly that the coefficients we find this way will not necessarily be the same as those betas found individually. To understand this further, it may help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?
At any rate, the standard errors for a multiple regression model are calculated as:
$$
SE_\boldsymbol{\hat\beta} = \sqrt{{\rm diag}\{ s^2\bf (X^\top X)^{-1}\}}
$$
where $s^2$ is the variance of the residuals and $\rm diag$ refers to extracting the elements on the main diagonal of the matrix. Since the intercept ($\hat\beta_0$) is the first of our regression parameters, its standard error is the square root of the element in the first row, first column.
Once we have our fitted model, the standard error for the intercept means the same thing as any other standard error: It is our estimate of the standard deviation of the sampling distribution of the intercept. For a fuller description of standard errors in a regression context, it may help to read my answer here: How to interpret coefficient standard errors in linear regression?
A common use of the intercept's standard error would be to test if the observed intercept is reasonably likely to have occurred under the assumption that its true value is some pre-specified number (such as $0$), as @IrishStat notes.
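As a numerical sketch (not part of the original answer; Python with NumPy and made-up data), the two matrix formulas above can be applied directly, and the intercept's standard error cross-checked against the simple-regression closed form $s\sqrt{1/n + \bar x^2/\sum_i(x_i-\bar x)^2}$:

```python
import numpy as np

# Made-up illustrative data (one predictor plus an intercept).
x = np.array([0., 1., 2., 3., 4.])
y = np.array([1., 3., 2., 5., 4.])

# Design matrix with a leading column of ones for the intercept.
X = np.column_stack([np.ones_like(x), x])
n, p = X.shape

# beta-hat = (X'X)^{-1} X'y
XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y

# s^2 = residual sum of squares / (n - p)
resid = y - X @ beta
s2 = resid @ resid / (n - p)

# SE(beta-hat) = sqrt(diag(s^2 (X'X)^{-1})); the first entry is the intercept's SE.
se = np.sqrt(np.diag(s2 * XtX_inv))
print("intercept estimate:", beta[0], " SE:", se[0])
```

For one predictor this reproduces exactly what the textbook simple-regression formulas give, which is a useful sanity check on the matrix expressions.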
|
What exactly is the standard error of the intercept in multiple regression analysis?
|
Your characterization of how multiple regression works is inaccurate. Your version implies fitting a simple linear regression for each variable in turn (and presumably using each of those slopes as t
|
What exactly is the standard error of the intercept in multiple regression analysis?
Your characterization of how multiple regression works is inaccurate. Your version implies fitting a simple linear regression for each variable in turn (and presumably using each of those slopes as the coefficient for that variable in the multiple regression model). This notion leaves you with the problem of how to deal with the fact that the intercepts from each simple regression are quite likely to differ.
However, that approach is not how multiple regression works / estimates the parameters. Instead, all coefficients (including the intercept) are fitted simultaneously. Using Ordinary Least Squares (OLS), we find coefficient estimates that minimize the sum of the squared errors in the dependent variable. That is, we minimize the vertical distance between the model's predicted Y value at a given location in X and the observed Y value there. To find a vector of beta estimates, we use the following matrix equation:
$$
\boldsymbol{\hat\beta} = \bf (X^\top X)^{-1}X^\top Y
$$
It is worth noting explicitly that the coefficients we find this way will not necessarily be the same as those betas found individually. To understand this further, it may help you to read my answer here: Is there a difference between 'controlling for' and 'ignoring' other variables in multiple regression?
At any rate, the standard errors for a multiple regression model are calculated as:
$$
SE_\boldsymbol{\hat\beta} = \sqrt{{\rm diag}\{ s^2\bf (X^\top X)^{-1}\}}
$$
where $s^2$ is the variance of the residuals and $\rm diag$ refers to extracting the elements on the main diagonal of the matrix. Since the intercept ($\hat\beta_0$) is the first of our regression parameters, its standard error is the square root of the element in the first row, first column.
Once we have our fitted model, the standard error for the intercept means the same thing as any other standard error: It is our estimate of the standard deviation of the sampling distribution of the intercept. For a fuller description of standard errors in a regression context, it may help to read my answer here: How to interpret coefficient standard errors in linear regression?
A common use of the intercept's standard error would be to test if the observed intercept is reasonably likely to have occurred under the assumption that its true value is some pre-specified number (such as $0$), as @IrishStat notes.
|
What exactly is the standard error of the intercept in multiple regression analysis?
Your characterization of how multiple regression works is inaccurate. Your version implies fitting a simple linear regression for each variable in turn (and presumably using each of those slopes as t
|
39,438
|
Joint distribution of AR(1) model
|
Let us write the joint density as
\begin{equation}
p(y_1,\ldots,y_T) = p(y_1)\,p(y_2\mid y_1) \, p(y_3 \mid y_2,y_1) \ldots \, p(y_T \mid y_{T-1},\ldots y_1).
\end{equation}
Furthermore, since the process is AR(1), the past values influence future values only via the latest value, i.e., we have the Markov property $p(y_t \mid y_1,\ldots,y_{t-1}) = p(y_t \mid y_{t-1})$. Substituting this in the factorization, we get
\begin{equation}
p(y_1,\ldots,y_T) = p(y_1)\, \prod_{i=2}^T p(y_i \mid y_{i-1}).
\end{equation}
The marginal density of $y_1$ and the required conditional densities were given as assumptions. From now on, we shall ignore multiplicative constants (that are independent of $y$), since they will in the end be determined by the requirement that the joint density integrates to 1.
\begin{equation}
p(y_1,\ldots,y_T) \propto e^{-\frac{1}{2\sigma^2}(y_1 - \phi_0)^2} \times \prod_{i=2}^T e^{-\frac{1}{2\sigma^2}(y_i - \phi_0 - \phi_1\,(y_{i-1} - \phi_0))^2} = e^{-\frac{1}{2}\,E}
\end{equation}
where
\begin{equation}
E = \frac{1}{\sigma^2}(y_1 - \phi_0)^2 + \sum_{i=2}^T \frac{1}{\sigma^2}(y_i - \phi_0 - \phi_1(y_{i-1} - \phi_0))^2
\end{equation}
\begin{equation}
= \frac{1}{\sigma^2}(y_1 - \phi_0)^2 + \sum_{i=2}^T \frac{1}{\sigma^2}\left((y_i - \phi_0)^2 - 2\,(y_i - \phi_0)\,\phi_1\,(y_{i-1} - \phi_0) + \phi_1^2 (y_{i-1} - \phi_0)^2 \right)
\end{equation}
\begin{equation}
= \sum_{i=1}^{T-1}\frac{1}{\sigma^2}\,(1 + \phi_1^2)(y_i - \phi_0)^2 + \frac{1}{\sigma^2} (y_T - \phi_0)^2 + \sum_{i=1}^{T-1}\frac{1}{\sigma^2}\,2\,(-\phi_1)\,(y_{i+1}-\phi_0)\,(y_i-\phi_0).
\end{equation}
So the joint density is proportional to
\begin{align}
\mathrm{exp}\bigg(-\frac{1}{2}\,\bigg[\sum_{i=1}^{T-1}\frac{1}{\sigma^2}\,(1 + \phi_1^2)(y_i - \phi_0)^2 + \frac{1}{\sigma^2} (y_T - \phi_0)^2 \\+ \sum_{i=1}^{T-1}2\,\frac{1}{\sigma^2}\,(-\phi_1)\,(y_{i+1}-\phi_0)\,(y_i-\phi_0)\bigg]\bigg),
\end{align}
Observe that this exponent is a quadratic form of a vector consisting of variables $(y_i - \phi_0)$. Thus we conclude that the joint density is a multivariate normal with means $E(y_i) = \phi_0$, and the precision matrix can be read off from the previous expression, since $E = (y - \phi_0\mathbf{1})^\top\,\Sigma^{-1}\,(y-\phi_0\,\mathbf{1})$. Namely,
If $i=j$ and $i<T$, $\Sigma^{-1}_{ij} = (1 + \phi_1^2) / \sigma^2$
If $i=j=T$, $\Sigma^{-1}_{ij} = 1 / \sigma^2$
If $|i-j|=1$, $\Sigma^{-1}_{ij} = -\phi_1 / \sigma^2$
If $|i-j|>1$, $\Sigma^{-1}_{ij} = 0$,
which is indeed the form of the precision matrix that was claimed in the question.
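A quick numerical check of this result (a Python/NumPy sketch with arbitrarily chosen parameter values, not part of the original answer): build the tridiagonal precision matrix from the rules above, invert it, and confirm the implied moments satisfy the AR(1) recursions $\mathrm{Var}(y_1)=\sigma^2$, $\mathrm{Var}(y_t)=\phi_1^2\,\mathrm{Var}(y_{t-1})+\sigma^2$ and $\mathrm{Cov}(y_{t+1},y_t)=\phi_1\,\mathrm{Var}(y_t)$:

```python
import numpy as np

# Arbitrary illustrative parameter values.
phi1, sigma2, T = 0.6, 2.0, 5

# Build the tridiagonal precision matrix Q = Sigma^{-1} from the rules above.
Q = np.zeros((T, T))
for i in range(T):
    Q[i, i] = (1 + phi1 ** 2) / sigma2 if i < T - 1 else 1.0 / sigma2
    if i > 0:
        Q[i, i - 1] = Q[i - 1, i] = -phi1 / sigma2

# Invert to get the covariance matrix of (y_1, ..., y_T).
Sigma = np.linalg.inv(Q)

# The implied moments should satisfy the AR(1) recursions:
# Var(y_1) = sigma^2, Var(y_t) = phi1^2 * Var(y_{t-1}) + sigma^2,
# Cov(y_{t+1}, y_t) = phi1 * Var(y_t).
v = sigma2
for t in range(T):
    assert abs(Sigma[t, t] - v) < 1e-10
    if t < T - 1:
        assert abs(Sigma[t + 1, t] - phi1 * v) < 1e-10
    v = phi1 ** 2 * v + sigma2
print("inverse of Q matches the AR(1) covariance recursions")
```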
|
Joint distribution of AR(1) model
|
Let us write the joint density as
\begin{equation}
p(y_1,\ldots,y_T) = p(y_1)\,p(y_2\mid y_1) \, p(y_3 \mid y_2,y_1) \ldots \, p(y_T \mid y_{T-1},\ldots y_1).
\end{equation}
Furthermore, since the pro
|
Joint distribution of AR(1) model
Let us write the joint density as
\begin{equation}
p(y_1,\ldots,y_T) = p(y_1)\,p(y_2\mid y_1) \, p(y_3 \mid y_2,y_1) \ldots \, p(y_T \mid y_{T-1},\ldots y_1).
\end{equation}
Furthermore, since the process is AR(1), the past values influence future values only via the latest value, i.e., we have the Markov property $p(y_t \mid y_1,\ldots,y_{t-1}) = p(y_t \mid y_{t-1})$. Substituting this in the factorization, we get
\begin{equation}
p(y_1,\ldots,y_T) = p(y_1)\, \prod_{i=2}^T p(y_i \mid y_{i-1}).
\end{equation}
The marginal density of $y_1$ and the required conditional densities were given as assumptions. From now on, we shall ignore multiplicative constants (that are independent of $y$), since they will in the end be determined by the requirement that the joint density integrates to 1.
\begin{equation}
p(y_1,\ldots,y_T) \propto e^{-\frac{1}{2\sigma^2}(y_1 - \phi_0)^2} \times \prod_{i=2}^T e^{-\frac{1}{2\sigma^2}(y_i - \phi_0 - \phi_1\,(y_{i-1} - \phi_0))^2} = e^{-\frac{1}{2}\,E}
\end{equation}
where
\begin{equation}
E = \frac{1}{\sigma^2}(y_1 - \phi_0)^2 + \sum_{i=2}^T \frac{1}{\sigma^2}(y_i - \phi_0 - \phi_1(y_{i-1} - \phi_0))^2
\end{equation}
\begin{equation}
= \frac{1}{\sigma^2}(y_1 - \phi_0)^2 + \sum_{i=2}^T \frac{1}{\sigma^2}\left((y_i - \phi_0)^2 - 2\,(y_i - \phi_0)\,\phi_1\,(y_{i-1} - \phi_0) + \phi_1^2 (y_{i-1} - \phi_0)^2 \right)
\end{equation}
\begin{equation}
= \sum_{i=1}^{T-1}\frac{1}{\sigma^2}\,(1 + \phi_1^2)(y_i - \phi_0)^2 + \frac{1}{\sigma^2} (y_T - \phi_0)^2 + \sum_{i=1}^{T-1}\frac{1}{\sigma^2}\,2\,(-\phi_1)\,(y_{i+1}-\phi_0)\,(y_i-\phi_0).
\end{equation}
So the joint density is proportional to
\begin{align}
\mathrm{exp}\bigg(-\frac{1}{2}\,\bigg[\sum_{i=1}^{T-1}\frac{1}{\sigma^2}\,(1 + \phi_1^2)(y_i - \phi_0)^2 + \frac{1}{\sigma^2} (y_T - \phi_0)^2 \\+ \sum_{i=1}^{T-1}2\,\frac{1}{\sigma^2}\,(-\phi_1)\,(y_{i+1}-\phi_0)\,(y_i-\phi_0)\bigg]\bigg),
\end{align}
Observe that this exponent is a quadratic form of a vector consisting of variables $(y_i - \phi_0)$. Thus we conclude that the joint density is a multivariate normal with means $E(y_i) = \phi_0$, and the precision matrix can be read off from the previous expression, since $E = (y - \phi_0\mathbf{1})^\top\,\Sigma^{-1}\,(y-\phi_0\,\mathbf{1})$. Namely,
If $i=j$ and $i<T$, $\Sigma^{-1}_{ij} = (1 + \phi_1^2) / \sigma^2$
If $i=j=T$, $\Sigma^{-1}_{ij} = 1 / \sigma^2$
If $|i-j|=1$, $\Sigma^{-1}_{ij} = -\phi_1 / \sigma^2$
If $|i-j|>1$, $\Sigma^{-1}_{ij} = 0$,
which is indeed the form of the precision matrix that was claimed in the question.
|
Joint distribution of AR(1) model
Let us write the joint density as
\begin{equation}
p(y_1,\ldots,y_T) = p(y_1)\,p(y_2\mid y_1) \, p(y_3 \mid y_2,y_1) \ldots \, p(y_T \mid y_{T-1},\ldots y_1).
\end{equation}
Furthermore, since the pro
|
39,439
|
Joint distribution of AR(1) model
|
The title of the question points towards a functional specification of the form
$$y_t= \phi_0 - \phi_1\,\phi_0 + \phi_1\,y_{t-1} + e_t,~e_t \sim N(0,\sigma^2), t>1, |\phi_1|<1$$
with the error term $e_t$ being i.i.d.
Given the assumption on the initial available observation (which does not necessarily represent the beginning of the process, just the first observation of the sample), we can determine that the process from then on is heteroskedastic. Specifically,
$${\rm Var}(y_2) = \phi_1^2{\rm Var}(y_1) + \sigma^2 = (1+\phi_1^2)\sigma^2$$
$${\rm Var}(y_3) = \phi_1^2{\rm Var}(y_2) + \sigma^2 = [\phi_1^2(1+\phi_1^2)+1]\sigma^2$$
$${\rm Var}(y_4) = \phi_1^2{\rm Var}(y_3) + \sigma^2 = [(\phi_1^2)^3+(\phi_1^2)^2+(\phi_1^2)+1]\sigma^2$$
The pattern is clear and, asymptotically, it leads to the familiar $\lim_{t \rightarrow \infty} {\rm Var}(y_t) = \sigma^2/(1-\phi_1^2)$. But only asymptotically. The joint distribution of the sample therefore will be a joint distribution of random variables with different and monotonically increasing (but bounded) variances of the marginal distributions. The expected value is common for all observations, and equal to $\phi_0$.
We now obtain the covariances for a sample of three observations $\{y_1,y_2, y_3\}$. We have
$${\rm Cov}(y_2,y_1) = E(y_2y_1) - \phi_0^2 = E\Big(\phi_0y_1 - \phi_1\phi_0y_1 + \phi_1y_1^2+ e_2y_1\Big) - \phi_0^2$$
$$=\phi_0^2 - \phi_1\phi_0^2 + \phi_1\big({\rm Var}(y_1) + \phi_0^2\big) - \phi_0^2 = - \phi_1\phi_0^2 + \phi_1\sigma^2 + \phi_1\phi_0^2 $$
and so
$${\rm Cov}(y_2,y_1) = \phi_1\sigma^2,\;\;\; E(y_2y_1) = \phi_1\sigma^2 + \phi_0^2$$
Continuing,
$${\rm Cov}(y_3,y_1) = E(y_3y_1) - \phi_0^2 = E\Big(\phi_0y_1 - \phi_1\phi_0y_1 + \phi_1y_2y_1+ e_3y_1\Big) - \phi_0^2$$
$$= \phi_0^2 - \phi_1\phi_0^2 + \phi_1^2\sigma^2 + \phi_1\phi_0^2 - \phi_0^2$$
$$\implies {\rm Cov}(y_3,y_1) = \phi_1^2\sigma^2,\;\; E(y_3y_1) = \phi_1^2\sigma^2 + \phi_0^2$$
Finally,
$${\rm Cov}(y_3,y_2) = E(y_3y_2) - \phi_0^2 = E\Big(\phi_0y_3 - \phi_1\phi_0y_3 + \phi_1y_1y_3+ e_2y_3\Big) - \phi_0^2$$
$$=\phi_0^2 - \phi_1\phi_0^2 + \phi_1^3\sigma^2 + \phi_1\phi_0^2 +\phi_1\sigma^2-\phi_0^2$$
$$\implies {\rm Cov}(y_3,y_2) = \phi_1(1+\phi_1^2)\sigma^2$$
We observe that
$${\rm Cov}(y_3,y_2) \neq {\rm Cov}(y_2,y_1)$$
namely that the first-order autocovariance depends also on $t$. The covariance matrix of a sample of three observations is therefore
$${\rm Cov}(y_1,y_2,y_3)= \sigma^2
\begin{pmatrix}
1 & \phi_1 & \phi_1^2 \\
\phi_1 & (1+\phi_1^2) & \phi_1(1+\phi_1^2) \\
\phi_1^2 & \phi_1(1+\phi_1^2) & (1+\phi_1^2+\phi_1^4) \\
\end{pmatrix}$$
The inverse of this matrix (i.e. the precision matrix) (calculated online on this site) is given as
$$Q = {\rm Cov}^{-1}(y_1,y_2,y_3)= \frac {1}{\sigma^2}
\begin{pmatrix}
1+\phi_1^2 & -\phi_1 & 0 \\
-\phi_1 & 1+\phi_1^2 & -\phi_1 \\
0 & -\phi_1 & 1 \\
\end{pmatrix}$$
which is the general result in @JuhoKokkala's answer, specialized to $T=3$.
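The $T=3$ claim is easy to verify numerically (a Python/NumPy sketch with illustrative parameter values, not part of the original answer):

```python
import numpy as np

# Illustrative values; any |phi1| < 1 and sigma^2 > 0 behave the same way.
phi1, s2 = 0.5, 1.0

# Covariance matrix of (y_1, y_2, y_3) from the moments derived above.
Sigma3 = s2 * np.array([
    [1.0,       phi1,                  phi1 ** 2],
    [phi1,      1 + phi1 ** 2,         phi1 * (1 + phi1 ** 2)],
    [phi1 ** 2, phi1 * (1 + phi1 ** 2), 1 + phi1 ** 2 + phi1 ** 4],
])

# Claimed precision matrix.
Q3 = (1.0 / s2) * np.array([
    [1 + phi1 ** 2, -phi1,          0.0],
    [-phi1,         1 + phi1 ** 2, -phi1],
    [0.0,           -phi1,          1.0],
])

print(np.allclose(np.linalg.inv(Sigma3), Q3))  # True
```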
|
Joint distribution of AR(1) model
|
The title of the question points towards a functional specification of the form
$$y_t= \phi_0 - \phi_1\,\phi_0 + \phi_1\,y_{t-1} + e_t,~e_t \sim N(0,\sigma^2), t>1, |\phi_1|<1$$
with the error term $e
|
Joint distribution of AR(1) model
The title of the question points towards a functional specification of the form
$$y_t= \phi_0 - \phi_1\,\phi_0 + \phi_1\,y_{t-1} + e_t,~e_t \sim N(0,\sigma^2), t>1, |\phi_1|<1$$
with the error term $e_t$ being i.i.d.
Given the assumption on the initial available observation (which does not necessarily represent the beginning of the process, just the first observation of the sample), we can determine that the process from then on is heteroskedastic. Specifically,
$${\rm Var}(y_2) = \phi_1^2{\rm Var}(y_1) + \sigma^2 = (1+\phi_1^2)\sigma^2$$
$${\rm Var}(y_3) = \phi_1^2{\rm Var}(y_2) + \sigma^2 = [\phi_1^2(1+\phi_1^2)+1]\sigma^2$$
$${\rm Var}(y_4) = \phi_1^2{\rm Var}(y_3) + \sigma^2 = [(\phi_1^2)^3+(\phi_1^2)^2+(\phi_1^2)+1]\sigma^2$$
The pattern is clear and, asymptotically, it leads to the familiar $\lim_{t \rightarrow \infty} {\rm Var}(y_t) = \sigma^2/(1-\phi_1^2)$. But only asymptotically. The joint distribution of the sample therefore will be a joint distribution of random variables with different and monotonically increasing (but bounded) variances of the marginal distributions. The expected value is common for all observations, and equal to $\phi_0$.
We now obtain the covariances for a sample of three observations $\{y_1,y_2, y_3\}$. We have
$${\rm Cov}(y_2,y_1) = E(y_2y_1) - \phi_0^2 = E\Big(\phi_0y_1 - \phi_1\phi_0y_1 + \phi_1y_1^2+ e_2y_1\Big) - \phi_0^2$$
$$=\phi_0^2 - \phi_1\phi_0^2 + \phi_1\big({\rm Var}(y_1) + \phi_0^2\big) - \phi_0^2 = - \phi_1\phi_0^2 + \phi_1\sigma^2 + \phi_1\phi_0^2 $$
and so
$${\rm Cov}(y_2,y_1) = \phi_1\sigma^2,\;\;\; E(y_2y_1) = \phi_1\sigma^2 + \phi_0^2$$
Continuing,
$${\rm Cov}(y_3,y_1) = E(y_3y_1) - \phi_0^2 = E\Big(\phi_0y_1 - \phi_1\phi_0y_1 + \phi_1y_2y_1+ e_3y_1\Big) - \phi_0^2$$
$$= \phi_0^2 - \phi_1\phi_0^2 + \phi_1^2\sigma^2 + \phi_1\phi_0^2 - \phi_0^2$$
$$\implies {\rm Cov}(y_3,y_1) = \phi_1^2\sigma^2,\;\; E(y_3y_1) = \phi_1^2\sigma^2 + \phi_0^2$$
Finally,
$${\rm Cov}(y_3,y_2) = E(y_3y_2) - \phi_0^2 = E\Big(\phi_0y_3 - \phi_1\phi_0y_3 + \phi_1y_1y_3+ e_2y_3\Big) - \phi_0^2$$
$$=\phi_0^2 - \phi_1\phi_0^2 + \phi_1^3\sigma^2 + \phi_1\phi_0^2 +\phi_1\sigma^2-\phi_0^2$$
$$\implies {\rm Cov}(y_3,y_2) = \phi_1(1+\phi_1^2)\sigma^2$$
We observe that
$${\rm Cov}(y_3,y_2) \neq {\rm Cov}(y_2,y_1)$$
namely that the first-order autocovariance depends also on $t$. The covariance matrix of a sample of three observations is therefore
$${\rm Cov}(y_1,y_2,y_3)= \sigma^2
\begin{pmatrix}
1 & \phi_1 & \phi_1^2 \\
\phi_1 & (1+\phi_1^2) & \phi_1(1+\phi_1^2) \\
\phi_1^2 & \phi_1(1+\phi_1^2) & (1+\phi_1^2+\phi_1^4) \\
\end{pmatrix}$$
The inverse of this matrix (i.e. the precision matrix) (calculated online on this site) is given as
$$Q = {\rm Cov}^{-1}(y_1,y_2,y_3)= \frac {1}{\sigma^2}
\begin{pmatrix}
1+\phi_1^2 & -\phi_1 & 0 \\
-\phi_1 & 1+\phi_1^2 & -\phi_1 \\
0 & -\phi_1 & 1 \\
\end{pmatrix}$$
which is the general result in @JuhoKokkala's answer, specialized to $T=3$.
|
Joint distribution of AR(1) model
The title of the question points towards a functional specification of the form
$$y_t= \phi_0 - \phi_1\,\phi_0 + \phi_1\,y_{t-1} + e_t,~e_t \sim N(0,\sigma^2), t>1, |\phi_1|<1$$
with the error term $e
|
39,440
|
What does improper learning mean in the context of statistical learning theory and machine learning?
|
In statistical learning theory, the standard batch learning problem is defined in terms of a distribution $P$ over some space $\mathcal{Z}$ belonging to some set of distributions $\mathcal{P}$, a hypothesis class $\mathcal{H}$ and a loss function $\ell$, which assigns a (say) nonnegative real ("a loss") to pairs $(P,h)$ of distributions and hypotheses (i.e., $\ell: \mathcal{P} \times \mathcal{H} \to [0,\infty)$). Then, one is given a sequence of $n$ points $D_n = (Z_1,\dots,Z_n)\in \mathcal{Z}^n$, sampled in an iid fashion from $P$ and the job of the learning algorithm is to come up with a hypothesis $h_n\in \mathcal{H}$ based on $D_n$ that achieves a small (say) expected loss $\mathbb{E}[\ell(P,h_n)]$ (the random quantity in the above expression is $h_n$: $h_n$ depends on the data $D_n$, which is random, hence $h_n$ is also random).
In terms of the goal of learning, one criterion for evaluating the power of a learning algorithm is how fast the excess expected loss $\mathbb{E}[\ell(P,h_n)]-\inf_{h\in \mathcal{H}} \ell(P,h)$ (or excess risk) decreases with $n\to \infty$.
Improper learning changes this metric slightly to evaluate success by $\mathbb{E}[ \ell(P,h_n) ] - \inf_{h\in \mathcal{H}_0 } \ell(P,h)$ for some $\mathcal{H}_0\subset \mathcal{H}$. Intuitively, when $\mathcal{H}_0$ is a proper subset of $\mathcal{H}$, competing with the best hypothesis from $\mathcal{H}_0$ should be easier.
Where is this coming from? Learning is all about guessing the right bias. The bias here is expressed in terms of $\mathcal{H}_0$. The designer of the algorithm makes a guess on $\mathcal{H}_0$; the guess is that there will be a hypothesis in $\mathcal{H}_0$ which achieves a small loss. Next, the problem is to design an algorithm. However, does it make sense to require the algorithm to output hypotheses from $\mathcal{H}_0$? Unless some specific circumstances require this, why would we make this restriction? By allowing the learning algorithm to produce hypotheses in a larger class $\mathcal{H}_1$ which is in between $\mathcal{H}_0$ and $\mathcal{H}$, the algorithm designer's flexibility is increased, and hence a potentially lower excess risk over the best hypothesis in $\mathcal{H}_0$ can be achieved. Why not then allow $\mathcal{H}_1 = \mathcal{H}$? The answer depends on how the learning algorithm uses $\mathcal{H}_1$. If it really just uses a (potentially small) subset of it, then it won't hurt to have $\mathcal{H}_1 = \mathcal{H}$. However, many learning algorithms are designed to use the full hypothesis space that they are given, and they slow down (become more conservative) when used with a larger hypothesis class. With such algorithms it makes sense to use a proper subset of $\mathcal{H}$ as $\mathcal{H}_1$.
|
What does improper learning mean in the context of statistical learning theory and machine learning?
|
In statistical learning theory, the standard batch learning problem is defined in terms of a distribution $P$ over some space $\mathcal{Z}$ belonging to some set of distributions $\mathcal{P}$, a hypo
|
What does improper learning mean in the context of statistical learning theory and machine learning?
In statistical learning theory, the standard batch learning problem is defined in terms of a distribution $P$ over some space $\mathcal{Z}$ belonging to some set of distributions $\mathcal{P}$, a hypothesis class $\mathcal{H}$ and a loss function $\ell$, which assigns a (say) nonnegative real ("a loss") to pairs $(P,h)$ of distributions and hypotheses (i.e., $\ell: \mathcal{P} \times \mathcal{H} \to [0,\infty)$). Then, one is given a sequence of $n$ points $D_n = (Z_1,\dots,Z_n)\in \mathcal{Z}^n$, sampled in an iid fashion from $P$ and the job of the learning algorithm is to come up with a hypothesis $h_n\in \mathcal{H}$ based on $D_n$ that achieves a small (say) expected loss $\mathbb{E}[\ell(P,h_n)]$ (the random quantity in the above expression is $h_n$: $h_n$ depends on the data $D_n$, which is random, hence $h_n$ is also random).
In terms of the goal of learning, one criterion for evaluating the power of a learning algorithm is how fast the excess expected loss $\mathbb{E}[\ell(P,h_n)]-\inf_{h\in \mathcal{H}} \ell(P,h)$ (or excess risk) decreases with $n\to \infty$.
Improper learning changes this metric slightly to evaluate success by $\mathbb{E}[ \ell(P,h_n) ] - \inf_{h\in \mathcal{H}_0 } \ell(P,h)$ for some $\mathcal{H}_0\subset \mathcal{H}$. Intuitively, when $\mathcal{H}_0$ is a proper subset of $\mathcal{H}$, competing with the best hypothesis from $\mathcal{H}_0$ should be easier.
Where is this coming from? Learning is all about guessing the right bias. The bias here is expressed in terms of $\mathcal{H}_0$. The designer of the algorithm makes a guess on $\mathcal{H}_0$; the guess is that there will be a hypothesis in $\mathcal{H}_0$ which achieves a small loss. Next, the problem is to design an algorithm. However, does it make sense to require the algorithm to output hypotheses from $\mathcal{H}_0$? Unless some specific circumstances require this, why would we make this restriction? By allowing the learning algorithm to produce hypotheses in a larger class $\mathcal{H}_1$ which is in between $\mathcal{H}_0$ and $\mathcal{H}$, the algorithm designer's flexibility is increased, and hence a potentially lower excess risk over the best hypothesis in $\mathcal{H}_0$ can be achieved. Why not then allow $\mathcal{H}_1 = \mathcal{H}$? The answer depends on how the learning algorithm uses $\mathcal{H}_1$. If it really just uses a (potentially small) subset of it, then it won't hurt to have $\mathcal{H}_1 = \mathcal{H}$. However, many learning algorithms are designed to use the full hypothesis space that they are given, and they slow down (become more conservative) when used with a larger hypothesis class. With such algorithms it makes sense to use a proper subset of $\mathcal{H}$ as $\mathcal{H}_1$.
|
What does improper learning mean in the context of statistical learning theory and machine learning?
In statistical learning theory, the standard batch learning problem is defined in terms of a distribution $P$ over some space $\mathcal{Z}$ belonging to some set of distributions $\mathcal{P}$, a hypo
|
39,441
|
Minimization of the Sum of Absolute Deviations
|
So that this has some form of answer, let's look at what the OP has discovered, and then put some additional clarity/detail into that (I'm going to go into more detail than I normally would for a self-study question because of the issues with the question, which I think require some explanation):
The issue here is that with 3 parameters and two points the system is underdetermined.
Plotting the points immediately suggests that one can get a perfect fit with a straight line ($B_2=0$).
Simply looking at the plot, and the fact that we're fitting a quadratic through two points, tells us that a second perfect fit is trivial - choose any $B_2$ and solve for the remaining values.
This non-uniqueness doesn't have anything to do with the non-uniqueness of least absolute values regression, since it works just as well with least squares or many other criteria where a perfect fit yields a minimum. As mentioned, this is simply due to the system being underdetermined. [As a result, this strikes me as a poor example for illustrating anything in particular about least absolute values regression.]
To see nonuniqueness of a least absolute values regression that isn't simply due to the system being underdetermined (i.e. where least squares would also have a non-uniqueness problem), you'd need more data values.
For example, consider fitting a straight line to the following data:
x 0 1 2 3
y 0 2 3 3
Here, any line with the minimum value for $S=\sum_i|Y_i-B_0-B_1X_{i}|$ doesn't go through all the data points, but there's still a region of values that attains the minimum value for $S$ (deep blue region outlined in gray); larger values of S are less blue/redder:
Here are 4 of the lines, each corresponding to points in that optimal parameter region above:
All except the red one are points on the marked boundary in the first plot; the red one is an interior point inside the marked blue region in that first plot.
The red one is also least squares. It's an interior point in the (b1,b0) plot. The other three lines are corner points. If you imagine placing four (thin) poles at the indicated points in the (x,y) plot (sticking out in the z-direction, out of the screen), and pull a string taut close to the red line, then wiggle the string about within the constraints of the four poles, you're wandering about the optimal region.
There's a related example with additional explanation for an intercept-only L1 model here
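To make the non-uniqueness concrete, here is a short sketch (Python, not part of the original answer; the candidate lines pass through pairs of data points and are chosen for illustration, not taken from the plots) showing that several distinct lines attain the same minimal $S$ on this data:

```python
# Data from the example above.
x = [0, 1, 2, 3]
y = [0, 2, 3, 3]

def S(b0, b1):
    """Sum of absolute deviations for the line y = b0 + b1*x."""
    return sum(abs(yi - b0 - b1 * xi) for xi, yi in zip(x, y))

# Four candidate lines, each passing through a pair of data points
# ("corner"-type solutions of the L1 problem).
candidates = [(0.0, 1.5), (0.0, 1.0), (1.5, 0.5), (1.0, 1.0)]
values = [S(b0, b1) for b0, b1 in candidates]
print(values)        # all four lines attain S = 2.0
print(S(0.0, 0.0))   # a clearly worse line: S = 8.0
```

Since several distinct $(B_0, B_1)$ pairs give the same objective value, a numerical L1 solver may legitimately return any of them.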
|
Minimization of the Sum of Absolute Deviations
|
So that this has some form of answer, let's look at what the OP has discovered, and then put some additional clarity/detail into that (I'm going to go into more detail than I normally would for a self
|
Minimization of the Sum of Absolute Deviations
So that this has some form of answer, let's look at what the OP has discovered, and then put some additional clarity/detail into that (I'm going to go into more detail than I normally would for a self-study question because of the issues with the question, which I think require some explanation):
The issue here is that with 3 parameters and two points the system is underdetermined.
Plotting the points immediately suggests that one can get a perfect fit with a straight line ($B_2=0$).
Simply looking at the plot, and the fact that we're fitting a quadratic through two points, tells us that a second perfect fit is trivial - choose any $B_2$ and solve for the remaining values.
This non-uniqueness doesn't have anything to do with the non-uniqueness of least absolute values regression, since it works just as well with least squares or many other criteria where a perfect fit yields a minimum. As mentioned, this is simply due to the system being underdetermined. [As a result, this strikes me as a poor example for illustrating anything in particular about least absolute values regression.]
To see nonuniqueness of a least absolute values regression that isn't simply due to the system being underdetermined (i.e. where least squares would also have a non-uniqueness problem), you'd need more data values.
For example, consider fitting a straight line to the following data:
x 0 1 2 3
y 0 2 3 3
Here, any line with the minimum value for $S=\sum_i|Y_i-B_0-B_1X_{i}|$ doesn't go through all the data points, but there's still a region of values that attains the minimum value for $S$ (deep blue region outlined in gray); larger values of S are less blue/redder:
Here are 4 of the lines, each corresponding to points in that optimal parameter region above:
All except the red one are points on the marked boundary in the first plot; the red one is an interior point inside the marked blue region in that first plot.
The red one is also least squares. It's an interior point in the (b1,b0) plot. The other three lines are corner points. If you imagine placing four (thin) poles at the indicated points in the (x,y) plot (sticking out in the z-direction, out of the screen), and pull a string taut close to the red line, then wiggle the string about within the constraints of the four poles, you're wandering about the optimal region.
There's a related example with additional explanation for an intercept-only L1 model here
|
Minimization of the Sum of Absolute Deviations
So that this has some form of answer, let's look at what the OP has discovered, and then put some additional clarity/detail into that (I'm going to go into more detail than I normally would for a self
|
39,442
|
What is a multivariate random variable?
|
Yes, throwing two dice is a multivariate random variable. Specifically, you will get two independent and identically distributed discrete variables (assuming fair dice).
Throwing two dice and adding the results gets you a univariate random variable, with possible values between 2 and 12.
Picking a person at random and noting both their biological sex and their height is another multivariate random variable: a binary one and a more-or-less continuous one, and the two will not be independent any more.
One could argue that much of applied statistics is about drawing multivariate random variables and understanding just how exactly they are dependent. People will usually call all but one dimension "independent variables" and the last dimension "the dependent variable". You can also influence variables through your treatment.
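As a small illustration of the dice example (a Python sketch, not part of the original answer): enumerating the 36 equally likely pairs gives both the bivariate variable and the univariate sum, whose support is indeed $\{2,\dots,12\}$:

```python
from itertools import product
from collections import Counter

# Enumerate all 36 equally likely (d1, d2) pairs: the bivariate variable.
outcomes = list(product(range(1, 7), repeat=2))

# Collapse to the univariate sum d1 + d2 and count each value's frequency.
sums = Counter(d1 + d2 for d1, d2 in outcomes)

print(min(sums), max(sums))          # support runs from 2 to 12
print(sums[7], "of", len(outcomes))  # 7 is the most likely sum (6 of 36)
```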
|
What is a multivariate random variable?
|
Yes, throwing two dice is a multivariate random variable. Specifically, you will get two independent and identically distributed discrete variables (assuming fair dice).
Throwing two dice and adding t
|
What is a multivariate random variable?
Yes, throwing two dice is a multivariate random variable. Specifically, you will get two independent and identically distributed discrete variables (assuming fair dice).
Throwing two dice and adding the results gets you a univariate random variable, with possible values between 2 and 12.
Picking a person at random and noting both their biological sex and their height is another multivariate random variable: a binary one and a more-or-less continuous one, and the two will not be independent any more.
One could argue that much of applied statistics is about drawing multivariate random variables and understanding just how exactly they are dependent. People will usually call all but one dimension "independent variables" and the last dimension "the dependent variable". You can also influence variables through your treatment.
|
What is a multivariate random variable?
Yes, throwing two dice is a multivariate random variable. Specifically, you will get two independent and identically distributed discrete variables (assuming fair dice).
Throwing two dice and adding t
|
39,443
|
What is a multivariate random variable?
|
With multivariate distributions, correlations between variables are important. If there is no correlation between two variables, then you basically have two univariate distributions.
Throwing two dice would give a multivariate distribution, but would probably have a correlation of zero (the exceptions to this are interesting; for example, if you have two loaded dice from the same factory, the two dice would probably be correlated!).
In morphometrics, people study how different measurements of animals vary. For example, one might care that weight and height are correlated. You might also appreciate this article's other biology and genomic examples.
If simulations help you to learn by exploring data, here's some code to help you get started exploring a multivariate normal distribution in R:
library(MASS)
Sigma <- matrix(c(10, 4, 4, 10), 2, 2)  # symmetric, positive-definite covariance
d <- mvrnorm(n = 3000, mu = c(10, 3), Sigma = Sigma)
plot(d)
That produces this figure:
Edit: I corrected my answer based upon the comment.
|
39,444
|
Estimation of unit-root AR(1) model with OLS
|
It is generally assumed that the explanatory variables have finite moments at least up to second order. In this case, as the explanatory variable is a random walk, its variance is not finite. This makes the matrix $Q=\hbox{plim } X′X/n$ not finite, with the consequences discussed below.
The explanatory variable $x_{t-1}$ is not fixed (it is stochastic as it depends on $\epsilon$) and is not independent of the error term $\epsilon_t$. This makes OLS in general biased and inference is not valid in small samples.
The explanatory variable and $\epsilon_t$ are not independent of each other, but they are contemporaneously uncorrelated: $E(x_{t-1}\epsilon_t) = 0 \;\forall t$. In the classical regression model this opens the possibility for the OLS estimator to be consistent in large samples.
If the matrix $Q = \hbox{plim } X′X/n$ were a finite and positive definite matrix, then the F-test statistic would asymptotically follow the $\chi^2$ distribution.
As pointed out by @ChristophHanck, this matrix is not finite in this context. Hence, the Mann and Wald theorem is not applicable and inference based on OLS will not be reliable even in large samples.
You may be interested in this answer, which discusses similar issues in the context of a stationary AR(q) process.
|
39,445
|
Estimation of unit-root AR(1) model with OLS
|
A key departure from the standard OLS assumptions is that there is no weak LLN for the "average of the $X'X$-matrix", $1/T\sum_tx_{t-1}^2$. Instead, we have weak convergence to a functional of Brownian motion provided we scale by $T^2$, viz.
$$
T^{-2}\sum_{t=1}^Tx^2_{t-1}\Rightarrow\sigma^2\int_0^1W(r)^2d r
$$
I would, btw, not quite agree with @Alecos statement in the link you posted that there is no analytical solution to the distribution of the OLSE - we know the asymptotic distribution of the OLSE, when scaled with the suitable superconsistent rate $T$, to be
\begin{eqnarray*}
T\left(\hat{\beta}^{OLS}-1\right)&=&T\frac{\sum_{t=1}^Tx_{t-1}\epsilon_{t}}{\sum_{t=1}^Tx_{t-1}^2}\\
&=&\frac{T^{-1}\sum_{t=1}^Tx_{t-1}\epsilon_{t}}{T^{-2}\sum_{t=1}^Tx_{t-1}^2}\\
&\Rightarrow&\frac{\sigma^2/2\{W(1)^2-1\}}{\sigma^2\int_0^1W(r)^2d r}\\
&=&\frac{W(1)^2-1}{2\int_0^1W(r)^2d r},
\end{eqnarray*}
the "Dickey-Fuller-distribution" (JASA 1979).
|
39,446
|
Comparing (hidden) regression coefficients in simple linear regression?
|
EDIT to respond to the altered question:
Again you phrase your hypothesis based on parameters outside this model, which makes it a little uncertain what exactly you are getting at. But interpreting your hypotheses to be referring to the marginal effect of each of the three topics being covered in one text, I think what you are trying to do can be done. Basically the answer is given by following the suggestion in D. Stroet's answer, which is (sort of) equivalent to your own answer and the edited part in @EdM's answer:
$\beta_1=0$ indicates that increasing the content of weather [holding sports constant and consequently reducing education] has no bearing on reader rating. This is equivalent to saying that weather and education contribute equally to reader popularity.
$\beta_2=0$: the same reasoning applies. This is equivalent to saying that sports and education contribute equally to reader popularity.
$\beta_1=\beta_2$ can be tested just as you intended in your question. Finally, this implies that sports and weather contribute equally to reader popularity.
Original Answer:
I can not comment, hence I'm putting this in an answer. Whoever can, please convert this into a comment.
@Sibbs, can you give a reasonable example, where testing equality of these partial effects (I assume that's what you mean, since you never define $\beta_3$) makes sense, yet such a restriction ($x_1+x_2+x_3=1$) holds? As @EdM points out your "solution" already points to the conceptual flaw of your model/question. Maybe if you elaborate on what brought you there, someone could help you.
It would help to know more about the actual data underlying your model, as presented in the first question. You provided a 3-variable case "for simplicity" but that leads to this problem with only 2 independent coefficients when all $x_i$ must add up to 1. If these are data on fruit, however, there may be additional "components" (like water content, fiber) that have no bearing on perceived "sweetness" other than their influences on the effective concentrations of the sweetness-associated $x_i$. Knowing more about all the original data, as opposed to how some variables have already been transformed in a way that requires them to add to 1, may help resolve your underlying issue without trying to perform a set of comparisons that can't really be done with the restrictions you have imposed.
|
39,447
|
Comparing (hidden) regression coefficients in simple linear regression?
|
To test if $\beta_1=\beta_3$, we need to "reframe" the regression a bit.
$$Y=\beta_0+\beta_1x_1+\beta_2x_2$$
$$Y=\beta_0+\beta_1x_1+\beta_2(1-x_1-x_3)$$
$$Y=\beta_0+\beta_2+(\beta_1-\beta_2)x_1-\beta_2x_3$$
Now $\beta_1=\beta_3$ is equivalent to $\beta_1-\beta_2=-\beta_2$, i.e. to $\beta_1=0$. So to test $\beta_1=\beta_3$, just test whether $\beta_1=0$.
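A quick simulated check of this reparametrisation (the data and coefficient values below are made up for illustration): with $x_2=1-x_1-x_3$, the fitted intercept and slopes should be close to $\beta_0+\beta_2$, $\beta_1-\beta_2$ and $-\beta_2$:

```r
set.seed(1)
n <- 200
p <- matrix(rexp(3 * n), n, 3)
p <- p / rowSums(p)                 # compositional: each row sums to 1
x1 <- p[, 1]; x2 <- p[, 2]; x3 <- p[, 3]

b0 <- 1; b1 <- 2; b2 <- 0.5        # arbitrary true values
y <- b0 + b1 * x1 + b2 * x2 + rnorm(n, sd = 0.1)

# Estimates should be near b0 + b2 = 1.5, b1 - b2 = 1.5, -b2 = -0.5
coef(lm(y ~ x1 + x3))
```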
|
39,448
|
Comparing (hidden) regression coefficients in simple linear regression?
|
It's not possible to do all 3 pairwise comparisons of regression coefficients that you wish, because you only really have two regression coefficients.
The value of $\beta_3$ in a linear regression would be the effect of a change in $x_3$ on $Y$ with $x_1$ and $x_2$ held constant. That is not possible in your situation where the $x_i$ must all add up to 1.
All of the information about your regression is included in any choice of 2 of your 3 $x_i$. This is why your algebra came up with the result that the only way for $\beta_1=\beta_3$ is for both to be zero.
Edit based on further reflection:
What you can do in this case comes from the answer by @Glen_b to your previous question. The $\beta_0$ intercept in the model you present on this page represents the value of the Sweetness outcome variable ($Y$) when $x_3$ is the only fruit component present ($x_1$, $x_2$ both 0).
In that context, $\beta_1$ represents the change in Sweetness per change in $x_1$ when $x_2$ is held constant. Since all the $x_i$ must add to 1 and all are (presumably) non-negative, the only way for this to occur is for $x_3$ to go down as $x_1$ goes up. Thus $\beta_1$ represents the excess Sweetness of $x_1$ as it replaces $x_3$ in the mix; the same argument holds for $\beta_2$ as the excess Sweetness provided by $x_2$ as it replaces $x_3$.
So to compare the relative contributions of the $x_i$ to Sweetness, significantly non-zero values of $\beta_1$ and $\beta_2$ mean that $x_1$ resp. $x_2$ contribute differently to sweetness than does $x_3$. The comparison between $\beta_1$ and $\beta_2$ distinguishes $x_1$ from $x_2$.
Although this answers your fundamental question (not the question you posed for the way you originally wanted to proceed), I urge you to consider the original data underlying this work and whether you would be better served by analyzing data closer to their original forms rather than in a form forced to add up to 1, as the answer by @sheß suggests. When data take on necessarily restricted ranges I worry that linear regression might not model the data adequately; I trust that you have extensively examined diagnostics to make sure that your model actually works here.
|
39,449
|
Comparing (hidden) regression coefficients in simple linear regression?
|
My suggestion is to interpret the t-statistics of the coefficients on $x_1$ and $x_2$ in relation to $x_3$, because the latter is the reference group.
That will answer 2. and 3. Test the equality of the coefficients on $x_1$ and $x_2$ with an appropriate test, which will answer 1.
|
39,450
|
What does an error in ANOVA indicate?
|
Many models are based on a model for the dependent variable of the form "population mean + variation about the mean". Indeed, t-tests, one and two way ANOVA, multiple regression are all examples of this.
In the case of a two-way ANOVA with interaction, the model (in simplest terms) looks like this:
$$y_{ijk}=μ_{ij}+ε_{ijk}, $$
-- that is, the $k$-th value at level $i$ of the "row" factor and level $j$ of the "column" factor (the IVs) consists of a population mean for that combination of $i$ and $j$ plus the individual variation about that mean (since the $k$-th observation in factor-combination $i,j$ will not equal the population mean for that subgroup).
Typically we decompose the mean for the two-way ANOVA into main effects and interaction: $μ_{ij}=μ+α_i+β_j+ (αβ)_{ij}$, giving:
$$y_{ijk}=μ+α_i+β_j+ (αβ)_{ij}+ε_{ijk}, $$
so that an observation consists of an overall (population) mean effect, plus a (population) "row" effect (representing deviations from that overall mean due to the row factor), a corresponding "column" effect, and interaction effect (an additional deviation for the particular factor-combination) and the individual variation from the mean.
Going back to the earlier form: $y_{ijk}=μ_{ij}+ε_{ijk},$ the individual variation about the population mean at factor-levels $i$ and $j$ is assumed to be a zero-mean, constant-variance random term, called the "error term".
It doesn't necessarily consist of actual errors in the ordinary sense of the word; the reasons for that are partly historical. It's just a description of the way the observations will vary from the population cell-means. That error term is an important part of the model. However, it may include things we would normally think of as error (such as measurement error in the DV). [The IVs are assumed to be measured without error, by the way, in the usual regression and ANOVA. This is usually not a problem for factors in ANOVA, especially where experiments are concerned.]
In normal theory inference (the usual confidence intervals and hypothesis tests), the error term is assumed to be normally distributed.
Now, why do we have $\text{SS(error)}$ and $\text{df(error)}$ and so on?
The variance of the $y$'s about the overall mean ($\mu$) is decomposed into portions explainable as variation of cell means about the population mean (variation of $\mu_{ij}$ about $\mu$) and random variation about the cell means (unexplained variability in the data). The first one is further decomposed into variance terms for row effects, column effects and interaction.
Now, if there really are no row, column or interaction effects at the population level, those variances for row, column and interaction will still be non-zero due to the variation about the overall mean - but they'll be relatively small, and their typical size is a function of the variance of the error term ($\text{var}(\varepsilon)=\sigma^2$); we can even work out what distribution the estimates of these components of the y-variance should have. But if there are real row-, column- and interaction-effects, those components of the y-variance will typically be larger and have a different distribution.
So to investigate the size of an effect (say the interaction effect) in ANOVA, we compare the size of the implied value of $\sigma^2$ that would result if the effect were zero with the estimate from the residuals of the fitted model (the one that estimates $\text{var}(\varepsilon)$ directly). The ratio of these two variance estimates (the F statistic) will be (more or less) close to 1 if the effect is zero, and tends to be larger otherwise.
We do the F-test to see if that ratio is bigger than could reasonably be explained by random variation (with no actual effect -- no interaction say). If it is, we'd reject the null hypothesis that the particular effect is zero.
This kind of calculation -- using ratios of estimates of variances to decide if effects that relate cell means are bigger than zero -- is called analysis of variance.
So terms like $\text{SS(error)}$ and $\text{df(error)}$ are central to figuring out whether there's evidence that the (IV) factors we're looking at really change the mean of the dependent variable or not.
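To see where these quantities appear in practice, here is a minimal two-way ANOVA on simulated data in R (factor names and effect sizes are arbitrary):

```r
set.seed(1)
d <- expand.grid(row = factor(1:3), col = factor(1:2), rep = 1:10)
# population cell means: additive row and column effects, no interaction
d$y <- with(d, as.numeric(row) + 2 * (col == "2")) + rnorm(nrow(d))

fit <- aov(y ~ row * col, data = d)
summary(fit)  # the "Residuals" line holds SS(error) on df(error) = 60 - 6 = 54
```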
|
39,451
|
What does an error in ANOVA indicate?
|
I just wanted to add some information to @Glen_b's nice answer (+1). Perhaps, the OP knows that, but I would still clarify the terminology a little to the best of my knowledge/understanding.
$SS(error)$ represents error (residual) sum-of-squares and usually is referred to as $SSE$. Consequently, $df(error)$ represents degrees of freedom for error. I think that it is different from regression degrees of freedom. It is also my understanding that this term is, generally, different from degrees of freedom as a parameter for probability distributions. Moreover, it might be useful to note the existence of effective degrees of freedom (both regression and error/residual ones).
|
39,452
|
Derivation of MLE of linear regression: and now? Why is there discrepancy to lm in R?
|
In short, the discrepancy is because you're not doing it correctly.
The least squares estimates of regression coefficients (which are ML at the normal) are $(X'X)^{-1}X'y$, where $X$ consists of a column of 1's beside a column of the independent variable. (In practice, you don't actually compute the inverse.)
However, your approach works for regression through the origin. Ordinary linear regression passes through $(\bar{x},\bar{y})$.
So if you do it by mean correcting first, your approach should work for simple linear regression (at least if you correct $XX'$ in your post to $X'X$ first):
step 1: mean correct x and y:
x <- matrix(c(60, 50, 30, 120, 200, 70))
y <- matrix(c(8, 7, 5, 10, 11, 6))
xm <- x-mean(x)
ym <- y-mean(y)
step 2: apply your approach:
slope=crossprod(xm,ym)/crossprod(xm) # a more efficient way to do your calculation
intercept=mean(y)-slope*mean(x)
print(c(intercept,slope),d=4)
[1] 4.89393 0.03328
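The same arithmetic can be cross-checked outside R. This sketch (pure Python, using the six data points above) reproduces the mean-corrected slope and intercept:

```python
# Simple linear regression via mean-centering, mirroring the two R steps above.
x = [60.0, 50.0, 30.0, 120.0, 200.0, 70.0]
y = [8.0, 7.0, 5.0, 10.0, 11.0, 6.0]

xbar, ybar = sum(x) / len(x), sum(y) / len(y)
xm = [xi - xbar for xi in x]                 # step 1: mean correct x
ym = [yi - ybar for yi in y]                 # step 1: mean correct y

# step 2: slope = crossprod(xm, ym) / crossprod(xm), intercept from the means
slope = sum(a * b for a, b in zip(xm, ym)) / sum(a * a for a in xm)
intercept = ybar - slope * xbar

print(round(intercept, 5), round(slope, 5))  # 4.89393 0.03328
```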
|
39,453
|
Derivation of MLE of linear regression: and now? Why is there discrepancy to lm in R?
|
Thanks a lot!
I understood the problem and your solution.
But I don't get how this could be computed using the $\textbf{X}$ 2xn matrix.
I don't know how to solve this for $\beta_0$ and $\beta_1$:
$$ \beta = \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} = \mathbf{X}^T y [\mathbf{X}^T \mathbf{X}]^{-1}= \frac{\begin{bmatrix} 1& 1 & \dots &1 \\ x_1 & x_2 & \dots & x_n \end{bmatrix} \begin{bmatrix}y_1\\y_2\\ \vdots \\ y_n \end{bmatrix}} {\begin{bmatrix} 1& 1 & \dots &1 \\ x_1 & x_2 & \dots & x_n \end{bmatrix} \begin{bmatrix}x_1 \\x_2\\\vdots \\x_n \end{bmatrix}}$$
This should then go through the origin, right?
I know that $\beta_0$ is set 0 when forced through the origin, but why do I need the 1 column in the $\textbf{X}$ matrix then?
Best,
Franz
|
39,454
|
Derivation of MLE of linear regression: and now? Why is there discrepancy to lm in R?
|
The expression $\hat{\beta} = [X^TX]^{-1}X^Ty$ is a product of matrices. Matrix products generally do not commute, and the term $[X^TX]^{-1}$ is a matrix inverse, not an elementwise division. You can find information about this by doing a search on Linear Algebra.
Further, the column of ones is always needed since the expression derives from $y = [1, x][\beta_0, \beta_1]^T$.
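To make the role of the ones column concrete, here is a minimal sketch (pure Python, using the six data points from the earlier answer; the 2×2 normal equations are solved by hand rather than with a matrix library): with the ones column included, $[X^TX]^{-1}X^Ty$ recovers both the intercept and the slope.

```python
x = [60.0, 50.0, 30.0, 120.0, 200.0, 70.0]
y = [8.0, 7.0, 5.0, 10.0, 11.0, 6.0]
n = len(x)

# With a column of ones beside x, X'X and X'y reduce to these sums:
# X'X = [[n, sum(x)], [sum(x), sum(x^2)]],  X'y = [sum(y), sum(x*y)]
sx, sxx = sum(x), sum(xi * xi for xi in x)
sy, sxy = sum(y), sum(xi * yi for xi, yi in zip(x, y))

# Solve the 2x2 system via the explicit inverse.
det = n * sxx - sx * sx
b0 = (sxx * sy - sx * sxy) / det   # intercept (beta_0)
b1 = (n * sxy - sx * sy) / det     # slope (beta_1)
```

Dropping the ones column is equivalent to forcing $\beta_0 = 0$, i.e. regression through the origin.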
|
39,455
|
generate a time series comprising seasonal, trend and remainder components in R
|
One possibility is to generate the data from the state-space representation of the basic structural time series model described in Harvey (1989).
Harvey, A. C. (1989)
Forecasting, Structural Time Series Models and the Kalman Filter.
Cambridge University Press.
The basic structural model is defined as follows:
\begin{eqnarray*}
\begin{array}{rll}
\hbox{observed series:} & y_t = \mu_t + \gamma_t + \epsilon_t , &
\epsilon_t \sim \hbox{NID}(0,\, \sigma^2_\epsilon) ; \\
\hbox{latent level:} & \mu_t = \mu_{t-1} + \beta_{t-1} + \xi_t , &
\xi_t \sim \hbox{NID}(0,\, \sigma^2_\xi) ; \\
\hbox{latent drift:} & \beta_t = \beta_{t-1} + \zeta_t , &
\zeta_t \sim \hbox{NID}(0,\, \sigma^2_\zeta) ; \\
\hbox{latent seasonal:} & \gamma_t = \sum_{j=1}^{s-1} -\gamma_{t-j} + \omega_t , &
\omega_t \sim \hbox{NID}(0,\, \sigma^2_\omega) , \\
\end{array}
\end{eqnarray*}
for $t=s,\dots,n$; $s$ is the periodicity of the data
(e.g. $s=4$ for quarterly data).
The model provides a flexible framework to generate the kind of the data you are interested in. Setting $\sigma^2_\omega=0$ yields a model with deterministic seasonality. Setting also $\gamma_1=\dots=\gamma_{s-1}=0$ and $\sigma^2_\zeta=0$ removes the seasonal component and gives the local level model (random walk plus noise model with drift $\beta_0$). If $\sigma^2_\zeta > 0$ the local trend model is obtained, where the drift follows a random walk.
The function datagen.stsm in package stsm generates data from this model. For example, the data employed in some of the simulation exercises used to test the package are generated as follows:
# generate a quarterly series from a local level plus seasonal model
require(stsm)
pars <- c(var1 = 300, var2 = 10, var3 = 100)
m <- stsm.model(model = "llm+seas", y = ts(seq(120), frequency = 4),
  pars = pars, nopars = NULL)
ss <- char2numeric(m)
set.seed(123)
y <- datagen.stsm(n = 120, model = list(Z = ss$Z, T = ss$T, H = ss$H, Q = ss$Q),
n0 = 20, freq = 4, old.version = TRUE)$data
plot(y, main = "data generated from the local-level plus seasonal component")
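For readers without the stsm package, the recursions above are also easy to simulate directly. This is a minimal pure-Python sketch (not the stsm implementation; the initial states and variances below are made up) of the basic structural model:

```python
import random

def bsm_simulate(n, s, sd_eps, sd_xi, sd_zeta, sd_omega,
                 mu0=0.0, beta0=0.1, gamma0=None, seed=123):
    """Simulate y_t = mu_t + gamma_t + eps_t with a random-walk level and drift,
    and a seasonal component that sums (up to noise) to zero over s periods."""
    rng = random.Random(seed)
    mu, beta = mu0, beta0
    # last s-1 seasonal states gamma_{t-1}, ..., gamma_{t-s+1}
    gammas = list(gamma0) if gamma0 else [0.0] * (s - 1)
    y = []
    for _ in range(n):
        gamma = -sum(gammas) + rng.gauss(0.0, sd_omega)   # seasonal recursion
        y.append(mu + gamma + rng.gauss(0.0, sd_eps))     # observation equation
        mu = mu + beta + rng.gauss(0.0, sd_xi)            # level: random walk + drift
        beta = beta + rng.gauss(0.0, sd_zeta)             # drift: random walk
        gammas = [gamma] + gammas[:-1]
    return y

# sd_omega = 0 gives deterministic seasonality; sd_zeta = 0 freezes the drift
y = bsm_simulate(n=120, s=4, sd_eps=1.0, sd_xi=0.5, sd_zeta=0.0, sd_omega=0.0,
                 gamma0=[5.0, -3.0, -1.0])
```

With all variances set to zero the output reduces to a pure linear trend plus a fixed seasonal pattern, which is a handy sanity check.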
|
39,456
|
Why does Tableau's Box/Whisker plot show outliers automatically and how can I get rid of it?
|
The usual (and original) definition of a box and whisker plot does include outliers (indeed, Tukey had two kinds of outlying points, which these days are often not distinguished).
Specifically, the ends of the whiskers in the Tukey boxplot go at the nearest observations inside the inner fences, which are generally at the upper hinge + 1.5 H-spreads and lower hinge - 1.5 H-spreads (basically, UQ + 1.5 IQR and LQ - 1.5 IQR). What's outside those is marked as outliers.
That's what R does, for example:
There are many variations on the box plot, and some packages implement other things than the Tukey boxplot, but it's the most common one. Indeed, Wickham & Stryjewski's "40 years of boxplots" mentions numerous variations (and that's only a fraction of what can be found out there).
See Wikipedia's article on the box plot for some basic details.
Incidentally, Tableau isn't just showing outliers - it's showing all the data there. You can see it's marking points between the ends of the whiskers, and even points inside the boxes, not just the ones outside the inner fences.
Tableau describes its boxplots here; as you see the description broadly matches what I describe for Tukey boxplots above.
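For concreteness, here is a short sketch of the Tukey rule described above (pure Python with hypothetical data; note that quartile conventions vary between packages, so hinge values may differ slightly from R's or Tableau's):

```python
import statistics

def tukey_boxplot_stats(data, k=1.5):
    """Return (lower whisker, upper whisker, outliers) using Tukey's rule:
    fences at Q1 - k*IQR and Q3 + k*IQR; whiskers end at the nearest
    observations inside the fences; points beyond the fences are outliers."""
    q1, _, q3 = statistics.quantiles(data, n=4, method="inclusive")
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    inside = [x for x in data if lo_fence <= x <= hi_fence]
    outliers = sorted(x for x in data if x < lo_fence or x > hi_fence)
    return min(inside), max(inside), outliers

lo_w, hi_w, outliers = tukey_boxplot_stats([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])
# whiskers end at 1 and 9; the point 100 is flagged as an outlier
```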
Edit: This is just to add a drawing of what the boxplot elements look like in the Schmid and Crowe references mentioned in comments so people don't have to chase them down to see what was being discussed:
(the Crowe version is slightly tweaked here in a couple of ways, one of which makes it seem a bit more boxplot-like; I may do a more faithful version later)
|
39,457
|
Why does Tableau's Box/Whisker plot show outliers automatically and how can I get rid of it?
|
Tableau offers two options - the schematic box plot, often referred to as a Tukey box plot, and the skeletal box plot. The latter has whiskers extending from the minimum to the maximum; the former has whiskers extending to the nearest data points within 1.5 IQR of the hinges. There is an option to toggle whether to show all points in the visualization or just the outliers.
|
39,458
|
Inconsistency between R and SAS for MLE on Weibull
|
First, recall that the fitdistr function (from the MASS package) is a very general function that can work with nearly any distribution. The warnings come from non-allowed parameter values (e.g. negative scale or shape) met during the optimisation, which is unconstrained by default.
It seems a good idea here to try a specific MLE for the Weibull distribution. A quite well-known fact is that ML estimation of the two-parameter Weibull can rely on a concentration of the log-likelihood, leading to an easier one-dimensional optimisation. Moreover, the concentrated log-likelihood is concave, so there is a unique ML estimate.
The problem here is that the log-likelihood is quite flat near the optimum, so different optimisations lead to different results, as reported by @Glen_b. Moreover, the data scaling is prone to numerical problems. After rescaling, similar results are obtained with or without concentration. A general practical finding about MLE is that using poorly scaled data can be enough to ruin the estimation.
> library(Renext) ## for concentrated log-lik
> try(fweibull(Y)) ## error (numerical pb with information matrix)
> fit <- fweibull(Y / 1000) ## works
> ## set parameters and logLik back to original scale
> fit$est * c(1, 1000)
shape scale
2.126225 1563.094460
> fit$sd * c(1, 1000)
shape scale
0.2444308 114.1293266
> fit$loglik - length(Y) * log(1000)
[1] -362.2237
> library(MASS)
> ## set parameters and logLik back to original scale
> fit2 <- fitdistr(Y / 1000, "weibull")
> fit2$est * c(1, 1000)
shape scale
2.126231 1563.095165
> fit2$sd * c(1, 1000)
shape scale
0.2288605 114.9071653
> fit2$loglik - length(Y) * log(1000)
[1] -362.2237
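The concentration idea is easy to sketch outside R. The following pure-Python illustration uses simulated data (not the 46 values from the question) and a plain ternary search rather than Renext's implementation: the scale is profiled out via $\hat\lambda(k)^k = \overline{x^k}$, leaving a concave one-dimensional problem in the shape $k$.

```python
import math
import random

def weibull_profile_loglik(k, x):
    # Concentrated (profile) log-likelihood: substitute the profile scale
    # lambda_hat(k)^k = mean(x_i^k) back into the Weibull log-likelihood.
    n = len(x)
    mk = sum(xi ** k for xi in x) / n
    return (n * math.log(k) - n * math.log(mk)
            + (k - 1) * sum(math.log(xi) for xi in x) - n)

def weibull_mle(x, lo=0.05, hi=20.0, iters=100):
    # The profile log-likelihood is concave in k, so ternary search suffices.
    for _ in range(iters):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if weibull_profile_loglik(m1, x) < weibull_profile_loglik(m2, x):
            lo = m1
        else:
            hi = m2
    k = (lo + hi) / 2
    scale = (sum(xi ** k for xi in x) / len(x)) ** (1 / k)
    return k, scale

rng = random.Random(42)
x = [rng.weibullvariate(1563.0, 2.13) for _ in range(2000)]  # (scale, shape)
k_hat, scale_hat = weibull_mle(x)    # close to the true shape 2.13 and scale 1563
```

Rescaling the data (e.g. dividing by 1000, as above) only shifts the scale estimate, so the concentration and the rescaling trick combine naturally.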
|
39,459
|
Inconsistency between R and SAS for MLE on Weibull
|
An optimization function shouldn't be expected to give identical answers to a similar function in a different package - or even to the same function with different options.
I tried a variety of different optimizers and starting places in fitdistr. They generally gave really similar results, of which the SAS and the result you got with fitdistr in R were typical.
I have included one of those fits, using a different optimizer in fitdistr and non-default starting point. In terms of the resulting fit, all three are essentially indistinguishable (and your two results are more alike than the third):
I don't think anything is amiss.
The warning should not be ignored but investigated as far as possible; sometimes, though, errors (or in this case, warnings) can be generated without indicating that there's any convergence problem. You should try to figure out what caused it. Trying different starting points and optimizers (and plotting the resulting fits) should indicate whether there's much of an issue.
[Ideally, you should plot the function in a 3D plot (or its contours in a 2D plot) near the identified optimum, which will help identify a number of potential problems.]
With the Weibull, one thing you can do is use the survreg function in the survival package, which will fit a Weibull as its default model. Its two parameters are related to the usual Weibull ones (this is described in the help on survreg). You just want a constant-mean model:
> survreg(Surv(Y)~1)
Call:
survreg(formula = Surv(Y) ~ 1)
Coefficients:
(Intercept)
7.354423
Scale= 0.4703164
Loglik(model)= -362.2 Loglik(intercept only)= -362.2
n= 46
> exp(7.354423) # exponentiate the Intercept
[1] 1563.095
> 1/0.4703164 # take inverse of the Scale
[1] 2.126228
summary(survreg) will give standard errors on the scale it uses, but if you take say a 95% CI and transform the endpoints, they can be used as a CI for the transformed parameters.
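The back-transformation at the end can be sanity-checked in any language; here is the same arithmetic in Python, mapping survreg's (Intercept, Scale) to the usual Weibull (scale, shape):

```python
import math

# survreg reports log(Weibull scale) as the intercept and 1/shape as its "Scale"
intercept, surv_scale = 7.354423, 0.4703164

weibull_scale = math.exp(intercept)   # about 1563.095, matching fitdistr's scale
weibull_shape = 1.0 / surv_scale      # about 2.126228, matching fitdistr's shape
```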
|
39,460
|
Inconsistency between R and SAS for MLE on Weibull
|
While the SAS output is better than the R output, the unpleasant fact is that both perform rather poorly. To see this, note that the gradient at the reported solution should vanish (i.e. equal 0), whereas for both the R and SAS results this is not the case.
In particular, let $X \sim \text{Weibull}(b,c)$ with pdf $f(x)$ [equation displayed as an image in the original; not preserved].
I am going to activate mathStatica's SuperLog function [code displayed as an image; not preserved].
Then, the exact symbolic log-likelihood for $\theta = (b,c)$ is given by [equation displayed as an image; not preserved].
Replacing $(x_1, \dots, x_n)$ by the $n=46$ data values yields the exact observed log-likelihood [expression displayed as an image; not preserved].
For the R solution and the SAS solution, the gradient vector, calculated at each reported solution, is [output displayed as an image; not preserved].
At the optimal solution, the gradient should vanish. The SAS solution is better than the R solution, but both are poor. The solution reported by Yves does much better [output displayed as an image; not preserved] ... but can still be easily improved upon.
The Hessian matrix (at the solution) and the eigenvalues of the Hessian should also be calculated to ensure that the observed log-likelihood is concave in the neighbourhood.
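The gradient check advocated here can be reproduced without mathStatica. This pure-Python sketch (toy data, not the question's 46 observations) writes down the analytic score of the Weibull log-likelihood in shape $k$ and scale $\lambda$, fits by bisecting the profiled score, and verifies that both gradient components vanish at the fitted optimum:

```python
import math

def weibull_score(k, lam, x):
    """Analytic gradient (score) of the Weibull log-likelihood with shape k
    and scale lam; both components must be ~0 at the MLE."""
    n = len(x)
    z = [(xi / lam) ** k for xi in x]
    logr = [math.log(xi / lam) for xi in x]
    dk = n / k + sum(logr) - sum(zi * li for zi, li in zip(z, logr))
    dlam = (k / lam) * (sum(z) - n)
    return dk, dlam

def weibull_mle(x, lo=0.05, hi=20.0, iters=200):
    # Profile out the scale (lam(k)^k = mean(x^k)); the remaining score in k
    # is monotone decreasing, so bisect it to its single root.
    def profile_dk(k):
        lam = (sum(xi ** k for xi in x) / len(x)) ** (1 / k)
        return weibull_score(k, lam, x)[0]
    for _ in range(iters):
        mid = (lo + hi) / 2
        if profile_dk(mid) > 0:
            lo = mid
        else:
            hi = mid
    k = (lo + hi) / 2
    lam = (sum(xi ** k for xi in x) / len(x)) ** (1 / k)
    return k, lam

x = [1.0, 2.0, 3.0, 4.0, 5.0]                 # toy data
k_hat, lam_hat = weibull_mle(x)
dk, dlam = weibull_score(k_hat, lam_hat, x)   # both approximately 0
```

Evaluating the same score at a poorly converged solution gives visibly non-zero components, which is exactly the diagnostic used above.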
|
39,461
|
How to know if "best fit line" really represents known set of data?
|
I want to know if my line represents those datapoints
The problem with such an idea is that there are many ways in which a fitted model might be unrepresentative of the data. As a result, a single measure won't really capture the ways in which a model can fail to be representative.
This is why regression diagnostics consist not of a single number, but of multiple displays - some of which might reveal any of several different problems with the model.
Taking you to be asking about simple regression (single-x), here are a couple of examples of things you might consider as "not representative":
The underlying relationship you're attempting to model with $E(y|x)=\beta_0+\beta_1 x$ might be nonlinear. Some form of lack-of-fit measure can sometimes be useful - this is easy if you have replicates, but if not, some additional assumptions (such as local smoothness) can allow us to get some measurement of that (lowess, for example, or some form of regression spline, can be useful for picking up such departures, and a measure of unrepresentativeness would relate to the improvement given by such a nonlinear model). The more common approach, however, is to examine residuals (where, again, tools like lowess may often be used).
The model for the mean $E(y|x)=\beta_0+\beta_1 x$ might be correct, but the mean may itself be quite unrepresentative of the data (in that much of the data are not behaving like the mean - i.e. the mean may not be a useful descriptor of the conditional distribution):
Here we have a complex situation --
for small $x$ the conditional distribution of the data is unimodal and the line is representative of the relationship between $y$ and $x$ (e.g. near the left end of the data, the mean, median and mode are all linear in $x$), but
for large $x$, the distribution about the line is strongly bimodal, and as such, the line representing $E(y|x)$ - while correctly describing the conditional mean - doesn't represent the data; indeed in that region the relationship of the two modes with the mean are each nonlinear, even though the mean is linear throughout.
There are additional issues besides the form of the mean that you may want to consider. For example, if the variance is far from constant, the usual regression line may be inefficiently estimated, and the usual inference won't work as expected. Additionally, if the aim is to describe the way that $y$ is related to $x$, describing the spread may be just as important as describing the mean.
--
You can construct measures of various aspects of representativeness, but because 'representativeness' is multifaceted, a single measure won't meaningfully capture all those aspects. Indeed, as we see in the example, how representative a line is might be different in different parts of the data. A single number would obscure such subtleties.
[In particular situations, of course, you may be able to discount/disregard many of the possible ways for the line to be unrepresentative, and hence say "I'm mostly interested in one particular aspect" - such as nonlinearity - and then design some measure of that. That may be just fine in situations in which you can do that, especially if automation is needed.]
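To make the nonlinearity point concrete, here is a small sketch (pure Python, made-up quadratic data): the least-squares line attains a high $R^2$, yet the residuals show the systematic sign pattern (positive, negative, positive) that a residual plot or lowess smooth would immediately reveal.

```python
x = list(range(11))            # 0..10
y = [xi ** 2 for xi in x]      # a plainly nonlinear relationship

n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
slope = sxy / sxx                       # 10.0
intercept = ybar - slope * xbar         # -15.0

resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
r2 = sxy ** 2 / (sxx * sum((yi - ybar) ** 2 for yi in y))   # about 0.93

# Residuals are positive at both ends and negative in the middle: the line
# looks "good" by R^2 yet is systematically wrong everywhere.
```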
|
How to know if "best fit line" really represents known set of data?
|
I want to know if my line represents those datapoints
The problem with such an idea is there are many ways that a fitted model might be unrepresentative of the data. As a result, a single measure wo
|
How to know if "best fit line" really represents known set of data?
I want to know if my line represents those datapoints
The problem with such an idea is there are many ways that a fitted model might be unrepresentative of the data. As a result, a single measure won't really capture the ways in which a model can fail to be representative.
This is why regression diagnostics consist not of a single number, but of multiple displays - some of which might reveal any of several different problems with the model.
Taking you to be asking about simple regression (single-x), here's a couple of examples of things you might consider as "not representative":
The underlying relationship you're attempting to model with $E(y|x)=\beta_0+\beta_1 x$ might be non linear. Some form of lack of fit measure can sometimes be useful - this is easy if you have replicates, but if not, with some additional assumptions (such as local smoothness if it's not linear) can allow us to get some measurement of that (lowess, for example, or some form of regression spline can be useful for picking up such changes, and a measure of unrepresenativeness would relate to the improvement given by such a nonlinear model). The more common approach, however, is to examine residuals (where, again, tools like lowess may often be used).
The model for the mean $E(y|x)=\beta_0+\beta_1 x$ might be correct, but the mean may itself be quite unrepresentative of the data (in that much of the data are not behaving like the mean - i.e. the mean may not be a useful descriptor of the conditional distribution):
Here we have a complex situation --
for small $x$ the conditional distribution of the data is unimodal and the line is representative of the relationship between $y$ and $x$ (e.g. near the left end of the data, the mean, median and mode are all linear in $x$), but
for large $x$, the distribution about the line is strongly bimodal, and as such, the line representing $E(y|x)$ - while correctly describing the conditional mean - doesn't represent the data; indeed in that region the relationship of the two modes with the mean are each nonlinear, even though the mean is linear throughout.
There are additional issues besides the form of the mean that you may want to consider. For example, if the variance is far from constant, the usual regression line may be inefficiently estimated, and the usual inference won't work as expected. Additionally, if the aim is to describe the way that $y$ is related to $x$, describing the spread may be just as important as describing the mean.
--
You can construct measures of various aspects of representativeness, but because 'representativeness' is multifaceted, a single measure won't meaningfully capture all those aspects. Indeed, as we see in the example, how representative a line is might be different in different parts of the data. A single number would obscure such subtleties.
[In particular situations, of course, you may be able to discount/disregard many of the possible ways for the line to be unrepresentative, and hence say "I'm mostly interested in one particular aspect" - such as nonlinearity - and then design some measure of that. That may be just fine in situations in which you can do that, especially if automation is needed.]
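As an illustration of the lack-of-fit idea above, here is a small numerical sketch (mine, not part of the original answer; it substitutes a simple quadratic fit for lowess, and all names are made up): if a flexible fit reduces the residual sum of squares far below that of the straight line, the line is not representative of $E(y|x)$.

```python
import numpy as np

def lack_of_fit_check(x, y):
    # Residual sum of squares of a straight-line fit vs. a quadratic fit.
    # A large drop from linear to quadratic suggests the line does not
    # represent E(y|x); lowess or a regression spline is the usual tool,
    # but a quadratic keeps this sketch dependency-free.
    lin = np.polyval(np.polyfit(x, y, 1), x)
    quad = np.polyval(np.polyfit(x, y, 2), x)
    return float(np.sum((y - lin) ** 2)), float(np.sum((y - quad) ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 200)
y_curved = 0.5 * x**2 + rng.normal(0.0, 1.0, x.size)   # clearly nonlinear
y_straight = 2.0 * x + rng.normal(0.0, 1.0, x.size)    # genuinely linear

rss_lin_c, rss_quad_c = lack_of_fit_check(x, y_curved)
rss_lin_s, rss_quad_s = lack_of_fit_check(x, y_straight)
```

For the curved data the quadratic cuts the RSS dramatically; for the genuinely linear data it barely helps, which is the signature of a representative line.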
|
39,462
|
How to know if "best fit line" really represents known set of data?
|
There are several ways you could do this. First recall that the linear best fit line is the line which minimizes the sum of squared residuals (see least squares):
$$\sum_{i=1}^{n}{r_i^2}$$
where $r_i$ is the residual for data point $i$, and $n$ is the number of data points. A residual is the vertical distance between a data point and the corresponding point on your line (at the same $x$).
With this in mind, here's a few ideas of how to "score" how well your line fits the data:
Calculate the max absolute distance between your data and the line. This would tell you if you have any data points that are really far away.
$$\max_i{|r_i|}$$
Calculate the average distance between your data and the line (the mean of the absolute values of your residuals). This would tell you how far away most of your data points are.
$$\frac{\sum_{i=1}^{n}{|r_i|}}{n}$$
Calculate the coefficient of determination, $R^2$:
$$ R^2 = 1 - \frac{\sum_{i=1}^{n}{r_i^2}}{\sum_{i=1}^{n}{(y_i - \bar{y})^{2}}} $$
where $y_i$ represents the value of each of your data points, and $\bar{y}$ is the mean of your data.
Given your comment that your goal is to determine if a dataset is linear, consider this:
Approximately 95% of the observations should fall within $\pm$ 2*standard error of the regression from the regression line
(see "How to Interpret S, the Standard Error of the Regression")
Therefore, if 95% of your data points are within $2 * S$ of your linear best fit line, then you can be confident your data is linear (where $S$ is what I called the average distance).
More information: Linear or Nonlinear Regression?
Furthermore, you also mentioned predicting future values as accurately as possible; in that case you could split your data into two parts: a training set and a test set. Then:
Fit a line to the training set only (leave out the test set)
Evaluate whether the line accurately predicts the test set. (i.e. you are testing the model)
If you can accurately predict the test set, then you've successfully modeled your data, in this case with a linear function. This is the basis of machine learning, which is a large topic so I won't expand on it more here.
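To make the three scores concrete, here is a minimal sketch (in Python rather than a particular stats package; the function name is mine) that fits a least-squares line and computes the maximum absolute residual, the average absolute residual, and $R^2$:

```python
import numpy as np

def fit_scores(x, y):
    # Least-squares line plus the three scores from the text:
    # max absolute residual, mean absolute residual, and R^2.
    slope, intercept = np.polyfit(x, y, 1)
    r = y - (intercept + slope * x)
    max_abs = float(np.max(np.abs(r)))
    mean_abs = float(np.mean(np.abs(r)))
    r2 = 1.0 - float(np.sum(r**2) / np.sum((y - y.mean()) ** 2))
    return max_abs, mean_abs, r2

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 2.1, 3.9, 6.1, 7.8])   # roughly y = 2x
max_abs, mean_abs, r2 = fit_scores(x, y)
```

For this toy data, which is close to $y=2x$, $R^2$ comes out near 1 while both distance scores stay small.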
|
39,463
|
From SAS to R - what are "must" packages for reporting
|
The problem with R is that there are so many ways to construct great reports, and so many R packages that are helpful for this task. One approach, though getting out of date, is shown in http://biostat.app.vumc.org/wiki/pub/Main/StatReport/summary.pdf . Note that some of the functions there have been updated as shown in http://hbiostat.org/R/Hmisc [and really take note of the tabulr function]. That approach revolves around $\LaTeX$, and I believe you'll find that for producing advanced tables (including ones containing micrographics and footnotes), $\LaTeX$ has many advantages over the markdown-pandoc approach.
But I believe that we should replace almost all tables with graphics. The new R greport ("graphical report") and hreport ("html report") packages take the philosophy that graphics should be used for the main presentation, and graphs should be hyperlinked to supporting tables that appear in an appendix to the pdf report. See http://hbiostat.org/r. These packages use new functions in the Hmisc package for graphing categorical data (i.e., translating tables to plots) and for showing whole distributions of continuous variables.
|
39,464
|
From SAS to R - what are "must" packages for reporting
|
It seems like reporting and filtering/splitting data by variables are two orthogonal tasks. And people usually use different packages for those.
For managing the data there are a few really popular packages: dplyr and data.table.
For reporting tables one package that stands out for me is stargazer
Here are some demonstrations: http://cran.r-project.org/web/packages/stargazer/vignettes/stargazer.pdf
It covers both latex and html (and ASCII, but haven't used that).
I have never used SAS so I don't know if this will cover all the functionality you wanted.
|
39,465
|
From SAS to R - what are "must" packages for reporting
|
In 2020 the reporter package was released, which operates much like proc report. You get the data and statistics you want using other R packages, and then send the resulting data frame into reporter. Like this:
library(reporter)
# Create temporary path
tmp <- file.path(tempdir(), "example3.pdf")
# Read in prepared data
df <- read.table(header = TRUE, text = '
var label A B
"ampg" "N" "19" "13"
"ampg" "Mean" "18.8 (6.5)" "22.0 (4.9)"
"ampg" "Median" "16.4" "21.4"
"ampg" "Q1 - Q3" "15.1 - 21.2" "19.2 - 22.8"
"ampg" "Range" "10.4 - 33.9" "14.7 - 32.4"
"cyl" "8 Cylinder" "10 ( 52.6%)" "4 ( 30.8%)"
"cyl" "6 Cylinder" "4 ( 21.1%)" "3 ( 23.1%)"
"cyl" "4 Cylinder" "5 ( 26.3%)" "6 ( 46.2%)"')
# Create table
tbl <- create_table(df, first_row_blank = TRUE) %>%
stub(c("var", "label")) %>%
define(var, blank_after = TRUE, label_row = TRUE,
format = c(ampg = "Miles Per Gallon", cyl = "Cylinders")) %>%
define(label, indent = .25) %>%
define(A, label = "Group A", align = "center", n = 19) %>%
define(B, label = "Group B", align = "center", n = 13)
# Create report and add content
rpt <- create_report(tmp, orientation = "portrait", output_type = "PDF") %>%
page_header(left = "Client: Motor Trend", right = "Study: Cars") %>%
titles("Table 1.0", "MTCARS Summary Table") %>%
add_content(tbl) %>%
footnotes("* Motor Trend, 1974") %>%
page_footer(left = Sys.time(),
center = "Confidential",
right = "Page [pg] of [tpg]")
# Write out report
write_report(rpt)
The report can be output in text, RTF, or PDF. Here is the PDF version:
The advantage of this package is that you can create almost any kind of report. It will take more work than table1 or stargazer. But since it only generates the report, and doesn't try to generate the statistics, you are free to use any R statistical package. So more work, but more freedom.
|
39,466
|
How to infer correlations from correlations
|
Given corr(A,B) and corr(A,C) you can obtain bounds on corr(B,C) (and similar such calculations involving more variables), but the bounds are in general quite wide. Indeed, typically such calculations aren't very informative at all.
Specifically, by looking at the relationship between the ordinary pairwise correlation and the partial correlation:
$$\rho_{BC\cdot A } = \frac{\rho_{BC} - \rho_{AB}\rho_{AC}} {\sqrt{1-\rho_{AB}^2} \sqrt{1-\rho_{AC}^2}}$$
you can rearrange the formula to back out bounds for $\rho_{BC}$:
$$\rho_{BC}=\rho_{AB}\rho_{AC}+\rho_{BC\cdot A } {\sqrt{1-\rho_{AB}^2} \sqrt{1-\rho_{AC}^2}}$$
and noting that the partial correlation must lie between -1 and 1, this implies that $\rho_{BC}$ is bounded to lie in
$$\rho_{AB}\rho_{AC}\pm {\sqrt{1-\rho_{AB}^2} \sqrt{1-\rho_{AC}^2}}\,.$$
e.g. Let's say $\rho_{AB}=0.8$ and $\rho_{AC}=0.6$.
Then $\rho_{BC}= 0.6 \times 0.8 \pm \sqrt{(1-.64)(1-.36)}=0.48\pm 0.48 = (0,0.96)$
With more variables the situation becomes more complex; in some situations it's easier to work with Cholesky decompositions.
If you impose additional structure on the problem then in some situations those bounds might reduce.
Additional details may help.
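The bound calculation is easy to code up; here is a minimal sketch (mine, in Python) that reproduces the worked example above:

```python
import math

def corr_bounds(r_ab, r_ac):
    # corr(B,C) = r_ab*r_ac + rho_{BC.A} * sqrt(1-r_ab^2)*sqrt(1-r_ac^2),
    # and the partial correlation rho_{BC.A} must lie in [-1, 1].
    centre = r_ab * r_ac
    half_width = math.sqrt(1.0 - r_ab**2) * math.sqrt(1.0 - r_ac**2)
    return centre - half_width, centre + half_width

lo, hi = corr_bounds(0.8, 0.6)   # the worked example: roughly (0, 0.96)
```

When either given correlation is near $\pm 1$ the interval collapses; otherwise, as the example shows, it is typically too wide to be informative.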
|
39,467
|
Rstan Stan model for a simple mixture of normals
|
Because all of the parameters of this distribution are known, and we merely want to draw samples from this distribution, coding the model in rstan is straightforward. Note that this is, by far, one of the least efficient paths to sampling from this particular model, in terms of time that I spent coding it (15 minutes). The author of the original post is correct when he notes that the easiest way to sample from this model is to use the sample function creatively.
library(rstan)
mix_model <- "
data{
int J;
vector<lower=0>[J] weights;
vector[J] means;
vector<lower=0>[J] sdevs;
}
transformed data{
vector[J] ln_weights;
ln_weights <- log(weights);
}
parameters{
real y;
}
model{
vector[J] probs;
for(j in 1:J){
probs[j] <- exp(ln_weights[j]+normal_log(y,means[j],sdevs[j]));
}
increment_log_prob(log(sum(probs)));
}
"
mixdata <- list(J=3, weights=c(0.3,0.4,0.3),means=c(-3,2,10),sdevs=c(2,1,4))
testfit <- stan(model_code=mix_model, data=mixdata, iter=10)
fit <- stan(fit=testfit, data=mixdata, iter=25000, chains=5)
I took the step of reading in each of the parameters of the mixture as data so that the "sum of several known normals" model is easily extended to cases of arbitrary numbers of mixture components.
Transforming the mixture weights to the log scale is done in transformed data because, in this model, they are known. Transforming them there, rather than in the model block, means we just read off the stored values rather than recomputing the logs at every iteration.
The only part of this model that I'm unsatisfied with is the loop over the log-probabilities of each mixture component. In general, one prefers to use the native composed functions of rstan because they already have the derivatives worked out, so you don't have to use the slower autodiff routine. On the other hand, the composed function in this case only accepts two arguments, not 3 or more...
y <- extract(fit, "y")[[1]]
plot(density(y))
x <- seq(-10,25,by=0.01)
y1 <- 0.3*dnorm(x, mean=-3,sd=2)
y2 <- 0.4*dnorm(x, mean=2,sd=1)
y3 <- 0.3*dnorm(x, mean=10,sd=4)
lines(x,y1, col="red", lty="dashed")
lines(x,y2, col="red", lty="dashed")
lines(x,y3, col="red", lty="dashed")
Visually, the results appear to be a reasonable approximation of the mixture density.
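For comparison, the direct route the answer alludes to (pick a component with probability equal to its weight, then draw from that component) is a one-liner in most languages. A sketch with the same weights, means, and sdevs, here in Python since the names are mine anyway (in R it would be sample() plus rnorm()):

```python
import numpy as np

def sample_mixture(n, weights, means, sdevs, seed=0):
    # Pick a component index with probability equal to its weight,
    # then draw from that component's normal distribution.
    rng = np.random.default_rng(seed)
    comp = rng.choice(len(weights), size=n, p=weights)
    return rng.normal(np.asarray(means)[comp], np.asarray(sdevs)[comp])

y = sample_mixture(100_000, [0.3, 0.4, 0.3], [-3.0, 2.0, 10.0], [2.0, 1.0, 4.0])
mixture_mean = 0.3 * (-3.0) + 0.4 * 2.0 + 0.3 * 10.0   # = 2.9
```

Since every mixture parameter is known, this is far cheaper than MCMC and gives exact independent draws.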
|
39,468
|
Forecasting a seasonal time series in R
|
As regards the comparison of models, the idea proposed by @forecaster can be helpful since you have a relatively long series.
Regarding your last question about how to obtain forecasts for the trend component rather than for the whole series: as far as I know there is no package on CRAN that decomposes a fitted ARIMA model into a trend and seasonal component. The package ArDec decomposes a series based on autoregressions but I don't think it is straightforward to apply it to an ARIMA model.
Your idea of decomposing the series by means of moving averages could be a solution. As you are interested in forecasting the trend component, it would be better to fit an ARIMA model to the trend component rather than to the noise component $N_t$ and then obtain forecasts based on this model. However, this may not be an efficient approach since you work with a smoothed version of the data instead of using the observed data.
I would try a structural time series model, where a model is explicitly defined
for each component. For example, using the function StructTS from the stats core package we obtain the following decomposition:
fit1 <- StructTS(IAP, type = "BSM")
fit1
# Variances:
# level slope seas epsilon
# 4.987e+10 0.000e+00 1.575e+11 0.000e+00
plot(tsSmooth(fit1), main = "")
mtext(text = "decomposition of the basic structural model. StructTS() stats package", side = 3, adj = 0, line = 1)
The function predict can be used to obtain forecasts, predict(fit1), but forecasts are returned for the observed series, not for the components. To obtain forecasts of the components based on a structural model you can use the package stsm.
require("stsm")
require("stsm.class") # this package will be merged into package "stsm"
m <- stsm.model(model = "BSM", y = IAP, transPars = "StructTS")
fit2 <- stsmFit(m, stsm.method = "maxlik.td.optim", method = "L-BFGS-B",
KF.args = list(P0cov = TRUE))
fit2
# Parameter estimates:
# var1 var2 var3 var4
# Estimate 0.000 4.987e+10 0.000e+00 1.575e+11
# Std. error 3.201 1.194e+00 6.392e-06 8.366e-01
# Log-likelihood: -2048.649
# Convergence code: 0
# CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH
# Number of function calls: 16
# Variance-covariance matrix: optimHessian
The parameters var1, var2, var3 and var4 refer, respectively, to the variances of the disturbance term in the observation equation and in the level, slope and seasonal components.
The components based on the fitted model are returned by tsSmooth:
fit2.comps <- tsSmooth(fit2, P0cov = FALSE)$states
plot(fit2.comps, main = "")
mtext(text = "decomposition of the basic structural model. stsm package",
side = 3, adj = 0, line = 1)
Notice that the parameter estimates based on StructTS and stsm are the same; however, the estimated components look better in the latter case compared to those based on StructTS: the trend component is smoother, with no fluctuations at the beginning of the sample, and the variance of the seasonal component is more stable throughout time. The reason for this difference in the plots is that stsm uses P0cov = FALSE (a diagonal covariance matrix for the initial state vector, but this is a topic for another post).
The forecasts for the components and their $95\%$ confidence intervals
can be obtained as follows using the function predict from package
KFKSDS:
require("KFKSDS")
m2 <- set.pars(m, pmax(fit2$par, .Machine$double.eps))
ss <- char2numeric(m2)
pred <- predict(ss, IAP, n.ahead = 12)
Plot of forecasts and confidence intervals:
par(mfrow = c(3,1), mar = c(3,3,3,3))
# observed series
plot(cbind(IAP, pred$pred), type = "n", plot.type = "single", ylab = "", ylim = c(8283372, 19365461))
lines(IAP)
polygon(c(time(pred$pred), rev(time(pred$pred))), c(pred$pred + 2 * pred$se, rev(pred$pred)), col = "gray85", border = NA)
polygon(c(time(pred$pred), rev(time(pred$pred))), c(pred$pred - 2 * pred$se, rev(pred$pred)), col = "gray85", border = NA)
lines(pred$pred, col = "blue", lwd = 1.5)
mtext(text = "forecasts of the observed series", side = 3, adj = 0)
# level component
plot(cbind(IAP, pred$a[,1]), type = "n", plot.type = "single", ylab = "", ylim = c(8283372, 19365461))
lines(IAP)
polygon(c(time(pred$a[,1]), rev(time(pred$a[,1]))), c(pred$a[,1] + 2 * sqrt(pred$P[,1]), rev(pred$a[,1])), col = "gray85", border = NA)
polygon(c(time(pred$a[,1]), rev(time(pred$a[,1]))), c(pred$a[,1] - 2 * sqrt(pred$P[,1]), rev(pred$a[,1])), col = "gray85", border = NA)
lines(pred$a[,1], col = "blue", lwd = 1.5)
mtext(text = "forecasts of the level component", side = 3, adj = 0)
# seasonal component
plot(cbind(fit2.comps[,3], pred$a[,3]), type = "n", plot.type = "single", ylab = "", ylim = c(-3889253, 3801590))
lines(fit2.comps[,3])
polygon(c(time(pred$a[,3]), rev(time(pred$a[,3]))), c(pred$a[,3] + 2 * sqrt(pred$P[,3]), rev(pred$a[,3])), col = "gray85", border = NA)
polygon(c(time(pred$a[,3]), rev(time(pred$a[,3]))), c(pred$a[,3] - 2 * sqrt(pred$P[,3]), rev(pred$a[,3])), col = "gray85", border = NA)
lines(pred$a[,3], col = "blue", lwd = 1.5)
mtext(text = "forecasts of the seasonal component", side = 3, adj = 0)
|
Forecasting a seasonal time series in R
|
As regards the comparison of models, the idea proposed by @forecaster can be helpful since you have a relatively long series.
Regarding your last question about how to obtain forecasts for the trend c
|
Forecasting a seasonal time series in R
As regards the comparison of models, the idea proposed by @forecaster can be helpful since you have a relatively long series.
Regarding your last question about how to obtain forecasts for the trend component rather than for the whole series: as far as I know there is no package on CRAN that decomposes a fitted ARIMA model into a trend and seasonal component. The package ArDec decomposes a series based on autoregressions but I don't think it is straightforward to apply it to an ARIMA model.
Your idea of decomposing the series by means of moving averages could be a solution. As you are interested in forecasting the trend component, it would be better to fit an ARIMA model to the trend component rather than to the noise component $N_t$ and then obtain forecasts based on this model. However, this may not be an efficient approach since you work with a smoothed version of the data instead of using the observed data.
I would try a structural time series model, where a model is explicitly defined
for each component. For example, using the function StructTS from the stats core package we obtain the following decomposition:
fit1 <- StructTS(IAP, type = "BSM")
fit1
# Variances:
# level slope seas epsilon
# 4.987e+10 0.000e+00 1.575e+11 0.000e+00
plot(tsSmooth(fit1), main = "")
mtext(text = "decomposition of the basic structural model. StructTS() stats package", side = 3, adj = 0, line = 1)
The function predict can be used to obtain forecasts, predict(fit1), but forecasts are returned for the observed series, not for the components. To obtain forecasts of the components based on a structural model you can use the package stsm.
require("stsm")
require("stsm.class") # this package will be merged into package "stsm"
m <- stsm.model(model = "BSM", y = IAP, transPars = "StructTS")
fit2 <- stsmFit(m, stsm.method = "maxlik.td.optim", method = "L-BFGS-B",
KF.args = list(P0cov = TRUE))
fit2
# Parameter estimates:
# var1 var2 var3 var4
# Estimate 0.000 4.987e+10 0.000e+00 1.575e+11
# Std. error 3.201 1.194e+00 6.392e-06 8.366e-01
# Log-likelihood: -2048.649
# Convergence code: 0
# CONVERGENCE: REL_REDUCTION_OF_F <= FACTR*EPSMCH
# Number of function calls: 16
# Variance-covariance matrix: optimHessian
The parameters var1, var2, var3 and var4 are referred respectively to the variances of the disturbance term in the observation equation and in the level, slope and seasonal components.
The components based on the fitted model are returned by tsSmooth:
fit2.comps <- tsSmooth(fit2, P0cov = FALSE)$states
plot(fit2.comps, main = "")
mtext(text = "decomposition of the basic structural model. stsm package",
side = 3, adj = 0, line = 1)
Notice that the parameter estimates based on StructTS and stsm are the same,
however, the estimated components look better in the latter case
compared to those based on StructTS: the trend component is smoother with no fluctuations at the beginning of the sample and the variance of the seasonal component is more stable throughout time. The reason for this difference in the plots is that stsm uses P0cov = FALSE (a diagonal covariance matrix for the initial state vector, but this is a topic for another post).
The forecasts for the components and their $95\%$ confidence intervals
can be obtained as follows using the function predict from package
KFKSDS:
require("KFKSDS")
m2 <- set.pars(m, pmax(fit2$par, .Machine$double.eps))
ss <- char2numeric(m2)
pred <- predict(ss, IAP, n.ahead = 12)
Plot of forecasts and confidence intervals:
par(mfrow = c(3,1), mar = c(3,3,3,3))
# observed series
plot(cbind(IAP, pred$pred), type = "n", plot.type = "single", ylab = "", ylim = c(8283372, 19365461))
lines(IAP)
polygon(c(time(pred$pred), rev(time(pred$pred))), c(pred$pred + 2 * pred$se, rev(pred$pred)), col = "gray85", border = NA)
polygon(c(time(pred$pred), rev(time(pred$pred))), c(pred$pred - 2 * pred$se, rev(pred$pred)), col = "gray85", border = NA)
lines(pred$pred, col = "blue", lwd = 1.5)
mtext(text = "forecasts of the observed series", side = 3, adj = 0)
# level component
plot(cbind(IAP, pred$a[,1]), type = "n", plot.type = "single", ylab = "", ylim = c(8283372, 19365461))
lines(IAP)
polygon(c(time(pred$a[,1]), rev(time(pred$a[,1]))), c(pred$a[,1] + 2 * sqrt(pred$P[,1]), rev(pred$a[,1])), col = "gray85", border = NA)
polygon(c(time(pred$a[,1]), rev(time(pred$a[,1]))), c(pred$a[,1] - 2 * sqrt(pred$P[,1]), rev(pred$a[,1])), col = "gray85", border = NA)
lines(pred$a[,1], col = "blue", lwd = 1.5)
mtext(text = "forecasts of the level component", side = 3, adj = 0)
# seasonal component
plot(cbind(fit2.comps[,3], pred$a[,3]), type = "n", plot.type = "single", ylab = "", ylim = c(-3889253, 3801590))
lines(fit2.comps[,3])
polygon(c(time(pred$a[,3]), rev(time(pred$a[,3]))), c(pred$a[,3] + 2 * sqrt(pred$P[,3]), rev(pred$a[,3])), col = "gray85", border = NA)
polygon(c(time(pred$a[,3]), rev(time(pred$a[,3]))), c(pred$a[,3] - 2 * sqrt(pred$P[,3]), rev(pred$a[,3])), col = "gray85", border = NA)
lines(pred$a[,3], col = "blue", lwd = 1.5)
mtext(text = "forecasts of the seasonal component", side = 3, adj = 0)
|
Forecasting a seasonal time series in R
As regards the comparison of models, the idea proposed by @forecaster can be helpful since you have a relatively long series.
Regarding your last question about how to obtain forecasts for the trend c
|
39,469
|
Exact central confidence interval for a correlation
|
You can get the same values with:
cor.test(cd4$baseline, cd4$oneyear, method = "pearson", conf.level = 0.9)
The method used to obtain such interval is explained in:
http://stat.ethz.ch/R-manual/R-patched/library/stats/html/cor.test.html
If method is "pearson", the test statistic is based on Pearson's product moment correlation coefficient cor(x, y) and follows a t distribution with length(x)-2 degrees of freedom if the samples follow independent normal distributions. If there are at least 4 complete pairs of observation, an asymptotic confidence interval is given based on Fisher's Z transform.
So, you can implement your own code to obtain C.I. by following these instructions if you wish to do so. See also:
http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#Testing_using_Student.27s_t-distribution
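If you do want to implement it yourself, here is a small sketch of the Fisher-z interval in Python (the values r = 0.7231654 and n = 20 are assumed sample quantities, used purely for illustration; the same logic is trivial to port to R):

```python
import math
from statistics import NormalDist

def fisher_ci(r, n, conf=0.90):
    """Approximate CI for a correlation via Fisher's z transform."""
    z = math.atanh(r)                    # transform r to the z scale
    se = 1.0 / math.sqrt(n - 3)          # standard error on the z scale
    crit = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    # back-transform the endpoints to the r scale
    return math.tanh(z - crit * se), math.tanh(z + crit * se)

lo, hi = fisher_ci(0.7231654, 20, conf=0.90)
print(round(lo, 4), round(hi, 4))  # 0.4741 0.8651
```

Note this is the asymptotic Fisher-z interval described in the documentation quoted above, not necessarily byte-for-byte what cor.test computes internally.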
|
Exact central confidence interval for a correlation
|
You can get the same values with:
cor.test(cd4$baseline, cd4$oneyear, method = "pearson", conf.level = 0.9)
The method used to obtain such interval is explained in:
http://stat.ethz.ch/R-manual/R-pat
|
Exact central confidence interval for a correlation
You can get the same values with:
cor.test(cd4$baseline, cd4$oneyear, method = "pearson", conf.level = 0.9)
The method used to obtain such interval is explained in:
http://stat.ethz.ch/R-manual/R-patched/library/stats/html/cor.test.html
If method is "pearson", the test statistic is based on Pearson's product moment correlation coefficient cor(x, y) and follows a t distribution with length(x)-2 degrees of freedom if the samples follow independent normal distributions. If there are at least 4 complete pairs of observation, an asymptotic confidence interval is given based on Fisher's Z transform.
So, you can implement your own code to obtain C.I. by following these instructions if you wish to do so. See also:
http://en.wikipedia.org/wiki/Pearson_product-moment_correlation_coefficient#Testing_using_Student.27s_t-distribution
|
Exact central confidence interval for a correlation
You can get the same values with:
cor.test(cd4$baseline, cd4$oneyear, method = "pearson", conf.level = 0.9)
The method used to obtain such interval is explained in:
http://stat.ethz.ch/R-manual/R-pat
|
39,470
|
Exact central confidence interval for a correlation
|
You already mentioned that the Fisher transformation in your code is not correct. You first have to transform r to a z value (the atanh part), then you add and subtract the standard error with the appropriate multiplier to get the correct confidence level (as you did correctly). Finally, you have to transform the whole thing back into the r-metric (the tanh part).
se <- 1/sqrt(17)
r <- 0.7231654
tanh(atanh(r)+c(1,-1)*qnorm(.95)*se)
Which results in
[1] 0.8650790 0.4740748
As mentioned in the comments, this is NOT the exact interval! To find an exact interval check out this work by Shieh: http://link.springer.com/10.1007/s11336-04-1221-6
|
Exact central confidence interval for a correlation
|
You already mentioned that the Fisher transformation is not correct in your code. You first have to transform r to a z value (atanh part), then you add and subtract the standard error with the appropr
|
Exact central confidence interval for a correlation
You already mentioned that the Fisher transformation in your code is not correct. You first have to transform r to a z value (the atanh part), then you add and subtract the standard error with the appropriate multiplier to get the correct confidence level (as you did correctly). Finally, you have to transform the whole thing back into the r-metric (the tanh part).
se <- 1/sqrt(17)
r <- 0.7231654
tanh(atanh(r)+c(1,-1)*qnorm(.95)*se)
Which results in
[1] 0.8650790 0.4740748
As mentioned in the comments, this is NOT the exact interval! To find an exact interval check out this work by Shieh: http://link.springer.com/10.1007/s11336-04-1221-6
|
Exact central confidence interval for a correlation
You already mentioned that the Fisher transformation is not correct in your code. You first have to transform r to a z value (atanh part), then you add and subtract the standard error with the appropr
|
39,471
|
Nonlinear regression
|
You have several problems there. The biggest problems are easiest to see if you reparameterize your fitted function from
$y = a/(1 + b x^{-c})$
to
$y = 1/(\frac{1}{a} + \frac{b}{a} x^{-c})$
$\quad = 1/(a_1 + b_1 x^{-c})$
(this gives the same model fit, just some of the parameters are different from your expression of the model)
Now let's look at your data:
$y = 50 x^\frac{1}{2}$
$\quad = 1/( \frac{1}{50} x^{-\frac{1}{2}})$
$\quad = 1/(0 + \frac{1}{50} x^{-\frac{1}{2}})$
That is, your model exactly fits your data if $a_1=0$, $b_1=\frac{1}{50}$ and $c=-\frac{1}{2}$.
Therefore, taking that back to the original form you tried to fit, $b/a = \frac{1}{50}$ or $a=50 b$ and $1/a = 0$.
So three problems:
(i) there's a ridge along $a=50b$
(ii) the bigger $a$ is, the better the fit (the sum of squared errors is minimized as $a\to\infty$).
(iii) as $a\to\infty$, there's no error in the fit. This causes some difficulties with the fitting algorithm - it doesn't terminate nicely, but it can still find the fit if you solve the first two problems. (If you turn trace=TRUE on after the first two problems are fixed, and start in a reasonable place, it does locate the parameter values I mention - 1.662634e-22 : 0.02 -0.50 are the values trace gives for the SSE, b1 and c. If you play with the convergence criteria, you may be able to get it to store the results in model.)
(Well, Alexis correctly points out your starting values are no good, so maybe that's four problems.)
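A quick numerical sketch of the points above in Python, with the $\frac{1}{2}$ power hard-coded to sidestep the sign convention on the exponent; the grid of $x$ values is arbitrary:

```python
# y = 50*sqrt(x) is reproduced exactly by y = 1/(a1 + b1 * x**-0.5)
# with a1 = 0 and b1 = 1/50
xs = [1.0, 4.0, 9.0, 16.0]
ys = [50.0 * x**0.5 for x in xs]

def sse(a1, b1):
    # sum of squared errors of the reparameterized model on the data
    return sum((y - 1.0 / (a1 + b1 * x**-0.5))**2 for x, y in zip(xs, ys))

print(sse(0.0, 1 / 50))  # ~0: an exact fit, up to floating point
# along the ridge a = 50*b of the original parameterization (i.e. a1 = 1/a),
# the fit keeps improving as a grows without bound
print([round(sse(1.0 / a, 1 / 50), 2) for a in (10.0, 100.0, 1000.0)])
```

The second line of output shrinks toward zero, which is the ridge/unbounded-parameter problem described above.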
|
Nonlinear regression
|
You have several problems there. The biggest problems are easiest to see if you reparameterize your fitted function from
$y = a/(1 + b x^{-c})$
to
$y = 1/(\frac{1}{a} + \frac{b}{a} x^{-c})$
$\quad = 1
|
Nonlinear regression
You have several problems there. The biggest problems are easiest to see if you reparameterize your fitted function from
$y = a/(1 + b x^{-c})$
to
$y = 1/(\frac{1}{a} + \frac{b}{a} x^{-c})$
$\quad = 1/(a_1 + b_1 x^{-c})$
(this gives the same model fit, just some of the parameters are different from your expression of the model)
Now let's look at your data:
$y = 50 x^\frac{1}{2}$
$\quad = 1/( \frac{1}{50} x^{-\frac{1}{2}})$
$\quad = 1/(0 + \frac{1}{50} x^{-\frac{1}{2}})$
That is, your model exactly fits your data if $a_1=0$, $b_1=\frac{1}{50}$ and $c=-\frac{1}{2}$.
Therefore, taking that back to the original form you tried to fit, $b/a = \frac{1}{50}$ or $a=50 b$ and $1/a = 0$.
So three problems:
(i) there's a ridge along $a=50b$
(ii) the bigger $a$ is, the better the fit (the sum of squared errors is minimized as $a\to\infty$).
(iii) as $a\to\infty$, there's no error in the fit. This causes some difficulties with the fitting algorithm - it doesn't terminate nicely, but it can still find the fit if you solve the first two problems. (If you turn trace=TRUE on after the first two problems are fixed, and start in a reasonable place, it does locate the parameter values I mention - 1.662634e-22 : 0.02 -0.50 are the values trace gives for the SSE, b1 and c. If you play with the convergence criteria, you may be able to get it to store the results in model.)
(Well, Alexis correctly points out your starting values are no good, so maybe that's four problems.)
|
Nonlinear regression
You have several problems there. The biggest problems are easiest to see if you reparameterize your fitted function from
$y = a/(1 + b x^{-c})$
to
$y = 1/(\frac{1}{a} + \frac{b}{a} x^{-c})$
$\quad = 1
|
39,472
|
Logistic regression: maximum likelihood vs misclassification
|
One could possibly estimate the logistic regression model by minimizing the classification error, but there is usually no reason to do so! Why do you want to do it?
But, such questions have been asked here before, so I will not rewrite an answer, very good answers can be found to this question: Logistic regression: maximizing true positives - false positives
Basically, minimizing misclassification error amounts to using a score function which is not a proper scoring rule, see: https://en.wikipedia.org/wiki/Scoring_rule
If misclassification is minimized by some parameter vector $\beta$, it will also be minimized by many other values of $\beta$ in some vicinity of the first $\beta$. In other words, the criterion function is flat around the maximum! To see this last fact, we explore the connection with scoring rules (see the wiki above). We specialize to the case with only a binary variable, possible values 0 or 1, with distribution given by a probability vector $p=[p_1, p_2]$. Let the random variable be $X$, the forecaster makes a probabilistic forecast $r=[r_1,r_2]$, a probability vector. Let $S$ be a score function.
This means that if the forecaster forecasts $r$, then $X=x$ is observed, he receives the reward $S(r,x)$. This reward then has expected value $E S(r,X)=p_1 S(r,1)+p_2 S(r,2)$ and we say the scoring rule (reward) is proper if this expectation is maximized (under $p$) by forecasting $r=p$. It is strictly proper if that maximum is unique.
Trying to minimize the misclassification rate corresponds to using the following score function:
$$
S(r,i)=\begin{cases} 1 ~~\text{if $r_i=\max(r_1,r_2)$} \\
0 ~~\text{otherwise} \end{cases}
$$ $(i=1,2)$
Now, consider if the true $p=[0.99,0.01]$. If the forecaster then reports $r=p=[0.99, 0.01]$ then his expected reward becomes
$$
E S(r,X) = p_1 S(r,1)+ p_2 S(r,2) = p_1
$$
Consider then if the forecaster reports $r=s$, some other probability vector such that $s_1 > s_2$, that is $s_1 > 0.5$. Then you can calculate as above, and find the exact same expected reward! So this score function fails to reward truthful forecasting; it only depends on whether the event actually forecasted was given a probability larger than one half.
That is the reason the use of this score will not lead to very effective learning, and so it should be avoided.
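The flatness can be verified in a few lines of Python, using the two-outcome setup and $p=[0.99,0.01]$ from above (the alternative forecast $[0.6, 0.4]$ is an arbitrary choice with $s_1 > 0.5$):

```python
def score(r, i):
    """0/1 score: reward 1 iff outcome i received the largest forecast probability."""
    return 1.0 if r[i] == max(r) else 0.0

def expected_reward(p, r):
    # expectation of the score under the true distribution p
    return sum(p_i * score(r, i) for i, p_i in enumerate(p))

p = [0.99, 0.01]
truthful = expected_reward(p, [0.99, 0.01])
vague    = expected_reward(p, [0.60, 0.40])   # any r with r[0] > 0.5
print(truthful, vague)   # 0.99 0.99 -- the score cannot tell them apart
```

Both forecasts receive expected reward $p_1 = 0.99$, so the criterion gives no incentive to report the true probabilities.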
|
Logistic regression: maximum likelihood vs misclassification
|
One could possibly estimate the logistic regression model by minimizing the classification error, but there is usually no reason to do so! Why do you want to do it?
But, such questions have been
|
Logistic regression: maximum likelihood vs misclassification
One could possibly estimate the logistic regression model by minimizing the classification error, but there is usually no reason to do so! Why do you want to do it?
But, such questions have been asked here before, so I will not rewrite an answer, very good answers can be found to this question: Logistic regression: maximizing true positives - false positives
Basically, minimizing misclassification error amounts to using a score function which is not a proper scoring rule, see: https://en.wikipedia.org/wiki/Scoring_rule
If misclassification is minimized by some parameter vector $\beta$, it will also be minimized by many other values of $\beta$ in some vicinity of the first $\beta$. In other words, the criterion function is flat around the maximum! To see this last fact, we explore the connection with scoring rules (see the wiki above). We specialize to the case with only a binary variable, possible values 0 or 1, with distribution given by a probability vector $p=[p_1, p_2]$. Let the random variable be $X$, the forecaster makes a probabilistic forecast $r=[r_1,r_2]$, a probability vector. Let $S$ be a score function.
This means that if the forecaster forecasts $r$, then $X=x$ is observed, he receives the reward $S(r,x)$. This reward then has expected value $E S(r,X)=p_1 S(r,1)+p_2 S(r,2)$ and we say the scoring rule (reward) is proper if this expectation is maximized (under $p$) by forecasting $r=p$. It is strictly proper if that maximum is unique.
Trying to minimize the misclassification rate corresponds to using the following score function:
$$
S(r,i)=\begin{cases} 1 ~~\text{if $r_i=\max(r_1,r_2)$} \\
0 ~~\text{otherwise} \end{cases}
$$ $(i=1,2)$
Now, consider if the true $p=[0.99,0.01]$. If the forecaster then reports $r=p=[0.99, 0.01]$ then his expected reward becomes
$$
E S(r,X) = p_1 S(r,1)+ p_2 S(r,2) = p_1
$$
Consider then if the forecaster reports $r=s$, some other probability vector such that $s_1 > s_2$, that is $s_1 > 0.5$. Then you can calculate as above, and find the exact same expected reward! So this score function fails to reward truthful forecasting; it only depends on whether the event actually forecasted was given a probability larger than one half.
That is the reason the use of this score will not lead to very effective learning, and so it should be avoided.
|
Logistic regression: maximum likelihood vs misclassification
One could possibly estimate the logistic regression model by minimizing the classification error, but there is usually no reason to do so! Why do you want to do it?
But, such questions have been
|
39,473
|
Logistic regression: maximum likelihood vs misclassification
|
In a word, yes, but it wouldn't be logistic regression anymore.
The logistic regression loss function (i.e., the negative log likelihood) is essentially a regression in the log odds. Changing that to a least squares loss function makes it linear regression, which has three drawbacks: (i) you lose the interpretation of the regression coefficients in terms of log odds; (ii) you lose the interpretation of the model predictions as log odds (or, when exponentiated, as probabilities); and (iii) linear regression does not bound the model predictions between 0 and 1, so it can easily make predictions <0 or >1. That said, for many real-world applications, simply minimizing the least squares criterion using a 0/1 response (output, DV) works quite well, because there is enough noise that the model never gets close to 0 or 1 (the sigmoid function around 0.5 is well approximated by a straight line). Plus, in terms of things like ROC performance, that doesn't matter, since all you care about is the ranks of the predictions.
For the misclassification error, that loss function is not differentiable and not convex (also called 0/1 loss), so it's very hard to minimize effectively. Hence, both support vector machines and logistic regression minimize two convex proxy loss functions, the hinge loss and the logistic loss, respectively, which can be seen as approximations to the 0/1 loss (convex relaxations).
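To make that concrete, here is a small Python sketch of the three losses as a function of the margin $m = y f(x)$ with $y \in \{-1, +1\}$ (the particular margin values are arbitrary):

```python
import math

def zero_one(m):  return 1.0 if m <= 0 else 0.0        # 0/1 loss: non-convex, discontinuous
def hinge(m):     return max(0.0, 1.0 - m)             # SVM surrogate
def logistic(m):  return math.log(1.0 + math.exp(-m))  # logistic regression surrogate

for m in (-2.0, -0.5, 0.0, 0.5, 2.0):
    print(m, zero_one(m), hinge(m), round(logistic(m), 3))
# both surrogates are convex upper bounds of the 0/1 loss
# (the logistic loss after rescaling by 1/log(2))
```

Both surrogates decrease smoothly with the margin, which is what makes them amenable to gradient-based optimization, unlike the flat 0/1 loss.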
|
Logistic regression: maximum likelihood vs misclassification
|
In a word, yes, but it wouldn't be logistic regression anymore.
The logistic regression loss function (i.e., the negative log likelihood), is essentially a regression in the log odds. Changing that to
|
Logistic regression: maximum likelihood vs misclassification
In a word, yes, but it wouldn't be logistic regression anymore.
The logistic regression loss function (i.e., the negative log likelihood) is essentially a regression in the log odds. Changing that to a least squares loss function makes it linear regression, which has three drawbacks: (i) you lose the interpretation of the regression coefficients in terms of log odds; (ii) you lose the interpretation of the model predictions as log odds (or, when exponentiated, as probabilities); and (iii) linear regression does not bound the model predictions between 0 and 1, so it can easily make predictions <0 or >1. That said, for many real-world applications, simply minimizing the least squares criterion using a 0/1 response (output, DV) works quite well, because there is enough noise that the model never gets close to 0 or 1 (the sigmoid function around 0.5 is well approximated by a straight line). Plus, in terms of things like ROC performance, that doesn't matter, since all you care about is the ranks of the predictions.
For the misclassification error, that loss function is not differentiable and not convex (also called 0/1 loss), so it's very hard to minimize effectively. Hence, both support vector machines and logistic regression minimize two convex proxy loss functions, the hinge loss and the logistic loss, respectively, which can be seen as approximations to the 0/1 loss (convex relaxations).
|
Logistic regression: maximum likelihood vs misclassification
In a word, yes, but it wouldn't be logistic regression anymore.
The logistic regression loss function (i.e., the negative log likelihood), is essentially a regression in the log odds. Changing that to
|
39,474
|
Logistic regression: maximum likelihood vs misclassification
|
I believe you can reduce the probability of misclassification by introducing a class of random effects mixing distributions. Using this approach, you can develop a full-likelihood model, which includes effects of misclassification [Ref 1, 2, 3].
However, there are two major challenges in applying this modelling approach:
Relaxation of the assumption on the mixing distribution of the nuisance parameters
Determining the efficiency and reliability of the model's validations
Finally, you might like to look at the semi-parametric Maximum Likelihood algorithms, such as [Ref 4].
Rice, K. M. (2004). Equivalence between conditional and mixture approaches to the Rasch model and matched case-control studies, with applications. Journal of the American Statistical Association, 99(466), 510-522.
Roeder, K., Carroll, R. J., & Lindsay, B. G. (1996). A semiparametric mixture approach to case-control studies with errors in covariables. Journal of the American Statistical Association, 91(434), 722-732.
Rice, K. (2003). Full‐likelihood approaches to misclassification of a binary exposure in matched case‐control studies. Statistics in medicine, 22(20), 3177-3194.
Schafer, D. W. (2001). Semiparametric maximum likelihood for measurement error model regression. Biometrics, 57(1), 53-61.
|
Logistic regression: maximum likelihood vs misclassification
|
I believe you can reduce the probability of misclassification by introducing a class of random effects mixing distributions. Using this approach, you can develop a full-likelihood model, which include
|
Logistic regression: maximum likelihood vs misclassification
I believe you can reduce the probability of misclassification by introducing a class of random effects mixing distributions. Using this approach, you can develop a full-likelihood model, which includes effects of misclassification [Ref 1, 2, 3].
However, there are two major challenges in applying this modelling approach:
Relaxation of the assumption on the mixing distribution of the nuisance parameters
Determining the efficiency and reliability of the model's validations
Finally, you might like to look at the semi-parametric Maximum Likelihood algorithms, such as [Ref 4].
Rice, K. M. (2004). Equivalence between conditional and mixture approaches to the Rasch model and matched case-control studies, with applications. Journal of the American Statistical Association, 99(466), 510-522.
Roeder, K., Carroll, R. J., & Lindsay, B. G. (1996). A semiparametric mixture approach to case-control studies with errors in covariables. Journal of the American Statistical Association, 91(434), 722-732.
Rice, K. (2003). Full‐likelihood approaches to misclassification of a binary exposure in matched case‐control studies. Statistics in medicine, 22(20), 3177-3194.
Schafer, D. W. (2001). Semiparametric maximum likelihood for measurement error model regression. Biometrics, 57(1), 53-61.
|
Logistic regression: maximum likelihood vs misclassification
I believe you can reduce the probability of misclassification by introducing a class of random effects mixing distributions. Using this approach, you can develop a full-likelihood model, which include
|
39,475
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
|
Since we are calculating the joint distribution, we'll assume that our initial sample is $x = (D=0,I=0,G=0,L=0,S=0)$.
To calculate the next sample, we'll need to sample each variable from its conditional distribution.
$P(D\mid G,I,S,L)$, from the conditional independencies in the Bayes net, simplifies to just sampling $P(D)$. We sample and get the value $D=1$.
Similarly for $I$, we sample and get the value $I=1$.
Sampling for $P(G\mid D,I,S,L)$, due to the conditional independencies encoded by the Bayes net, simplifies to $P(G\mid D,I)$. Since we have already sampled $D=1,I=1$, we use those values and sample $P(G\mid D=1,I=1)$. In the CPD for Grade, we choose one of the values from the last row (where $D=1,I=1$). We sample and get the value $G=2$ (the value 0.3).
$P(L\mid I,G,D,S)$ simplifies to $P(L\mid G)$. We sample from the second row of the Letter CPD, where $G=2$, and get $L=1$ (the value 0.6).
Similarly, sample $P(S\mid I,L,G,D)$ by simplifying to $P(S\mid I)$. We get $S=1$ (sampling from the second row of the CPD, where $I=1$).
And we'll have a new sample $x' = (D=1,I=1,G=2,L=1,S=1)$.
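The sweep can be sketched in code. Note the CPD tables below are placeholders: only the 0.3 entry (for $G=2$ given $D=1,I=1$) and the 0.6 entry (for $L=1$ given $G=2$) come from the example above; every other number is made up for illustration.

```python
import random

random.seed(0)

def draw(probs):
    """Sample an index from a discrete distribution given as a list of probabilities."""
    u, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u < acc:
            return i
    return len(probs) - 1

# hypothetical CPDs for the student network (D, I, L, S binary; G in {0,1,2})
P_D = [0.4, 0.6]
P_I = [0.3, 0.7]
P_G_given_DI = {(1, 1): [0.3, 0.4, 0.3]}   # only the D=1, I=1 row is needed here
P_L_given_G  = {2: [0.4, 0.6]}             # only the G=2 row is needed here
P_S_given_I  = {1: [0.2, 0.8]}             # only the I=1 row is needed here

# one Gibbs sweep from the initial sample (0,0,0,0,0), exploiting the
# conditional independencies so each draw only touches a local CPD
D = draw(P_D)
I = draw(P_I)
G = draw(P_G_given_DI.get((D, I), [1/3, 1/3, 1/3]))
L = draw(P_L_given_G.get(G, [0.5, 0.5]))
S = draw(P_S_given_I.get(I, [0.5, 0.5]))
print((D, I, G, L, S))
```

Each variable is drawn conditional on the most recently sampled values of the others, exactly as in the walkthrough above.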
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
|
Since we are calculating the joint distribution, we'll assume that our initial sample is $x = (D=0,I=0,G=0,L=0,S=0)$.
To calculate the next sample, we'll need to sample each variable from the cond
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
Since we are calculating the joint distribution, we'll assume that our initial sample is $x = (D=0,I=0,G=0,L=0,S=0)$.
To calculate the next sample, we'll need to sample each variable from its conditional distribution.
$P(D\mid G,I,S,L)$, from the conditional independencies in the Bayes net, simplifies to just sampling $P(D)$. We sample and get the value $D=1$.
Similarly for $I$, we sample and get the value $I=1$.
Sampling for $P(G\mid D,I,S,L)$, due to the conditional independencies encoded by the Bayes net, simplifies to $P(G\mid D,I)$. Since we have already sampled $D=1,I=1$, we use those values and sample $P(G\mid D=1,I=1)$. In the CPD for Grade, we choose one of the values from the last row (where $D=1,I=1$). We sample and get the value $G=2$ (the value 0.3).
$P(L\mid I,G,D,S)$ simplifies to $P(L\mid G)$. We sample from the second row of the Letter CPD, where $G=2$, and get $L=1$ (the value 0.6).
Similarly, sample $P(S\mid I,L,G,D)$ by simplifying to $P(S\mid I)$. We get $S=1$ (sampling from the second row of the CPD, where $I=1$).
And we'll have a new sample $x' = (D=1,I=1,G=2,L=1,S=1)$.
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
Since we are calculating the joint distribution, we'll assume that our initial sample is $x = (D=0,I=0,G=0,L=0,S=0)$.
To calculate the next sample, we'll need to sample each variable from the cond
|
39,476
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
|
If the conditional distribution cannot be generated directly from standard random generators, you can apply a Metropolis-Hastings scheme within the Gibbs sampler.
draw one sample, $D_t$, from $P(D|G_{t-1},I_{t-1},S_{t-1},L_{t-1})$ using Metropolis sampler with initial value $D_{t-1}$
draw one sample, $G_t$, from $P(G|D_{t},I_{t-1},S_{t-1},L_{t-1})$ using Metropolis sampler with initial value $G_{t-1}$
draw one sample, $I_t$, from $P(I|D_{t},G_{t},S_{t-1},L_{t-1})$ using Metropolis sampler with initial value $I_{t-1}$
...
reference: Robert & Casella, Introducing Monte Carlo Methods with R, chapter 7
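A generic sketch of this scheme in Python, using a random-walk Metropolis update for each coordinate; the standard bivariate normal target here is just a stand-in for the actual conditionals of the model:

```python
import math
import random

random.seed(1)

def log_target(x, y):
    # unnormalized log-density of two independent standard normals (placeholder)
    return -0.5 * (x * x + y * y)

def metropolis_step(current, log_cond, step=1.0):
    """One random-walk Metropolis update for a single coordinate."""
    proposal = current + random.gauss(0.0, step)
    accept_prob = math.exp(min(0.0, log_cond(proposal) - log_cond(current)))
    return proposal if random.random() < accept_prob else current

x, y = 0.0, 0.0
xs = []
for _ in range(5000):
    x = metropolis_step(x, lambda v: log_target(v, y))  # draw x | y
    y = metropolis_step(y, lambda v: log_target(x, v))  # draw y | x
    xs.append(x)
print(sum(xs) / len(xs))  # should be near 0, the target's mean
```

Each Metropolis step is initialized at the coordinate's previous value, exactly as in the update scheme listed above.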
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
|
If the conditional distribution cannot be generated directly from standard random generators, you can apply a Metropolis-Hastings scheme within the Gibbs sampler.
draw one sample, $D_t$, from $P(D|G
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
If the conditional distribution cannot be generated directly from standard random generators, you can apply a Metropolis-Hastings scheme within the Gibbs sampler.
draw one sample, $D_t$, from $P(D|G_{t-1},I_{t-1},S_{t-1},L_{t-1})$ using Metropolis sampler with initial value $D_{t-1}$
draw one sample, $G_t$, from $P(G|D_{t},I_{t-1},S_{t-1},L_{t-1})$ using Metropolis sampler with initial value $G_{t-1}$
draw one sample, $I_t$, from $P(I|D_{t},G_{t},S_{t-1},L_{t-1})$ using Metropolis sampler with initial value $I_{t-1}$
...
reference: Robert & Casella, Introducing Monte Carlo Methods with R, chapter 7
|
Gibbs sampling how to sample from the conditional probability? Bayesian model
If the conditional distribution cannot be generated directly from standard random generators, you can apply a Metropolis-Hastings scheme within the Gibbs sampler.
draw one sample, $D_t$, from $P(D|G
|
39,477
|
Comparing estimators of location of the Cauchy distribution
|
Cauchy distributions have an undefined mean and infinite variance. Because of this fact, the law of large numbers and the central limit theorem do not apply.
This demonstration is designed to give some intuition about what happens as you add additional observations to your sample when those samples are Cauchy. Eventually, you draw a value from the distribution which is so large relative to the other values that it "washes out" the effect of reverting to the mean.
Increased sample size will not make the mean "tend toward" the true location of the Cauchy distribution. For a demonstration, write a program to compute a large number $n$ of Cauchy deviates. The running mean of the first $i$ deviates, for $i = 1, \dots, n$, will oscillate wildly between very small and very large values. You can see this easily in a plot of those running means versus the number of deviates used to compute them.
x <- rcauchy(1000)
y <- NULL
for(i in 1:length(x)){
y[i] <- mean(x[1:i])
}
plot(1:length(x),y, type="l")
Now add another 1000 observations to x and see what happens.
x <- c(x, rcauchy(1000))
y <- NULL
for(i in 1:length(x)){
y[i] <- mean(x[1:i])
}
plot(1:length(x),y, type="l")
The running mean still doesn't appear to be returning to $0$ very quickly... it almost seems as if, whenever it gets close, a very, very large deviate is drawn, so the mean "jumps" away from the location of the distribution.
I also suggest reading Why does the Cauchy distribution have no mean?
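The same experiment in Python, also recording the median, which (unlike the mean) is a consistent estimator of the Cauchy location; the deviates are generated with the inverse-CDF trick and the seed is arbitrary:

```python
import math
import random
from statistics import median

random.seed(42)

# standard Cauchy via the inverse CDF: tan(pi * (U - 1/2)), U ~ Uniform(0, 1)
x = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(10_000)]

print(sum(x) / len(x))  # the mean does not settle near the location 0
print(median(x))        # the sample median does concentrate near 0
```

The contrast between the two printed values is the whole story: the median keeps shrinking toward the true location as the sample grows, while the mean stays at the mercy of the next extreme deviate.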
|
Comparing estimators of location of the Cauchy distribution
|
Cauchy distributions have an undefined mean and infinite variance. Because of this fact, the law of large numbers and the central limit theorem do not apply.
This demonstration is designed to give some in
|
Comparing estimators of location of the Cauchy distribution
Cauchy distributions have an undefined mean and infinite variance. Because of this fact, the law of large numbers and the central limit theorem do not apply.
This demonstration is designed to give some intuition about what happens as you add additional observations to your sample when those samples are Cauchy. Eventually, you draw a value from the distribution which is so large relative to the other values that it "washes out" the effect of reverting to the mean.
Increased sample size will not make the mean "tend toward" the true location of the Cauchy distribution. For a demonstration, write a program to compute a large number $n$ of Cauchy deviates. The running mean of the first $i$ deviates, for $i = 1, \dots, n$, will oscillate wildly between very small and very large values. You can see this easily in a plot of those running means versus the number of deviates used to compute them.
x <- rcauchy(1000)
y <- NULL
for(i in 1:length(x)){
y[i] <- mean(x[1:i])
}
plot(1:length(x),y, type="l")
Now add another 1000 observations to x and see what happens.
x <- c(x, rcauchy(1000))
y <- NULL
for(i in 1:length(x)){
y[i] <- mean(x[1:i])
}
plot(1:length(x),y, type="l")
The running mean still doesn't appear to be returning to $0$ very quickly... it almost seems as if, whenever it gets close, a very, very large deviate is drawn, so the mean "jumps" away from the location of the distribution.
I also suggest reading Why does the Cauchy distribution have no mean?
|
Comparing estimators of location of the Cauchy distribution
Cauchy distributions have an undefined mean and infinite variance. Because of this fact, the law of large numbers and the central limit theorem do not apply.
This demonstration is designed to give some in
|
39,478
|
Comparing estimators of location of the Cauchy distribution
|
You can also estimate the location of the Cauchy using heavy-tail(s) Lambert W x F distributions (Disclaimer: I am the author.) since both are symmetric around $c$ (location of Cauchy) and $\mu_x$ (mean of the input X ~ F), respectively. In fact, I give an example of estimating the location of a Cauchy in the paper and compare the cumulative sample average estimates as suggested by user777.
For F being the Normal distribution and $\alpha = 1$, the transformed random variable $Y = func(X, \delta)$ reduces to Tukey's h distribution. For $\delta = 0$ they are the Normal distribution; for $\delta > 0$ they have heavier tails. The nice property of Lambert W x F distributions is that you can also go back from non-normal to Normal again; i.e., you can estimate parameters and Gaussianize() your data.
In R you can simulate, estimate, plot, etc. several Lambert W x F distributions with the LambertW package.
library(LambertW)
library(MASS) # for fitdistr()
LogLikCauchy <- function(loc, x.sample) {
# sum(dcauchy(x.sample, location = loc, scale = 1, log = TRUE))
nn <- length(x.sample)
return(- nn * log(pi) - sum((log(1 + (x.sample - loc)^2))))
}
DerivLogLikCauchy <- function(loc, x.sample) {
return(sum(1 / (1 + (x.sample - loc)^2) * 2 * (x.sample - loc)))
}
LocationEstimators <- function(x.sample) {
nn <- length(x.sample)
out <-
c(mean = mean(x.sample),
median = median(x.sample),
mle.cauchy = suppressWarnings(fitdistr(x.sample,
"cauchy")$est["location"]))
out <- c(out,
median.loglik = out["median"] +
DerivLogLikCauchy(out["median"], x.sample) / (nn / 2))
# Lambert W x Gaussian estimates for heavy tails ('h')
igmm.tau <- LambertW::IGMM(x.sample, "h")$tau
beta.hat <- igmm.tau[1:2]
names(beta.hat) <- c("mu", "sigma")
mle.lambertw <- LambertW::MLE_LambertW(x.sample, distname = "normal",
theta.init = LambertW::tau2theta(igmm.tau,
beta = beta.hat),
type = "h",
return.estimate.only = TRUE)
out <- c(out, igmm.tau["mu_x"], mle.lambertw["mu"])
names(out)[3:6] <-
c("median.loglik", "mle.cauchy", "igmm.LambertW", "mle.LambertW")
return(out)
}
Now let's look at simulations
# simulate and look at bias, std dev, and MSE
nsim <- 1000
num.samples <- 100
set.seed(nsim)
est <- t(replicate(nsim,
LocationEstimators(rcauchy(num.samples))))
colMeans(est)
## mean median median.loglik mle.cauchy igmm.LambertW
## -0.43373 0.00255 0.00306 0.00237 -0.00326
## mle.LambertW
## 0.00321
apply(est, 2, sd)
## mean median median.loglik mle.cauchy igmm.LambertW
## 29.183 0.156 0.145 0.144 0.221
## mle.LambertW
## 0.146
# RMSE (since true location = 0)
sqrt(colMeans(est^2))
## mean median median.loglik mle.cauchy igmm.LambertW
## 29.171 0.156 0.145 0.144 0.221
## mle.LambertW
## 0.146
As we knew beforehand, the mean is a bad estimator, so we'll remove it from the plots.
library(ggplot2)
library(reshape2)
theme_set(theme_bw(18))
est.m <- melt(est)
colnames(est.m) <- c("sim.id", "estimator", "value")
# remove 'mean' for good scaling in plots
est.m <- subset(est.m, estimator != 'mean')
ggplot(est.m,
aes(estimator, value, fill = estimator)) +
geom_violin() +
geom_hline(yintercept = 0, size = 1, linetype = "dashed",
colour = "blue") +
theme(legend.position = "none",
axis.text.x = element_text(angle = 90))
They all seem pretty close to each other (with median and IGMM being slightly worse).
|
39,479
|
Reporting from a likelihood ratio test
|
For a likelihood ratio test, the degrees of freedom are equal to the difference in number of parameters for the two models. In this case, df = 1, and so $\chi^2(1)=11.96$, $p=0.0005$.
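As a quick sanity check on the reported value, the p-value can be recovered directly from the chi-square statistic in R (statistic and df taken from the answer above):

```r
# Upper-tail chi-square probability for the reported statistic
pchisq(11.96, df = 1, lower.tail = FALSE)
# about 0.00054, which rounds to the reported p = 0.0005
```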
|
39,480
|
Invariance property of maximum likelihood estimator?
|
You seem to be confusing what it means for a parameter transformation to occur. In general, the values of the likelihood functions do not change. To illustrate, let $L(\theta; x)$ be a likelihood function and let $\lambda = g(\theta)$ where $g$ is one-to-one. Then the likelihood function parameterized in terms of $\lambda$ is
$$L^*(\lambda; x) = L(g^{-1}(\lambda) ;x) = L(\theta;x)$$
Some trouble occurs when we want to use a function that isn't one-to-one for $g$. In that case we define the likelihood function parameterized in terms of $\lambda$ through the use of profile likelihood as:
$$L^*(\lambda; x) = \sup_{\theta: g(\theta) = \lambda} L(\theta; x)$$
Using these definitions in your example, if $L(\theta_5;x) = 0.4$ then $L^*(0;x) = 0.4$ as well and the actual likelihood values do not change. We also see that $g(\theta_5) = 0$ so there does not appear to be any contradiction. I'll leave the general proof that the MLE is invariant to any parameter transformations up to the interested reader.
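A small numerical sketch of the invariance property in R (the exponential example here is mine, not from the original question): the MLE of an exponential rate is $1/\bar{x}$, and by invariance the MLE of the mean $g(\lambda)=1/\lambda$ is just $\bar{x}$:

```r
# MLE invariance sketch: exponential sample with true rate lambda = 2
set.seed(1)
x <- rexp(1000, rate = 2)
lambda.hat <- 1 / mean(x)     # MLE of the rate lambda
g.hat <- 1 / lambda.hat       # MLE of g(lambda) = 1/lambda, by invariance
all.equal(g.hat, mean(x))     # TRUE: maximizing over lambda or over g(lambda) agrees
```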
|
39,481
|
Bayesian regularized NNs over classical NNs
|
The key problem with neural nets tends to be preventing over-fitting. Bayesian regularisation (which restricts the magnitude of the weights) is one approach to this; structural stabilisation (i.e. restricting the number of hidden nodes and/or weights) is another. Neither approach is a panacea, and generally a combination of regularisation and structural stabilisation is better (which means you need cross-validation again to select the network architecture - using the Bayesian evidence for this is a bad idea, as the evidence is biased as a result of its use in tuning the regularisation parameters, and unreliable if there is any model mis-specification). Which works best is essentially problem dependent, and the best way to find out is to try both and see (use e.g. cross-validation to estimate performance in an unbiased manner).
Also regularisation doesn't have to be Bayesian; you can choose how much to regularise the network using cross-validation instead. One of the problems with Bayesian methods is that they can give bad results if the model is mis-specified, in which case cross-validation based regularisation methods may be more robust.
Another important point is that not all Bayesian neural network formulations are the same. The Evidence framework of MacKay tends not to work too well for classification problems, as the Laplace approximation that it uses doesn't work very well for skewed posterior distributions of the weights. The MCMC approach of Radford Neal is likely to work better for these tasks, but is computationally expensive, and assessing convergence etc. is not as straightforward.
However, neural network models are rather fiddly to get right and in practice it is easier to get good generalisation performance from kernel methods or Gaussian processes, so I would use them instead for most tasks, especially if there is relatively little training data.
I did a very extensive empirical study on this recently, but I need to find a journal that will accept empirical studies of interest to practitioners, but with very little new research content.
|
39,482
|
Bayesian regularized NNs over classical NNs
|
You use BRANNs for the same purposes as regular ANNs, typically classification and regression. As Dikran Marsupial says, they are better because they are more robust against overfitting, and allow you to work with a higher number of neurons without running into overfitting. Besides, they provide you with error bars on the outputs; that is, you have a measure of the confidence of each output.
Nevertheless, newer techniques like dropout and maxout seem to have overridden this technique, both because they are easier to use and because they yield better results. Here dropout is shown to perform scaling and regularization in a certain sense.
Still, if you are interested on the details, you may check the papers by David MacKay (the guy who won some competitions with this technique).
|
39,483
|
Are two Pearson correlation coefficients different?
|
Just in case that someone (else) has to perform a comparison of correlation coefficients on multiple pairs of variables, here's a ready-to-copy function based on rg255's helpful reply:
cor.diff.test = function(x1, x2, y1, y2, method="pearson") {
cor1 = cor.test(x1, x2, method=method)
cor2 = cor.test(y1, y2, method=method)
r1 = cor1$estimate
r2 = cor2$estimate
n1 = sum(complete.cases(x1, x2))
n2 = sum(complete.cases(y1, y2))
fisher = ((0.5*log((1+r1)/(1-r1)))-(0.5*log((1+r2)/(1-r2))))/((1/(n1-3))+(1/(n2-3)))^0.5
p.value = (2*(1-pnorm(abs(fisher))))
result= list(
"cor1" = list(
"estimate" = as.numeric(cor1$estimate),
"p.value" = cor1$p.value,
"n" = n1
),
"cor2" = list(
"estimate" = as.numeric(cor2$estimate),
"p.value" = cor2$p.value,
"n" = n2
),
"p.value.twosided" = as.numeric(p.value),
"p.value.onesided" = as.numeric(p.value) / 2
)
cat(paste(sep="",
"cor1: r=", format(result$cor1$estimate, digits=3), ", p=", format(result$cor1$p.value, digits=3), ", n=", result$cor1$n, "\n",
"cor2: r=", format(result$cor2$estimate, digits=3), ", p=", format(result$cor2$p.value, digits=3), ", n=", result$cor2$n, "\n",
"diffence: p(one-sided)=", format(result$p.value.onesided, digits=3), ", p(two-sided)=", format(result$p.value.twosided, digits=3), "\n"
))
return(result);
}
|
39,484
|
Are two Pearson correlation coefficients different?
|
Once the Fisher's z transformations are done it is just a case of obtaining p-values
# Correlations
cor.test (df1$a, df1$b, method = "p")
cor.test (df2$a, df2$b, method = "p")
# function to do fisher transformations
fisher.z<- function (r1,r2,n1,n2) ((0.5*log((1+r1)/(1-r1)))-(0.5*log((1+r2)/(1-r2))))/((1/(n1-3))+(1/(n2-3)))^0.5
# or this (either version will suffice)
fisher.z<- function (r1,r2,n1,n2) (atanh(r1) - atanh(r2)) / ((1/(n1-3))+(1/(n2-3)))^0.5
#input n and r from correlations manually (two tailed test)
2*(1-pnorm(abs(fisher.z(r1= ,r2= ,n1= ,n2= ))))
See the final four slides of this presentation and the pnorm() function in R.
|
39,485
|
Are two Pearson correlation coefficients different?
|
In case people are still looking for an easy way to compare two Pearson correlation coefficients $r$:
There is a function called paired.r in the package psych for R exactly for that.
Usage:
paired.r(xy, xz, yz=NULL, n, n2=NULL,twotailed=TRUE)
Or as a simple example: For r1 and r2 and a sample size of n just do:
paired.r(r1,r2,n=n)
|
39,486
|
How to check the features which are selected by LASSO
|
Use the coef function on the glmnet model.
You will need to choose a lambda value, as different lambdas will give you different feature sets. Typically this is done through cross-validation.
/edit: For example, using cv.glmnet:
library(glmnet)
x <- model.matrix(Sepal.Length~., iris)[,-1]
y <- iris$Sepal.Length
mod <- cv.glmnet(as.matrix(x), y, alpha=1)
To see the coefficients with the minimum cross-validation error:
as.matrix(coef(mod, mod$lambda.min))
1
(Intercept) 2.1670759
Sepal.Width 0.5032347
Petal.Length 0.8137398
Petal.Width -0.3127065
Speciesversicolor -0.6763395
Speciesvirginica -0.9595409
To see the coefficients with the "largest value of lambda such that error is within 1 standard error of the minimum:"
as.matrix(coef(mod, mod$lambda.1se))
1
(Intercept) 2.14705035
Sepal.Width 0.59950383
Petal.Length 0.57550203
Petal.Width -0.23632776
Speciesversicolor 0.00000000
Speciesvirginica -0.04770282
You can also select any other value of lambda that you want. Coefficients that are 0 have been dropped out of the model. e.g.:
CF <- as.matrix(coef(mod, mod$lambda.1se))
CF[CF!=0,]
(Intercept) Sepal.Width Petal.Length Petal.Width Speciesvirginica
2.14705035 0.59950383 0.57550203 -0.23632776 -0.04770282
If we use the 1se lambda, the Speciesversicolor dummy variable gets dropped from the model.
|
39,487
|
Covariance Matrices
|
Look at the corrplot package for R. It has several options for visualizing correlation matrices.
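A minimal sketch of corrplot in use (assuming the package is installed; mtcars is just a convenient built-in data set):

```r
library(corrplot)
M <- cor(mtcars)                # correlation matrix to visualize
corrplot(M, method = "circle")  # circles sized/coloured by correlation
corrplot(M, method = "number")  # or display the coefficients directly
```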
|
39,488
|
Covariance Matrices
|
I'm not sure about any R packages that do this, but my favourite approach (and I feel a much more informative one) to visualizing covariance matrices is using correlation networks. Basically, I trim the covariance matrix using the Glasso algorithm (I'll explain why later), then use a force-directed algorithm to produce a network for the correlations. The sizes of the nodes are the variances. Ex:
Remember, this is a network, so the axes mean nothing; it's all about neighbourhoods and distances. The red lines are high correlations, and the blue lines are negative correlations. We can spot clusters of variables, and second- and third-order correlations, better than with a matrix approach.
I trim the covariance matrix using a penalty term (see the Glasso algorithm) for two reasons: to reduce estimation variance, and to reduce the number of lines, which improves the visualization.
FYI, the above was generated using this Python script.
|
39,489
|
Covariance Matrices
|
If you're using shiny and have the ability to make interactive tools, then just develop the covariance plot as a ggplot object so that you can add hover, dblclick, and brush tools for zooming and showing pop-up information.
Here's an intro to creating different correlation like plots in R with ggplot2: GGPlot2 CorrMatrix. And here's an intro to adding a pop-up with hover tools using ggplot objects in Shiny: Shiny Plot Hover Tool.
Alternatively, you may find that Plotly has some nice features for graphing the info you want, along with easy-to-incorporate interactive tools. For example: Plotly Heatmaps.
If you have questions on the ggplot method, feel free to message me. I've actually just recently developed a similar app with hover and zoom tools.
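As a minimal sketch of the ggplot2 route (my own example, assuming reshape2 is available): build the correlation heatmap as a ggplot object, which can then be handed to Shiny's hover/zoom tools:

```r
library(ggplot2)
library(reshape2)
cm <- melt(cor(mtcars))  # long format: Var1, Var2, value
ggplot(cm, aes(Var1, Var2, fill = value)) +
  geom_tile() +
  scale_fill_gradient2(low = "blue", mid = "white", high = "red",
                       limits = c(-1, 1))
```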
|
39,490
|
Can I probe cross-level interactions without random slope in a hierarchical linear model?
|
Having random slopes at level 1 is not a necessary condition for examining cross-level interactions. All that is necessary is that you have 2 predictors that vary at different levels, and their interaction.
EDIT: I looked over the Hofmann paper posted in the comments and I think I see the source of confusion here.
Hofmann describes a situation in which one is building a model by starting with the simplest "empty" random-intercept model, and then working up term-by-term to the full HLM, where the very last term added is the predictor representing the cross-level interaction. Under such an approach, it is true that in the model prior to the cross-level interaction model (i.e., the model that is identical except that the cross-level interaction term is omitted), there must be variation in the level-1 slopes in order for there to be moderation of these slopes by a level-2 predictor. Intuitively, if every group has the same exact level-1 slope, then it is not possible for us to predict variation in these slopes from another predictor in the dataset, because there is no such variation to predict.
Notice that this is not a statement about the cross-level interaction model itself, but rather a statement about a different model which omits the cross-level interaction term. In the cross-level interaction model itself, it is entirely possible for there to be no variation in the level-1 slopes. This would essentially mean that all of the seemingly random variation in the level-1 slopes that we observed in the previous model can be accounted for by adding the cross-level interaction term to the model.
I illustrate just such a situation below with some simulated data in R, where we have a cross-level interaction between x varying at level 1, and z varying at level 2:
# generate data -----------------------------------------------------------
set.seed(12345)
dat <- merge(data.frame(group=rep(1:30,each=30),
x=runif(900, min=-.5, max=.5),
error=rnorm(900)),
data.frame(group=1:30,
z=runif(30, min=-.5, max=.5),
randInt=rnorm(30)))
dat <- within(dat, y <- randInt + 5*x*z + error)
# model with the x:z interaction ------------------------------------------
library(lme4)
mod1 <- lmer(y ~ x*z + (1|group) + (0+x|group), data=dat)
mod1
# Linear mixed model fit by REML
# Formula: y ~ x * z + (1 | group) + (0 + x | group)
# Data: dat
# AIC BIC logLik deviance REMLdev
# 2658 2692 -1322 2640 2644
# Random effects:
# Groups Name Variance Std.Dev.
# group (Intercept) 8.5326e-01 9.2372e-01
# group x 5.4449e-20 2.3334e-10
# Residual 9.9055e-01 9.9526e-01
# Number of obs: 900, groups: group, 30
#
# Fixed effects:
# Estimate Std. Error t value
# (Intercept) -0.13311 0.17283 -0.770
# x 0.09808 0.11902 0.824
# z -0.24705 0.51424 -0.480
# x:z 5.39969 0.35257 15.315
#
# Correlation of Fixed Effects:
# (Intr) x z
# x -0.010
# z 0.103 0.008
# x:z 0.007 0.137 -0.005
# model without the x:z interaction ---------------------------------------
mod2 <- lmer(y ~ x + z + (1|group) + (0+x|group), data=dat)
mod2
# Linear mixed model fit by REML
# Formula: y ~ x + z + (1 | group) + (0 + x | group)
# Data: dat
# AIC BIC logLik deviance REMLdev
# 2726 2755 -1357 2713 2714
# Random effects:
# Groups Name Variance Std.Dev.
# group (Intercept) 0.85503 0.92468
# group x 3.46811 1.86229
# Residual 0.99607 0.99803
# Number of obs: 900, groups: group, 30
#
# Fixed effects:
# Estimate Std. Error t value
# (Intercept) -0.14148 0.17312 -0.817
# x -0.05178 0.36056 -0.144
# z -0.26570 0.51509 -0.516
#
# Correlation of Fixed Effects:
# (Intr) x
# x -0.004
# z 0.103 0.002
|
39,491
|
Why are these proper kernels and how to deduce that they are?
|
In a machine learning context (i.e. "kernel methods"), the key requirement for a kernel is that it must be symmetric and positive semidefinite; that is, if $K$ is a kernel matrix, then for any (column) vector $x$ of the appropriate length, $x^{T}Kx$ must be a nonnegative real number. This restriction is in place mostly due to requirements of the optimization processes that operate downstream on this matrix.
To answer your question, certain basic operations preserve positive semidefiniteness and some don't. You can use the definition above to decide whether an operation preserves that property or not. The matrix product of two positive semidefinite matrices is not always positive semidefinite (it is when the two matrices commute, because only then is the product itself symmetric). The square operation, however, does preserve it. For a positive real number $r$, $rK$ is positive semidefinite, and the sum of any two positive semidefinite matrices is also positive semidefinite.
So in your examples, $K_3$ and $K_4$ may or may not be proper kernel matrices (to use your terminology) and $K_5$ is definitely a proper kernel matrix.
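These closure properties are easy to check numerically; a small sketch (a symmetric matrix is PSD exactly when its eigenvalues are all nonnegative):

```r
set.seed(1)
# Gram matrices X'X are symmetric positive semidefinite by construction
K1 <- crossprod(matrix(rnorm(50), 10, 5))
K2 <- crossprod(matrix(rnorm(50), 10, 5))

is_psd <- function(K, tol = 1e-8) all(eigen(K, symmetric = TRUE)$values > -tol)

is_psd(K1 + K2)  # sum of PSD matrices: TRUE
is_psd(3 * K1)   # positive scalar multiple: TRUE
is_psd(-K1)      # negation: FALSE (K1 is a nonzero PSD matrix)
```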
|
39,492
|
Why are these proper kernels and how to deduce that they are?
|
One easy way is to think of a kernel function as a positive-semidefinite (PSD) matrix. Then you can use PSD tricks.
For example, if $K$ is PSD, then $-K$ is NSD and thus not PSD. On the other hand, if $K$ is PSD, $c K$ is PSD if $c$ is a scalar and $c > 0$. Thus, $K_4$ and $K_3$ are not generally kernels and $K_5$ is.
|
39,493
|
Difference between a 2 factor ANOVA and mixed effects model
|
I'm absolutely not a specialist, but this is my contribution:
In your ANOVA model, you treated both 'recipe' and 'temperature' as fixed factors, which can be thought of in terms of differences.
In your linear mixed model, you treated 'temperature' as a random factor: its levels are treated as draws from a normal population whose variance is estimated from the data. Accordingly, the corresponding output is now an estimate of this variance (the line labeled 'temperature' in the Random effects section). And you can notice that the output for 'recipe' is indeed an estimate of mean differences (the lines labeled recipeB and recipeC in the Fixed effects section).
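Assuming the cake-baking data shipped with lme4 is the kind of example in play, the two treatments of 'temperature' can be sketched as:

```r
library(lme4)
data(cake)  # lme4's built-in cake-baking experiment

# Two-factor fixed-effects model: 'recipe' and 'temperature' differences
fixed_mod <- lm(angle ~ recipe + temperature, data = cake)

# Mixed model: 'recipe' fixed, 'temperature' treated as a random factor,
# so its effect is summarized by a single variance estimate
mixed_mod <- lmer(angle ~ recipe + (1 | temperature), data = cake)

summary(mixed_mod)  # 'temperature' appears under Random effects as a variance
```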
|
39,494
|
Difference between a 2 factor ANOVA and mixed effects model
|
Very briefly: In a two factor ANOVA (or, more generally, in a model that can be analyzed with lm in R) variables are controlled for. That is, it asks "Holding other independent variables constant, what is the linear relationship of each independent variable with the dependent variable?" Such models have a number of assumptions, key here is that they assume that the errors (as estimated by the residuals) are independent. Often, this is reasonable; also, often, it is not. In the cake data set it is not, because each recipe is tested multiple times, and surely the errors from the model will be more similar within each recipe than across recipes.
Mixed models relax this assumption.
|
39,495
|
What is the difference between standardized and unstandardized estimates in SEM (thinking of AMOS in particular)?
|
I don't think the currently accepted answer is correct. What it describes is identification, not standardization. The unstandardized coefficients are what come directly out of the estimation procedure. The standardized coefficients recast the regression coefficients and covariances in the metric of correlations, and the unique variances in terms of $1-R^2$. So if you have a confirmatory factor analysis model
$$
y_j = \alpha_j + \lambda_j \xi + \delta_j
$$
with ${\rm E}\delta_j=0$, ${\rm E}\delta_j^2 = \psi_j$, ${\rm E}\xi=0$, ${\rm E}\xi^2=\phi$ in the standard SEM notation, then we have ${\rm Var}[y_j] = \lambda_j^2 \phi + \psi_j$, and the standardized coefficients are: $\tilde\alpha_j-$irrelevant;
$$
\tilde\lambda_j = {\rm Corr}(y_j,\xi) = \lambda_j {\rm Var}^{1/2}[\xi] {\rm Var}^{-1/2}[y_j]=\frac{\lambda_j \phi^{1/2}}{\sqrt{\lambda_j^2 \phi + \psi_j}}
$$
$$
\tilde \psi_j = \frac{\psi_j}{\lambda_j^2 \phi + \psi_j}
$$
Unlike the "raw" estimates, standardized estimates and standard errors do not depend on the particular parameterization and the choice of the identifying scale parameter (i.e., whether the model is identified by setting $\phi=1$ or one of $\lambda_j=1$ -- the standardized solution is then independent of the choice of the particular scaling variable, provided $\lambda_j\neq0$ in the population).
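As an illustration with the lavaan package (a hypothetical example, not the OP's AMOS setup): the raw estimates change with the identification choice, while the standardized solution does not.

```r
library(lavaan)

# One-factor CFA; by default lavaan identifies the model by fixing the
# first loading to 1 (the "raw" parameterization discussed above)
fit <- cfa('visual =~ x1 + x2 + x3', data = HolzingerSwineford1939)

# Raw estimates depend on which loading was fixed for identification...
parameterEstimates(fit)

# ...while the standardized solution (loadings as correlations with the
# factor, residual variances as 1 - R^2) does not
standardizedSolution(fit)
```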
|
39,496
|
What is the difference between standardized and unstandardized estimates in SEM (thinking of AMOS in particular)?
|
Two weeks later, I see that no one has answered the question. However extensive Google searching revealed the following.
Assume multivariate data where the $p$ variables supposedly indicate the presence of one or more underlying factors. A simple measurement model involves relationships of the following type:
$$x_i=\lambda_i \xi + \epsilon_i$$
for common factor $\xi$ and uniqueness factor (i.e. noise) $\epsilon_i$. Several such relationships may exist, depending on the number of common factors. None of the "Greeks" are observable, but must be inferred from the data. The problem as stated does not have a unique solution: if the loading $\lambda_i$ and factor $\xi$ are a solution, then so are the scaled quantities $a \lambda_i$ and $\xi / a$.
For identifiability, the scale of either $\lambda_i$ or $\xi$ must be fixed -- typically to the value 1.
When a loading $\lambda_i$ is set to 1, the solutions are said to be "unstandardized". When the variance of the common factor $\xi$ is set to one, the solutions are said to be "standardized."
Note that only one $\lambda_i$ needs to be fixed amongst the loadings associated with a given factor. So if in addition to the model above, I also have:
$$x_j=\lambda_j \xi + \epsilon_j$$
for another variable $x_j$, then only one of the lambdas need be set to 1 --- or, equivalently, the variance of the common factor.
AMOS, by default, fixes one of the loading parameters, but one can request that the common factors be standardized instead.
|
39,497
|
What is the difference between standardized and unstandardized estimates in SEM (thinking of AMOS in particular)?
|
From my limited experience with Amos: in the unstandardized solution, one of the parameters is constrained to 1 so the model is identified, and the resulting factor loadings help determine which parameters should be dropped first while keeping the rest to be estimated.
In the standardized solution, we get the scale-free ("real") values of the parameters.
|
39,498
|
How to detect outliers in skewed data set?
|
Bottom line is that the decision to remove data from your dataset is a subject-matter decision, not a statistical decision. The statistics help you to identify outliers given what you believe about the dataset.
A very readable applied treatment of outliers is given in
B. Iglewicz and D. C. Hoaglin, How to Detect and Handle Outliers (Milwaukee: ASQC Press) 1993.
A more advanced and detailed treatment is given in
V. Barnett and T. Lewis, Outliers in Statistical Data (New York: John Wiley and Sons) 1994.
|
39,499
|
How to detect outliers in skewed data set?
|
Flagging outliers is not a subject-matter decision but a statistical one. Outliers have a precise, objective definition: they are observations that do not follow the pattern of the majority of the data. Such observations need to be set apart at the onset of any analysis simply because their distance from the bulk of the data ensures that they will exert a disproportionate pull on any model fitted by maximum likelihood.
Furthermore, detecting outliers is a statistical procedure with a well defined objective and whose efficacy can be measured. It is also important to point out that no matter how they are identified (whether according to an algorithm or simply through faith in someone else's wild guesses) the outlyingness of a group of suspect observations can be assessed simply by measuring their influence on a non-robust fit: outliers are by definition observations that have an abnormal leverage (or 'pull') over the coefficients obtained from an LS/ML fit. In other words, outliers are observations whose removal from the sample should severely impact the LS/ML fit. I have added more explanation of this in my answer to a related question.
In any case, the rule you cite for detecting outliers is flawed. To see why, just notice that the squared z-scores always sum to a constant, $n-1$, regardless of whether your data contains outliers or not. For the precise problem you have, I explained at length in a previous answer how adjusted boxplots can be used to identify outliers when the observations of interest are suspected to have a skewed distribution.
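The first point is easy to verify: with the usual $n-1$ denominator in the standard deviation, the squared z-scores of any sample sum to exactly $n-1$, outlier or not.

```r
z_sq_sum <- function(x) sum(scale(x)^2)  # scale() centers and divides by sd

x_clean   <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)
x_outlier <- c(1, 2, 3, 4, 5, 6, 7, 8, 9, 1000)

z_sq_sum(x_clean)    # 9, i.e. n - 1
z_sq_sum(x_outlier)  # still 9 -- the outlier cannot inflate the total
```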
As pointed out by Placidia I suspect you are not providing us with all the elements for it is indeed strange to be doing data mining on univariate datasets.
Regardless, I advise you to have a look at a modern book on outlier detection methods. I warmly recommend Maronna R. A., Martin R. D. and Yohai V. J. (2006). Robust Statistics: Theory and Methods. Wiley, New York.
|
39,500
|
How to detect outliers in skewed data set?
|
Bacon answered this question a few centuries ago in Novum Organum. To paraphrase: To do science is to search for repeated patterns. To detect anomalies is to identify values that do not follow repeated patterns. "For whoever knows the ways of Nature will more" easily notice her deviations and, on the other hand, whoever knows her deviations "will more accurately describe her ways." One learns the rules by observing when the current rules fail.
In summary, build a model for your data using both user-specified variables and variables suggested by residual diagnostic checking (in time series these would be level shifts, local time trends, seasonal pulses, changes in parameters, or changes in variance). After forming a useful model, evaluate/scrutinize the residuals for unusual patterns, perhaps activity before and after known events. In this way you can iterate toward identifying anomalous data.
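A toy version of this loop, sketched in base R (the 3-sigma cutoff is just one common convention, not Bacon's):

```r
set.seed(2)
x <- 1:50
y <- 2 * x + rnorm(50)
y[25] <- y[25] + 30  # plant an anomaly that breaks the pattern

fit <- lm(y ~ x)       # model the repeated pattern
r   <- rstandard(fit)  # standardized residuals
which(abs(r) > 3)      # flag the observation(s) that defy it: here, 25
```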
|