39,201
Covariance matrix of complex random variables
No, $a$ and $b$ from $a+bi$ are not simply the real and imaginary covariances, because there is also a cross term. Writing it out using the definition of complex covariance gives: $$cov(x_1,x_2)=E[(x_1-\mu_1)(x_2-\mu_2)^\dagger]$$ $$=\frac{1}{N}\sum_{i=1}^N (x_{1,i}-\mu_1)(x_{2,i}-\mu_2)^{\ast}$$ $$=\frac{1}{N}\sum_{i=1}^N (a_{1,i}+ib_{1,i}-\mu_1^{real}-i\mu_1^{imag})(a_{2,i}+ib_{2,i}-\mu_2^{real}-i\mu_2^{imag})^{\ast}$$ $$=\frac{1}{N}\sum_{i=1}^N (a_{1,i}+ib_{1,i}-\mu_1^{real}-i\mu_1^{imag})(a_{2,i}-ib_{2,i}-\mu_2^{real}+i\mu_2^{imag})$$ $$=\frac{1}{N}\sum_{i=1}^N \left((a_{1,i}-\mu_1^{real})(a_{2,i}-\mu_2^{real})-i(a_{1,i}-\mu_1^{real})(b_{2,i}-\mu_2^{imag})+i(b_{1,i}-\mu_1^{imag})(a_{2,i}-\mu_2^{real})+(b_{1,i}-\mu_1^{imag})(b_{2,i}-\mu_2^{imag})\right)$$ $$=cov(a_1,a_2)+cov(b_1,b_2)-\frac{1}{N}\sum_{i=1}^N i\left((a_{1,i}-\mu_1^{real})(b_{2,i}-\mu_2^{imag})-(b_{1,i}-\mu_1^{imag})(a_{2,i}-\mu_2^{real})\right)$$ $$=cov(a_1,a_2)+cov(b_1,b_2)-i\left(cov(a_1,b_2)-cov(b_1,a_2)\right)$$ Therefore, $a=cov(a_1,a_2)+cov(b_1,b_2)$ and $b=-cov(a_1,b_2)+cov(b_1,a_2)$ in $a+bi$. The intuition is that the angle of the complex covariance is an unbiased estimate of the mean phase difference between the two distributions, and the amplitude is a (biased) measure of how well the phasors cluster around this mean. If you reverse the order ($cov(x_2,x_1)$ instead of $cov(x_1,x_2)$), the only thing that changes is that the mean angle of the phase difference between the two distributions is flipped onto the other side of the real axis. These terms sit in the lower triangular part of the matrix, making the complex covariance matrix Hermitian; in particular, its diagonal entries (the variances) are real numbers.
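For a quick numerical check of this decomposition, here is a small R sketch (the simulated data and the 0.5 coupling are arbitrary illustration choices):

set.seed(3)
N  <- 1e5
x1 <- rnorm(N) + 1i * rnorm(N)
x2 <- 0.5 * x1 + rnorm(N) + 1i * rnorm(N)           # correlated with x1
cc <- sum((x1 - mean(x1)) * Conj(x2 - mean(x2))) / (N - 1)
a1 <- Re(x1); b1 <- Im(x1); a2 <- Re(x2); b2 <- Im(x2)
Re(cc) - (cov(a1, a2) + cov(b1, b2))                # ~ 0
Im(cc) - (cov(b1, a2) - cov(a1, b2))                # ~ 0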
39,202
Fourier transform in Machine Learning
A couple of things come to mind... Performing convolutions efficiently as products in the Fourier domain. An example would be training large convolutional neural nets; for example, see Fast Training of Convolutional Networks through FFTs (Mathieu et al. 2013). Another application is sparse signal processing, where the goal is to approximate a signal as a sparse linear combination of basis functions from a 'signal dictionary'. The link here is that the set of sinusoids is, of course, a good dictionary for signals that are sparse in the Fourier domain. If I recall correctly, Fourier dictionaries show up in this literature. On a related note, you should also be able to find Fourier methods in the compressed sensing literature.
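To make the first point concrete, here is a minimal R sketch (signal and filter lengths are arbitrary) showing that multiplying zero-padded FFTs reproduces the direct linear convolution:

set.seed(1)
x <- rnorm(256)                               # signal
h <- rnorm(16)                                # filter
n <- length(x) + length(h) - 1
direct <- rep(0, n)                           # brute-force convolution, O(length(x) * length(h))
for (i in seq_along(x)) {
  for (j in seq_along(h)) {
    direct[i + j - 1] <- direct[i + j - 1] + x[i] * h[j]
  }
}
viafft <- Re(fft(fft(c(x, rep(0, n - length(x)))) *
                 fft(c(h, rep(0, n - length(h)))), inverse = TRUE)) / n
max(abs(direct - viafft))                     # ~ 1e-13: same result, but O(n log n)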
39,203
Fourier transform in Machine Learning
In the theory of random processes, the Fourier transform is used to obtain the spectral density of a covariance function. The spectral density can then be used to verify that a candidate function really is a covariance function (the Bochner–Khinchin theorem). The spectral density is also useful for proving theoretical results about the quality of Gaussian process regression models (see recent works by van der Vaart, or Stein's book on interpolation for spatial data).
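As a rough numerical illustration of the Bochner–Khinchin idea (grid spacing and kernel choice are arbitrary), one can sample a stationary covariance function on a grid, take its FFT in wrap-around order, and check that the resulting approximate spectral density is non-negative:

dt  <- 0.1
tau <- c(seq(0, 50, by = dt), seq(-50, -dt, by = dt))   # lags in FFT (wrap-around) order
k   <- exp(-tau^2 / 2)                                   # squared-exponential covariance
S   <- Re(fft(k)) * dt                                   # approximate spectral density
min(S)                                                   # non-negative up to numerical error => valid covariance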
39,204
Explaining Odds Ratio and Relative Risk to the statistically challenged
You are of course right, and it is a common mistake to describe an odds ratio as if it were a relative risk. I would suggest proposing a more appropriate phrasing to them, such as "suggesting that the odds of [participants] [doing X in] [condition A] are 1.90 times as high as in [condition B]." Once the authors realise that this is all they would have to change, hopefully it will not be too much of a discussion.
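A small numeric example in R makes the distinction easy to show authors (the counts below are made up purely for illustration; the odds ratio comes out close to 1.90 while the relative risk is much smaller):

tab <- matrix(c(60, 40,      # condition A: 60 did X, 40 did not
                44, 56),     # condition B: 44 did X, 56 did not
              nrow = 2, byrow = TRUE)
pA <- tab[1, 1] / sum(tab[1, ]); pB <- tab[2, 1] / sum(tab[2, ])
(pA / (1 - pA)) / (pB / (1 - pB))   # odds ratio    ~ 1.91
pA / pB                             # relative risk ~ 1.36 -- not the same thing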
39,205
Explaining Odds Ratio and Relative Risk to the statistically challenged
You are right and they are wrong. This is SUCH a common error that there is a whole genre of papers in which the author simply catalogues all of the times odds ratios are misinterpreted as changes in likelihood in published papers in a particular field. Here's just one example. Here is a JAMA article discussing the same issue. Just type "odds ratio risk ratio" into Google Scholar and cite as many of the articles that come up as you need to convince the author that they are wrong.
39,206
Explaining Odds Ratio and Relative Risk to the statistically challenged
I believe you were correct with your response to the authors, but that a source other than Wikipedia would have strengthened your response. Much has been published about odds vs risk in reliable journals, which would likely be more convincing than the Wikipedia article. Here is one from 1998 that gives a balanced answer to when and how odds ratios can be misinterpreted and how to avoid that. And here from 2006 is another that perhaps takes a stronger position but also perhaps better demonstrates the difficulty of rendering odds ratios into plain language.
39,207
Independence and Order Statistics
Here is a guide to solving this problem (and others like it). I use simulated values to illustrate, so let's begin by simulating a large number of independent realizations from the distribution with density $f$. (All the code in this answer is written in R.)

f  <- function(x) 2*x   # density of each X_i
ff <- function(x) x^2   # its CDF
n <- 4e4 # Number of trials in the simulation
x <- matrix(pmax(runif(n*3), runif(n*3)), nrow=3)
# Plot the data
par(mfrow=c(1,3))
for (i in 1:3) {
  hist(x[i, ], freq=FALSE, main=paste("i =", i))
  curve(f(x), add=TRUE, col="Red", lwd=2)
}

The histograms show $40,000$ independent realizations of the first, second, and third elements of the datasets. The red curves graph $f$. That they coincide with the histograms confirms the simulation is working as intended. You need to work out the joint density of $(Y_1, Y_2, Y_3)$. Since you're studying order statistics, this should be routine--but the code gives some clues, because it plots their distributions for reference.

y <- apply(x, 2, sort)
# Plot the order statistics.
for (i in 1:3) {
  hist(y[i, ], freq=FALSE, main=paste("i =", i))
  k <- factorial(3) / (factorial(3-i)*factorial(1)*factorial(i-1))
  curve(k * (1-ff(x))^(3-i) * f(x) * ff(x)^(i-1), add=TRUE, col="Red", lwd=2)
}

The same data have been reordered within each of the $40,000$ datasets. On the left is the histogram of their minima $Y_1$, on the right their maxima $Y_3$, and in the middle their medians $Y_2$. Next, compute the joint distribution of $(U_1, U_2)$ directly. By definition this is $$F(u_1, u_2) = \Pr(U_1 \le u_1, U_2 \le u_2) = \Pr(Y_1 \le u_1 Y_2, Y_2 \le u_2 Y_3).$$ Since you have computed the joint density of $(Y_1, Y_2, Y_3)$, this is a routine matter of doing the (triple) integral expressed by the right-hand probability. The region of integration must be $$0 \le Y_1 \le u_1 Y_2,\ 0 \le Y_2 \le u_2 Y_3,\ 0 \le Y_3 \le 1.$$ The simulation can give us an inkling of how $(U_1, U_2)$ are distributed: here is a scatterplot of the realized values of $(U_1, U_2)$. Your theoretical answer should describe this density.

par(mfrow=c(1,1))
u <- cbind(y[1, ]/y[2, ], y[2, ]/y[3, ])
plot(u, pch=16, cex=1/2, col="#00000008", asp=1)

As a check, we may look at the marginal distributions and compare them to the theoretical solutions. The marginal densities, shown as red curves, are obtained as $\partial F(u_1, 1)/\partial u_1$ and $\partial F(1, u_2)/\partial u_2$.

par(mfrow=c(1,2))
hist(u[, 1], freq=FALSE); curve(2*x, add=TRUE, col="Red", lwd=2)
hist(u[, 2], freq=FALSE); curve(4*x^3, add=TRUE, col="Red", lwd=2)
par(mfrow=c(1,1))

It is curious that $U_1$ has the same distribution as the original $X_i$.
39,208
Independence and Order Statistics
Here is an exact symbolic solution which traces out the steps required ... here using automated tools to do the nitty-gritties. Let $(X_1, X_2, X_3)$ denote a sample of size 3 from parent pdf $f(x)$: Then, the joint pdf of the ordered sample $(X_{(1)}, X_{(2)}, X_{(3)})$ is, say, $g(x_1,x_2,x_3)$: where I am using the OrderStat function from the mathStatica package for Mathematica. The joint cdf of $(U_1, U_2)$ is $P\big(\frac{X_{(1)}}{X_{(2)}}<u_1, \,\frac{X_{(2)}}{X_{(3)}}<u_2\big)$: The joint pdf of $(U_1, U_2)$ is derived by simply differentiating the cdf with respect to $u_1$ and $u_2$: Finally, as a quick Monte Carlo check, here is a comparison of the exact theoretical solution derived (the joint pdf, the orange surface) plotted against an empirical Monte Carlo simulated joint pdf (3D histogram):
39,209
Is every ARIMA(1,1,0) model equivalent to an AR(2) model?
The forecast from the ARIMA(1,1,0) enforces the restriction that $d=1$. It is perhaps even easier to see in the AR(1) vs. ARIMA(0,1,0) case. The latter is just $$ \Delta y_t=\epsilon_t $$ whose optimal forecasts are 0 at all horizons (we expect $\epsilon_t$ to take the value zero). If we aim to forecast $y_t$ itself, we take the last in-sample value and accumulate the forecast changes of $y_t$: we expect the value tomorrow to be today's value plus the expected change from today to tomorrow. So, as we do not expect any changes here, the optimal forecast for such a random walk is $y_T$ ($T$ being the last in-sample observation) for all horizons $h=1,2,\ldots$. If, on the other hand, we fit an AR(1) model, we obtain an estimate $\hat\alpha$ and produce the optimal forecasts from an AR(1) model as $$ \hat y_{T+h}=\hat\alpha^hy_T $$ If, due to estimation error (as will generally happen in finite samples), $\hat\alpha$ differs from the true value of 1, the forecasts will differ.
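A short R illustration of this difference (series length and seed are arbitrary): fit both models to a simulated random walk and compare the forecasts.

set.seed(42)
y <- cumsum(rnorm(200))                 # a random walk: Delta y_t = eps_t
rw  <- arima(y, order = c(0, 1, 0))     # imposes the unit root
ar1 <- arima(y, order = c(1, 0, 0))     # estimates alpha freely (alpha-hat slightly below 1)
predict(rw,  n.ahead = 10)$pred         # flat at the last observed value y_T
predict(ar1, n.ahead = 10)$pred         # decays towards the estimated mean instead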
39,210
Is every ARIMA(1,1,0) model equivalent to an AR(2) model?
The equivalence depends on definitions. A general ARMA(p,q) process can be defined as a stochastic process which is a solution to the following equation: $$ X_t-\phi_1X_{t-1}-...-\phi_pX_{t-p}=Z_t+\theta_1 Z_{t-1}+...+\theta_qZ_{t-q},$$ where $Z_t$ is a white noise process. We must require that the polynomials $\phi(z)=1-\phi_1 z-...-\phi_pz^p$ and $\theta(z)= 1+\theta_1z+...+\theta_qz^q$ have no common roots, in order for the equation to be uniquely defined. Now the question arises of when this equation has a solution. The answer relies on the properties of the polynomials $\phi(z)$ and $\theta(z)$: the equation has a stationary solution when $\phi(z)$ has no roots on the unit circle. So in this sense ARIMA(1,1,0) is not an AR(2) process, because it is not stationary. It can be written as satisfying the AR(2) equation, but since the AR polynomial has a root on the unit circle, you cannot solve the equation. However, if the polynomial $\phi(z)$ has a unit root, then $\Delta X_t$ satisfies an ARMA(p-1,q) equation (with different polynomials). So it is possible to solve for $\Delta X_t$ and get back to $X_t$. To mark this difference, the ARIMA(p,d,q) notation is used. So to sum up, if we strictly define an ARMA(p,q) process as a stationary solution to the ARMA(p,q) equation, then ARIMA(1,1,0) and AR(2) are not equivalent. The fact that R manages to find the correct coefficients is an interesting property of estimation, i.e. it is possible to show that in the case of unit roots, OLS[1] will give consistent estimates of the coefficients; however, the inference would be incorrect, as the limiting distributions are not normal. The ADF tests are based on such estimates. However, the actual mathematics needed to show that the estimates are ok is quite complicated and relies on certain assumptions. These assumptions do not generalize well, hence it is not advisable to use the usual estimation methods for unit-root processes. [1] The MLE and OLS are equivalent for AR(p) type specifications.
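The point about OLS recovering the coefficients can be sketched in R ($\phi=0.5$ and the series length are arbitrary illustration choices): fit an unrestricted AR(2) by OLS to ARIMA(1,1,0) data and look at the roots of the estimated AR polynomial.

set.seed(1)
y   <- arima.sim(list(order = c(1, 1, 0), ar = 0.5), n = 500)
fit <- ar.ols(y, order.max = 2, aic = FALSE, demean = FALSE, intercept = FALSE)
phi <- c(fit$ar)                  # roughly c(1.5, -0.5), i.e. (1 - 0.5B)(1 - B)
Mod(polyroot(c(1, -phi)))         # moduli roughly 1 and 2: the (near) unit root shows up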
39,211
Deterministic clustering approaches
I can point you to an algorithm and a family of algorithms: The algorithm is called IGMM (Incremental Gaussian Mixture Model). It is robust (but not insensitive) to order. But when data arrives in the same order, it always gives the same result. A family of clustering algorithms which satisfies your conditions is Spectral Clustering. They are batch algorithms and will give you the same results for the same datasets, even with different order. EDIT: also, there are some methods for deterministic initialization of K-Means clusters, such as this one.
39,212
Deterministic clustering approaches
Hierarchical agglomerative clustering is deterministic except for tied distances when not using single-linkage. DBSCAN is deterministic, except for permutation of the data set in rare cases. k-means is deterministic except for initialization; you can initialize with the first k objects, and then it is deterministic, too. PAM is like k-means in this respect. ... but there are probably 100 more clustering algorithms!
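The k-means point is easy to demonstrate in R (toy data; any fixed choice of starting centroids works):

X <- matrix(rnorm(200), ncol = 2)         # toy data
fit1 <- kmeans(X, centers = X[1:3, ])     # start from the first 3 objects
fit2 <- kmeans(X, centers = X[1:3, ])
identical(fit1$cluster, fit2$cluster)     # TRUE: with a fixed start, the result is reproducible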
39,213
Deterministic clustering approaches
All algorithms, by definition, are deterministic given their inputs; any algorithm that uses pseudo-random numbers is deterministic given the seed. K-means, which you used as an example, starts with randomly chosen cluster centroids so as to find optimal ones. Apart from the initialization, the algorithm is totally deterministic, as you can verify by looking at its pseudocode. Nothing prohibits you from starting with non-random centroids. We use random centroids to make sure that badly chosen starting points do not lead us to poor results. The same goes for other "random" algorithms: you can use them in a "deterministic" fashion, but in most cases this is not a wise thing to do. In the case of k-means, the algorithm deterministically minimizes the within-cluster sum of squares to find the optimal clustering solution. Unfortunately, it is sensitive to how the algorithm was initialized. Clustering problems in most cases do not have clear-cut solutions, and because of that we often want to use randomized procedures to robustify them. Imagine that you used some deterministic hierarchical clustering algorithm that goes through your data sequentially, starting from the first observation. What would happen if the first case were an outlier? On the other hand, if you initialize it several times at random points, the procedure is less prone to problems with the data. Moreover, if you run a non-"deterministic" algorithm multiple times and then use a majority vote to choose, for each case, the class that appeared most often among the results, then the final output will also be highly deterministic.
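Since the pseudocode referred to above is not reproduced here, a minimal R sketch of the standard k-means iteration (assignment step plus update step; empty clusters are not handled) makes the point that everything after the choice of starting centroids is deterministic:

kmeans_lloyd <- function(X, centers, iter.max = 100) {
  m <- nrow(centers)
  for (it in seq_len(iter.max)) {
    # assignment step: each observation goes to its nearest centroid
    d  <- as.matrix(dist(rbind(centers, X)))[-(1:m), 1:m, drop = FALSE]
    cl <- max.col(-d, ties.method = "first")     # deterministic tie-breaking
    # update step: each centroid becomes the mean of its assigned points
    new_centers <- t(sapply(1:m, function(k) colMeans(X[cl == k, , drop = FALSE])))
    if (all(abs(new_centers - centers) < 1e-12)) break
    centers <- new_centers
  }
  list(cluster = cl, centers = centers)
}
X <- matrix(rnorm(200), ncol = 2)
res <- kmeans_lloyd(X, centers = X[1:3, ])       # fixed start -> fully reproducible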
39,214
Deterministic clustering approaches
If I were to reinterpret your description of "deterministic," it sounds more like a longitudinal, "confirmatory" cluster analysis to me -- with the important exception that you haven't explicitly integrated time series considerations into your model. Confirmatory clustering methods are deployed once it is felt that the "space" they are meant to describe has been sufficiently well understood as to obviate the need for exploratory approaches. Speciation-based cladistics is one example of this. Longitudinal cluster solutions are finally getting the attention they deserve in the literature. To the best of my knowledge, the earliest work dates back to the 80s with Pieter Kroonenberg's three-mode algorithms. But there is a lot of interesting recent research that involves hidden Markov models, e.g., Steve Scott's papers or Oded Netzer's dissertation article, both using HMMs; hierarchical, non-moment-based information-theoretic approaches such as permutation distribution clustering from Andreas Brandmaier; as well as the chapters devoted to longitudinal clustering algorithms in Aggarwal and Reddy's book Data Clustering. The key thing to keep in mind with all of these approaches borrows from Kroonenberg's conceptualization of a multi-mode matrix, insofar as you can't have all of the modes moving at the same time. This means that the last thing you want to do is reinitialize the algorithm with each new dataset. Rather, you want to "fix," e.g., two out of three of the modes, allowing the third mode to vary in a kind of experimental "design." In this way, you can more carefully study how the dynamics of change impact a given niche in your data. This approach is recommended regardless of the algorithm employed. * EDIT * Actually, I'm wrong about Kroonenberg's work being the earliest on 3-mode analysis. Ledyard Tucker probably wrote the original article in 1966, Some mathematical notes on three-mode factor analysis.
39,215
Is the bootstrap useless in a Bayesian setting?
Bradley Efron has written about this, and recently participated in a JRSS webinar titled Frequentist Accuracy of Bayesian Estimates (here: http://www.rss.org.uk/RSS/Events/Online_and_virtual_events/Journal_club/Past_Journal_webinars/RSS/Events/Online_and_virtual_events_sub/Past_Journal_webinars.aspx?hkey=5c97f80b-3f97-401b-ad75-2ee6ff5f6c0c) where the discussant was Andrew Gelman. Efron makes explicit use of the parametric bootstrap to develop a "frequentist standard deviation of a Bayesian point estimate...": In the absence of relevant prior experience, popular Bayesian estimation techniques usually begin with some form of 'uninformative' prior distribution intended to have minimal inferential influence. Bayes' rule will still produce nice-looking estimates and credible intervals, but these lack the logical force attached to experience-based priors and require further justification. This paper concerns the frequentist assessment of Bayes estimates. A simple formula is shown to give the frequentist standard deviation of a Bayesian point estimate. The same simulations required for the point estimate also produce the standard deviation. Exponential family models make the calculations particularly simple, and bring in a connection to the parametric bootstrap. So, no, the bootstrap is not "useless" to a Bayesian.
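A brute-force version of the idea is easy to sketch in R (a Beta-Binomial model; the data, the flat prior, and the number of bootstrap replications are all illustrative, and this is not Efron's exact formula, which avoids the outer resampling loop):

set.seed(123)
n <- 50; x <- 31                       # observed successes out of n trials
a <- b <- 1                            # flat Beta(1,1) prior
post_mean <- function(x, n) (a + x) / (a + b + n)
theta_hat <- x / n                     # plug-in MLE used to generate parametric bootstrap data
boot <- replicate(5000, post_mean(rbinom(1, n, theta_hat), n))
sd(boot)                               # frequentist standard deviation of the Bayesian point estimate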
39,216
Is the bootstrap useless in a Bayesian setting?
First, your interpretation of Bayesian statistics seems to be a bit restrictive. Bayesian methods do not necessarily rely on belief; e.g., objective Bayesians view the prior as a catalyst needed to express the distribution of the parameters having observed the data. Second, when belief is available, it is not related to the observations. The prior by definition is independent of the observed data, and I guess that when stating "Bayesians only rely on beliefs, and by resampling the original data: I doubt the belief would change" you misinterpret the meaning of the posterior distribution. Finally, the bootstrap can be used to estimate certain kinds of posterior distributions. The answer to "Is it possible to interpret the bootstrap from a Bayesian perspective?" gives you the details, but here is an extract from that answer: Hence we might think of the bootstrap distribution as a "poor man's" Bayes posterior. By perturbing the data, the bootstrap approximates the Bayesian effect of perturbing the parameters, and is typically much simpler to carry out.
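The connection can be illustrated numerically with the Bayesian bootstrap (Rubin, 1981), which replaces resampling by Dirichlet(1,...,1) weights; the data below are made up, and the two distributions of the sample mean come out essentially the same:

set.seed(1)
x <- rexp(40, rate = 1/3)                       # some made-up data
ordinary <- replicate(4000, mean(sample(x, replace = TRUE)))
bayesian <- replicate(4000, {
  w <- rexp(length(x)); w <- w / sum(w)          # Dirichlet(1,...,1) weights
  sum(w * x)
})
c(sd(ordinary), sd(bayesian))                    # typically very close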
39,217
What makes neural network a *convolutional* neural network?
Starting from the neural network perspective: I would say that the base neural network has all neurons interconnected between layers. The convolutional version simplifies this model using two hypotheses: meaningful features have a given size in the image; features are shift equivariant (a shifted input leads to a similarly shifted output) and may occur anywhere in the image. The first assumption is expressed by setting to zero the weights leading to a hidden neuron, except for a region of interest/patch from the input. Shift equivariance is obtained by sharing the same weights across all the patches. In order to capture features anywhere in the image, it is simplest to pave the input with patches shifted by only one pixel. Those simplifications drastically reduce the number of parameters and lead to much simpler computations which 'happen' to take the form of a convolution, hence the C in CNN (a rough parameter count is sketched below). Note 1: the fixed feature size hypothesis is alleviated by the use of multiresolution and/or by using separate networks with different patch sizes. Note 2: equivariance is usually not as useful as invariance, so the latter is often emulated with additional pooling layers. Alternative approach: before deep learning, a popular problem-solving method was to extract features and feed them to a classifier. For images, the features were often extracted using expertly chosen filters such as Gabor filters/wavelets. One can view a CNN as a parameterized filtering function, where the parameters are trained using methods for neural networks.
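As promised, a back-of-the-envelope parameter count in R (all sizes are illustrative): mapping a 32x32 grey-level image to a 28x28 feature map with a fully connected layer versus a single shared 5x5 kernel.

dense_weights <- (32 * 32) * (28 * 28)          # every input pixel connected to every output unit
conv_weights  <- 5 * 5                          # one shared 5x5 kernel, reused at every position
c(dense = dense_weights, conv = conv_weights)   # 802816 vs 25 weights (biases ignored)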
39,218
What makes neural network a *convolutional* neural network?
In short: local connectivity, and parameter sharing (optional). In terms of image data, local connectivity says that only neurons within a local region should be connected together, which basically assumes that pixels nearby are correlated while pixels far apart are independent. Parameter sharing means that the same set of parameters applies to different regions, which assumes that local patterns are shared across the whole image. But global parameter sharing is not necessary when, for example, the images you have are all frontal faces, in which case you know that high-level patterns (say, eyes or noses) will only appear in certain regions of the images. A paper for reference: http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf
39,219
What makes neural network a *convolutional* neural network?
The mathematical operation of convolution means computing the product of two (continuous or discrete) functions over all possible shift positions. The simplest example is to convolve a 1-dimensional vector ${\bf {\it x}}=(x_1,x_2,x_3,\dots,x_n)^T$ with a sampled Gaussian function (the Gaussian probability density function), ${\bf {\it y}}=(y_1,y_2,y_3,\dots,y_v)^T$. Practically, this means computing a sliding dot product, element by element: \begin{equation} c(z) = \sum_{i=1}^{v} x_{\,z - v/2 + i} \cdot y_{i} \end{equation} where $z$ indexes the centre of the window. Letting the running variable $z$ run over the whole range of the vector ${\bf {\it x}}$ yields a vector of output values of the convolution, one for each position. In a 2-dimensional (grey-level) image, a convolution is performed by a sliding-window operation, where the window (the 2-d convolution kernel) is a $v \times v$ matrix. When a neural network is used for convolution, a $v$-by-$v$ window of pixel values can be provided as input. In this way, the neural network can be trained to recognize objects of a certain size. Also, a feature-based neural network can perform a convolution when the feature vector is computed locally for each pixel coordinate. See Fig. 1 in the reference: [M. Egmont-Petersen, E. Pelikan, Detection of bone tumours in radiographs using neural networks, Pattern Analysis and Applications 2(2), 1999, 172-183]. Image-processing applications of neural networks have been reviewed in: [M. Egmont-Petersen, D. de Ridder, H. Handels, Image processing with neural networks - a review, Pattern Recognition, Vol. 35, No. 10, pp. 2279-2301, 2002].
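Here is a small R sketch of that sliding dot product (the signal, kernel width, and noise level are arbitrary): smooth a noisy signal with a sampled Gaussian kernel and compare against stats::filter. Because the Gaussian kernel is symmetric, convolution and correlation coincide, so the two agree away from the edges.

set.seed(2)
x <- sin(seq(0, 4 * pi, length.out = 200)) + rnorm(200, sd = 0.3)
v <- 11                                           # kernel length (odd, so it has a centre)
g <- dnorm(seq(-2, 2, length.out = v)); g <- g / sum(g)
half <- (v - 1) / 2
manual <- rep(NA_real_, length(x))
for (z in (half + 1):(length(x) - half)) {
  manual[z] <- sum(x[(z - half):(z + half)] * g)  # dot product of the window with the kernel
}
builtin <- stats::filter(x, g, sides = 2)         # the same moving weighted sum
max(abs(manual - builtin), na.rm = TRUE)          # ~ 0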
39,220
Interpreting Standard Deviation of Natural Log Transformed Data
The proposed interpretation in your last paragraph is incorrect -- that increase only applies at the mean. If you started lower, it would be a smaller increase and if you started higher it would be a larger increase. $e^{a+0.8}- e^a=e^{a}(e^{0.8}- 1)\approx 1.2255 e^a$ It's better to think in terms of percentage increase. $\frac{e^{a+0.8}- e^a}{e^a}\approx 1.2255$, or about 122.5% increase. However, I am concerned about your use of logs on a count that could be zero (count of "likes").
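A quick numeric check of this in R (the three starting levels are arbitrary): the percentage increase for a 0.8 step on the log scale is constant, but the number of additional likes is not.

a <- log(c(10, 100, 1000))                 # three hypothetical starting levels of likes
data.frame(before  = exp(a),
           after   = exp(a + 0.8),
           added   = exp(a + 0.8) - exp(a),
           pct_inc = (exp(0.8) - 1) * 100)  # ~122.5% in every row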
39,221
Interpreting Standard Deviation of Natural Log Transformed Data
So a one standard deviation increase of the log-transformed variable translates to 2,706 likes. Is this ok? You were careful to formulate your statement with the 'increase of the log-transformed variable' qualifier. I think this eliminates the misunderstanding that could have occurred for a reader who might assume that you're trying to calculate the standard deviation of $Y$. You're clearly not trying to do that. You use the word 'translates', which is not a standard term, thus indicating that you're not transforming variables and converting the statistics between these variables by 'standard' means. Compare your procedure to what's described in "SAS/ETS 12.1 Users Guide", p.252: The log transformation is often used to convert time series that are nonstationary with respect to the innovation variance into stationary time series. The usual approach is to take the log of the series in a DATA step and then apply PROC ARIMA to the transformed data. A DATA step is then used to transform the forecasts of the logs back to the original units of measurement. The confidence limits are also transformed by using the exponential function. The highlighted [by me] sentence essentially describes what you're doing. Hence, what you are doing is not wrong; whether it's right is an interesting question. It depends on the interpretations and the intended use. One more thing (c): The estimator of the mean of the original variable $Y$ is not necessarily $e^{\overline{\ln Y}}$. I'm using soft language here, because there's this seemingly obvious estimator $$\hat\mu_Y=\exp\left(\hat\mu_{\ln Y}+\hat\sigma^2_{\ln Y}/2\right)$$ It is based on the exact relationship for the log-normal distribution: $$E[Y]=\exp\left(E[\ln Y]+\sigma^2_{\ln Y}/2\right)$$ However, this estimator is not always the best one in practice, for the variance $\sigma^2_{\ln Y}$ is unknown and has to be estimated. Once you start using the estimator of the variance, things get complicated, as shown in the empirical paper by Helmut Lutkepohl and Fang Xu, "The role of the log transformation in forecasting economic variables," Empirical Economics, 42(3):619-638, 2012. The following, naive, estimator of the mean may end up being the best in such cases: $$\hat\mu_Y'=\exp\left(\hat\mu_{\ln Y}\right)$$ I went on to write about the means because when you talk about the 'translation' of the standard deviation increase, you need to mention what the base is. You assumed rather implicitly that the increase is from the point of the naive estimator above. As I wrote, it is not wrong, but you have to clearly state that it's what you used, otherwise your reader may assume that you're correcting for the variance or that the 2,706-likes increase holds from any starting point (which is not true). For instance, if you apply your equation to a base of 0, you get $$e^{0+0.8}-e^0\approx 1.23$$
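A small simulated sketch of the two estimators discussed above (the data here are made-up log-normal draws, not the likes data):

set.seed(1)
y  <- rlnorm(200, meanlog = 5, sdlog = 0.8)  # simulated log-normal "likes"
m  <- mean(log(y))
s2 <- var(log(y))
exp(m)           # naive back-transformed estimator exp(mu_hat)
exp(m + s2 / 2)  # estimator based on E[Y] = exp(mu + sigma^2 / 2)
mean(y)          # sample mean of Y, for comparison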
Interpreting Standard Deviation of Natural Log Transformed Data
So a one standard deviation increase of the log-transformed variable translates to 2,706 likes. Is this ok? You were careful to formulate your statement with 'increase of of log-transformed variabl
Interpreting Standard Deviation of Natural Log Transformed Data So a one standard deviation increase of the log-transformed variable translates to 2,706 likes. Is this ok? You were careful to formulate your statement with the 'increase of the log-transformed variable' qualifier. I think this eliminates the misunderstanding that could have occurred for a reader who might assume that you're trying to calculate the standard deviation of $Y$. You're clearly not trying to do that. You use the word 'translates', which is not a standard term, thus indicating that you're not transforming variables and converting the statistics between these variables by 'standard' means. Compare your procedure to what's described in "SAS/ETS 12.1 Users Guide", p.252: The log transformation is often used to convert time series that are nonstationary with respect to the innovation variance into stationary time series. The usual approach is to take the log of the series in a DATA step and then apply PROC ARIMA to the transformed data. A DATA step is then used to transform the forecasts of the logs back to the original units of measurement. The confidence limits are also transformed by using the exponential function. The highlighted [by me] sentence essentially describes what you're doing. Hence, what you are doing is not wrong; whether it's right is an interesting question. It depends on the interpretations and the intended use. One more thing (c): The estimator of the mean of the original variable $Y$ is not necessarily $e^{\overline{\ln Y}}$. I'm using soft language here, because there's this seemingly obvious estimator $$\hat\mu_Y=\exp\left(\hat\mu_{\ln Y}+\hat\sigma^2_{\ln Y}/2\right)$$ It is based on the exact relationship for the log-normal distribution: $$E[Y]=\exp\left(E[\ln Y]+\sigma^2_{\ln Y}/2\right)$$ However, this estimator is not always the best one in practice, for the variance $\sigma^2_{\ln Y}$ is unknown and has to be estimated. Once you start using the estimator of the variance, things get complicated, as shown in the empirical paper by Helmut Lutkepohl and Fang Xu, "The role of the log transformation in forecasting economic variables," Empirical Economics, 42(3):619-638, 2012. The following, naive, estimator of the mean may end up being the best in such cases: $$\hat\mu_Y'=\exp\left(\hat\mu_{\ln Y}\right)$$ I went on to write about the means because when you talk about the 'translation' of the standard deviation increase, you need to mention what the base is. You assumed rather implicitly that the increase is from the point of the naive estimator above. As I wrote, it is not wrong, but you have to clearly state that it's what you used, otherwise your reader may assume that you're correcting for the variance or that the 2,706-likes increase holds from any starting point (which is not true). For instance, if you apply your equation to a base of 0, you get $$e^{0+0.8}-e^0\approx 1.23$$
Interpreting Standard Deviation of Natural Log Transformed Data So a one standard deviation increase of the log-transformed variable translates to 2,706 likes. Is this ok? You were careful to formulate your statement with 'increase of of log-transformed variabl
39,222
Interpreting Standard Deviation of Natural Log Transformed Data
If I understand, you want the standard deviation of Y. The standard deviation of Y is NOT easily calculated from mean(ln(Y)) and sd(ln(Y)), so your formula is not okay. The easy solution is to ignore the log-transform when calculating the standard deviation of Y: i.e. sd(Y) or sd(e^ln(Y)).
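A tiny illustration of the point, with simulated positive data (not the original variable):

set.seed(2)
y <- rlnorm(500, meanlog = 5, sdlog = 0.8)  # simulated positive data
sd(y)               # the standard deviation of Y itself
sd(exp(log(y)))     # identical, since exp(log(y)) == y
exp(sd(log(y)))     # NOT the standard deviation of Y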
Interpreting Standard Deviation of Natural Log Transformed Data
If I understand, you want the standard deviation of Y. The standard deviation of Y is NOT easily calculated from mean(ln(Y)) and sd(ln(Y)), so your formula is not okay. The easy solution is to ignore
Interpreting Standard Deviation of Natural Log Transformed Data If I understand, you want the standard deviation of Y. The standard deviation of Y is NOT easily calculated from mean(ln(Y)) and sd(ln(Y)), so your formula is not okay. The easy solution is to ignore the log-transform when calculating the standard deviation of Y: i.e. sd(Y) or sd(e^ln(Y)).
Interpreting Standard Deviation of Natural Log Transformed Data If I understand, you want the standard deviation of Y. The standard deviation of Y is NOT easily calculated from mean(ln(Y)) and sd(ln(Y)), so your formula is not okay. The easy solution is to ignore
39,223
Online learning in practice
There is no reason to prefer online learning to batch learning when you can use both methods. But in some cases (millions of features, millions of observations), you have to use online learning because everything else will fail (or will not terminate). Some implementations of online learning methods even have a constant memory footprint (hashing the features and allowing collisions). Click-through rate (CTR) prediction, for example, is generally tackled using online algorithms. Ad Click Prediction: a View from the Trenches is a nice overview.
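A minimal sketch of what an online learner looks like: logistic regression fitted by stochastic gradient descent, one observation at a time, keeping only the weight vector in memory (simulated data; the learning rate is an arbitrary, untuned choice):

set.seed(3)
n <- 10000; p <- 5
X <- matrix(rnorm(n * p), n, p)
beta_true <- c(1, -2, 0.5, 0, 1.5)
y <- rbinom(n, 1, plogis(X %*% beta_true))
w <- rep(0, p)        # model weights: the only state kept between observations
eta <- 0.05           # learning rate (arbitrary choice, not tuned)
for (i in 1:n) {      # one pass over the "stream", one example at a time
  p_hat <- plogis(sum(X[i, ] * w))
  w <- w + eta * (y[i] - p_hat) * X[i, ]   # SGD step on the logistic log-likelihood
}
round(w, 2)           # roughly recovers beta_true, up to SGD noise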
Online learning in practice
There is no reason to prefer online learning to batch learning when you can use both methods. But in some cases (millions of features, millions of observations), you have to use online learning becaus
Online learning in practice There is no reason to prefer online learning to batch learning when you can use both methods. But in some cases (millions of features, millions of observations), you have to use online learning because everything else will fail (or will not terminate). Some implementations of online learning methods even have a constant memory footprint (hashing the features and allowing collisions). Click-through rate (CTR) prediction, per example, is generally tackled using online algorithms. Ad Click Prediction: a View from the Trenches is a nice overview.
Online learning in practice There is no reason to prefer online learning to batch learning when you can use both methods. But in some cases (millions of features, millions of observations), you have to use online learning becaus
39,224
Online learning in practice
There are two main reasons to use online learning in practice: When not all data fits in memory; When data arrives in real time and the amount of processing power per arriving datum is low. RUser4512 gave a good example of scenario 1 in his answer. Scenario 2 is very frequently encountered in industrial applications. For instance, stochastic gradient descent and adaptive filtering techniques have been used for decades in electronics and control. Here's a white paper on an implementation of an efficient recursive regression algorithm: Implementation of CORDIC-Based QRD-RLS Algorithm on Altera Stratix FPGA with Embedded Nios Soft Processor Technology.
Online learning in practice
There are two main reasons to use online learning in practice: When not all data fits in memory; When data arrives in real-time and the amount of processing power per arriving datum is low. RUser451
Online learning in practice There are two main reasons to use online learning in practice: When not all data fits in memory; When data arrives in real-time and the amount of processing power per arriving datum is low. RUser4512 gave a good example of scenario 1 in his answer. Scenario 2 is very frequently encountered in industrial applications. For instance, stochastic gradient descent and adaptive filtering techniques have been used for decades in electronics and control. Here's a white paper on an implementation of an efficient recursive regression algorithm Implementation of CORDIC-Based QRD-RLS Algorithm on Altera Stratix FPGA with Embedded Nios Soft Processor Technology.
Online learning in practice There are two main reasons to use online learning in practice: When not all data fits in memory; When data arrives in real-time and the amount of processing power per arriving datum is low. RUser451
39,225
Online learning in practice
Online learning is also useful in case of nonstationarities. You might want to track changes in a signal or system.
Online learning in practice
Online learning is also useful in case of nonstationarities. You might want to track changes in a signal or system.
Online learning in practice Online learning is also useful in case of nonstationarities. You might want to track changes in a signal or system.
Online learning in practice Online learning is also useful in case of nonstationarities. You might want to track changes in a signal or system.
39,226
Percentage of variation in each column explained by each SVD mode
If your singular value decomposition is $$\mathbf X = \mathbf{USV}^\top,$$ then the amount of overall variance explained by the $i$-th pair of SVD vectors ($i$-th SVD "mode") is given by $R^2 = s_i^2/\sum_j s_j^2$, where $s_j$ are singular values (diagonal of $\mathbf S$). This can also be computed as the ratio of the norm of the rank-1 reconstruction to the norm of the original data matrix: $$R^2 = \frac{\|\mathbf u_i s_i \mathbf v_i^\top\|^2}{\|\mathbf X\|^2}=\frac{s_i^2}{\sum_j s_j^2},$$ where $\mathbf u_i$ and $\mathbf v_i$ are the $i$-th columns of $\mathbf U$ and $\mathbf V$ respectively (and all norms are Frobenius norms). If you are interested in the amount of variance explained by mode $i$ in column $k$, then you can use the same approach and define it as the ratio of the norm of this column in the rank-1 reconstruction to the norm of this column in the original data, i.e. $$R^2 = \frac{\|\mathbf u_i s_i v_{ik}\|^2}{\|\mathbf x_k\|^2}=\frac{ s_i^2 v_{ik}^2}{\|\mathbf x_k\|^2},$$ where $\mathbf x_k$ is the $k$-th column of $\mathbf X$ (so the $k$-th feature, not the $k$-th data point).
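A short numerical sketch of both formulas on a random example matrix:

set.seed(4)
X <- matrix(rnorm(100 * 5), 100, 5)
s <- svd(X)
R2_mode <- s$d^2 / sum(s$d^2)              # overall variance explained by each mode
i <- 1; k <- 2                             # mode i, column (feature) k
R2_col <- s$d[i]^2 * s$v[k, i]^2 / sum(X[, k]^2)
# same thing via the rank-1 reconstruction of column k:
xk_hat <- s$u[, i] * s$d[i] * s$v[k, i]
all.equal(R2_col, sum(xk_hat^2) / sum(X[, k]^2))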
Percentage of variation in each column explained by each SVD mode
If your singular value decomposition is $$\mathbf X = \mathbf{USV}^\top,$$ then the amount of overall variance explained by the $i$-th pair of SVD vectors ($i$-th SVD "mode") is given by $R^2 = s_i^2/
Percentage of variation in each column explained by each SVD mode If your singular value decomposition is $$\mathbf X = \mathbf{USV}^\top,$$ then the amount of overall variance explained by the $i$-th pair of SVD vectors ($i$-th SVD "mode") is given by $R^2 = s_i^2/\sum_j s_j^2$, where $s_j$ are singular values (diagonal of $\mathbf S$). This can also be computed as the ratio of the norm of rank-1 reconstruction to the norm of the original data matrix: $$R^2 = \frac{\|\mathbf u_i s_i \mathbf v_i^\top\|^2}{\|\mathbf X\|^2}=\frac{s_i^2}{\sum_j s_j^2},$$ where $\mathbf u_i$ and $\mathbf v_i$ are $i$-th columns of $\mathbf U$ and $\mathbf V$ correspondingly (and all norms are Frobenius norms). If you are interested in the amount of variance explained by mode $i$ in column $k$, then you can use the same approach and define it as the ratio of the norm or this column in the rank-1 reconstruction to the norm of this column in the original data, i.e. $$R^2 = \frac{\|\mathbf u_i s_i v_{ik}\|^2}{\|\mathbf x_k\|^2}=\frac{ s_i^2 v_{ik}^2}{\|\mathbf x_k\|^2},$$ where $\mathbf x_k$ is the $k$-th column of $\mathbf X$ (so the $k$-th feature, not the $k$-th data point).
Percentage of variation in each column explained by each SVD mode If your singular value decomposition is $$\mathbf X = \mathbf{USV}^\top,$$ then the amount of overall variance explained by the $i$-th pair of SVD vectors ($i$-th SVD "mode") is given by $R^2 = s_i^2/
39,227
Check whether a coin is fair
[I think I'd start by asking for a whiteboard, markers -- and an eraser, because one boardful isn't enough to explain everything wrong with the question.] I'm going to answer this question by rejecting its premises. The "coin" itself is just a coin; by itself it doesn't do anything, and so it cannot be fair or not-fair. What we're talking about is the process of tossing a particular coin in some fashion -- that can be discussed in terms of whether it's fair or not. Data can't show you that a coin-tossing process applied to some coin is exactly fair. Sometimes it can show you that your coin-tossing-process on a given coin is inconsistent with fairness, but failure to identify any inconsistency with fairness doesn't imply fairness (failure to reject is because your sample size is small, not because the coin is actually fair). [e.g. Consider it in terms of a confidence interval for P(head), the fact that $\frac12$ is in the CI doesn't mean that P(head)=$\frac12$, since there are always other values - distinct from $\frac12$ - in there too. Or think in terms of power: on the experiment given in the interview question - 6 tosses - what's the probability that you'd reject as unfair the case where the tossing process applied to a particular coin had $p(\text{head})=0.51$ at some typical significance level? That's clearly an unfair coin, but you'll reject barely more often than your type I error rate, and a large fraction of those rejections in a two tailed test would be "in the wrong tail"!] No coin-tossing process on a given coin will be perfectly fair. (For example, changing the side facing up slightly alters the chances associated with the resulting face on the toss, as experiments run by Persi Diaconis have shown.) Could the coin be close to fair? Possibly; it may even be possible to get very close to fair. Exactly fair? No, it's not possible in practice. But then to discuss whether it's "close to fair" we'd have to define what we mean by 'close'. [If we were to give some usable definition, while some people might suggest some form of equivalence test, or perhaps considering whether some CI lay entirely inside some "close to fair" bounds, I'd be inclined toward a Bayesian approach to deciding whether the coin is sufficiently close to fair. Note that with the tiny sample size mentioned, the data are quite consistent with p(head) so far from $\frac{1}{2}$ that this exercise on that data would not conclude "close to fair" on any of the three mentioned approaches.] So: Given a coin you don’t know it’s fair or unfair. Yes, actually, I do. In fact I don't even need to see data. It's not fair. Throw it 6 times and get 1 tail and 5 heads. Determine whether it’s fair or not. I really don't care what the data are. It makes no difference to my answer, since the data could not possibly demonstrate fairness, even if fairness were a realistically possible state to be in. What’s your confidence value? 100% (in a sense similar to almost surely) (In any case, even if there were a way to do this statistically I don't know of any statistical procedure that gives anything I'd agree to call "confidence values", so I also reject the form of that question. What does that term even mean? 
If I were asked a question phrased that way in an interview, I'd have serious concerns about working there, because it seems to suggest the people conducting the interview don't really understand what they're even asking - and that suggests either nobody there knows this stuff, or they don't care enough about this position to make sure the interview is being conducted by someone who does. Either way, it would certainly influence my willingness to work there.) Forgetting everything I just said for the moment, some comments on your hypothesis test: Your process for a hypothesis test is wrong. Why do you compare your significance level with 0.05? You've chosen a significance level of 0.21 (which I have no objection to in this experiment, the sample size is so low you only have 3% or 21% and $\alpha$=3% will be too low-powered to be much use) -- 0.05 doesn't relate to anything here. Do you see that in your test when it came time to reject or not reject, you made no reference at all to the sample statistic (5 heads)? Indeed you ignored your rejection rule. The rejection rule you stated algebraically $|X-3|>2$ is inconsistent with the rejection region you mentioned ($0,1,5,6$). That's a lot of errors in a few lines! If I was involved in such an interview**, I might forgive the error with the rejection rule as something one could overlook under interview pressure, but the first two errors would suggest some fundamental problems. ** leaving aside that I'd never ask such a poor question, nor would I likely care enough about hypothesis testing to even think to ask a question about it.
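The power point made above is easy to verify numerically, here for the larger rejection region {0, 1, 5, 6} with 6 tosses:

alpha <- sum(dbinom(c(0, 1, 5, 6), 6, 0.5))    # ~ 0.219
power <- sum(dbinom(c(0, 1, 5, 6), 6, 0.51))   # ~ 0.219 as well: barely above alpha
c(alpha = alpha, power = power)
sum(dbinom(c(0, 1), 6, 0.51)) / power          # ~ 46% of rejections land in the "wrong" tail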
Check whether a coin is fair
[I think I'd start by asking for a whiteboard, markers -- and an eraser, because one boardful isn't enough to explain everything wrong with the question.] I'm going to answer this question by rejectin
Check whether a coin is fair [I think I'd start by asking for a whiteboard, markers -- and an eraser, because one boardful isn't enough to explain everything wrong with the question.] I'm going to answer this question by rejecting its premises. The "coin" itself is just a coin; by itself it doesn't do anything, and so it cannot be fair or not-fair. What we're talking about is the process of tossing a particular coin in some fashion -- that can be discussed in terms of whether it's fair or not. Data can't show you that a coin-tossing process applied to some coin is exactly fair. Sometimes it can show you that your coin-tossing-process on a given coin is inconsistent with fairness, but failure to identify any inconsistency with fairness doesn't imply fairness (failure to reject is because your sample size is small, not because the coin is actually fair). [e.g. Consider it in terms of a confidence interval for P(head), the fact that $\frac12$ is in the CI doesn't mean that P(head)=$\frac12$, since there are always other values - distinct from $\frac12$ - in there too. Or think in terms of power: on the experiment given in the interview question - 6 tosses - what's the probability that you'd reject as unfair the case where the tossing process applied to a particular coin had $p(\text{head})=0.51$ at some typical significance level? That's clearly an unfair coin, but you'll reject barely more often than your type I error rate, and a large fraction of those rejections in a two tailed test would be "in the wrong tail"!] No coin-tossing process on a given coin will be perfectly fair. (For example, changing the side facing up slightly alters the chances associated with the resulting face on the toss, as experiments run by Persi Diaconis have shown.) Could the coin be close to fair? Possibly; it may even be possible to get very close to fair. Exactly fair? No, it's not possible in practice. But then to discuss whether it's "close to fair" we'd have to define what we mean by 'close'. [If we were to give some usable definition, while some people might suggest some form of equivalence test, or perhaps considering whether some CI lay entirely inside some "close to fair" bounds, I'd be inclined toward a Bayesian approach to deciding whether the coin is sufficiently close to fair. Note that with the tiny sample size mentioned, the data are quite consistent with p(head) so far from $\frac{1}{2}$ that this exercise on that data would not conclude "close to fair" on any of the three mentioned approaches.] So: Given a coin you don’t know it’s fair or unfair. Yes, actually, I do. In fact I don't even need to see data. It's not fair. Throw it 6 times and get 1 tail and 5 heads. Determine whether it’s fair or not. I really don't care what the data are. It makes no difference to my answer, since the data could not possibly demonstrate fairness, even if fairness were a realistically possible state to be in. What’s your confidence value? 100% (in a sense similar to almost surely) (In any case, even if there were a way to do this statistically I don't know of any statistical procedure that gives anything I'd agree to call "confidence values", so I also reject the form of that question. What does that term even mean? 
If I were asked a question phrased that way in an interview, I'd have serious concerns about working there, because it seems to suggest the people conducting the interview don't really understand what they're even asking - and that suggests either nobody there knows this stuff, or they don't care enough about this position to make sure the interview is being conducted by someone who does. Either way, it would certainly influence my willingness to work there.) Forgetting everything I just said for the moment, some comments on your hypothesis test: Your process for a hypothesis test is wrong. Why do you compare your significance level with 0.05? You've chosen a significance level of 0.21 (which I have no objection to in this experiment, the sample size is so low you only have 3% or 21% and $\alpha$=3% will be too low-powered to be much use) -- 0.05 doesn't relate to anything here. Do you see that in your test when it came time to reject or not reject, you made no reference at all to the sample statistic (5 heads)? Indeed you ignored your rejection rule. The rejection rule you stated algebraically $|X-3|>2$ is inconsistent with the rejection region you mentioned ($0,1,5,6$). That's a lot of errors in a few lines! If I was involved in such an interview**, I might forgive the error with the rejection rule as something one could overlook under interview pressure, but the first two errors would suggest some fundamental problems. ** leaving aside that I'd never ask such a poor question, nor would I likely care enough about hypothesis testing to even think to ask a question about it.
Check whether a coin is fair [I think I'd start by asking for a whiteboard, markers -- and an eraser, because one boardful isn't enough to explain everything wrong with the question.] I'm going to answer this question by rejectin
39,228
Check whether a coin is fair
I am not going to attempt to provide final answers to your question; I believe the topic is more than addressed after the comprehensive response given by Glen. However, and apropos of his comment about a Bayesian approach, I'd like to post some illustrations about the way our preconceptions about the "fairness" of the coin (or the experiment in general) affect the posterior probability density, i.e. the $p\,(\theta\,\vert\,\text{Data})$, where $\theta$ stands for the probability of heads in the coin toss. Luckily, we have a conjugate prior distribution for the binomial case that occupies us - the beta distribution, facilitating the calculation of the posterior distribution. First scenario - The Fair-Minded Player: We walk into the game (not a very exciting game, but still...), and we have absolutely no reason to assume that there is foul play going on. Things being by nature less than perfect, we have it in our mind that the coin is fair-ish. In other words, we think that the probability of heads, $\theta$, falls somewhere around $\frac{1}{2}$. Later, the unexpected result of just a single tail in $6$ tosses (five heads) will force us to move the posterior probability of $\theta$ to the right (the arrows indicate the influence of the data on the prior distribution): Second Scenario - The Shrewd Player: We strongly suspect, from an insider's leaked information, that the game is markedly biased towards tails, and not only are we about to make a killing, but we are also in need of further reinforcing our conviction after the first round, doubling down our bet: Third Scenario - Losing Your Shirt: We've never played before, but we have read a manual, and we feel ready. All signs clearly indicate that the coin is markedly biased towards heads, a mistake that we will soon start to correct at a high \$ cost: Fourth Scenario - No Idea Whatsoever: It's a good thing that the $\beta(1,1)$ distribution is simply a $U\,(0,1)$, which addresses this scenario, where only the likelihood will influence the posterior probability of $\theta$. As was brought to my attention, a Jeffreys prior is close and possibly more appropriate: So I hope this provides a bit of a light-hearted visual depiction of our approach to estimating the chances of this game being rigged, perhaps encapsulating more of a real scenario than calculations of the type pbinom(1, 6, 0.5). If you want the code in R, and the credits to a great video with Matlab illustrations, I posted it here.
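The conjugate updates behind these scenarios are one-liners; the Beta prior parameters below are assumed stand-ins, since the original figures are not reproduced here:

heads <- 5; tails <- 1
priors <- list(fair_minded = c(10, 10),  # mass concentrated near 1/2
               shrewd      = c(2, 8),    # expects tails
               manual      = c(8, 2),    # expects heads
               no_idea     = c(1, 1))    # Beta(1,1) = Uniform(0,1)
posteriors <- lapply(priors, function(ab) ab + c(heads, tails))  # Beta(a + 5, b + 1)
sapply(posteriors, function(ab) ab[1] / sum(ab))                 # posterior means of P(heads)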
Check whether a coin is fair
I am not going to attempt to provide final answers to your question; I believe the topic is more than addressed after the comprehensive response given by Glen. However, and apropos of his comment abou
Check whether a coin is fair I am not going to attempt to provide final answers to your question; I believe the topic is more than addressed after the comprehensive response given by Glen. However, and apropos of his comment about a Bayesian approach, I'd like to post some illustrations about the way our preconceptions about the "fairness" of the coin (or the experiment in general) affects the posterior probability density, i.e. the $p\,(\theta\,\vert\,\text{Data})$, where $\theta$ stands for the probability of heads in the coin toss. Luckily, we have a conjugate prior distribution for the binomial case that occupies us - the beta distribution, facilitating the calculation of the posterior distribution. First scenario - The Fair-Minded Player: We walk into the game (not a very exciting game, but still...), and we have absolutely no reason to assume that there is foul play going on. Things being by nature less than perfect, we have it in our mind that the coin is fair-ish. In other words, we think that the probability of heads, $\theta$, falls somewhere around $\frac{1}{2}$. Later, the unexpected single tail out of $6$ tosses, will force us to move the posterior probability of $\theta$ to the left (the arrows indicate the influence of the data on the prior distribution): Second Scenario - The Shrewd Player: We strongly suspect from insider's leaked information that the game is markedly biased towards tails, and we not only are about to make a killing, but also in need to further reinforce our conviction after the first round, doubling down our bet: Third Scenario - Losing Your Shirt: We've never played before, but we have read a manual, and we feel ready. All signs clearly indicate that the coin is markedly biased towards $heads$, a mistake that we will soon start to correct at a high $\ $\$ cost: Fourth Scenario - No Idea Whatsoever: It's a good thing that the $\beta(1,1)$ distribution turns into a $U\,(0,1)$ to address this scenario, where only the likelihood will influence the -posterior probability of $\theta$. As brought up to my attention, a Jeffreys prior is close and possibly more correct: So I hope this provides a bit of a light-hearted visual depiction of our approach to estimating the chances of this game being rigged, perhaps encapsulating more of a real scenario than calculations of the type pbinom(1, 6, 0.5). If you want the code in R, and the credits to a great video with Matlab illustrations, I posted it here.
Check whether a coin is fair I am not going to attempt to provide final answers to your question; I believe the topic is more than addressed after the comprehensive response given by Glen. However, and apropos of his comment abou
39,229
Check whether a coin is fair
I'm thinking of using a chi-square goodness-of-fit test for the categorical outcome. Null hypothesis: half of the tosses come up heads and half come up tails. Alternative hypothesis: the above is not true. Then you calculate the chi-square statistic using the formula sum((f0-fe)^2/fe), where f0 is the observed frequency in each category and fe is the expected frequency under the null. You then compare this value with the critical chi-square value from a table to determine whether to reject the null hypothesis.
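In R this is a one-liner for the 5-heads / 1-tail data (with only 6 tosses the expected count per cell is 3, so R will warn that the chi-squared approximation may be inaccurate):

chisq.test(c(5, 1), p = c(0.5, 0.5))
# X-squared = (5-3)^2/3 + (1-3)^2/3 = 2.667 on 1 df, p ~ 0.10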
Check whether a coin is fair
I'm thinking using chi square to measure the statistical difference between categorical variables. Null hypothesis: half of the coins you tossed are heads and half are tails. Alternative hypothesis: o
Check whether a coin is fair I'm thinking using chi square to measure the statistical difference between categorical variables. Null hypothesis: half of the coins you tossed are heads and half are tails. Alternative hypothesis: opposite to the above Then you calculate the chi square using this formula sum((f0-fe)^2/fe) where f0 is your statistic or point estimate and fe is the expected value. And then you compare this value with critical chi square value given from table to determine if you reject null hypothesis.
Check whether a coin is fair I'm thinking using chi square to measure the statistical difference between categorical variables. Null hypothesis: half of the coins you tossed are heads and half are tails. Alternative hypothesis: o
39,230
Implementing Balanced Random Forest (BRF) in R using RandomForests
You can balance your random forests using case weights. Here's a simple example:

library(ranger) #Best random forest implementation in R

#Make a dataset
set.seed(43)
nrow <- 1000
ncol <- 10
X <- matrix(rnorm(nrow * ncol), ncol=ncol)
CF <- rnorm(ncol)
Y <- (X %*% CF + rnorm(nrow))[,1]
Y <- as.integer(Y > quantile(Y, 0.90))
table(Y)

#Compute weights to balance the RF
w <- 1/table(Y)
w <- w/sum(w)
weights <- rep(0, nrow)
weights[Y == 0] <- w['0']
weights[Y == 1] <- w['1']
table(weights, Y)

#Fit the RF
data <- data.frame(Y=factor(ifelse(Y==0, 'no', 'yes')), X)
model <- ranger(Y~., data, case.weights=weights)
print(model)
Implementing Balanced Random Forest (BRF) in R using RandomForests
You can balance your random forests using case weights. Here's a simple example: library(ranger) #Best random forest implementation in R #Make a dataste set.seed(43) nrow <- 1000 ncol <- 10 X <- mat
Implementing Balanced Random Forest (BRF) in R using RandomForests You can balance your random forests using case weights. Here's a simple example: library(ranger) #Best random forest implementation in R #Make a dataste set.seed(43) nrow <- 1000 ncol <- 10 X <- matrix(rnorm(nrow * ncol), ncol=ncol) CF <- rnorm(ncol) Y <- (X %*% CF + rnorm(nrow))[,1] Y <- as.integer(Y > quantile(Y, 0.90)) table(Y) #Compute weights to balance the RF w <- 1/table(Y) w <- w/sum(w) weights <- rep(0, nrow) weights[Y == 0] <- w['0'] weights[Y == 1] <- w['1'] table(weights, Y) #Fit the RF data <- data.frame(Y=factor(ifelse(Y==0, 'no', 'yes')), X) model <- ranger(Y~., data, case.weights=weights) print(model)
Implementing Balanced Random Forest (BRF) in R using RandomForests You can balance your random forests using case weights. Here's a simple example: library(ranger) #Best random forest implementation in R #Make a dataste set.seed(43) nrow <- 1000 ncol <- 10 X <- mat
39,231
Implementing Balanced Random Forest (BRF) in R using RandomForests
For reference and adding to @zach's answer: The package ranger now(*) implements a sample.fraction argument that allows a vector of class-specific values for a stratified sampling scheme suitable for imbalanced classes. (*) See issue #167 and the fix in #263 allowing a class-wise sample.fraction.
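A minimal hedged sketch of the call (the data frame dat and its factor response Y are assumed to exist; the exact semantics of the per-class fractions are documented in ?ranger):

library(ranger)
fit <- ranger(Y ~ ., data = dat,
              replace = FALSE,
              sample.fraction = c(0.1, 0.1))  # one entry per class level of Y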
Implementing Balanced Random Forest (BRF) in R using RandomForests
For reference and adding to @zach's answer: The package ranger now(*) implements a sample.fraction argument that allows a vector of class-specific values for a stratified sampling scheme suitable for
Implementing Balanced Random Forest (BRF) in R using RandomForests For reference and adding to @zach's answer: The package ranger now(*) implements a sample.fraction argument that allows a vector of class-specific values for a stratified sampling scheme suitable for imbalance cases. (*) see issue #167 and the fix #263 allowing class-wise sample.fraction
Implementing Balanced Random Forest (BRF) in R using RandomForests For reference and adding to @zach's answer: The package ranger now(*) implements a sample.fraction argument that allows a vector of class-specific values for a stratified sampling scheme suitable for
39,232
Implementing Balanced Random Forest (BRF) in R using RandomForests
The authors gave a presentation of the technique, found here: http://www.interfacesymposia.org/I04/I2004Proceedings/ChenChao/ChenChao.presentation.pdf According to the authors, there is an add-on package for R that implements their original Fortran code. Here are the working links to the R package: https://cran.r-project.org/web/packages/randomForest/index.html https://CRAN.R-project.org/package=randomForest Unfortunately, if you search the documentation for that package, there is no mention of "balanced" or "brf." This paper provides a clue: "we estimate balanced RF models using the sampsize argument from the randomForest package". This can save you from having to implement the method manually.
Implementing Balanced Random Forest (BRF) in R using RandomForests
The writers had a presentation of the techniques found here: http://www.interfacesymposia.org/I04/I2004Proceedings/ChenChao/ChenChao.presentation.pdf According to the authors, there’s an add-on packa
Implementing Balanced Random Forest (BRF) in R using RandomForests The writers had a presentation of the techniques found here: http://www.interfacesymposia.org/I04/I2004Proceedings/ChenChao/ChenChao.presentation.pdf According to the authors, there’s an add-on package to R that implements their original Fortran: Here are the working links to the R package: https://cran.r-project.org/web/packages/randomForest/index.html https://CRAN.R-project.org/package=randomForest Unfortunately if you search the documentation for that package here, there is no mention of "balanced" or "brf." This paper, provides a clue: "we estimate balanced RF models using the sampsize argument from the randomForest package" This can save you from having to implement this manually.
Implementing Balanced Random Forest (BRF) in R using RandomForests The writers had a presentation of the techniques found here: http://www.interfacesymposia.org/I04/I2004Proceedings/ChenChao/ChenChao.presentation.pdf According to the authors, there’s an add-on packa
39,233
Implementing Balanced Random Forest (BRF) in R using RandomForests
The "randomForest" function in the "randomForest" R package supports the Balanced Random Forest. One need to specify the "strata" and the "sampsize" parameters to enable the balanced bootstrapping resampling. strata A (factor) variable that is used for stratified sampling. sampsize Size(s) of sample to draw. For classification, if sampsize is a vector of the length the number of strata, then sampling is stratified by strata, and the elements of sampsize indicate the numbers to be drawn from the strata. A reference can be found here at: http://appliedpredictivemodeling.com/blog/2013/12/8/28rmc2lv96h8fw8700zm4nl50busep Hope it helps!
Implementing Balanced Random Forest (BRF) in R using RandomForests
The "randomForest" function in the "randomForest" R package supports the Balanced Random Forest. One need to specify the "strata" and the "sampsize" parameters to enable the balanced bootstrapping res
Implementing Balanced Random Forest (BRF) in R using RandomForests The "randomForest" function in the "randomForest" R package supports the Balanced Random Forest. One need to specify the "strata" and the "sampsize" parameters to enable the balanced bootstrapping resampling. strata A (factor) variable that is used for stratified sampling. sampsize Size(s) of sample to draw. For classification, if sampsize is a vector of the length the number of strata, then sampling is stratified by strata, and the elements of sampsize indicate the numbers to be drawn from the strata. A reference can be found here at: http://appliedpredictivemodeling.com/blog/2013/12/8/28rmc2lv96h8fw8700zm4nl50busep Hope it helps!
Implementing Balanced Random Forest (BRF) in R using RandomForests The "randomForest" function in the "randomForest" R package supports the Balanced Random Forest. One need to specify the "strata" and the "sampsize" parameters to enable the balanced bootstrapping res
39,234
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate
Yes, these things are done quite often, at least in genetics. To address your specific points: This is quite commonly done, and I have personally reported results this way. Though, make sure to make it clear at what level of FDR you are reporting. Think of it akin to "marginal" significance; people may find it interesting, but they have to know what they're looking at. No, you would not have to do multiple corrections for this, as you mentioned you're simply manipulating the $P$-values in a different way and not changing anything. I would also look further into what Dr. Motulsky has mentioned above, if I were you. Reporting the $q$ value is a very common and useful metric.
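In R the BH adjustment is a one-liner; the adjusted value for each comparison is the smallest FDR level Q at which it would be called a discovery (the p-values below are made up):

p <- c(0.001, 0.008, 0.02, 0.04, 0.12, 0.35, 0.80)   # made-up p-values
q <- p.adjust(p, method = "BH")                      # BH-adjusted values
data.frame(p = p, BH = q, discovery_at_Q_0.10 = q <= 0.10)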
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate
Yes, these things are done quite often, at least in genetics. To address your specific points: This is quite commonly done, and I have personally reported results this way. Though, make sure to make
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate Yes, these things are done quite often, at least in genetics. To address your specific points: This is quite commonly done, and I have personally reported results this way. Though, make sure to make it clear at what level of FDR you are reporting. Think of it akin to "marginal" significance; people may find it interesting, but they have to know what they're looking at. No, you would not have to do multiple corrections for this, as you mentioned you're simply manipulating the $P$-values in a different way and not changing anything. I would also look further into what Dr. Motulsky has mentioned above, if I were you. Reporting the $q$ value is a very common and useful metric.
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate Yes, these things are done quite often, at least in genetics. To address your specific points: This is quite commonly done, and I have personally reported results this way. Though, make sure to make
39,235
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate
An alternative you might find useful is to report -- for each comparison -- the q value. The q value (lower case) is the Q value (upper case) at which that particular comparison would be right at the border of being a discovery. You can then report the q value for each comparison, rather than just a list of which comparisons are "discoveries" using an arbitrary value of Q.
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate
An alternative you might find useful is to report -- for each comparison -- the q value. The q value (lower case) is the Q value (upper case) at which that particular comparison would be right at the
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate An alternative you might find useful is to report -- for each comparison -- the q value. The q value (lower case) is the Q value (upper case) at which that particular comparison would be right at the border of being a discovery. You can then report the q value for each comparison, rather than just a list of which comparisons are "discoveries" using an arbitrary value of Q.
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate An alternative you might find useful is to report -- for each comparison -- the q value. The q value (lower case) is the Q value (upper case) at which that particular comparison would be right at the
39,236
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate
There's nothing magical about alpha=.05. I see nothing wrong with going with alpha/q=.10. I would also report confidence intervals (and adjust these as well). Alternatively, use a Bayesian model with priors up to the job of whacking down false positives (horseshoe, Laplace).
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate
There's nothing magical about alpha=.05. I see nothing wrong with going with alpha/q=.10. I would also report confidence intervals (and adjust these as well). Alternatively, use a Bayesian model wit
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate There's nothing magical about alpha=.05. I see nothing wrong with going with alpha/q=.10. I would also report confidence intervals (and adjust these as well). Alternatively, use a Bayesian model with priors up to the job of wacking down false positives (horseshoe, laplace)
Choosing the Q-value in the Benjamini-Hochberg procedure to control false discovery rate There's nothing magical about alpha=.05. I see nothing wrong with going with alpha/q=.10. I would also report confidence intervals (and adjust these as well). Alternatively, use a Bayesian model wit
39,237
Difference in output between SAS's proc genmod and R's glm
I notice several things here. First, when you enter your data via matrix, all the data have to be the same type. Thus, they are coerced to the most inclusive type, strings, which in turn are coerced to factors by default. Note:

testdata <- data.frame(matrix(c("f","Test", 1.75, 16, 0, 16, 0, 1, 1, ...
sapply(testdata, class)
#      sex  vaccine     dose    not_p     para        n      pct   vacnum    sexno
# "factor" "factor" "factor" "factor" "factor" "factor" "factor" "factor" "factor"

Try using read.table(text='...', sep=",") instead:

testdata <- read.table(text='"f", "Test", 1.75, 16, 0, 16, 0, 1, 1
"m", "Test", 1.75, 15, 1, 16, 6.25, 1, 0
"f", "Test", 2.75, 4, 12, 16, 75, 1, 1
"m", "Test", 2.75, 9, 6, 15, 40, 1, 0
"f", "WHO", 1.75, 15, 1, 16, 6.25, 0, 1
"m", "WHO", 1.75, 14, 2, 16, 12.5, 0, 0
"f", "WHO", 2.75, 2, 13, 15, 86.6667, 0, 1
"m", "WHO", 2.75, 3, 13, 16, 81.25, 0, 0', sep=",")
names(testdata) <- c("sex", "vaccine", "dose", "not_p", "para", "n", "pct", "vacnum", "sexno")
sapply(testdata, class)
#       sex   vaccine      dose     not_p      para         n       pct    vacnum
#  "factor"  "factor" "numeric" "integer" "integer" "integer" "numeric" "integer"
#     sexno
# "integer"

(That was small potatoes.) The next trap to worry about is that SAS and R code logistic regression for binomial data differently. SAS uses "events over trials", but R uses counts of successes and failures (supplied via cbind). Thus, your model formula should be:

form <- as.formula("cbind(para, n-para) ~ dose + sex + vacnum")

Finally, you specified family=quasibinomial (i.e., the quasibinomial) in your R code, but DIST=BIN (i.e., the binomial) in your SAS code. To match the SAS output, use the binomial instead. Thus, your final model is:

fitreduced <- glm(form, family=binomial(link="logit"), data=testdata)
coef(summary(fitreduced))
#               Estimate Std. Error   z value     Pr(>|z|)
# (Intercept) -9.4020028  1.6219570 -5.796703 6.763131e-09
# dose         3.9207805  0.6460193  6.069138 1.285986e-09
# sexf         0.5574087  0.5184112  1.075225 2.822741e-01
# vacnum      -1.3221011  0.5482645 -2.411430 1.589012e-02

This seems to match the SAS estimates and standard errors.
Difference in output between SAS's proc genmod and R's glm
I notice several things here. First, when you enter your data via matrix, all the data have to be the same type. Thus, they are coerced to be the most inclusive type, strings, which in turn are coe
Difference in output between SAS's proc genmod and R's glm I notice several things here. First, when you enter your data via matrix, all the data have to be the same type. Thus, they are coerced to be the most inclusive type, strings, which in turn are coerced to be factors by default. Note: testdata <- data.frame(matrix(c("f","Test", 1.75, 16, 0, 16, 0, 1, 1, ... sapply(testdata, class) # sex vaccine dose not_p para n pct vacnum sexno # "factor" "factor" "factor" "factor" "factor" "factor" "factor" "factor" "factor" Try using read.table(text='...', sep=",") instead: testdata <- read.table(text='"f", "Test", 1.75, 16, 0, 16, 0, 1, 1 "m", "Test", 1.75, 15, 1, 16, 6.25, 1, 0 "f", "Test", 2.75, 4, 12, 16, 75, 1, 1 "m", "Test", 2.75, 9, 6, 15, 40, 1, 0 "f", "WHO", 1.75, 15, 1, 16, 6.25, 0, 1 "m", "WHO", 1.75, 14, 2, 16, 12.5, 0, 0 "f", "WHO", 2.75, 2, 13, 15, 86.6667, 0, 1 "m", "WHO", 2.75, 3, 13, 16, 81.25, 0, 0', sep=",") names(testdata) <- c("sex", "vaccine", "dose", "not_p", "para", "n", "pct", "vacnum", "sexno") sapply(testdata, class) # sex vaccine dose not_p para n pct vacnum # "factor" "factor" "numeric" "integer" "integer" "integer" "numeric" "integer" # sexno # "integer" (That was small potatoes.) The next trap to worry about is that SAS and R code logistic regression for binomial data differently. SAS uses "events over trials", but R uses the odds, successes/failures. Thus, your model formula should be: form <- as.formula("cbind(para, n-para) ~ dose + sex + vacnum") Finally, you specified family=quasibinomial (i.e., the quasibinomial) in your R code, but \DIST=BIN (i.e., the binomial) in your SAS code. To match the SAS output, use the binomial instead. Thus, your final model is: fitreduced <- glm(form, family=binomial(link="logit"), data=testdata) coef(summary(fitreduced)) # Estimate Std. Error z value Pr(>|z|) # (Intercept) -9.4020028 1.6219570 -5.796703 6.763131e-09 # dose 3.9207805 0.6460193 6.069138 1.285986e-09 # sexf 0.5574087 0.5184112 1.075225 2.822741e-01 # vacnum -1.3221011 0.5482645 -2.411430 1.589012e-02 This seems to match the SAS estimates and standard errors.
Difference in output between SAS's proc genmod and R's glm I notice several things here. First, when you enter your data via matrix, all the data have to be the same type. Thus, they are coerced to be the most inclusive type, strings, which in turn are coe
39,238
Finding the maximum point of probability density function
It's not a stupid question at all. See this post for a case where a likelihood can have two maxima and a minimum. When dealing with maximum likelihood in a general theoretical approach, we tend to silently assume that the likelihood is a unimodal function (usually having a maximum). Moreover, many "known" distributions have log-concave densities (in their variable). This, coupled with the fact that the unknown coefficients have in many cases a linear relationship with the variable (or we can make it linear through a one-to-one parametrization, which leaves the MLE unaffected), makes the density log-concave in the unknown coefficients also... which are the arguments with respect to which we maximize the (by now, concave) log-likelihood. Satisfaction of the second-order conditions follows, in such cases. But in more specific theoretical works, where novel log-likelihoods arise, the researcher has in my opinion the responsibility to treat specifically the issue of whether the second-order conditions are satisfied or not. Finally, in applied work, the software algorithms check on their own whether the Hessian is negative definite at the point that they locate as stationary, (and report on the matter) so at least we know whether we have a local maximum or not.
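A small sketch of that last point: maximize a normal log-likelihood numerically and inspect the Hessian at the reported optimum (simulated data):

set.seed(5)
x <- rnorm(100, mean = 2, sd = 1.5)
negll <- function(par) -sum(dnorm(x, mean = par[1], sd = exp(par[2]), log = TRUE))
fit <- optim(c(0, 0), negll, hessian = TRUE)   # par[2] = log(sd) keeps sd positive
fit$par                                        # (mu_hat, log sd_hat)
eigen(fit$hessian)$values  # all positive here: the Hessian of the *negative* log-likelihood
                           # is positive definite, i.e. the log-likelihood has a local maximum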
Finding the maximum point of probability density function
It's not a stupid question at all. See this post for a case where a likelihood can have two maxima and a minimum. When dealing with maximum likelihood in a general theoretical approach, we tend to si
Finding the maximum point of probability density function It's not a stupid question at all. See this post for a case where a likelihood can have two maxima and a minimum. When dealing with maximum likelihood in a general theoretical approach, we tend to silently assume that the likelihood is a unimodal function (usually having a maximum). Moreover, many "known" distributions have log-concave densities (in their variable). This, coupled with the fact that the unknown coefficients have in many cases a linear relationship with the variable (or we can make it linear through a one-to-one parametrization, which leaves the MLE unaffected), makes the density log-concave in the unknown coefficients also... which are the arguments with respect to which we maximize the (by now, concave) log-likelihood. Satisfaction of the second-order conditions follows, in such cases. But in more specific theoretical works, where novel log-likelihoods arise, the researcher has in my opinion the responsibility to treat specifically the issue of whether the second-order conditions are satisfied or not. Finally, in applied work, the software algorithms check on their own whether the Hessian is negative definite at the point that they locate as stationary, (and report on the matter) so at least we know whether we have a local maximum or not.
Finding the maximum point of probability density function It's not a stupid question at all. See this post for a case where a likelihood can have two maxima and a minimum. When dealing with maximum likelihood in a general theoretical approach, we tend to si
39,239
Finding the maximum point of probability density function
First of all, in response to answer from Alecos Papadopoulos, should the software check for negative definite? Yes? Do they? I suspect many don't. But actually, if there are any constraints, including bound constraints such as parameters being nonnegative, and one or more constraints are "active" at a candidate solution (e.g., parameter being estimated is on a bound), then checking for negative definiteness of the Hessian, is NOT what should be done. The correct 2nd order condition is that Z' * Hessian * Z be negative semidefinite , where Z is a basis for the null space of the Jacobian of active constraints. ( Z' * Hessian * Z is the projection of the Hessian into the null space of the Jacobian of active constraints). If the only active constraints are bounds, then Z' * Hessian * Z amounts to eliminating the rows and columns of parameters on a bound from the Hessian. Moreover, the first order conditions require the correct sign Lagrange multiplier for each active bound constraint, which amounts to the requirement that any parameter on a lower bound needs to have its gradient component be nonpositive, and any parameter on an upper bound needs to have its gradient component be nonnegative. And if all first and second order conditions are satisfied, then that only tells you it's a local maximum, unless you know the likelihood function to be concave (or log-concave). So let's say you found the GLOBAL maximum, but there are several local maxima with likelihood function values almost as high, do you think that maximum likelihood estimation should provide you great confidence in the solution? The confidence intervals which the software spits out are only "valid" relative to that local (even if global) maximum, and will provide you NO indication that similar, or better values, are far outside any confidence intervals you form. If there are many disparate regions with similar likelihood, the maximum likelihood might not be very high likelihood in absolute terms.
Finding the maximum point of probability density function
First of all, in response to answer from Alecos Papadopoulos, should the software check for negative definite? Yes? Do they? I suspect many don't. But actually, if there are any constraints, includi
Finding the maximum point of probability density function First of all, in response to answer from Alecos Papadopoulos, should the software check for negative definite? Yes? Do they? I suspect many don't. But actually, if there are any constraints, including bound constraints such as parameters being nonnegative, and one or more constraints are "active" at a candidate solution (e.g., parameter being estimated is on a bound), then checking for negative definiteness of the Hessian, is NOT what should be done. The correct 2nd order condition is that Z' * Hessian * Z be negative semidefinite , where Z is a basis for the null space of the Jacobian of active constraints. ( Z' * Hessian * Z is the projection of the Hessian into the null space of the Jacobian of active constraints). If the only active constraints are bounds, then Z' * Hessian * Z amounts to eliminating the rows and columns of parameters on a bound from the Hessian. Moreover, the first order conditions require the correct sign Lagrange multiplier for each active bound constraint, which amounts to the requirement that any parameter on a lower bound needs to have its gradient component be nonpositive, and any parameter on an upper bound needs to have its gradient component be nonnegative. And if all first and second order conditions are satisfied, then that only tells you it's a local maximum, unless you know the likelihood function to be concave (or log-concave). So let's say you found the GLOBAL maximum, but there are several local maxima with likelihood function values almost as high, do you think that maximum likelihood estimation should provide you great confidence in the solution? The confidence intervals which the software spits out are only "valid" relative to that local (even if global) maximum, and will provide you NO indication that similar, or better values, are far outside any confidence intervals you form. If there are many disparate regions with similar likelihood, the maximum likelihood might not be very high likelihood in absolute terms.
Finding the maximum point of probability density function First of all, in response to answer from Alecos Papadopoulos, should the software check for negative definite? Yes? Do they? I suspect many don't. But actually, if there are any constraints, includi
39,240
Lognormal Regression?
I would suggest using a generalised linear model (GLM) with a log-link function instead of directly log-transforming your variables; in R you can simply use glm with family= gaussian(link='log') to begin with. I say this because modelling the mean of the log-transformed variable (as you would do by simply taking the logarithms of your dependent variable) is not always the same as modelling the log of the variable's mean. The user @Corone made a very informative post about this issue here. In short, if the logarithm transformation is not perfectly appropriate it will give suboptimal results in comparison with a GLM. A very good initial point is the paper by Lindsey & Jones on "Choosing among generalized linear models applied to medical data". (It is easily found online for free if you google/bing the title...)
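A side-by-side sketch of the two approaches on simulated data (the GLM models the log of E[Y], while the log-transformed regression models E[log Y]):

set.seed(6)
x <- runif(200, 1, 10)
y <- exp(1 + 0.3 * x) * (1 + rnorm(200, sd = 0.1))      # simulated positive response

fit_glm <- glm(y ~ x, family = gaussian(link = "log"))  # models log of the mean of y
fit_lm  <- lm(log(y) ~ x)                               # models the mean of log(y)
coef(fit_glm)
coef(fit_lm)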
Lognormal Regression?
I would suggest using a generalised linear model (GLM) with a log-link function instead of directly log-transforming your variables; in R you can simply use glm with family= gaussian(link='log') to be
Lognormal Regression? I would suggest using a generalised linear model (GLM) with a log-link function instead of directly log-transforming your variables; in R you can simply use glm with family= gaussian(link='log') to begin with. I say this because modelling the mean of the log-transformed variable (as you would do by simply taking the logarithms of your dependent variable) is not always the same as modelling the log of the variable's mean. The user @Corone made a very informative post about this issue here. In short, if the logarithm transformation is not perfectly appropriate it will give suboptimal results in comparison with a GLM. A very good initial point is the paper by Lindsey & Jones on "Choosing among generalized linear models applied to medical data". (It is easily found online for free if you google/bing the title...)
Lognormal Regression? I would suggest using a generalised linear model (GLM) with a log-link function instead of directly log-transforming your variables; in R you can simply use glm with family= gaussian(link='log') to be
39,241
Lognormal Regression?
I want to take the log of the response variable and do a least-squares regression line over my predictive variable. If I expected the relationship to be linear on the log scale, that's where I'd probably start. Is this an okay thing to do? It can be; it depends on what else is going on. I know that for my original variable, variance grows with the mean, but would taking logs adjust for this appropriately? It might, or it might not. It depends on exactly how the variance is related to the mean. If the standard deviation is a constant multiple of the mean (variance proportional to mean squared), then you should end up with constant variance on the log scale. Otherwise you won't.
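A quick simulated illustration in R of the "standard deviation proportional to the mean" case (numbers are arbitrary):
set.seed(2)
x  <- rep(1:10, each = 50)
mu <- 2 * x
y  <- rnorm(500, mean = mu, sd = 0.2 * mu)   # SD grows proportionally with the mean
round(tapply(y, x, sd), 2)                   # increases with x on the raw scale
round(tapply(log(y), x, sd), 2)              # roughly constant (about 0.2) after logging
If instead the variance (rather than the SD) were proportional to the mean, a square-root transform or a suitable GLM would be a better choice than the log.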
Lognormal Regression?
I want to take the log of the response variable and do a least-squares regression line over my predictive variable. If I expected the relationship to be linear on the log scale, that's where I'd prob
Lognormal Regression? I want to take the log of the response variable and do a least-squares regression line over my predictive variable. If I expected the relationship to be linear on the log scale, that's where I'd probably start. Is this an okay thing to do? It can be; it depends on what else is going on. I know that for my original variable, variance grows with the mean, but would taking logs adjust for this appropriately? It might, or it might not. It depends on exactly how the variance is related to the mean. If the standard deviation is a constant multiple of the mean (variance proportional to mean squared), then you should end up with constant variance on the log scale. Otherwise you won't.
Lognormal Regression? I want to take the log of the response variable and do a least-squares regression line over my predictive variable. If I expected the relationship to be linear on the log scale, that's where I'd prob
39,242
How can you convert a gamma distribution into normal distribution? [closed]
Hope this answer does not seem facetious: You can transform random variables from one to another with the inverse CDF method: If $\gamma$ is Gamma distributed (with some fixed parameters), and $F$ is its CDF, then $F(\gamma)$ has a uniform(0,1) distribution. Thus $\Phi^{-1}(F(\gamma))$ has a standard normal distribution. This requires some computation of course, probably more than computing the mean of the Gamma directly. But I guess any suitable transform would, because the gamma and normal distribution PDF shapes are rather different in general (when the Gamma shape parameter is small). However, the Gamma distribution is divisible in the shape parameter, i.e. Gamma(shape $=a+b$, scale $=c$) has the same distribution as Gamma$(a,c)$ $+$ Gamma$(b,c)$ for independent summands. Thus, as Stephane Laurent mentioned, the central limit theorem says that the normal distribution gives a good approximation when the shape parameter is large.
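A short R sketch of the inverse CDF transform just described (shape and rate values are arbitrary):
set.seed(3)
g <- rgamma(10000, shape = 2, rate = 1)
u <- pgamma(g, shape = 2, rate = 1)   # F(gamma): approximately Uniform(0,1)
z <- qnorm(u)                         # Phi^{-1}(F(gamma)): approximately standard Normal
c(mean(z), sd(z))                     # should be close to 0 and 1
A normal quantile plot of z (qqnorm(z)) should look essentially straight, in contrast to one of g.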
How can you convert a gamma distribution into normal distribution? [closed]
Hope this answer does not seem facetious: You can transform random variables from one to another with the inverse CDF method: If $\gamma$ is Gamma distributed (with some fixed parameters), and $F$ its
How can you convert a gamma distribution into normal distribution? [closed] Hope this answer does not seem facetious: You can transform random variables from one to another with the inverse CDF method: If $\gamma$ is Gamma distributed (with some fixed parameters), and $F$ its CDF then $F(\gamma)$ has uniform(0,1) distribution. Thus $\Phi^{-1}(F(\gamma))$ has Normal distribution. This requires some computation of course, probably more than computing the mean of the Gamma directly. But I guess any suitable transform would, because the gamma and normal distribution PDF shapes are rather different in general (when the Gamma shape parameter is small). However, the Gamma distribution is divisible in the shape parameter, i.e. Gamma(shape $=a+b$, scale $=c$) has the same distribution as Gamma$(a,c)$ $+$ Gamma$(b,c)$. Thus, as Stephane Laurent mentioned, the central limit theorem says that the normal distribution gives a good approximation when the shape parameter is large.
How can you convert a gamma distribution into normal distribution? [closed] Hope this answer does not seem facetious: You can transform random variables from one to another with the inverse CDF method: If $\gamma$ is Gamma distributed (with some fixed parameters), and $F$ its
39,243
What is the terminology for data aggregated via summed totals versus data aggregated via means?
Properties that are physically additive are called extensive. Mass is extensive, as when you add (literally!) weights to a balance. A feature of extensive properties is that totals make sense. In your example, gas used, measured in kWh, is one instance. The word physically is not meant restrictively here. My income in April and my income in May can be added, as can my expenditures. Both are extensive properties. So, there are other non-physical situations in which addition makes sense. If totals make sense, then means make sense too. Whether they are the measures you want to use, however, depends on your purpose. Otherwise properties that are not physically additive are called intensive. Temperature is intensive. If you mix bodies, the resulting temperature is some kind of weighted mean, and certainly not the total. This Wikipedia article says much more from a physical science point of view. One source emphasising the importance of this distinction in statistical science is Cox, D.R. and Snell, E.J. 1981. Applied statistics: principles and examples. London: Chapman and Hall. See p.14. (They use the term non-extensive, which I do not find attractive.)
What is the terminology for data aggregated via summed totals versus data aggregated via means?
Properties that are physically additive are called extensive. Mass is extensive, as when you add (literally!) weights to a balance. A feature of extensive properties is that totals make sense. In you
What is the terminology for data aggregated via summed totals versus data aggregated via means? Properties that are physically additive are called extensive. Mass is extensive, as when you add (literally!) weights to a balance. A feature of extensive properties is that totals make sense. In your example, gas used, measured in kWh, is one instance. The word physically is not meant restrictively here. My income in April and my income in May can be added, as can my expenditures. Both are extensive properties. So, there are other non-physical situations in which addition makes sense. If totals make sense, then means make sense too. Whether they are the measures you want to use, however, depends on your purpose. Otherwise properties that are not physically additive are called intensive. Temperature is intensive. If you mix bodies, the resulting temperature is some kind of weighted mean, and certainly not the total. This Wikipedia article says much more from a physical science point of view. One source emphasising the importance of this distinction in statistical science is Cox, D.R. and Snell, E.J. 1981. Applied statistics: principles and examples. London: Chapman and Hall. See p.14. (They use the term non-extensive, which I do not find attractive.)
What is the terminology for data aggregated via summed totals versus data aggregated via means? Properties that are physically additive are called extensive. Mass is extensive, as when you add (literally!) weights to a balance. A feature of extensive properties is that totals make sense. In you
39,244
What is the terminology for data aggregated via summed totals versus data aggregated via means?
In database applications / data warehousing / BI it is common to classify measures by additivity: additive (example: money amounts), semi-additive (example: account balance, which can be aggregated across e.g. departments but not over time), and non-additive (example: ratios such as growth rates). https://stackoverflow.com/questions/34295293/whats-the-difference-between-additive-semi-additive-and-non-additive-measures http://www.kimballgroup.com/data-warehouse-business-intelligence-resources/kimball-techniques/dimensional-modeling-techniques/additive-semi-additive-non-additive-fact/
What is the terminology for data aggregated via summed totals versus data aggregated via means?
In database applications/data warehouse/BI it is common to refer to additive measures additive example: money semi-additive: balance (can aggregate across eg departments but not time) non-additive: r
What is the terminology for data aggregated via summed totals versus data aggregated via means? In database applications/data warehouse/BI it is common to refer to additive measures additive example: money semi-additive: balance (can aggregate across eg departments but not time) non-additive: ratios (eg growth rate etc) https://stackoverflow.com/questions/34295293/whats-the-difference-between-additive-semi-additive-and-non-additive-measures http://www.kimballgroup.com/data-warehouse-business-intelligence-resources/kimball-techniques/dimensional-modeling-techniques/additive-semi-additive-non-additive-fact/
What is the terminology for data aggregated via summed totals versus data aggregated via means? In database applications/data warehouse/BI it is common to refer to additive measures additive example: money semi-additive: balance (can aggregate across eg departments but not time) non-additive: r
39,245
Student's t-test with a covariate?
One common means of controlling for some other covariate would be via regression. Put the X and Y values into the response (DV), and a Y-group indicator (0 if in X, 1 if in Y) as an IV (predictor), along with your covariate (or some suitable proxy for it if the variable can't be measured directly) as another IV. You regress on your covariate and the group-indicator. If there's a difference between X and Y based on the covariate, this will be "adjusted" for by the regression, and significance of the coefficient of the group-indicator will then be a test of a mean-shift between the two groups after accounting for the covariate. This is sometimes called ANCOVA (unless Z is a factor, in which case it would usually be called ANOVA; you can still do it using regression). Some people would formally test parallelism of the two group-lines (by including an interaction between covariate and group-indicator). I think that's wrong-headed (do we believe the hypothesis of no-interaction is exactly true? I don't -- in which case the hypothesis test is a noisy answer to a question we already know the answer to -- surely they're never going to be exactly parallel) ... but I don't especially care about that. The better question here is "are they close enough to parallel that it doesn't badly impact the properties of the inferences we wish to make?". Answering that is nearer to measuring an effect size, so a residual display - e.g. residuals vs covariate that distinguishes the groups with symbols or colors - might come closer to addressing that. However, depending on what you mean by 'latent' in your question, it's possible that you may actually be after something more like an instrumental variable. (There are numerous questions on site on the topic.)
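A minimal R sketch of this regression setup on simulated data (all names and coefficient values are made up; the group coefficient is the covariate-adjusted mean shift):
set.seed(4)
n <- 100
z     <- rnorm(2 * n)                # covariate
group <- rep(c(0, 1), each = n)      # 0 = X sample, 1 = Y sample
y     <- 1 + 0.5 * z + 0.8 * group + rnorm(2 * n)
fit <- lm(y ~ group + z)
summary(fit)$coefficients["group", ] # test of the mean shift after adjusting for z
Fitting lm(y ~ group * z) instead would correspond to the parallelism (interaction) check mentioned above.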
Student's t-test with a covariate?
One common means of controlling for some other covariate would be via regression. Put the X and Y values into the response (DV), and a Y-group indicator (0 if in X, 1 if in Y) as a DV, along with your
Student's t-test with a covariate? One common means of controlling for some other covariate would be via regression. Put the X and Y values into the response (DV), and a Y-group indicator (0 if in X, 1 if in Y) as a DV, along with your covariate (or some suitable proxy for it if the variable can't be measured directly) as another DV. You regress on your covariate and the group-indicator. If there's a difference between X and Y based on the covariate, this will be "adjusted" for by the regession, and significance of the coefficient of the group-indicator will then be a test of a mean-shift between the two groups after accounting for the covariate. This is sometimes called ANCOVA (unless Z is a factor, in which case it would usually be called ANOVA; you can still do it using regression). Some people would formally test parallelism of the two group-lines (by including an interaction between covariate and group-indicator). I think that's wrong-headed (do we believe the hypothesis of no-interaction is exactly true? I don't -- in which case the hypothesis test is a noisy answer to a question we already know the answer to -- surely they're never going to be exactly parallel) ... but don't especially care about. The better question here is "are they close enough to parallel that it doesn't badly impact the properties of the inferences we wish to make?'. Answering that is nearer to measuring an effect size, so a residual display - e.g. residuals vs covariate that distinguishes the groups with symbols or colors - might come closer to addressing that. However, depending on what you mean by 'latent' in your question, it's possible that you may actually be after something more like an instrumental variable. (There are numerous questions on site on the topic.)
Student's t-test with a covariate? One common means of controlling for some other covariate would be via regression. Put the X and Y values into the response (DV), and a Y-group indicator (0 if in X, 1 if in Y) as a DV, along with your
39,246
Student's t-test with a covariate?
By means of a t-test you are assessing whether there is a significant difference between two sets of data --- e.g. the realizations of two random variables $X$ and $Y$. When using a t-test you are doing a hypothesis test, and you can't control for any variable. To be more specific, when doing hypothesis tests you are not establishing any causal relationship between random variables. If you want to investigate the effect of a latent variable $Z$ on both $X$ and $Y$, you should rely on a regression analysis. For a deeper analysis of the role and effects of some variables (which can cause a bias in your analysis), you can apply some methods from graphical models. Here you find a nice IPython Notebook on regression.
Student's t-test with a covariate?
By means of t-test you are assessing whether there is a significant difference between two sets of data --- e.g. the realizations of two random variables $X$ and $Y$. When using t-test you are doing a
Student's t-test with a covariate? By means of t-test you are assessing whether there is a significant difference between two sets of data --- e.g. the realizations of two random variables $X$ and $Y$. When using t-test you are doing a hypothesis test, and you can't control for any variable. To be more specific, when doing hypothesis tests you are not establishing any causal relationship between random variables. If you want to investigate the effect of a latent variable $Z$ on both $X$ and $Y$, you should rely on a regression analysis. For a deeper analysis on the role and effects of some variables (which can cause a bias in your analysis), you can apply some methods of graphical models. Here you find a nice IPython Notebook on regression.
Student's t-test with a covariate? By means of t-test you are assessing whether there is a significant difference between two sets of data --- e.g. the realizations of two random variables $X$ and $Y$. When using t-test you are doing a
39,247
Why the sum of true positive and false positive does not have to be equal to one?
You have four conditional probabilities. $$\text{tp} = \Pr(\text{detected}\mid\text{aircraft present})$$ $$\text{fn} = \Pr(\text{undetected}\mid\text{aircraft present})$$ $$\text{fp} = \Pr(\text{detected}\mid\text{aircraft not present})$$ $$\text{tn} = \Pr(\text{undetected}\mid\text{aircraft not present})$$ Using the definition of conditional probability, it's easy to prove that $$ \text{tp} + \text{fn} = 1 $$ and $$ \text{fp} + \text{tn} = 1. $$ To prove the first one, since $$\{\text{detected}\}\cap\{\text{undetected}\}=\emptyset,$$ and $$\{\text{detected}\}\cup\{\text{undetected}\}=\Omega,$$ the sure event, then (draw a Venn diagram) $$\{\text{aircraft present}\} = \{\text{detected},\text{aircraft present}\}\cup\{\text{undetected},\text{aircraft present}\},$$ in which I've used the shortcut notation $$ \{\text{detected},\text{aircraft present}\} := \{\text{detected}\}\cap\{\text{aircraft present}\}. $$ Hence, $$ \Pr(\text{detected}\mid\text{aircraft present}) + \Pr(\text{undetected}\mid\text{aircraft present}) $$ $$ = \frac{\Pr(\text{detected},\text{aircraft present}) + \Pr(\text{undetected},\text{aircraft present})}{\Pr(\text{aircraft present})} $$ $$ = \frac{\Pr(\text{aircraft present})}{\Pr(\text{aircraft present})} = 1. $$ The intuition is that, if you're given the same information, the conditional probabilities of two complementary events must add up to one. Mathematically, if $\Pr(B)>0$, then $\Pr(\,\cdot\mid B)$ is a probability measure. But, in general, $$ \text{tp} + \text{fp} \neq 1. $$ Consider this: you have a super radar which, given that an aircraft is present, always detects it ($\text{tp}=1$). It never misses a true aircraft. But, sometimes, your super radar is so sensitive that, given that no aircraft is present, it confuses a condor with an aircraft, so that, say, $\text{fp}=0.2$.
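A small R simulation (probabilities chosen arbitrarily) making the same point numerically:
set.seed(5)
aircraft <- rbinom(1e5, 1, 0.3)                          # 1 = aircraft present
detected <- ifelse(aircraft == 1, rbinom(1e5, 1, 0.9),   # detection prob. 0.9 when present
                                  rbinom(1e5, 1, 0.2))   # false alarm prob. 0.2 when absent
tp <- mean(detected[aircraft == 1]); fn <- 1 - tp
fp <- mean(detected[aircraft == 0]); tn <- 1 - fp
c(tp + fn, fp + tn, tp + fp)   # first two are exactly 1 by construction, the last is not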
Why the sum of true positive and false positive does not have to be equal to one?
You have four conditional probabilities. $$\text{tp} = \Pr(\text{detected}\mid\text{aircraft present})$$ $$\text{fn} =\Pr(\text{undetected}\mid\text{aircraft present})$$ $$\text{fp} = \Pr(\text{detect
Why the sum of true positive and false positive does not have to be equal to one? You have four conditional probabilities. $$\text{tp} = \Pr(\text{detected}\mid\text{aircraft present})$$ $$\text{fn} =\Pr(\text{undetected}\mid\text{aircraft present})$$ $$\text{fp} = \Pr(\text{detected}\mid\text{aircraft not present})$$ $$\text{tn} = \Pr(\text{undetected}\mid\text{aircraft not present})$$ Using the definition of conditional probability, it's easy to prove that $$ \text{tp} + \text{fn} = 1 $$ and $$ \text{fp} + \text{tn} = 1. $$ To prove the first one, since $$\{\text{detected}\}\cap\{\text{undetected}\}=\emptyset,$$ and $$\{\text{detected}\}\cup\{\text{undetected}\}=\Omega,$$ the sure event, then (draw a Venn diagram) $$\{\text{aircraft present}\} = \{\text{detected},\text{aircraft present}\}\cup\{\text{undetected},\text{aircraft present}\},$$ in which I've used the shortcut notation $$ \{\text{detected},\text{aircraft present}\} := \{\text{detected}\}\cap\{\text{aircraft present}\}. $$ Hence, $$ \Pr(\text{detected}\mid\text{aircraft present}) + \Pr(\text{undetected}\mid\text{aircraft present}) $$ $$ = \frac{\Pr(\text{detected},\text{aircraft present}) + \Pr(\text{undetected},\text{aircraft present})}{\Pr(\text{aircraft present})} $$ $$ = \frac{\Pr(\text{aircraft present})}{\Pr(\text{aircraft present})} = 1. $$ The intuition is that, if you're given the same information, the conditional probabilities of two complementary events must add up to one. Mathematically, if $\Pr(B)>0$, then $\Pr(\,\cdot\mid B)$ is a probability measure. But, in general, $$ \text{tp} + \text{fp} \neq 1. $$ Consider this: you have a super radar which, given that an aircraft is present, always detect it ($\text{tp}=1$). It never misses a true aircraft. But, sometimes, your super radar is so sensitive that, given that no aircraft is present, it confuses a condor with an aircraft, so that, say, $\text{fp}=0.2$.
Why the sum of true positive and false positive does not have to be equal to one? You have four conditional probabilities. $$\text{tp} = \Pr(\text{detected}\mid\text{aircraft present})$$ $$\text{fn} =\Pr(\text{undetected}\mid\text{aircraft present})$$ $$\text{fp} = \Pr(\text{detect
39,248
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so?
As Maarten points out, your problem is that you have not set the scale of the second model. True, you have more observed variances/covariances than what you need to identify your model, but you still need to provide a point of reference from which other model parameters can be calculated (Brown, 2015). You can set the scale using one of three methods: Marker variable: one factor loading per latent variable is fixed to 1 Fixed factor: each latent variable's variance is fixed to 1 Effects-coding: factor loadings for each latent variable are constrained to average 1 Code for each approach (using the lavaan package's HolzingerSwineford1939 dataset) is presented below. The latent variable I've created is nonsensical/poor-fitting, but it has the same number of indicators as your model, so the example will hopefully be more transferable to your situation. library(lavaan) #marker-variable; first factor loading fixed to 1 by default marker.variable<-'f1=~ x1+x2+x3+x4+x5+x6' summary(output.marker<-cfa(marker.variable, data=HolzingerSwineford1939), fit.measures=TRUE) #fixed-factor method; manually free first factor loading/fix latent variance to 1 fixed.factor<-'f1=~ NA*x1+x2+x3+x4+x5+x6; f1~~1*f1' summary(output.fixed<-cfa(fixed.factor, data=HolzingerSwineford1939), fit.measures=TRUE) #effects coding; manually free first loading/constrain loadings to average 1 effects.coding<-'f1=~ NA*x1+a*x1+b*x2+c*x3+d*x4+e*x5+f*x6; a+b+c+d+e+f==6' summary(output.effects<-cfa(effects.coding, data=HolzingerSwineford1939), fit.measures=TRUE) Note that model fit is identical, regardless of which method of scale-setting you use; the fit in all three models is $\chi^2 (df = 9) = 103.23, ~p < .001$. Which method you should use largely depends on the nature of your data and your research goals. The marker variable method is a highly arbitrary method of scale-setting. Like Maarten stated, your latent variables will take on the units of their respective marker variables, so this approach is only informative to the extent that your marker variables are especially meaningful, or perhaps represent some "gold standard" indicator of your latent construct. The fixed factor method, alternatively, is easy to specify, and essentially standardizes your latent variables (if you're examining mean structures, you would fix the latent means to zero as well). Since we standardize variables all the time, this is a highly intuitive and widely acceptable form of scale-setting for latent variables, though the resultant scaling is not inherently meaningful. Even so, it's probably the best method to "default" to, unless you have a strong imperative to use one of the other methods. Effects-coding is a relative newcomer to methods of scale-setting (see Little, Slegers, & Card, 2006, for a thorough discussion). Its greatest advantage is when you are modeling latent means. When doing so, you would also constrain item intercepts to average 0. The effect of these constraints is that your latent variables will be on the exact same scale as your original items. For example, if the average of your indicators was "5", your latent mean would also be "5", though your latent variance would be smaller than your observed variance. Because the constraints on the loadings and intercepts can be more computationally demanding, especially in more complicated models, and occasionally result in convergence errors, effects-coding is probably not worth it unless you plan to examine latent means.
But for the particular purpose of examining latent means, it's great. References Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd Edition). New York, NY: Guilford Press. Little, T. D., Slegers, D. W., & Card, N. A. (2006). A non-arbitrary method of identifying and scaling latent variables in SEM and MACS models. Structural Equation Modeling, 13, 59-72.
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so?
As Maarten points out, your problem is that you have not set the scale of the second model. True, you have more observed variances/covariances than what you need to identify your model, but you still
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so? As Maarten points out, your problem is that you have not set the scale of the second model. True, you have more observed variances/covariances than what you need to identify your model, but you still need to provide a point of reference from which other model parameters can be calculated (Brown, 2015). You can set the scale using one of three methods: Marker variable: one factor loading per latent variable is fixed to 1 Fixed factor: each latent variable's variance is fixed to 1 Effects-coding: factor loadings for each latent variable are constrained to average 1 Code for each approach (using the lavaan package's HolzingerSwineford1939 dataset) is presented below. The latent variable I've created is nonsensical/poor-fitting, but it has the same number of indicators as your model, so the example will hopefully be more transferable to your situation. library(lavaan) #marker-variable; first factor loading fixed to 1 by default marker.variable<-'f1=~ x1+x2+x3+x4+x5+x6' summary(output.marker<-cfa(marker.variable, data=HolzingerSwineford1939), fit.measures=TRUE) #fixed-factor method; manually free first factor loading/fix latent variance to 1 fixed.factor<-'f1=~ NA*x1+x2+x3+x4+x5+x6 f1~~1*f1' summary(output.fixed<-cfa(fixed.factor, data=HolzingerSwineford1939), fit.measures=TRUE) #effects coding; manually free first loading/constrain loadings to average 1 effects.coding<-'f1=~ NA*x1+a*x1+b*x2+c*x3+d*x4+e*x5+f*x6 a+b+c+d+e+f==6' summary(output.effects<-cfa(effects.coding, data=HolzingerSwineford1939), fit.measures=TRUE) Note that model fit is identical, regardless of which method of scale-setting that you use; the fit in all three models is $\chi^2 (df = 9) = 103.23, ~p < .001$. Which method you should use largely depends on the nature of your data and your research goals. The marker variable method is a highly arbitrary method of scale-setting. Like Maarten stated, your latent variables will take on the units of their respective marker variables, so this approach is only informative to the extent that your marker variables are especially meaningful, or perhaps represent some "gold standard" indicator of your latent construct. The fixed factor method, alternatively, is easy to specify, and essentially standardizes your latent variables (if you're examining mean structures, you would fix the latent means to zero as well). Since we standardize variables all the time, this is a highly intuitive and widely acceptable form of scale-setting for latent variables, though the resultant scaling is not inherently meaningful. Even so, it's probably the best method to "default" to, unless you have a strong imperative to use one of the other methods. Effects-coding is a relative new-comer to methods of scale-setting (see Little, Slegers, & Card, 2006, for a thorough discussion). It's greatest advantage is when you are modeling latent means. When doing so, you would also constrain item intercepts to average 0. The effect of these constraints is that your latent variables will be on the exact same scale as your original items. For example, if the average of your indicators was "5", your latent mean would also be "5", though your latent variance would be smaller than you observed variance. 
Because the constraints on the loadings and intercepts can be more computationally demanding, especially in more complicated models, and occasionally result in convergence errors, effects-coding is probably not worth it unless you plan to examine latent means. But for the particular purpose of examining latent means, it's great. References Brown, T. A. (2015). Confirmatory factor analysis for applied research (2nd Edition). New York, NY: Guilford Press. Little, T. D., Slegers, D. W., & Card, N. A. (2006). A non-arbitary method of identifying and scaling latent variables in SEM and MACS models. Structural Equation Modeling, 13, 59-72.
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so? As Maarten points out, your problem is that you have not set the scale of the second model. True, you have more observed variances/covariances than what you need to identify your model, but you still
39,249
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so?
Think of the scale of GnM. It is latent, so it does not have a natural scale like meters (inches), euros (yen), etc. Instead we need to give it a scale by telling it when it is 0 and what a unit of increase is. In your original model you set GnM to 0 when all the indicators are 0, and by setting the loading of x1 to 1, you are borrowing the unit from x1. So a unit increase in GnM is equivalent to a unit increase in x1. When you set the loading of x1 free, what is the unit of GnM? We don't know, so it is unidentified. A common way to solve that is to set the variance of GnM to 1.
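In lavaan syntax, that fix looks roughly like the sketch below (the indicator names are placeholders; adapt them to your actual model):
model <- '
  GnM =~ NA*x1 + x2 + x3 + x4 + x5 + x6   # NA* frees the first loading
  GnM ~~ 1*GnM                            # fix the latent variance to 1
'
# fit <- lavaan::cfa(model, data = your_data)
Equivalently, cfa(..., std.lv = TRUE) frees the first loadings and fixes all latent variances to 1 in one step.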
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so?
Think of the scale of GnM. It is latent so it does not have a natural scale like meters (inches), euros (yen), etc. Instead we need to give it a scale by telling it when it is 0 and what a unit of inc
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so? Think of the scale of GnM. It is latent so it does not have a natural scale like meters (inches), euros (yen), etc. Instead we need to give it a scale by telling it when it is 0 and what a unit of increase is. In your original model you set GnM to 0 when all the indicators are 0, and by setting the loading of x1 to 1, you are borowing the unit from x1. So a unit increase in GnM is equivalent to a unit increase in x1. When you set the loading of x1 free, what is the unit of GnM? We don't know, so it is unidentified. A common way to solve that is to set the variance of GnM to 1.
Unable to estimate standard error after freeing first indicator in SEM model - Why is it so? Think of the scale of GnM. It is latent so it does not have a natural scale like meters (inches), euros (yen), etc. Instead we need to give it a scale by telling it when it is 0 and what a unit of inc
39,250
Reliability in IRT Style
In Classical Test Theory observed test scores $X$ could be defined as: $$X = T + E$$ where $T$ are the true scores and $E$ is an error of measurement. This means that their variance is: $$\sigma^2_X = \sigma^2_T + \sigma^2_E$$ In this case, reliability could be defined as: $$ \rho_{xx'} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_X - \sigma^2_E}{\sigma^2_X} = 1 - \frac{ \sigma^2_E }{ \sigma^2_X } $$ In classical test theory $\sigma^2_E$ is the error variance and $\sigma^2_X$ the observed score variance. This approach can be transferred into the IRT framework. One approach would be: $$ \rho_{xx'} = 1 - \frac{ \sigma^2_E }{ \sigma^2_T } $$ where $\sigma^2_T$ is the variance of the EAP scores. This is possible since the EAP scores are estimates of the true ability $\theta$. The problem is that the variance of the EAP scores is an under-estimate of the variance of the true scores $\sigma^2_X$ (Wu, 2005). Another problem is that with this approach the values of $\rho_{xx'}$ could be negative, and we don't want a reliability estimate to be negative. In practice, negative values could suggest that there is something wrong with the model. Another approach would be to define $\sigma^2_E$ as the mean of the EAP errors and $\sigma^2_X$ as the variance of the true scores, i.e. $\sigma^2_X = \sigma^2_T + \sigma^2_E$, and simply use the CTT formula: $$ \rho_{xx'} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_X - \sigma^2_E}{\sigma^2_X} = 1 - \frac{ \sigma^2_E }{ \sigma^2_X } $$ Another possibility for $\sigma^2_E$, as Raju et al. note, is to define it as: $$\sigma^2_E = E\left[\left( \frac{1}{I_s} \right)^2\right]$$ where $I_s$ is the total test information function for examinee $s$. The nice fact about defining reliability like this is that (a) it is consistent with CTT, and (b) it is easy to compute. Other approaches would be to use $\sigma^2_E$ or $\sigma^2_T$ on their own, as they both say something about reliability. This kind of approach is not commonly used, and it is more popular to use information content as in @robin.datadrivers' answer, since reliability is rather a CTT concept. So some would use for this purpose both the CTT measures (Cronbach $\alpha$) and IRT measures (information content). However, as I mentioned, it is possible to use CTT-like reliability based on IRT. Below I post R code for computing reliability given mirt or ltm output: rel.mirt <- function(x) { eap <- mirt::fscores(x, full.scores=T, scores.only=T, full.scores.SE=T) e <- mean(eap[, 2]^2) s <- var(eap[, 1]) 1-(e/(s+e)) } rel.ltm <- function(x) { eap <- ltm::factor.scores(x, method="EAP")$score.dat e <- mean(eap$se.z1^2) s <- var(eap$z1) 1-(e/(s+e)) } References Raju, N.S., Price, L.R., Oshima, T.C., & Nering, M.L. (2006). Standardized Conditional SEM: A Case for Conditional Reliability. Applied Psychological Measurement, 30(X), 1-12. Wang, T., Kolen, M.J., & Harris, D.J. (1997). Conditional Standard Errors, Reliability, and Decision Consistency Performance Levels Using Polytomous IRT. Reliability Issues with Performance Assessments: A Collection of Papers. ACT Research Report Series 97-3, 13-40. Adams, R.J. (2005). Reliability as a measurement design effect. Studies in Educational Evaluation, 31(2–3), 162–172. Wu, M. (2005). The role of plausible values in large-scale surveys. Studies in Educational Evaluation, 31(2–3), 114-128.
Reliability in IRT Style
In Classical Test Theory observed test scores $X$ could be defined as: $$X = T + E$$ where $T$ are the true scores and $E$ is an error of measurement. This means that their variance is: $$\sigma^2_X =
Reliability in IRT Style In Classical Test Theory observed test scores $X$ could be defined as: $$X = T + E$$ where $T$ are the true scores and $E$ is an error of measurement. This means that their variance is: $$\sigma^2_X = \sigma^2_T + \sigma^2_E$$ In this case, reliability could be defined as: $$ \rho_{xx'} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_X - \sigma^2_E}{\sigma^2_X} = 1 - \frac{ \sigma^2_E }{ \sigma^2_X } $$ In classical test theory $\sigma^2_E$ error variance and $\sigma^2_X$ observed score variance. This approach could be transfered into the IRT framework. One approach would be: $$ \rho_{xx'} = 1 - \frac{ \sigma^2_E }{ \sigma^2_T } $$ where $\sigma^2_T$ is a variance of EAP scores. It is possible since EAP are the estimate of the true ability $\theta$. The problem is that variance of EAP is an under-estimate of the variance of true scores $\sigma^2_X$ (Wu, 2005). The problem is also that with this approach the values of $\rho_{xx'}$ could be negative and we don't want reliability estimate to be negative. In practice, the negative values could suggest that there is something wrong with the model. Other approach would be to define $\sigma^2_E$ as mean of EAP errors and $\sigma^2_X$ as variance of the true scores, i.e. $\sigma^2_X = \sigma^2_T + \sigma^2_E$ and use simply the CTT formula: $$ \rho_{xx'} = \frac{\sigma^2_T}{\sigma^2_X} = \frac{\sigma^2_X - \sigma^2_E}{\sigma^2_X} = 1 - \frac{ \sigma^2_E }{ \sigma^2_X } $$ Other possibility for $\sigma^2_E$, as Raju et al. notes, could also be to define it as: $$\sigma^2_E = E\left[\left( \frac{1}{I_s} \right)^2\right]$$ where $I_s$ is the total test information function for examinee $s$. The nice fact about defining reliability like this is that (a) it is consistent with CTT, and (b) it is easy to compute. Other approaches would be using $\sigma^2_E$ or $\sigma^2_T$ solely as they both say something about reliability. This kind of approach in not commonly used and it is more popular to use information content as in @robin.datadrivers answer since reliability is rather a CTT concept. So some would use for this purpose both the CTT measures (Cronbach $\alpha$) and IRT measures (information content). However, as I mentioned, it is possible to use CTT-like reliability based on IRT. Below I post an R code for computing reliability given mirt or ltm output: rel.mirt <- function(x) { eap <- mirt::fscores(x, full.scores=T, scores.only=T, full.scores.SE=T) e <- mean(eap[, 2]^2) s <- var(eap[, 1]) 1-(e/(s+e)) } rel.ltm <- function(x) { eap <- ltm::factor.scores(x, method="EAP")$score.dat e <- mean(eap$se.z1^2) s <- var(eap$z1) 1-(e/(s+e)) } References Raju, N.S., Price, L.R., Oshima, T.C., & Nering, M.L. (2006). Standardized Conditional SEM: A Case for Conditional Reliability. Applied Psychological Measurement, 30(X), 1-12. Wang, T., Kolen, M.J., & Harris, D.J. (1997). Conditional Standard Errors, Reliability, and Decision Consistency Performance Levels Using Polytomous IRT. Reliability Issues with Performance Assessments: A Collection of Papers. ACT Research Report Series 97-3, 13-40. Adams, R.J. (2005). Reliability as a measurement design effect. Studies in Educational Evaluation, 31(2–3), 162–172. Wu, M. (2005). The role of plausible values in large-scale surveys. Studies in Educational Evaluation, 31(2–3), 114-128.
Reliability in IRT Style In Classical Test Theory observed test scores $X$ could be defined as: $$X = T + E$$ where $T$ are the true scores and $E$ is an error of measurement. This means that their variance is: $$\sigma^2_X =
39,251
Reliability in IRT Style
To start off, let's look at what we mean by reliability. Reliability is often thought of as how consistent a measure will be across different measurement scenarios, with everything being equal except the occasion (same assessment, same conditions, same people, different day, e.g.). Reliability can also be thought of as the ability to distinguish between two respondents. One of the key differences between Classical Test Theory (CTT) and Item Response Theory (IRT) is the way it treats the variance of the latent ability ($\theta$). CTT treats the standard error of measurement (SEM) as fixed across the sample: $SEM = \sigma \sqrt{(1-reliability)}$ where $\sigma$ is the standard deviation of the observed scores. In this context, reliability doesn't change, nor does $\sigma$, so the SEM is the same. (See here for some further explanation.) In IRT, there is a separate standard error for each value of $\theta$. This makes sense because you are estimating person parameters for each ability level, and because each of these is an estimate, it has sampling error. This is captured by the standard errors of $\theta$. These standard errors are very useful in understanding the reliability of your scale, as estimated by an item response model. One useful application is to consider the information content of the scale at different levels of $\theta$. Information here is defined as the inverse of the variance. You can create a nice chart of the Test Information Curve like the following (I'll describe the gray bars in a moment): The R code I used to produce this (using real data I collected): library(ltm) plot(fit1, type = "IIC", items = 0, lwd = 2, xlab = "Factor scores", main=NA, cex.main = 1.5, cex.lab = 1.3, cex.axis = 1.1) This shows you how the information content in the scale changes at different places. Information is higher where you have more items with estimated difficulty parameters. Obviously, this is going to always be true for the tails, where we have less information in the various observed items to reliably differentiate between respondents. You can look at the information content in specific ranges (again with the ltm package). Let's say for my test I want to see the percent of information for $\theta$ between -2 and 2: > information(fit1,range=c(-2,2)) Call: grm(data = dt) Total Information = 107.08 Information in (-2, 2) = 77.79 (72.64%) Based on all the items Here, 72% of all information is between those values. It can get interesting if you use values of $\theta$ that have meaning (these were selected totally arbitrarily). One interesting application is, let's say you've created a cut score using a common standard setting method, like Angoff. That is often done by creating an observed score cut score - we know that observed scores do not typically perfectly align with $\theta$ values, particularly for an item response model with more than 1 parameter. One thing you can do is take the range of $\theta$ values for all respondents with an observed cut score, and look at the information content for that score. You can plot that onto the test information curve and see how well your cut score aligns with the peaks in the curve. That's what the gray bars are on my plot - they correspond to the $\theta$ values for two cut scores we created, which were created based on observed scores, not $\theta$ values (of course, if you use a method based on IRT scores, such as the Bookmark method, this would turn out different).
You can also produce a single index of reliability in IRT: person-separation and item-separation reliability. You get one for items and persons, because you get person ability and item difficulty measures from the model. This is a good quick description of the differences. Search the WINSTEPS help file for the formula (I think it's in there). The classic Rasch reference by Bond and Fox I believe has a more detailed description of it. I'm not sure how common these are outside of Rasch modeling.
Reliability in IRT Style
To start off, let's look at what we mean by reliability. Reliability is often thought of as how consistent a measure will be across different measurement scenarios, with everything being equal except
Reliability in IRT Style To start off, let's look at what we mean by reliability. Reliability is often thought of as how consistent a measure will be across different measurement scenarios, with everything being equal except the occasion (same assessment, same conditions, same people, different day, e.g.). Reliability can also be thought of as the ability to distinguish between two respondents. One of the key differences between Classical Test Theory (CTT) and Item Response Theory (IRT) is the way it treats the variance of the latent ability ($\theta$). CTT treats the standard error of measurement (SEM) as fixed across the sample: $SEM = \sigma \sqrt{(1-reliability)}$ where $\sigma$ is the standard deviation of the observed scores. In this context, reliability doesn't change, nor does $\sigma$, so the SEM is the same.(see here for some further explanation). In IRT, there is a separate standard error of each value for $\theta$. This makes sense because you are estimating person parameters for each ability level, and because each of these are estimates, they have sampling error. This is captured by the standard errors of $\theta$. These standard errors are very useful in understanding the reliability of your scale, as estimated by an item response model. One useful application is to consider the information content of the scale at different levels of $\theta$. Information here is defined as the inverse of the variance. You can create a nice chart of the Test Information Curve like the following (I'll describe the gray bars in a moment): The R code I used to produce this (using real data I collected): library(ltm) plot(fit1, type = "IIC", items = 0, lwd = 2, xlab = "Factor scores", main=NA,cex.main = 1.5, cex.lab = 1.3, cex.axis = 1.1) This shows you how the information content in the scale changes at different places. Information is higher where you have more items with estimated difficulty parameters. Obviously, this is going to always be true for the tails, where we have less information in the various observed items to reliability differentiate between respondents. You can look at the information content in specific ranges (again with the ltm package). Let's say for my test I want to see the percent of information between $\theta$ between -2 and 2: > information(fit1,range=c(-2,2)) Call: grm(data = dt) Total Information = 107.08 Information in (-2, 2) = 77.79 (72.64%) Based on all the items Here, 72% of all information is between those values. It can get interesting if you use values of $\theta$ that have meaning (these were selected totally arbitrarily). One interesting application is, let's say you've created a cut score using a common standard setting method, like Angoff. That is often done by creating an observed score cut score - we know that observed scores do not typically perfectly align with $\theta$ values, particularly for an item response model with more than 1 parameter. One thing you can do is take the range of $\theta$ values for all respondents with an observed cut score, and look at the information content for that score. You can plot that onto the test information curve and see how well your cut score aligns with the peaks in the curve. That's what the gray bars are on my plot - they correspond to the $\theta$ values for two cut scores we created, which were created based on observed scores, not $\theta$ values (of course, if you use a method based on IRT scores, such as the Bookmark method, this would turn out different). 
You can also produce a single index of reliability in IRT: person-separation and item-separation reliability. You get one for items and persons, because you get person ability and item difficulty measures from the model. This is a good quick description of the differences. Search the WINSTEPS help file for the formula (I think it's in there). The classic Rasch reference by Bond and Fox I believe has a more detailed description of it. I'm not sure how common these are outside of Rasch modeling.
Reliability in IRT Style To start off, let's look at what we mean by reliability. Reliability is often thought of as how consistent a measure will be across different measurement scenarios, with everything being equal except
39,252
Difference in chi-squared calculated by anova from cph and coxph
Test differences There are two differences between the two tests used: The use of likelihood ratio tests versus Wald tests The use of sequential tests versus tests for the effect of one variable given the other variables Since your example data set is huge (1822 complete observations, with 897 events), the first difference doesn’t matter much, so let’s first look at the second difference. Sequential tests versus tests of one variable given the others Note that the output from running anova() on the coxph model says Terms added sequentially (first to last). This means that for the first variable, age, we simply test if age is a statistically significant predictor without looking at any other variables. Basically, we test if the model including age fits the data better than a model with no explanatory variables (only an intercept), using a likelihood ratio test (which we can do, since the models are nested). This should give the same result as anova(coxph(Surv(time, status) ~ age, data=d)) (The actual results differ slightly, because of missing data in the other explanatory variables. If you remove the observations with missing data, you will get the exact same answer.) For the second variable, sex, we test if sex is statistically significant given age; we compare a model containing only age with one containing both age and sex. For the third variable, nodes, we test if nodes is statistically significant given both age and sex; we compare a model containing both age and sex with one containing age, sex and nodes. This is the only test where we can compare the result to the one from anova(m1). Getting tests of one variable given the others for coxph models For getting test results from the coxph models comparable to the ones in the cph models in general, we have several options. One simple method is to use drop1() to compare the full model (three predictors) with ones containing all predictors except one, using likelihood ratio tests. First, to avoid some problems with differing numbers of observations depending on which variables we include, we refit the models on the complete data: d.comp = na.omit(d[c("time","status","age","sex","nodes")]) m2.comp = update(m2, data=d.comp) Now we drop each predictor in turn: drop1(m2.comp, test="Chisq") and get Df AIC LRT Pr(>Chi) <none> 12720 age 1 12718 0.031 0.8611 sex 1 12719 0.929 0.3351 nodes 1 12851 132.868 <2e-16 *** As you see, the results are very similar to the ones from the Wald tests from cph. Wald tests? So what are the Wald tests? Basically, since all predictors are continuous, they’re just normal, asymptotic z-tests, but with squared test statistics. That is, each test statistic is the square of the $z$ statistic from summary(m2.comp) (and the $z$ statistic is the estimated coefficient divided by its standard error). Example: summary(m2.comp) coef exp(coef) se(coef) z Pr(>|z|) age 0.0004934 1.0004936 0.0028216 0.175 0.861 sex -0.0645554 0.9374842 0.0669405 -0.964 0.335 nodes 0.0872323 1.0911501 0.0063330 13.774 <2e-16 *** The $z$-statistic of sex is $-0.0645554/0.0669405=-0.964$, and $(-0.964)^2=0.93$, which is the chi-square statistic of the Wald test of the sex predictor from the cph model. (For factors and nonlinear variables, the calculations are slightly more complex, taking the correlation between the estimators of the (dummy/transformed) variables used to represent the factor / nonlinear effect into account.) Which tests to use?
Both sequential tests and tests of one variable given the others make sense, but they test different hypotheses. The former basically asks 'if I add this new predictor, does it improve the fit?' iteratively, for an ordered list of potential predictors. The latter asks 'given that I include all other predictors, does adding this one improve the fit?'. Wald tests versus likelihood ratio tests The other difference between the two tests, i.e., difference 1 mentioned above, is the difference between asymptotic Wald tests (basically relying on the central limit theorem – that you have enough observations that test statistics are approximately normally distributed) and (partial-)likelihood ratio tests (LRTs). For small data sets, the results can differ somewhat. (And even here, the test statistic for the nodes variable is quite different.) Usually, likelihood ratio tests are preferred. And if you want to compare the Wald and the LRT tests on the same model fitted using coxph() (or other standard regression functions), it’s very easy to do using the car package: library(car) Anova(m2.comp, test.statistic="Wald") # Equal to anova(m1) Anova(m2.comp, test.statistic="LR") # Equal to drop1(m2.comp, test="Chisq") which gives us: # LR LR Chisq Df Pr(>Chisq) age 0.0 1 0.86 sex 0.9 1 0.34 nodes 132.9 1 <2e-16 *** # Wald Df Chisq Pr(>Chisq) age 1 0.03 0.86 sex 1 0.93 0.33 nodes 1 189.73 <2e-16 *** Not surprisingly, the $p$-values are (for any practical use) identical.
Difference in chi-squared calculated by anova from cph and coxph
Test differences There are two differences between the two tests used: The use of likelihood ratio tests versus Wald tests The use of a sequential tests versus tests for the effect of one variable gi
Difference in chi-squared calculated by anova from cph and coxph Test differences There are two differences between the two tests used: The use of likelihood ratio tests versus Wald tests The use of a sequential tests versus tests for the effect of one variable given the other variables Since your example data set is huge (1822 complete observations, with 897 events) the first difference doesn’t matter much, so let’s first look at the second difference. Sequential tests versus tests of one variable given the others Note that the output from running anova() on the coxph model says Terms added sequentially (first to last). This means that for the first variable, age, we simply test if age is a statistically significant predictor without looking at any other variables. Basically, we test if the model including age fits the data better than a model with no explanatory variables (only an intercept), using a likelihood ratio test (which we can do, since the models are nested). This should give the same result as anova(coxph(Surv(time, status) ~ age, data=d)) (The actual results differ slightly, because of missing data in the other explanatory variables. If you remove the observations with missing data, you will get the exact same answer.) For the second variable, sex, we test if sex is statistically significant given age; we compare a model containing only age with one containing both age and sex. For the third variable, nodes, we test if nodes is statistically significant given both age and sex; we compare a model containing both age and sex with one containing age, sex and nodes. This is the only test where we can compare the result to the one from anova(m1). Getting tests of one variable given the others for coxph models For getting test results from the coxph models comparable to the ones in the cph models in general, we have several options. One simple method is to use drop1() to compare the full model (three predictors) with ones containing all predictors except one, using likelihood ratio test. First, to avoid some problems with differing number of observations depending on which variables we include, we refit the models on the complete data: d.comp = na.omit(d[c("time","status","age","sex","nodes")]) m2.comp = update(m2, data=d.comp) No we drop each predictor in turn: drop1(m2.comp, test="Chisq") and get Df AIC LRT Pr(>Chi) <none> 12720 age 1 12718 0.031 0.8611 sex 1 12719 0.929 0.3351 nodes 1 12851 132.868 <2e-16 *** As you see, the results are very similar to the ones from the Wald tests from cph. Wald tests? So what are the Wald tests? Basically, since all predictors are continuous, they’re just normal, asymptotic z-tests, but with squared test statistics. That is, each test statistic is the square of the $z$ statistic from summary(m2.comp) (and the $z$ statistic is the estimated coefficient divided by its standard error). Example: summary(m2.comp) coef exp(coef) se(coef) z Pr(>|z|) age 0.0004934 1.0004936 0.0028216 0.175 0.861 sex -0.0645554 0.9374842 0.0669405 -0.964 0.335 nodes 0.0872323 1.0911501 0.0063330 13.774 <2e-16 *** The $z$-statistic of sex is $-0.0645554/0.0669405=-0.964$, and $(-0.964)^2=0.93$, which is the chi-square statistic of the Wald test of the sex predictor from the cph model. (For factors and nonlinear variables, the calculations are slightly more complex, taking the correlation between the estimators of the (dummy/transformed) variables used to represent the factor / nonlinear effect into account.) Which tests to use? 
Both sequential tests and tests of one variable given the others makes sense, but they test different hypotheses. The former basically ask β€˜if I add this new predictor, does it improve the fit?’ iteratively, for an ordered list of potential predictors. The latter asks β€˜given that I include all other predictors, does adding this one improve the fit?’. Wald tests versus likelihood ratio tests The other difference between the two tests, i.e., difference 1 mentioned above, is the difference between asymptotic Wald tests (basically relying on the central limit theorem – that you have enough observations that test statistics are approximately normally distributed) and (partial-)likelihood ratio tests (LRTs). For small data sets, the results can differ somewhat. (And even here, the test statistic for the nodes variable is quite different.) Usually, likelihood ratio tests are preferred. And if you want to compare the Wald and the LRT tests on the same model fitted using β€˜coxph()’ (or other normal regression functions), it’s very easy to do using the car package: library(car) Anova(m2.comp, test.statistic="Wald") # Equal to anova(m1) Anova(m2.comp, test.statistic="LR") # Equal to drop1(m2.comp, test="Chisq") which gives us: # LR LR Chisq Df Pr(>Chisq) age 0.0 1 0.86 sex 0.9 1 0.34 nodes 132.9 1 <2e-16 *** # Wald Df Chisq Pr(>Chisq) age 1 0.03 0.86 sex 1 0.93 0.33 nodes 1 189.73 <2e-16 *** Not surprisingly, the $p$-values are (for any practical use) identical.
Difference in chi-squared calculated by anova from cph and coxph Test differences There are two differences between the two tests used: The use of likelihood ratio tests versus Wald tests The use of a sequential tests versus tests for the effect of one variable gi
39,253
What's the relation between deep learning and extreme learning machine?
Extreme learning machines and deep learning are slightly related, but advocate quite opposite concepts. ELMs are neural nets with a single hidden layer, where the first weight matrix is initialized randomly and left untrained. This allows the output weight matrix to be estimated via least squares, which is very quick. Deep learning, on the other hand, is the learning of deep architectures (e.g. deep neural nets). Depending on the strategy, all the layers are optimized jointly or greedily. Long story short: ELM says "only learn the last layer", deep learning says "learn all the layers". It seems that DL is much more successful than ELMs.
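To make the contrast concrete, here is a minimal ELM-style sketch in R (this is an illustrative addition, not part of the original answer; the simulated data, the tanh activation and the number of hidden units are arbitrary choices): the input-to-hidden weights are drawn at random and never trained, and only the output weights are fitted by least squares.
set.seed(1)
n <- 200; p <- 5; hidden <- 50
X <- matrix(rnorm(n * p), n, p)
y <- sin(X[, 1]) + 0.5 * X[, 2]^2 + rnorm(n, sd = 0.1)
# ELM idea: random, untrained input-to-hidden weights and biases
W <- matrix(rnorm(p * hidden), p, hidden)
b <- rnorm(hidden)
H <- tanh(sweep(X %*% W, 2, b, "+"))  # hidden-layer activations
# only the hidden-to-output weights are learned, via least squares
beta <- qr.solve(H, y)
fitted <- H %*% beta
mean((y - fitted)^2)  # in-sample error
A deep network would instead train W as well (and typically many more layers), usually by backpropagation.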
What's the relation between deep learning and extreme learning machine?
Extreme learning machines and deep learning are slightly related, but advocate quite adversary concepts. ELMs are neural nets with a single hidden layer, where the first weight matrix is initialized r
What's the relation between deep learning and extreme learning machine? Extreme learning machines and deep learning are slightly related, but advocate quite adversary concepts. ELMs are neural nets with a single hidden layer, where the first weight matrix is initialized randomly. This allows the output matrix to be estimated via least squares, which is very quickly done. Deep learning, on the other hand, is the learning of deep architectures (e.g. deep neural nets). Depending on the strategy, all the layers are optimized jointly or greedily. Long story short. ELM says: "only learn the last layer". Deep learning says: "Learn all the layers." It seems that DL is much more successfull than ELMs.
What's the relation between deep learning and extreme learning machine? Extreme learning machines and deep learning are slightly related, but advocate quite adversary concepts. ELMs are neural nets with a single hidden layer, where the first weight matrix is initialized r
39,254
What's the relation between deep learning and extreme learning machine?
The difference is: deep learning is original, while ELM is just a fancy name for 3 old methods. The β€œextreme learning machines (ELM)” are indeed worth working on, but they just shouldn’t be called β€œELM”. With annotated PDF files at http://elmorigin.wix.com/originofelm , you can easily verify the following facts within 10 to 20 minutes: The kernel (or constrained-optimization-based) version of ELM (ELM-Kernel, Huang 2012) is identical to kernel ridge regression (for regression and single-output classification, Saunders ICML 1998, as well as the LS-SVM with zero bias; for multiclass multi-output classification, An CVPR 2007). ELM-SLFN (the single-layer feedforward network version of the ELM, Huang IJCNN 2004) is identical to the randomized neural network (RNN, with omission of bias, Schmidt 1992) and another simultaneous work, i.e., the random vector functional link (RVFL, with omission of direct input-output links, Pao 1994). ELM-RBF (Huang ICARCV 2004) is identical to the randomized RBF neural network (Broomhead-Lowe 1988, with a performance-degrading randomization of RBF radii or impact factors). In all three cases above, G.-B. Huang got his papers published after excluding a large volume of very closely related literature. Hence, all 3 "ELM variants" have absolutely no technical originality, promote unethical research practices among researchers, and steal citations from original inventors.
What's the relation between deep learning and extreme learning machine?
The difference is: deep learning is original, while ELM is just a fancy name for 3 old methods. The β€œextreme learning machines (ELM)” are indeed worth working on, but they just shouldn’t be called β€œE
What's the relation between deep learning and extreme learning machine? The difference is: deep learning is original, while ELM is just a fancy name for 3 old methods. The β€œextreme learning machines (ELM)” are indeed worth working on, but they just shouldn’t be called β€œELM”. With annotated PDF files at http://elmorigin.wix.com/originofelm , you can easily verify the following facts within 10 to 20 minutes: The kernel (or constrained-optimization-based) version of ELM (ELM-Kernel, Huang 2012) is identical to kernel ridge regression (for regression and single-output classification, Saunders ICML 1998, as well as the LS-SVM with zero bias; for multiclass multi-output classification, An CVPR 2007). ELM-SLFN (the single-layer feedforward network version of the ELM, Huang IJCNN 2004) is identical to the randomized neural network (RNN, with omission of bias, Schmidt 1992) and another simultaneous work, i.e., the random vector functional link (RVFL, with omission of direct input-output links, Pao 1994). ELM-RBF (Huang ICARCV 2004) is identical to the randomized RBF neural network (Broomhead-Lowe 1988, with a performance-degrading randomization of RBF radii or impact factors). In all three cases above, G.-B. Huang got his papers published after excluding a large volume of very closely related literature. Hence, all 3 "ELM variants" have absolutely no technical originality, promote unethical research practices among researchers, and steal citations from original inventors.
What's the relation between deep learning and extreme learning machine? The difference is: deep learning is original, while ELM is just a fancy name for 3 old methods. The β€œextreme learning machines (ELM)” are indeed worth working on, but they just shouldn’t be called β€œE
39,255
Do random forest variable importance measures take into account the interactions?
The variable importance obtained by permutations is computed by permuting the values of a single variable only. Thus, it computes an importance measure for the given variable in the context where all other data are fixed. I think it is reasonable to state that this importance measure also includes interactions, if such interactions exist. In other words, I see VI as an impure measure, one influenced both by the main effect of that variable and by its interactions with others. Gini importance is often found to be in concordance with permutation importance, and I see it as a similar measure. There is, however, something called interaction which is measured in random forests: it measures whether a split on a given variable increases or decreases the number of splits on another variable. This can be computed for each pair of variables, so it looks like a two-variable interaction measure. If one wants to measure interactions among more than two variables, I suppose the procedure could be extended, but it soon becomes too computationally intensive. This last interaction measure is not implemented in the R package randomForest as far as I know. Take a look at the brief description on Breiman's page on RF here, and check the Interactions section.
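To illustrate the permutation idea described above, here is a small sketch (my own addition, using simulated data and the randomForest package; the package's built-in measure uses out-of-bag data, whereas this simplified version just uses the training data): permute one column at a time while keeping everything else fixed, and record the increase in error.
library(randomForest)
set.seed(42)
n <- 500
X <- data.frame(x1 = rnorm(n), x2 = rnorm(n), x3 = rnorm(n))
y <- X$x1 * X$x2 + rnorm(n, sd = 0.1)  # the signal is an interaction of x1 and x2
rf <- randomForest(X, y)
mse <- function(data) mean((y - predict(rf, data))^2)
baseline <- mse(X)
# permute a single variable, leaving all other columns untouched
perm_importance <- sapply(names(X), function(v) {
  Xp <- X
  Xp[[v]] <- sample(Xp[[v]])
  mse(Xp) - baseline  # increase in error = importance of v
})
perm_importance
Here x1 and x2 should come out as important even though neither has a main effect, which is the sense in which the permutation measure picks up interactions.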
Do random forest variable importance measures take into account the interactions?
The variable importance obtained by permutations is computed only by permuting values for a single variable. Thus, it computes some importance measure of the given variable in the context that all oth
Do random forest variable importance measures take into account the interactions? The variable importance obtained by permutations is computed only by permuting values for a single variable. Thus, it computes some importance measure of the given variable in the context that all other data is fixed. I think it is reasonable to state that the importance measure includes in the measurement also interactions, if such interactions exists. I mean that I see VI as an impure measure, a measure influenced by the main effect of that variable and also interaction with others. Gini importance is found often to be in concordance with permutation importance, and I see it as a similar measure. There is however something called interaction which is measured in random forests, and this measures if a split on a given variable increase or decrease splits on other measure. This can be computed for each pair of measures. It looks like a 2 measure interactions. If one want to measure interactions with more than 2 variables than I suppose it is possible extending the given procedure, but soon becomes too computer intensive. Last thing called interactions is not implemented in R package randomForests as far as I know. Take a look on the brief description from the Breiman's page on RF here, and check for Interactions section.
Do random forest variable importance measures take into account the interactions? The variable importance obtained by permutations is computed only by permuting values for a single variable. Thus, it computes some importance measure of the given variable in the context that all oth
39,256
Do random forest variable importance measures take into account the interactions?
Run this code and verify that RF variable importance does incorporate interactions. library(randomForest) obs=1000 vars =4 X = data.frame(replicate(vars,rnorm(obs))) ysignal = with(X,sign(X1*X2)) ynoise = 0.1 * rnorm(obs) y = ysignal + ynoise RF = randomForest(X,y,importance=T) varImpPlot(RF) You should see that X1 and X2 are found to be important and X3 and X4 are not. y is only explained by the interaction between X1 and X2; on their own, both variables are useless.
Do random forest variable importance measures take into account the interactions?
Run this code and assert that RF variable importance do incorporate interactions. library(randomForest) obs=1000 vars =4 X = data.frame(replicate(vars,rnorm(obs))) ysignal = with(X,sign(X1*X2)) ynoise
Do random forest variable importance measures take into account the interactions? Run this code and assert that RF variable importance do incorporate interactions. library(randomForest) obs=1000 vars =4 X = data.frame(replicate(vars,rnorm(obs))) ysignal = with(X,sign(X1*X2)) ynoise = 0.1 * rnorm(obs) y = ysignal + ynoise RF = randomForest(X,y,importance=T) varImpPlot(RF) You should see X1 and X2 are found the important and X3 and X4 are not. y is only explained as the interaction between X1 and X2, alone both variables are useless.
Do random forest variable importance measures take into account the interactions? Run this code and assert that RF variable importance do incorporate interactions. library(randomForest) obs=1000 vars =4 X = data.frame(replicate(vars,rnorm(obs))) ysignal = with(X,sign(X1*X2)) ynoise
39,257
strucchange package on ARIMA model
The package strucchange requires as input the formula of a linear model to be passed to lm. I don't think there is a straightforward way to use the package with function arima. I don't know of any other R packages implementing this either, but I can give some basic guidelines that may be helpful for your purposes. You can carry out some diagnostics based on the cumulative sum of residuals (CUSUM) and based on F-tests for the parameters of the model in different subsamples. Let's take for illustration the following simulated AR process, x. The first 50 observations are generated from an AR(1) model and the next 100 observations from an AR(2) model: set.seed(135) x1 <- arima.sim(model = list(order = c(1,0,0), ar = -0.2), n = 50) x2 <- arima.sim(model = list(order = c(2,0,0), ar = c(0.3, 0.5)), n = 100) x <- ts(c(x1, x2)) CUSUM approach: Once an AR model is fitted to the entire series the CUSUM process can be obtained as follows: fit <- arima(x, order = c(2,0,0), include.mean = FALSE) e <- residuals(fit) sigma <- sqrt(fit$sigma2) n <- length(x) cs <- cumsum(e) / sigma As a reference, confidence limits can be obtained as done in package strucchange for the OLS-based CUSUM test. For that, we can create an object of class efp and plot it: require(strucchange) retval <- list() retval$coefficients <- coef(fit) retval$sigma <- sigma retval$process <- cs retval$type.name <- "OLS-based CUSUM test" retval$lim.process <- "Brownian bridge" retval$datatsp <- tsp(x) class(retval) <- c("efp") plot(retval) The confidence limits are just for reference; I'm not sure they are the right values to carry out a formal test in this context. Regardless of this, a sudden change or shift in the sequence cs can be interpreted as a sign that something is going on around that time point, possibly a structural change. In the plot we observe such a change at around observation 50, which is where we introduced the change in the data generating process. F-tests: Another approach is based on F-test statistics computed as: $$ Fstat = \frac{RSS - USS}{RSS/n} $$ where RSS is the residual sum of squares of the restricted model (the model fitted to the entire data) and USS is the residual sum of squares of the models fitted to the two subsamples (ess in the code below). The statistics can be computed iteratively for the following sequence of subsamples: from observations 1 to 20 and 21 to $n$; then from 1 to 21 and the next subsample from 22 to $n$, and so on, as done below: rss <- sum(residuals(fit)^2) sigma2 <- fit$sigma2 stats <- rep(NA, n) for (i in seq.int(20, n-20)) { fit1 <- arima(x[seq(1,i)], order = c(2,0,0), include.mean = FALSE) fit2 <- arima(x[seq(i+1,n)], order = c(2,0,0), include.mean = FALSE) ess <- sum(c(residuals(fit1), residuals(fit2))^2) stats[i] <- (rss - ess)/sigma2 } Similarly to the CUSUM plot, a plot of the F-statistics may reveal the presence of a structural change. A 95% confidence limit can be obtained based on the chi-square distribution. plot(stats) abline(h = qchisq(0.05, df = length(coef(fit)), lower.tail = FALSE), lty = 2, col = "red") If the minimum of the p-values associated with these statistics is below a significance level, e.g. 0.05, then we can suspect that there is a structural change at that point. In this simulated series that happens at observation 50, when the AR coefficients changed in the data generating process: which.min(1 - pchisq(stats, df = 2)) #[1] 50 You may find further details in the vignette of the strucchange package, which you probably already know, and in the references therein.
strucchange package on ARIMA model
The package strucchange requires as input the formula of a linear model to be passed to lm. I don't think there is a straightforward way to use the package with function arima. I don't know either any
strucchange package on ARIMA model The package strucchange requires as input the formula of a linear model to be passed to lm. I don't think there is a straightforward way to use the package with function arima. I don't know either any other R packages implementing this but I can give some basic guidelines that may be helpful for your purposes. You can carry out some diagnostics based on the cumulative sum of squared residuals (CUMSUM) and based on F-tests for the parameters of the model in different subsamples. Let's take for illustration the following simulated AR process, x. The first 50 observations are generated from an AR(1) model and the next 100 observations from an AR(2) model: set.seed(135) x1 <- arima.sim(model = list(order = c(1,0,0), ar = -0.2), n = 50) x2 <- arima.sim(model = list(order = c(2,0,0), ar = c(0.3, 0.5)), n = 100) x <- ts(c(x1, x2)) CUMSUM approach: Once an AR model is fitted to the entire series the CUMSUM process can be obtained as follows: fit <- arima(x, order = c(2,0,0), include.mean = FALSE) e <- residuals(fit) sigma <- sqrt(fit$sigma2) n <- length(x) cs <- cumsum(e) / sigma As a reference, confidence limits can be obtained as done in package strucchange for the OLS-based CUSUM test. For that, we can create an object of class efp and plot it: require(strucchange) retval <- list() retval$coefficients <- coef(fit) retval$sigma <- sigma retval$process <- cs retval$type.name <- "OLS-based CUSUM test" retval$lim.process <- "Brownian bridge" retval$datatsp <- tsp(x) class(retval) <- c("efp") plot(retval) The confidence limits are just for reference, I'm not sure they are the right values to carry out a formal test in this context. Regardless of this, a sudden change or shift in the sequence cs can be interpreted as a sign that something is going on around that time point, possibly a structural change. In the plot we observe that at around observation 50, where we introduced a change in the data generating process. F-tests: Another approach is based on F-test statistics computed as: $$ Fstat = \frac{RSS - USS}{RSS/n} $$ where RSS is the residual sum of squares in the restricted model (the model fitted for the entire data) and USS is the residual sum of squares of models fitted to two subsamples. The statistics can be computed iteratively for the following sequence of subsamples: from observations 1 to 20 and 21 to $n$; then from 1 to 21 and a next subsample from 22 to $n$, and so on as done below: rss <- sum(residuals(fit)^2) sigma2 <- fit$sigma2 stats <- rep(NA, n) for (i in seq.int(20, n-20)) { fit1 <- arima(x[seq(1,i)], order = c(2,0,0), include.mean = FALSE) fit2 <- arima(x[seq(i+1,n)], order = c(2,0,0), include.mean = FALSE) ess <- sum(c(residuals(fit1), residuals(fit2))^2) stats[i] <- (rss - ess)/sigma2 } Similarly to the CUMSUM plot, a plot of the F-statistics may reveal the presence of a structural change. A 95% confidence limit can be obtained based on the chi-square distribution. plot(stats) abline(h = qchisq(0.05, df = length(coef(fit)), lower.tail = FALSE), lty = 2, col = "red") If the minimum p-value related to each statistic is below a significance level, e.g. 0.05, then we can suspect that there is a structural change at that point. In this simulated series that happens at observation 50, when the AR coefficients changed in the data generating process: which.min(1 - pchisq(stats, df = 2)) #[1] 50 You may find further details in the vignette of the strucchange package that you probably already know and in the references therein.
strucchange package on ARIMA model The package strucchange requires as input the formula of a linear model to be passed to lm. I don't think there is a straightforward way to use the package with function arima. I don't know either any
39,258
strucchange package on ARIMA model
I have blogged about detecting structural breaks using the strucchange package in R. It is pretty straightforward - here's the outline: # assuming you have a 'ts' object in R # 1. install package 'strucchange' # 2. Then write down this code: library(strucchange) # store the breakdates; breakpoints() expects a formula, and ts ~ 1 looks for breaks in the mean bp_ts <- breakpoints(ts ~ 1) # this will give you the break dates and their confidence intervals summary(bp_ts) # store the confidence intervals ci_ts <- confint(bp_ts) ## to plot the breakpoints with confidence intervals plot(ts) lines(bp_ts) lines(ci_ts) The time series data used in my blog happens to be an ARIMA(0,1,1) process. If you want to verify that, check my Github repo regarding the same.
strucchange package on ARIMA model
I have blogged about detecting structural break using the strucchange package in R. It is pretty straight forward - here's the outline: # assuming you have a 'ts' object in R # 1. install package 's
strucchange package on ARIMA model I have blogged about detecting structural break using the strucchange package in R. It is pretty straight forward - here's the outline: # assuming you have a 'ts' object in R # 1. install package 'strucchange' # 2. Then write down this code: library(strucchange) # store the breakdates bp_ts <- breakpoints(ts) # this will give you the break dates and their confidence intervals summary(bp_ts) # store the confidence intervals ci_ts <- confint(bp_ts) ## to plot the breakpoints with confidence intervals plot(ts) lines(bp_ts) lines(ci_ts) The time series data used in my blog happens to be an ARIMA(0,1,1) process. If you want to verify that, check my Github repo regarding the same.
strucchange package on ARIMA model I have blogged about detecting structural break using the strucchange package in R. It is pretty straight forward - here's the outline: # assuming you have a 'ts' object in R # 1. install package 's
39,259
strucchange package on ARIMA model
If you want to check for the existence of structural breaks, I would recommend that you: Make an outlier analysis with the package tsoutliers. By using the function tso, you can check if your model has an isolated spike (additive outlier), an abrupt change in the mean level (level shift), a spike that takes a few periods to disappear (transient change) or a shock in the innovations of the model (innovational outlier). Analyse the stability of the parameters through the QLR test (see critical values in Andrews, 2006). This test is most appropriate when: i) the break date is unknown, ii) the lagged variable is an explanatory variable and/or iii) the errors are heteroscedastic / autocorrelated. To run this test, you have to add dummies to your model and make a loop, in which regressions are calculated several times and the value of the dummies is changed over time in order to capture possible breaks. library(car) # for linearHypothesis() library(sandwich) # for kernHAC() # Define the window to be tested. In general, the first and last 15% of observations are excluded window_breaks <- seq(inic_anomes, last_anomes, 1/12) Fstats <- numeric(length(window_breaks)) # Loop to run the regressions and compute the test statistic for(i in 1:length(window_breaks)) { # Set up dummy variable (numeric 0/1 so the coefficients are named D and x:D) D <- as.numeric(time(y) > window_breaks[i]) # Estimate model with dummy model <- lm(y ~ x + D + D*x) # Compute and save the F-statistic Fstats[i] <- linearHypothesis(model, c("D", "x:D"), vcov = kernHAC)$F[2] } QLR <- max(Fstats) # if QLR < critical value, there is no break
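For the first suggestion, a minimal call could look like the sketch below (my own illustrative addition; it assumes a ts object called y and uses the common outlier types of the tsoutliers package):
library(tsoutliers)
# search for additive outliers, level shifts and transient changes in y
fit_out <- tso(y, types = c("AO", "LS", "TC"))
fit_out        # detected effects with their times and test statistics
plot(fit_out)  # original vs. adjusted series and the outlier effects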
strucchange package on ARIMA model
If you want to check the existence of structure breaks, I would recommend you to: Make outliers analysis, with the package tsoutliers. By using the function tso, you can check if your model has an is
strucchange package on ARIMA model If you want to check the existence of structure breaks, I would recommend you to: Make outliers analysis, with the package tsoutliers. By using the function tso, you can check if your model has an isolated spike (additive outlier), an abrupt change in the mean level (level shift), a spike that takes a few periods to disappear (transient change) or a shock in the innovations of the model (intervention outlier). Analyse the stability of parameters through the QLR test (see critical values in Andrews, 2006). This test is most appropriate when: i) the break date is unknown, ii) the lagged variable is an explanatory variable e/or iii) the errors are heteroscedatic / autocorrelated. To run this test, you have to add dummies to your model and make loop, in which regressions are calculated several times and the value of the dummies is changed over the time in order to capture possible breaks. #Define the window to be tested. In general, the first and last 15% observations are excluded window_breaks <- seq(inic_anomes, last_anomes, 1/12) # Loop to run the regressions and compute the test statistic for(i in 1:length(window_breaks)) { # Set up dummy variable D <- time(y) > window_breaks[i] # Estimate model with dummy model <- lm(y ~ x + D + D*x) # Compute and save the F-statistic Fstats[i] <- linearHypothesis(model, c("D", "x:D"), vcov = kernHAC)$F[2] } QLR <- max(Fstats) #if QLR < critical value, there is no break
strucchange package on ARIMA model If you want to check the existence of structure breaks, I would recommend you to: Make outliers analysis, with the package tsoutliers. By using the function tso, you can check if your model has an is
39,260
Countable intersection of almost sure events is also almost sure
Let's consider the complements $B_n^c$ of $B_n$. For any $n$ it holds that $\mathbb{P}(B_n^c) = 0$. Using De Morgan's law and countable subadditivity of the measure we get: $$ \mathbb{P} \left(\bigcap_{n = 1}^{\infty} B_n \right) = 1 - \mathbb{P}\left(\bigcup_{n = 1}^{\infty} B_n^c \right) \geq 1 - \sum_{n = 1}^{\infty} \mathbb{P}(B_n^c) = 1. $$ So $$ 1 \geq \mathbb{P}(\cap_{n = 1}^{\infty} B_n) \geq 1. $$ Consequently $$ \mathbb{P}(\cap_{n = 1}^{\infty} B_n) = 1. $$
Countable intersection of almost sure events is also almost sure
Let's consider complements $B_n^c$ to $B_n$. For any $n$ it holds that $\mathbb{P}(B_n^c) = 0$. Using countable additivity for measures we get: $$ \mathbb{P} \left(\bigcap_{n = 1}^{\infty} B_n \right
Countable intersection of almost sure events is also almost sure Let's consider complements $B_n^c$ to $B_n$. For any $n$ it holds that $\mathbb{P}(B_n^c) = 0$. Using countable additivity for measures we get: $$ \mathbb{P} \left(\bigcap_{n = 1}^{\infty} B_n \right) = 1 - \mathbb{P}\left(\bigcup_{n = 1}^{\infty} B_n^c \right) \geq 1 - \sum_{n = 1}^{\infty} \mathbb{P}(B_n^c) = 1. $$ So $$ 1 \geq \mathbb{P}(\cap_{n = 1}^{\infty} B_n) \geq 1. $$ Consequently $$ \mathbb{P}(\cap_{n = 1}^{\infty} B_n) = 1. $$
Countable intersection of almost sure events is also almost sure Let's consider complements $B_n^c$ to $B_n$. For any $n$ it holds that $\mathbb{P}(B_n^c) = 0$. Using countable additivity for measures we get: $$ \mathbb{P} \left(\bigcap_{n = 1}^{\infty} B_n \right
39,261
Confidence Interval for a Random Sample Selected from Gamma Distribution
Edit: Time to add details, I think. The OP has long since worked it out but hasn't taken the invitation to write up a more complete solution, so I shall, in the interest of having a full answer to the question. A pivot (pivotal quantity) is a function of the data and the parameter whose distribution doesn't depend on the value of the parameter. So consider: (1) what would the distribution of a statistic consisting of the sum of the observations ($T=\sum_i x_i$) be? A sum of $n$ i.i.d. $\text{gamma}(\alpha,\theta)$ random variables has the $\text{gamma}(n\alpha,\theta)$ distribution (for the shape-scale form of the gamma, i.e. with $\theta$ as the scale parameter). Here $n=6$ and $\alpha=2$, so the sum $T$ has a $\text{gamma}(12,\theta)$ distribution. (2) Note that the distribution in (1) does depend on $\theta$ and the form of the statistic doesn't. You need to modify the statistic ($Q=f(T,\theta)$) in such a way that both of those change. (This part is trivial!) Let $Q=T/\theta$. Then $Q\sim \text{gamma}(12,1)$, so $Q$ satisfies the conditions required for a pivotal quantity. (3) Once you have a pivotal quantity (i.e. $Q$), write down an interval for the pivotal quantity (in the form of a pair of inequalities, $a< Q< b$) with the given coverage. Since the distribution doesn't depend on the parameter, this interval is always the same (at a given sample size) no matter what the value of $\theta$. One such interval is $(a,b)$, where $P(a<Q<b)=0.95$, with $a$ the 0.025 point of the $\text{gamma}(12,1)$ distribution and $b$ the 0.975 point. (4) Now write the interval involving the pivotal quantity back in terms of the data and $\theta$. Back out an interval for the parameter, for which the corresponding probability statement must still hold (keeping in mind that the random quantity is not $\theta$ but the interval). $P(a<T/\theta<b)=0.95$ implies $P(1/b < \theta/T < 1/a)=0.95$, so $P(T/b < \theta < T/a)=0.95$. Therefore $(T/b,T/a)$ is a 95% interval for $\theta$. Our observed total is $t = 4.91$. The 0.025 point of a gamma(12,1) is 6.2006 and the 0.975 point is 19.682. Hence a 95% interval for $\theta$ is (4.91/19.682, 4.91/6.2006) = $(0.249, 0.792)$.
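For completeness, the numbers in step (4) can be reproduced in R (this code block is an added illustration; it only uses the quantile function of the gamma(12,1) distribution):
t_obs <- 4.91                   # observed total of the six observations
a <- qgamma(0.025, shape = 12)  # about 6.2006
b <- qgamma(0.975, shape = 12)  # about 19.682
c(lower = t_obs / b, upper = t_obs / a)
# roughly (0.249, 0.792), matching the interval above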
Confidence Interval for a Random Sample Selected from Gamma Distribution
Edit: Time to add details, I think. The OP has long since worked it out but hasn't taken the invitation to write up a more complete solution, so I shall, in the interest of having a full answer to the
Confidence Interval for a Random Sample Selected from Gamma Distribution Edit: Time to add details, I think. The OP has long since worked it out but hasn't taken the invitation to write up a more complete solution, so I shall, in the interest of having a full answer to the question. A pivot is a function of the data and the statistic whose distribution doesn't depend on the value of the statistic. So consider: (1) what would the distribution of a statistic consisting of the sum of the observations ($T=\sum_i x_i$) be? A sum of $n$ i.i.d. $\text{gamma}(\alpha,\theta)$ random variables has the $\text{gamma}(n\alpha,\theta)$ distribution (for the shape-rate form of the gamma). Here $n=6$ and $\alpha=2$, so the sum, $T$ has a $\text{gamma}(12,\theta)$ distribution. (2) Note that the distribution in (1) does depend on $\theta$ and the form of the statistic doesn't. You need to modify the statistic ($Q=f(T,\theta)$) in such a way that both of those change. (This part is trivial!) Let $Q=T/\theta$. Then $Q\sim \text{gamma}(12,1)$. $Q$ satisfies the conditions required for a pivotal quantity. (3) Once you have a pivotal quantity (i.e. $Q$), write down an interval for the pivotal quantity (in the form of a pair of inequalities, $a< Q< b$) with the given coverage. Since the distribution doesn't depend on the parameter, this interval is always the same (at a given sample size) no matter what the value of $\theta$. One such interval is $(a,b)$, where $P(a<Q<b)=0.95$, when $a$ is the 0.025 point of the $\text{gamma}(12,1)$ distribution and $b$ is the 0.975 point. (4) Now write the interval involving the pivotal quantity back in terms of the data and $\theta$. Back out an interval for the parameter, for which the corresponding probability statement must still hold (keeping in mind that the random quantity is not $\theta$ but the interval). $P(a<T/\theta<b)=0.95$ implies $P(1/b < \theta/T < 1/a)=0.95$, so $P(T/b < \theta < T/a)=0.95$. Therefore $(T/b,T/a)$ is a 95% interval for $\theta$. Our observed total, $t = 4.91$. The 0.025 point of a gamma(12,1) is 6.2006 and the 0.975 point is 19.682. Hence a 95% interval for $\theta$ is (4.91/19.682,4.91/6.200) = $(0.249, 0.792)$.
Confidence Interval for a Random Sample Selected from Gamma Distribution Edit: Time to add details, I think. The OP has long since worked it out but hasn't taken the invitation to write up a more complete solution, so I shall, in the interest of having a full answer to the
39,262
How to do Simple Confirmatory Factory Analysis/SEM in R?
A CFA is pretty easy to do in R with OpenMx, sem, or lavaan. Since a CFA is such a vanilla case of SEM, all three are pretty easy to implement and offer helpful walkthroughs within their respective documentations. I personally use OpenMx or lavaan. One thing to keep in mind if you use OpenMx is that it won't give you fit statistics by default, you have to specify a saturated model first (or use the semTools package to do this for you). Because OpenMx hasn't been updated for R version 3 yet (unless you compile from source), here's an example taken from the lavaan walkthrough. It is a CFA with 3 latent variables with three indicators, with covariances among all three latents. More information on the dataset used can be found in the link above. # load the lavaan package require(lavaan) # specify the model HS.model <- " visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 " # fit a full CFA model fit <- cfa(HS.model, data = HolzingerSwineford1939) # fit an orthogonal CFA model fitOrth <- cfa(HS.model, data = HolzingerSwineford1939, orthogonal = TRUE) # Likelihood ratio test between full and orthogonal model anova(fit, fitOrth) # display summary output for full model summary(fit, fit.measures=TRUE) Here we see that the orthogonal model (all three covariances set to zero) fits significantly worse than a full CFA. Two things to keep in mind with this code: 1) In this specification, loadings for x1, x4, x7 are fixed to 1 by default to set the scale of the CFA. This can be changed by moving the variables around. 2) Again by default, residual variances are added automatically. This can be changed by adding residual regression weights in the model syntax.
How to do Simple Confirmatory Factory Analysis/SEM in R?
A CFA is pretty easy to do in R with OpenMx, sem, or lavaan. Since a CFA is such a vanilla case of SEM, all three are pretty easy to implement and offer helpful walkthroughs within their respective do
How to do Simple Confirmatory Factory Analysis/SEM in R? A CFA is pretty easy to do in R with OpenMx, sem, or lavaan. Since a CFA is such a vanilla case of SEM, all three are pretty easy to implement and offer helpful walkthroughs within their respective documentations. I personally use OpenMx or lavaan. One thing to keep in mind if you use OpenMx is that it won't give you fit statistics by default, you have to specify a saturated model first (or use the semTools package to do this for you). Because OpenMx hasn't been updated for R version 3 yet (unless you compile from source), here's an example taken from the lavaan walkthrough. It is a CFA with 3 latent variables with three indicators, with covariances among all three latents. More information on the dataset used can be found in the link above. # load the lavaan package require(lavaan) # specify the model HS.model <- " visual =~ x1 + x2 + x3 textual =~ x4 + x5 + x6 speed =~ x7 + x8 + x9 " # fit a full CFA model fit <- cfa(HS.model, data = HolzingerSwineford1939) # fit an orthogonal CFA model fitOrth <- cfa(HS.model, data = HolzingerSwineford1939, orthogonal = TRUE) # Likelihood ratio test between full and orthogonal model anova(fit, fitOrth) # display summary output for full model summary(fit, fit.measures=TRUE) Here we see that the orthogonal model (all three covariances set to zero) fits significantly worse than a full CFA. Two things to keep in mind with this code: 1) In this specification, loadings for x1, x4, x7 are fixed to 1 by default to set the scale of the CFA. This can be changed by moving the variables around. 2) Again by default, residual variances are added automatically. This can be changed by adding residual regression weights in the model syntax.
How to do Simple Confirmatory Factory Analysis/SEM in R? A CFA is pretty easy to do in R with OpenMx, sem, or lavaan. Since a CFA is such a vanilla case of SEM, all three are pretty easy to implement and offer helpful walkthroughs within their respective do
39,263
Deliberately fitting a model without intercept [duplicate]
The intercept in a linear regression model may represent two totally different things: A) Your theoretical model may lead you to a specification with a constant term. A basic example from Economics is when one wants to estimate the parameters of a production function (a statistical relationship that links output produced with production factors used) $$Q = AK^aL^b$$ where $Q$ is output, $K$ is capital used and $L$ is labor used. $A$ is a shift factor, representing "technology", assumed constant in the short-run. To linearize this model we take the logs of the variable and we arrive at $$\ln Q = \ln A +a\ln K + b\ln L$$ Including an error term we have an econometric linear regression specification -and the constant estimates $\ln A$. B) But even if your theory does not account for a constant term, widespread advice with observational data at least, is not to omit the constant, except if you are "very sure" about the omission (and this advice holds for logistic regression too, by the way). Why such an advice? Because the existence of a constant absorbs the possibly non-zero mean of the error term. This is a recognition that our model may not include all factors that affect/are associated with the dependent variable. What we do "hope" is that any variable omitted is independent (or at least uncorrelated) with the included regressors. But even if these omitted variables are indeed independent/uncorrelated, they still create a non-zero mean for the dependent variable (which is incarnated as a non-zero mean of the error term), around which the dependent variable fluctuates with the included regressors. By including the constant, we capture this mean value (perhaps together with a constant factor that is postulated by our model), and we can make the assumption that the expected value of the error term (usually conditional on the included regressors), is zero, which we need for various results to pass through. This fact, that the constant term may essentially be the sum of totally different things, is what makes us in many cases not to try to interpret it. Regarding the logistic regression, the constant term reflects the (negative of) the possibly non-zero threshold of the latent continuous variable above which the binary dependent variable becomes unity -so again, its inclusion permits us to "safely" specify an error term with conditional zero mean.
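To illustrate case (A), here is a small simulation (an added sketch with made-up parameter values) in which the intercept of the log-linear regression estimates $\ln A$:
set.seed(1)
n <- 500
A <- 2; a <- 0.3; b <- 0.6               # 'true' technology level and elasticities
K <- exp(rnorm(n)); L <- exp(rnorm(n))   # positive inputs
Q <- A * K^a * L^b * exp(rnorm(n, sd = 0.1))  # multiplicative error
fit <- lm(log(Q) ~ log(K) + log(L))
coef(fit)           # the intercept should be close to log(A) = 0.693
exp(coef(fit)[1])   # back-transformed, close to A = 2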
Deliberately fitting a model without intercept [duplicate]
The intercept in a linear regression model may represent two totally different things: A) Your theoretical model may lead you to a specification with a constant term. A basic example from Economics
Deliberately fitting a model without intercept [duplicate] The intercept in a linear regression model may represent two totally different things: A) Your theoretical model may lead you to a specification with a constant term. A basic example from Economics is when one wants to estimate the parameters of a production function (a statistical relationship that links output produced with production factors used) $$Q = AK^aL^b$$ where $Q$ is output, $K$ is capital used and $L$ is labor used. $A$ is a shift factor, representing "technology", assumed constant in the short-run. To linearize this model we take the logs of the variable and we arrive at $$\ln Q = \ln A +a\ln K + b\ln L$$ Including an error term we have an econometric linear regression specification -and the constant estimates $\ln A$. B) But even if your theory does not account for a constant term, widespread advice with observational data at least, is not to omit the constant, except if you are "very sure" about the omission (and this advice holds for logistic regression too, by the way). Why such an advice? Because the existence of a constant absorbs the possibly non-zero mean of the error term. This is a recognition that our model may not include all factors that affect/are associated with the dependent variable. What we do "hope" is that any variable omitted is independent (or at least uncorrelated) with the included regressors. But even if these omitted variables are indeed independent/uncorrelated, they still create a non-zero mean for the dependent variable (which is incarnated as a non-zero mean of the error term), around which the dependent variable fluctuates with the included regressors. By including the constant, we capture this mean value (perhaps together with a constant factor that is postulated by our model), and we can make the assumption that the expected value of the error term (usually conditional on the included regressors), is zero, which we need for various results to pass through. This fact, that the constant term may essentially be the sum of totally different things, is what makes us in many cases not to try to interpret it. Regarding the logistic regression, the constant term reflects the (negative of) the possibly non-zero threshold of the latent continuous variable above which the binary dependent variable becomes unity -so again, its inclusion permits us to "safely" specify an error term with conditional zero mean.
Deliberately fitting a model without intercept [duplicate] The intercept in a linear regression model may represent two totally different things: A) Your theoretical model may lead you to a specification with a constant term. A basic example from Economics
39,264
Deliberately fitting a model without intercept [duplicate]
As several others have already mentioned, there are many situations where theory dictates a regression through the origin (e.g. y = corn production, x = amount of cultivated land; when x=0, y must be 0). In these situations, I would first run a traditional regression with a constant and slope term. Then, test whether the estimated intercept is significantly different from zero (Kutner, 2004). If the intercept is not significantly different from zero anyway, you may have a good argument for setting it equal to zero. NOTE: What happens if you KNOW the function must go through the origin, but you still have a significant intercept term? Beware of the following scenario: Here, the data point in the lower left corner corresponds to (0,0). Nevertheless, the linear regression appears to have a significant, positive, intercept. The underlying reason is likely that, between $x=0$ and $x=1$, there is a rapid, non-linear increase in $Y$. That is, the linear model is a good approximation to the data for $x \geq 1$, but does not extrapolate well below this range. This can suggest further refinements to your model. WARNING: As Alecos explained, the residuals in a regression through the origin model will typically have a nonzero mean and will not sum to zero. Why is this important? The short answer is that it affects the calculation of $R^2$, and can make it difficult to interpret. In fact, $R^2$ can be negative in this case. To see this, we have to consider how $R^2$ is derived. We start with the identity: $$(Y_i - \bar{Y}) = (Y_i - \hat{Y}_i) + (\hat{Y}_i - \bar{Y})$$ where $\bar{Y}$ is the mean of all dependent variable observations $Y_i$, and $\hat{Y}_i$ is our predicted value for each observation. Now square both sides and sum over all $i$: $$\sum(Y_i - \bar{Y})^2 = \sum(Y_i - \hat{Y}_i)^2 + \sum(\hat{Y}_i - \bar{Y})^2 + 2 \sum(Y_i - \hat{Y}_i)(\hat{Y}_i - \bar{Y})$$ It can be shown that, for a linear model with a slope and intercept, the cross-product term is equal to zero. However, this is not the case for regression through the origin. When that term is zero, we get the following equation $$\sum(Y_i - \bar{Y})^2 = \sum(Y_i - \hat{Y}_i)^2 + \sum(\hat{Y}_i - \bar{Y})^2$$ which states that the total variability in the dependent variable, $SSTO = \sum(Y_i - \bar{Y})^2$, is the sum of the variability explained by the model, $SSR = \sum(\hat{Y}_i - \bar{Y})^2$, and the remaining unexplained variability $SSE = \sum(Y_i - \hat{Y}_i)^2$. We summarize this with the $R^2$ statistic, $R^2 = SSR/SSTO$, which equals $1 - SSE/SSTO$ when the decomposition holds. However, all of this is based on the assumption that $\sum(Y_i - \hat{Y}_i)(\hat{Y}_i - \bar{Y})=0$. Again, this is not the case for regression through the origin, so the two forms no longer agree. In extreme cases, when $SSE>SSTO$, the commonly used form $R^2 = 1 - SSE/SSTO$ will even be less than zero. References: The following article succinctly describes all of this: Eisenhauer (2003) "Regression through the Origin". Also see the textbook "Applied Linear Regression Models" by Kutner et al. (I believe chapter 3 has a section devoted to this). Edit: After posting I came across this previous question, which is very relevant. Removal of statistically significant intercept term increases $R^2$ in linear model
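A quick way to see the residual and $R^2$ issues in practice is the following sketch (an added illustration with simulated data in which the true intercept is not zero):
set.seed(123)
x <- runif(100, 1, 10)
y <- 5 + 2 * x + rnorm(100)   # true intercept is 5, not 0
fit_int    <- lm(y ~ x)       # with intercept
fit_origin <- lm(y ~ 0 + x)   # regression through the origin
mean(residuals(fit_int))      # essentially zero
mean(residuals(fit_origin))   # generally not zero
summary(fit_int)$r.squared
summary(fit_origin)$r.squared # computed around zero rather than around the mean of y
Note that R reports an $R^2$ for the no-intercept model that uses sums of squares around zero rather than around $\bar{Y}$, which is why the two values cannot be compared directly.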
Deliberately fitting a model without intercept [duplicate]
As several others have already mentioned, there are many situations where theory dictates a regression through the origin (e.g. y = corn production, x = amount of cultivated land; when x=0, y must be
Deliberately fitting a model without intercept [duplicate] As several others have already mentioned, there are many situations where theory dictates a regression through the origin (e.g. y = corn production, x = amount of cultivated land; when x=0, y must be 0). In these situations, I would first run a traditional regression with a constant and slope term. Then, test whether the estimated intercept is significantly different than zero (Kutner, 2004). If the intercept is not significantly different than zero anyways, you may have a good argument for setting it equal to zero. NOTE: What happens if you KNOW the function must go through the origin, but you still have a significant intercept term? Beware of the following scenario: Here, the data point in the lower left corner corresponds to (0,0). Nevertheless, the linear regression appears to have a significant, positive, intercept. The underlying reason is likely that, between $x=0$ and $x=1$, there is a rapid, non-linear increase in $Y$. That is, the linear model is a good approximation for the data within $x \geq 1$, but does not extrapolate well below this range. This can suggest further refinements to your model. WARNING: As Alecos explained, the residuals in a regression through the origin model will typically have a nonzero mean and will not sum to zero. Why is this important? The short answer is that it affects the calculation of $R^2$, and can make it difficult to interpret. In fact, $R^2$ can be negative in this case. To see this, we have to consider how $R^2$ is derived. We start with the identity: $$(Y_i - \bar{Y}) = (Y_i - \hat{Y}_i) + (\hat{Y}_i - \bar{Y})$$ where $\bar{Y}$ is the mean of all dependent variable observations $Y_i$, and $\hat{Y}_i$ is our predicted value for each observation. Now square both sides and sum over all $i$: $$\sum(Y_i - \bar{Y})^2 = \sum(Y_i - \hat{Y}_i)^2 + \sum(\hat{Y}_i - \bar{Y})^2 + 2 \sum(Y_i - \hat{Y}_i)(\hat{Y}_i - \bar{Y})$$ It can be shown that, for a linear model with a slope and intercept, the cross-product term is equal to zero. However, this is not the case for regression through the origin. If that term did go to zero, we get the following equation $$\sum(Y_i - \bar{Y})^2 = \sum(Y_i - \hat{Y}_i)^2 + \sum(\hat{Y}_i - \bar{Y})^2$$ which states that the total variability in the dependent variable, $SSTO = \sum(Y_i - \bar{Y})^2$, is the sum of the variability explained by the model, $SSR = \sum(\hat{Y}_i - \bar{Y})^2$, and the remaining unexplained variability $SSE = \sum(Y_i - \hat{Y}_i)^2$. We summarize this with the $R^2$ statistic, $R^2 = SSR/SSTO$. However, all of this is based on the assumption that $\sum(Y_i - \hat{Y}_i)(\hat{Y}_i - \bar{Y})=0$. Again, this is not the case for regression through the origin. In extreme cases, when $SSR>SSTO$, $R^2$ will be less than zero. References: The following article succinctly describes all of this: Eisenhauer (2003) "Regression through the Origin". Also see the textbook "Applied Linear Regression Models" by Kutner et al. (I believe chapter 3 has a section devoted to this). Edit: After posting I came across this previous question, which is very relevant. Removal of statistically significant intercept term increases $R^2$ in linear model
Deliberately fitting a model without intercept [duplicate] As several others have already mentioned, there are many situations where theory dictates a regression through the origin (e.g. y = corn production, x = amount of cultivated land; when x=0, y must be
39,265
How to regress a positive response variable that is also not a count variable?
On this information, many distributions could make sense: gamma, lognormal, etc. In general, generalized linear models with various links (the logarithm first and foremost) and various families could all apply. By the way, the usefulness of Poisson regression is not limited to count variables; this is a common myth. For a brisk introduction to the question, see for example http://blog.stata.com/tag/poisson-regression/
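For example, with a strictly positive response y and predictors x1, x2 in a data frame d (placeholder names, just to sketch the options mentioned above):
# gamma GLM with log link
fit_gamma <- glm(y ~ x1 + x2, family = Gamma(link = "log"), data = d)
# lognormal: ordinary least squares on the log scale
fit_lnorm <- lm(log(y) ~ x1 + x2, data = d)
# Poisson-type estimation with a log link; the quasi family gives sensible
# standard errors even though y is continuous rather than a count
fit_quasi <- glm(y ~ x1 + x2, family = quasipoisson(link = "log"), data = d)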
How to regress a positive response variable that is also not a count variable?
On this information, many distributions could make sense: gamma, lognormal, etc., etc. In general, generalized linear models with various links (logarithm first and foremost) and various families coul
How to regress a positive response variable that is also not a count variable? On this information, many distributions could make sense: gamma, lognormal, etc., etc. In general, generalized linear models with various links (logarithm first and foremost) and various families could all apply. By the way, the usefulness of Poisson is not limited to count variables; this is a common myth. See for example for a brisk introduction to the question http://blog.stata.com/tag/poisson-regression/
How to regress a positive response variable that is also not a count variable? On this information, many distributions could make sense: gamma, lognormal, etc., etc. In general, generalized linear models with various links (logarithm first and foremost) and various families coul
39,266
What is an intuitive explanation of why we want homoskedasticity in a regression?
Homoskedasticity means that the variances of all the observations are identical to one another, heteroskedasticity means they're different. It's possible that the size of the variances displays some trend relative to x, but it's not strictly necessary; as shown in the accompanying diagram, variances that are differently sized in some random way from point to point will qualify just as well. The job of the regression is to estimate an optimal curve which passes as close to as many of the data points as possible. In the case of heteroskedastic data, by definition some points will naturally be much more widely dispersed than others. If the regression simply treats all of the data points equivalently, the ones with the largest variance will tend to have undue influence in selecting the optimal regression curve, by "dragging" the regression curve toward themselves, in order to achieve the objective of minimizing the overall scatter of the data points about the final regression curve. This issue can easily be overcome by simply weighting each data point in inverse proportion to its variance. This assumes, however, that one knows the variance associated with each individual point. Often, one doesn't. Thus, the reason that homoskedastic data are preferred is because they are simpler and easier to deal with--you can get the "correct" answer for the regression curve without necessarily knowing the underlying variances of the individual points, because the relative weights between the points in some sense will "cancel out" if they are all the same anyway. EDIT: A commenter asks me to explain the idea that individual points may have their own, unique, different variances. I do so with a thought experiment. Suppose I ask you to measure the weight vs. length of a bunch of different animals, from the size of a gnat all the way up to the size of an elephant. You do so, plotting length on the x-axis, and weight on the y-axis. But let's pause for a moment to consider things in a little more detail. Let's look at the weight values specifically--how did you actually obtain them? You can't possibly use the same physical measuring device to weigh a gnat as you would to weigh a house pet, nor can you use the same device to weigh a house pet as you would to weigh an elephant. For the gnat, you are probably going to have to use something like an analytical chemistry balance, accurate down to 0.0001 g, while for the house pet, you'd use a bathroom scale, which might be accurate to about a half of a pound or so (roughly around 200 g), while for the elephant, you might use a something like a truck scale, which might only be accurate to within +/- 10 kg. The point is, all of these devices have different inherent accuracies--they only tell you the weight up to a certain number of significant digits, and after that you can't really know for sure. The different sizes of the error bars in the heteroskedastic plot above, which we associate with the different variances of the individual points, reflect differing degrees of certainty about the underlying measurements. In short, different points can have different variances because sometimes we can't measure all of the points equally well--you're never going to know the weight of an elephant down to +/- 0.0001 g, because you can't get that kind of accuracy out of a truck scale. But you can know the weight of a gnat to +/- 0.0001 g, because you can get that kind of an accuracy on an analytical chemistry balance. 
(Technically, in this particular thought experiment, the same type of issue actually arises for the length measurement as well, but all that really means is that if we decided to plot horizontal error bars representing uncertainties in the x-axis values also, those would have different sizes for different points too.)
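If the point-wise variances are (at least approximately) known, the inverse-variance weighting described above is just weighted least squares. A small sketch with simulated data (an added illustration, not from the original answer):
set.seed(1)
x <- 1:100
sd_i <- 0.2 * x                         # noise grows with x: heteroskedastic
y <- 3 + 0.5 * x + rnorm(100, sd = sd_i)
fit_ols <- lm(y ~ x)                        # treats every point equally
fit_wls <- lm(y ~ x, weights = 1 / sd_i^2)  # weight each point by 1/variance
summary(fit_ols)$coefficients
summary(fit_wls)$coefficients               # the weighted fit is less dominated by the noisy points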
What is an intuitive explanation of why we want homoskedasticity in a regression?
Homoskedasticity means that the variances of all the observations are identical to one another, heteroskedasticity means they're different. It's possible that the size of the variances displays some
What is an intuitive explanation of why we want homoskedasticity in a regression? Homoskedasticity means that the variances of all the observations are identical to one another, heteroskedasticity means they're different. It's possible that the size of the variances displays some trend relative to x, but it's not strictly necessary; as shown in the accompanying diagram, variances that are differently sized in some random way from point to point will qualify just as well. The job of the regression is to estimate an optimal curve which passes as close to as many of the data points as possible. In the case of heteroskedastic data, by definition some points will naturally be much more widely dispersed than others. If the regression simply treats all of the data points equivalently, the ones with the largest variance will tend to have undue influence in selecting the optimal regression curve, by "dragging" the regression curve toward themselves, in order to achieve the objective of minimizing the overall scatter of the data points about the final regression curve. This issue can easily be overcome by simply weighting each data point in inverse proportion to its variance. This assumes, however, that one knows the variance associated with each individual point. Often, one doesn't. Thus, the reason that homoskedastic data are preferred is because they are simpler and easier to deal with--you can get the "correct" answer for the regression curve without necessarily knowing the underlying variances of the individual points, because the relative weights between the points in some sense will "cancel out" if they are all the same anyway. EDIT: A commenter asks me to explain the idea that individual points may have their own, unique, different variances. I do so with a thought experiment. Suppose I ask you to measure the weight vs. length of a bunch of different animals, from the size of a gnat all the way up to the size of an elephant. You do so, plotting length on the x-axis, and weight on the y-axis. But let's pause for a moment to consider things in a little more detail. Let's look at the weight values specifically--how did you actually obtain them? You can't possibly use the same physical measuring device to weigh a gnat as you would to weigh a house pet, nor can you use the same device to weigh a house pet as you would to weigh an elephant. For the gnat, you are probably going to have to use something like an analytical chemistry balance, accurate down to 0.0001 g, while for the house pet, you'd use a bathroom scale, which might be accurate to about a half of a pound or so (roughly around 200 g), while for the elephant, you might use a something like a truck scale, which might only be accurate to within +/- 10 kg. The point is, all of these devices have different inherent accuracies--they only tell you the weight up to a certain number of significant digits, and after that you can't really know for sure. The different sizes of the error bars in the heteroskedastic plot above, which we associate with the different variances of the individual points, reflect differing degrees of certainty about the underlying measurements. In short, different points can have different variances because sometimes we can't measure all of the points equally well--you're never going to know the weight of an elephant down to +/- 0.0001 g, because you can't get that kind of accuracy out of a truck scale. 
But you can know the weight of a gnat to +/- 0.0001 g, because you can get that kind of an accuracy on an analytical chemistry balance. (Technically, in this particular thought experiment, the same type of issue actually arises for the length measurement as well, but all that really means is that if we decided to plot horizontal error bars representing uncertainties in the x-axis values also, those would have different sizes for different points too.)
What is an intuitive explanation of why we want homoskedasticity in a regression? Homoskedasticity means that the variances of all the observations are identical to one another, heteroskedasticity means they're different. It's possible that the size of the variances displays some
39,267
What is an intuitive explanation of why we want homoskedasticity in a regression?
Why do we want homoskedasticity in regression? It's not that we want homoskedasticity or heteroskedasticity in the regression; what we want is for the model to reflect the actual properties of the data. Regression models may be formulated either with an assumption of homoskedasticity, or with an assumption of heteroskedasticity, in some specified form. We want to formulate a regression model that fits with the actual properties of the data, and thus reflects a reasonable specification of the behaviour of data coming from the observed process. Thus, if the variance of the deviation of the response from its expectation (the error term) is fixed (i.e., is homoskedastic) then we want a model that reflects this. And if the variance of the deviation of the response from its expectation (the error term) depends on the explanatory variable (i.e., is heteroskedastic) then we want a model that reflects this. If we mis-specify the model (e.g., by using a homoskedastic model for heteroskedastic data) then this means that we will mis-specify the variance of the error term. The result is that our estimate of the regression function will under-penalise some errors and over-penalise other errors, and will tend to perform more poorly than if we specify the model correctly.
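One concrete way to let the model "reflect this" is to specify a variance function explicitly, for instance with gls from the nlme package (a sketch only; y, x and d are placeholders for your own response, predictor and data frame):
library(nlme)
fit_hom <- gls(y ~ x, data = d)                                  # constant error variance
fit_het <- gls(y ~ x, data = d, weights = varPower(form = ~ x))  # error SD grows as a power of x
anova(fit_hom, fit_het)  # compare the two variance specifications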
What is an intuitive explanation of why we want homoskedasticity in a regression?
Why do we want homoskedasticity in regression? It's not that we want homoskedasticity or heteroskedasticity in the regression; what we want is for the model to reflect the actual properties of the da
What is an intuitive explanation of why we want homoskedasticity in a regression? Why do we want homoskedasticity in regression? It's not that we want homoskedasticity or heteroskedasticity in the regression; what we want is for the model to reflect the actual properties of the data. Regression models may be formulated either with an assumption of homoskedasticity, or with an assumption of heteroskedasticity, in some specified form. We want to formulate a regression model that fits with the actual properties of the data, and thus reflects a reasonable specification of the behaviour of data coming from the observed process. Thus, if the variance of the deviation of the response from its expectation (the error term) is fixed (i.e., is homoskedastic) then we want a model that reflects this. And if the variance of the deviation of the response from its expectation (the error term) depends on the explanatory variable (i.e., is heteroskedastic) then we want a model that reflects this. If we mis-specify the model (e.g., by using a homoskedastic model for heteroskedastic data) then this means that we will mis-specify the variance of the error term. The result is that our estimate of the regression function will under-penalise some errors and over-penalise other errors, and will tend to perform more poorly than if we specify the model correctly.
What is an intuitive explanation of why we want homoskedasticity in a regression? Why do we want homoskedasticity in regression? It's not that we want homoskedasticity or heteroskedasticity in the regression; what we want is for the model to reflect the actual properties of the da
39,268
What is an intuitive explanation of why we want homoskedasticity in a regression?
In addition to the other excellent answers: Can someone explain intuitively why this is necessary? (An applied example would be great!) Constant variance isn't necessary, but when it holds modeling and analysis is simpler. Part of this must be historical: analysis when variance is not constant is more complicated and requires more computation! So methods (transformations) were developed to get to a situation where constant variance holds and the simpler/faster methods could be used. Today there are more alternative methods, and fast computation isn't as important as it was. But simplicity is still of value! Part is technical/mathematical. Models with nonconstant variance do not have exact ancillaries (see here). So only approximate inference is possible. Nonconstant variance in the two-groups problem is the famous Behrens-Fisher problem. But it is even deeper than that. Let us look at the simplest example, comparing the means of two groups with a (some variant of) t-test. The null hypothesis is that the groups are equal. Say this is a randomized experiment with a treatment and control group. If group sizes are reasonable, randomization should make the groups equal (before treatment). The constant variance assumption says that the treatment (if it works at all) only influences the mean, not the variance. But how could it influence the variance? If the treatment really works equally on all members of the treatment group, it should have more or less the same effect for all; the group is just shifted. So unequal variance could mean that the treatment has a different effect for some members of the treatment group than others. Say, if it has some effect for half the group and a much stronger effect for the other half, the variance will increase together with the mean! So the constant variance assumption is really an assumption about homogeneity of individual treatment effects. When this does not hold, one should expect the analysis to get more convoluted. See here. Then, with unequal variances, it could also be interesting to ask about reasons for it, specifically if the treatment could have anything to do with it. If so, this post could be of interest. Question 2: I can never remember whether it's hetero- or homo- that is ideal. Can someone explain the logic of which one is ideal? Neither one is ideal; you must model the situation you have! But if this is a question about remembering the meaning of those two funny words, just prepend them to sex and you will remember. Question 3: Heteroskedasticity means that x is correlated with the errors. Can someone explain why this is bad? It means that the conditional distribution of the errors given $x$ varies with $x$. That isn't bad, it just makes life complicated. But it might just make life interesting, it might be a signal of something interesting going on.
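A small, hedged R sketch of the point that a heterogeneous treatment effect shows up as unequal variances; the effect sizes (0 for half the treated group, 3 for the other half) are made up for illustration:
set.seed(1)
n <- 1000
control   <- rnorm(n)
treated   <- rnorm(n) + rep(c(0, 3), each = n / 2)   # effect differs across treated subjects

c(var_control = var(control), var_treated = var(treated))
# The treated group's variance exceeds 1 precisely because the effect is not homogeneous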
What is an intuitive explanation of why we want homoskedasticity in a regression?
In addition to the other excellent answers: Can someone explain intuitively why this is necessary? (An applied example would be great!) Constant variance isn't necessary, but when it holds modeling
What is an intuitive explanation of why we want homoskedasticity in a regression? In addition to the other excellent answers: Can someone explain intuitively why this is necessary? (An applied example would be great!) Constant variance isn't necessary, but when it holds modeling and analysis is simpler. Part of this must be historical, analysis when variance is not constant is more complicated, requires more computation! So one developed methods (transformations) to get to a situation where constant variance holds and the simpler/faster methods could be used. Today there are more alternative methods, and fast computation isn't as important as it was. But simplicity is still of value! Part is technical/mathematical. Models with nonconstant variance does not have exact ancillaries (see here.) So only approximate inference is possible. Nonconstant variance in the two-groups problem is the famous Behrens-Fisher problem. But it is even deeper than that. Let us look at the simplest example, comparing the means of two groups with a (some variant of) t-test. The null hypothesis is that the groups are equal. Say this is a randomized experiment with a treatment and control group. If group sizes are reasonable, randomization should make the groups equal (before treatment.) The constant variance assumption says that the treatment (if it works at all), only influences the mean, not the variance. But how could it influence the variance? If the treatment really works equally on all members of the treatment group, it should have more or less the same effect for all, the group is just shifted. So unequal variance could mean that the treatment has different effect for some members of the treatment group than others. Say, if it has some effect for half the group and a much stronger effect for the other half, the variance will increase together with the mean! So the constant variance assumption is really an assumption about homogeneity of individual treatment effects. When this does not hold one should expect that analysis get more convoluted. See here. Then, with unequal variances, it could also be interesting to ask about reasons for it, specifically if the treatment could have anything to do with it. If so, this post could be of interest. Question 2: I can never remember whether it's hetero- or homo- that is ideal. Can someone explain the logic of which one is ideal? No one is ideal, you must model the situation you have! But if this is a question about remembering the meaning of those two funny words, just prepend them to sex and you will remember. Question 3: Heteroskedasticity means that x is correlated with the errors. Can someone explain why this is bad? It means that the conditional distribution of the errors given $x$, varies with $x$. That isn't bad, it just makes life complicated. But it might just make life interesting, it might be a signal of something interesting going on.
What is an intuitive explanation of why we want homoskedasticity in a regression? In addition to the other excellent answers: Can someone explain intuitively why this is necessary? (An applied example would be great!) Constant variance isn't necessary, but when it holds modeling
39,269
What is an intuitive explanation of why we want homoskedasticity in a regression?
One of the assumptions of OLS regression is: the variance of the error term/residual is constant. This assumption is known as homoskedasticity. It says that as we move across observations, the variance of the error term does not change. If this condition is violated, the ordinary least squares estimators are still linear, unbiased and consistent; however, they are no longer efficient. Also, the usual standard error estimates become biased and unreliable in the presence of heteroskedasticity, which causes problems for hypothesis tests about the estimators. In summary, in the absence of homoskedasticity, we have linear and unbiased estimators but not BLUE (best linear unbiased estimators) [read up on the Gauss-Markov theorem]. I hope it is now clear that, ideally, we need homoskedasticity in our model. If the error term is correlated with y, with predicted y, or with any of the xi's, it indicates that our predictor(s) have not done the job of explaining the variation in 'y' correctly. Somehow, the model specification is not correct, or there are other issues. Hope it helps! Will try to write an intuitive example soon.
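As a hedged illustration of the standard-error problem described above, a minimal R sketch with simulated heteroskedastic data; the HC3 sandwich estimator is just one common choice, not something prescribed by the answer:
library(sandwich)
library(lmtest)

set.seed(1)
x <- runif(500)
y <- 1 + 2 * x + rnorm(500, sd = x)   # error variance increases with x
fit <- lm(y ~ x)

coeftest(fit)                                    # classical (homoskedasticity-based) standard errors
coeftest(fit, vcov = vcovHC(fit, type = "HC3"))  # heteroskedasticity-robust standard errors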
What is an intuitive explanation of why we want homoskedasticity in a regression?
One of the assumptions of OLS regression is: Variance of the error term/residual is constant. This assumption is known as homoskedasticity. This assumption ensures that with the change in observatio
What is an intuitive explanation of why we want homoskedasticity in a regression? One of the assumptions of OLS regression is: Variance of the error term/residual is constant. This assumption is known as homoskedasticity. This assumption ensures that with the change in observations, the variations in the error term should not change If this condition is violated, the ordinary least square estimators would still be linear, unbiased and consistent however, these estimators would no longer be efficient. Also, the estimates of standard error would become biased and unreliable in the presence of heteroskedasticity which leads to a problem in hypothesis testing about estimators. In summary, in absence of homoskedasticity, we have linear and unbiased estimators but not BLUE (best linear unbiased estimators) [Read Gauss Markov theorem] I hope now it’s clear that ideally, we need homoskedasticity in our model. If the error term is correlated with y or y predicted or any of the xi’s; it indicates that our predictor(s) have not done the job of explaining the variation in β€˜y’ correctly. Somehow, the model specification is not correct or some other issues are there. Hope it helps! Will try to write an intuitive example soon.
What is an intuitive explanation of why we want homoskedasticity in a regression? One of the assumptions of OLS regression is: Variance of the error term/residual is constant. This assumption is known as homoskedasticity. This assumption ensures that with the change in observatio
39,270
In statistics what does NA stand for?
In datasets, NA can mean: "Not Available": e.g. the sensor was down at the time of the measurement; "Not Applicable": e.g. when asking a bachelor the name of his wife; "No Answer": e.g. the respondent to a questionnaire skipped a question.
In statistics what does NA stand for?
In datasets, NA can mean: "Not Available": e.g. the sensor was down at the time of the measure, "Not Applicable": e.g. when asking a bachelor the name of his wife, "No Answer": e.g. the respondent to
In statistics what does NA stand for? In datasets, NA can mean: "Not Available": e.g. the sensor was down at the time of the measure, "Not Applicable": e.g. when asking a bachelor the name of his wife, "No Answer": e.g. the respondent to a questionnaire skipped a question.
In statistics what does NA stand for? In datasets, NA can mean: "Not Available": e.g. the sensor was down at the time of the measure, "Not Applicable": e.g. when asking a bachelor the name of his wife, "No Answer": e.g. the respondent to
39,271
Instrumental variable Tobit in R
You ask for two things in the question. First you ask for a way to consistently estimate this model in R. Second, you ask for the particular ways implemented in Stata's ivtobit (these ways are full-information maximum likelihood and the Newey, 1987 two-step estimator). Doing the second would be a nice service for the R community which I don't have time to do. It seems strange that nobody has done it, though. However, I can help you with the first. There are several ways to get a consistent estimator for the model. Here is an easy and intuitive way, based on the estimator of Rivers and Vuong (J of Econometrics, 1988). The steps in the estimator are:
1. Estimate the reduced form for the endogenous variable (i.e. regress x on z in your notation).
2. Collect the predicted values and the residuals from that regression.
3. Use Tobit to estimate the equation of interest, substituting predicted endogenous variables for endogenous variables and including the residuals collected in 2 (i.e. regress y on x-predicted and residuals using Tobit).
The coefficient on the predicted x is the estimator of the coefficient of interest. The usual standard errors are wrong --- that is, the standard errors Tobit spits out in step 3 are not the right standard errors. This is because the Tobit routine does not "know" that the variables you are handing it are fitted variables instead of exact variables. You could react to this by "doing it right:" looking up the formula for the variance matrix and then implementing it (booooring). Or you could just bootstrap. I favor the latter. It is easier, and if you are really going to "do it right," then you would not use this estimator anyway, since it is not efficient. Rather, you would actually implement the Newey (1987) estimator or maximum likelihood. If you have additional right-hand-side variables which are exogenous, just include them along the way in all the commands (both in the tobit and in the reduced form regression). If you have additional right-hand-side variables which are endogenous, then you treat them like the first endogenous RHS variable. Run reduced form regressions for each of the endogenous variables. Include predicted values and residuals in the tobit at the end for each endogenous variable. As with any proper correction for endogeneity, you have to have as many excluded exogenous variables as you have included endogenous variables for this all to work. The code below implements both the estimator and the bootstrapped standard errors for the simple example you gave. I also fixed up the example so that it should work turn-key:
require(censReg)
require(boot)

a <- 2      # structural parameter of interest
b <- 1      # strength of instrument
rho <- 0.5  # degree of endogeneity
N <- 1000

z <- rnorm(N)
res1 <- rnorm(N)
res2 <- res1*rho + sqrt(1-rho*rho)*rnorm(N)
x <- z*b + res1
ys <- x*a + res2
d <- (ys>0)  # dummy variable
y <- d*ys

inconsistent.tobit <- censReg(y~x)
summary(inconsistent.tobit)

reduced.form <- lm(x~z)
summary(reduced.form)

consistent.tobit <- censReg(y~fitted(reduced.form)+residuals(reduced.form))
summary(consistent.tobit)

# I'd like bootstrapped standard errors, please!
my.data <- data.frame(y,x,z)
tobit_2siv_coef <- function(data,indices){
  d <- data[indices,]
  reduced.form <- lm(x~z,data=d)
  consistent.tobit <- censReg(d[,"y"]~fitted(reduced.form)+residuals(reduced.form))
  return(summary(consistent.tobit)$estimate["fitted(reduced.form)",1])
}
boot.results <- boot(data=my.data,statistic=tobit_2siv_coef,R=100)
boot.results
Instrumental variable Tobit in R
You ask for two things in the question. First you ask for a way to consistently estimate this model in R. Second, you ask for the particular ways implemented in Stata's ivtobit (these ways are full-
Instrumental variable Tobit in R You ask for two things in the question. First you ask for a way to consistently estimate this model in R. Second, you ask for the particular ways implemented in Stata's ivtobit (these ways are full-information maximum likelihood and the Newey, 1987 two-step estimator). Doing the second would be a nice service for the R community which I don't have time to do. It seems strange that nobody has done it, though. However, I can help you with the first. There are several ways to get a consistent estimator for the model. Here is an easy and intuitive way, based on the estimator of Rivers and Vuong (J of Econometrics, 1988). The steps in the estimator are: Estimate the reduced form for the endogenous variable (i.e. regress x on z in your notation) Collect the predicted values and the residuals from that regression Use Tobit to estimate the equation of interest, substituting predicted endogenous variables for endogenous variables and including the residuals collected in 2 (i.e. regress y on x-predicted and residuals using Tobit) The coefficient on the predicted x is the estimator of the coefficient of interest. The usual standard errors are wrong --- that is, the standard errors Tobit spits out in step 3 are not the right standard errors. This is because the Tobit routine does not "know" that the variables you are handing it are fitted variables instead of exact variables. You could react to this by "doing it right:" looking up the formula for the variance matrix and then implementing it (booooring). Or you could just bootstrap. I favor the latter. It is easier, and if you are really going to "do it right," then you would not use this estimator anyway, since it is not efficient. Rather, you would actually implement the Newey (1987) estimator or maximum likelihood. If you have additional right-hand-side variables which are exogenous, just include them along the way in all the commands (both in the tobit and in the reduced form regression). If you have additional right-hand-side variables which are endogenous, then you treat them like the first endogenous RHS variable. Run reduced form regressions for each of the endogenous variables. Include predicted values and residuals in the tobit at the end for each endogenous variable. As with any proper correction for endogeneity, you have to have as many excluded exogenous variables as you have included endogenous variables for this all to work. The code below implements both the estimator and the bootstrapped standard errors for the simple example you gave. I also fixed up the example so that it should work turn-key: require(censReg) require(boot) a <- 2 # structural parameter of interest b <- 1 # strength of instrument rho <- 0.5 # degree of endogeneity N <- 1000 z <- rnorm(N) res1 <- rnorm(N) res2 <- res1*rho + sqrt(1-rho*rho)*rnorm(N) x <- z*b + res1 ys <- x*a + res2 d <- (ys>0) #dummy variable y <- d*ys inconsistent.tobit <- censReg(y~x) summary(inconsistent.tobit) reduced.form <- lm(x~z) summary(reduced.form) consistent.tobit <- censReg(y~fitted(reduced.form)+residuals(reduced.form)) summary(consistent.tobit) # I'd like bootstrapped standard errors, please! my.data <- data.frame(y,x,z) tobit_2siv_coef <- function(data,indices){ d <- data[indices,] reduced.form <- lm(x~z,data=d) consistent.tobit <- censReg(d[,"y"]~fitted(reduced.form)+residuals(reduced.form)) return(summary(consistent.tobit)$estimate["fitted(reduced.form)",1]) } boot.results <- boot(data=my.data,statistic=tobit_2siv_coef,R=100) boot.results
Instrumental variable Tobit in R You ask for two things in the question. First you ask for a way to consistently estimate this model in R. Second, you ask for the particular ways implemented in Stata's ivtobit (these ways are full-
39,272
Open source libraries in science [duplicate]
I don't consider this an R specific question. The real question is: can you trust other people's code? Or, taking the other perspective: do you think you can do better? (in the time you are willing/able to spend) Whether the software is open or closed source does not really matter. The trustworthiness of open source compared to commercial software is a topic of much debate. Arguments exist in favor of either side of the discussion. Nobody can refute that both can (and do) contain bugs of various kinds, some more insidious than others. TL;DR: in my opinion: yes, it's fine to use existing packages if they cover your needs. My motto, in general, is to try to implement stuff myself if several of the following are true: I know exactly what I want and how to do it (this is far from trivial for many practical models). It is worth the time required to implement and test my own code compared to what already exists. In other words: will I use it often or just once? What already exists does not offer all the functionality I need. I have a good reason not to trust existing implementations (such as unexpected behaviour when using it). Personally, I like, use and develop a lot of open source as I believe that to be important, especially in an academic context. In practice, many people only consider using a given algorithm if an implementation is available. Nobody benefits from a large set of implementations of the same thing. It is far better to have one efficient, thoroughly tested and verified implementation of a given method at your disposal. People like to believe that if I do it myself, I know it's done properly. Practice contradicts this on a regular basis. Not every package of R has been created in an academic environment: I don't know why you feel that packages created by academics are superior. Sure, the method itself may be more complex/novel, but all bets are off regarding implementation. Researchers are not necessarily the best programmers (in fact, given that that is usually not their specialty I would say the opposite is more likely). In practice, researchers regularly hack solutions together, sometimes without thorough testing. Naturally, this can lead to all kinds of bad results (most notably silent failures). Personally, I believe one of the reasons for this behaviour is the fact that software output is undervalued in research settings. Some researchers simply rush towards the next publication. Publication of results matters; the software used to get them doesn't, really. This leads to software that has not been tested properly when people reinvent the wheel for a single use. How can we use open source without risking failures in the outcome? You can't. This also applies to commercial software and software you write yourself with lots of care and love. If you happen to find a way to ensure software has no bugs or caveats, you should instantly make an appointment with Bill Gates (or better yet, tell me about it). The only solution is to critically evaluate results every step of the way. Never trust software unconditionally, no matter what software it is. Are there certain quality indicators for packages in general and for R? Usually, packages that get made publicly available, for example on CRAN, have been subjected to rigorous tests, especially the ones that get used often. People will think twice before making untested garbage publicly available when their credibility is at stake. The number of citations can be considered a quality indicator too, in addition to an active developer community.
Open source libraries in science [duplicate]
I don't consider this an R specific question. The real question is: can you trust other people's code? Or, taking the other perspective: do you think you can do better? (in the time you are willing/ab
Open source libraries in science [duplicate] I don't consider this an R specific question. The real question is: can you trust other people's code? Or, taking the other perspective: do you think you can do better? (in the time you are willing/able to spend) Whether the software is open or closed source, does not really matter. The trustworthiness of open source compared to commercial software is a topic of much debate. Arguments exist in favor of either side of the discussion. Nobody can refute that both can (and do) contain bugs of various nature, some more insidious than others. TL;DR: in my opinion: yes, it's fine to use existing packages if they cover your needs. My motto, in general, is to try to implement stuff myself if a variety of the following are true: I know exactly what I want and how to do it (this is far from trivial for many practical models). It is worth the time required to implement and test my own code compared to what already exists. In other words: will I use it often or just once? What already exists does not offer all the functionality I need. I have a good reason not to trust existing implementations (such as unexpected behaviour when using it). Personally, I like, use and develop a lot of open source as I believe that to be important, especially in an academic context. In practice, many people only consider using a given algorithm if an implementation is available. Nobody benefits from a large set of implementations of the same thing. It is far better to dispose of one efficient, thoroughly tested and verified implementation of a given method. People like to believe that if I do it myself, I know it's done properly. Practice contradicts this on regular basis. Not every package of R has been created in an academic environment I don't know why you feel that packages created by academics are superior. Sure, the method itself may be more complex/novel, but all bets are off regarding implementation. Researchers are not necessarily the best programmers (in fact, given that that is usually not their specialty I would say the opposite is more likely). In practice, researchers regularly hack solutions together, sometimes without thorough testing. Naturally, this can lead to all kinds of bad results (most notably silent fails). Personally, I believe one of the reasons for this behaviour is the fact that software output is undervalued in research settings. Some researchers simply rush towards the next publication. Publication of results matter, the software used to get them doesn't really. This leads to software that has not been tested properly when people reinvent the wheel for a single use. How can we use open source without risking failures in the outcome? You can't. This also applies to commercial software and software you write yourself with lots of care and love. If you happen to find a way to ensure software has no bugs or caveats, you should instantly make an appointment with Bill Gates (or better yet, tell me about it). The only solution is to critically evaluate results every step of the way. Never trust software unconditionally, no matter what software it is. Are there certain quality indicators for packages in general and for R? Usually, packages that get made publicly available, for example on CRAN, have been subjected to rigorous tests, especially the ones that get used often. People will think twice before making untested garbage publicly available when their credibility is at stake. 
The number of citations can be considered a quality indicator too in addition to an active developing community.
Open source libraries in science [duplicate] I don't consider this an R specific question. The real question is: can you trust other people's code? Or, taking the other perspective: do you think you can do better? (in the time you are willing/ab
39,273
Is there a situation under which the distribution of p-values is skewed towards 1?
It is also possible to have the effect in practice, when the null hypothesis is true but not all the assumptions of the test are met. For example, the classical (non-Welch) t-test assumes equal variance in both groups. When both groups are equally sized a violation is usually not that bad; otherwise the null distribution gets skewed. If the smaller group has a higher variance than the larger one, the null distribution is skewed towards 0, and if it has a smaller variance, it is skewed towards 1. Some R code for experimentation:
p.vals <- vector("numeric", 1e5)
for (i in 1:1e5) {
  x <- rnorm(5, 0, 1)
  y <- rnorm(50, 0, 10)
  p.vals[i] <- t.test(x, y, var.equal = TRUE)$p.value
}
hist(p.vals)
The example shown is the case where the larger group has the higher variance. Note that skewing of the null distribution towards 1 indicates the test is too conservative and so results in more Type II errors, while skewing towards 0 gives too many false positives (Type I errors).
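A hedged follow-up sketch (not part of the original answer): rerunning the same simulation with the Welch test, which does not assume equal variances, should give an approximately uniform p-value distribution.
p.vals.welch <- vector("numeric", 1e5)
for (i in 1:1e5) {
  x <- rnorm(5, 0, 1)
  y <- rnorm(50, 0, 10)
  p.vals.welch[i] <- t.test(x, y, var.equal = FALSE)$p.value
}
hist(p.vals.welch)  # close to uniform, unlike the classical equal-variance test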
Is there a situation under which the distribution of p-values is skewed towards 1?
It is also possible to have the effect in practice, when the null hypothesis is true but not all the assumption for the test are given. For example, the classical (non Welch) t-Test assumes equal vari
Is there a situation under which the distribution of p-values is skewed towards 1? It is also possible to have the effect in practice, when the null hypothesis is true but not all the assumption for the test are given. For example, the classical (non Welch) t-Test assumes equal variance in both groups. In the case that both groups are equally sized a violation is usually not that bad, otherwise the null distribution gets skewed. If the smaller group has a higher variance than the larger one the null distribution is skewed towards 0 and if it has a smaller variance it is skewed towards 1. Some R Code for experimentation: p.vals <- vector("numeric", 1e5) for (i in 1:1e5) { x <- rnorm(5, 0, 1) y <- rnorm(50, 0, 10) p.vals[i] <- t.test(x,y, var.equal = TRUE)$p.value } hist(p.vals) The example shown is the case where the larger group has higher variance. Note that a skewing of the null distribution towards 1 indicated the test is too conservative so results in more Type II Errors and skewing towards 0 gives too many false positives (Type I error).
Is there a situation under which the distribution of p-values is skewed towards 1? It is also possible to have the effect in practice, when the null hypothesis is true but not all the assumption for the test are given. For example, the classical (non Welch) t-Test assumes equal vari
39,274
Is there a situation under which the distribution of p-values is skewed towards 1?
That can happen in a one-sided test when your "true" parameter is inside the region of the null hypothesis but not on the boundary. Consider the following example in Stata where the "true" parameter (in this case the mean) is 1:
clear all
program define sim, rclass
    drop _all
    set obs 100
    gen x = rnormal(1,1)
    ttest x = 0.75
    return scalar p1 = r(p_l)
    ttest x = 1
    return scalar p2 = r(p_l)
end
simulate p1=r(p1) p2=r(p2) , reps(20000) : sim
simpplot p1 p2, scheme(s2color) ylabel(,angle(horizontal)) ///
    legend(order( 2 "H0: {&mu} {&ge} .75" 3 "H0: {&mu} {&ge} 1"))
I like this representation of the $p$-values. It shows on the y-axis the difference between the empirical estimate of the Cumulative Distribution Function (CDF) and the theoretical (continuous standard uniform) distribution. On the x-axis is the nominal $p$-value. The logic behind this graph is that for $p$-values in a simulation study in which the null hypothesis is true, the empirical CDF is an empirical estimate of the $p$-value. The empirical CDF gives, for each nominal $p$-value, an estimate of the probability of drawing a sample which deviates at least as much from the null hypothesis as the current sample (i.e. has a nominal $p$-value less than or equal to the current nominal $p$-value) if the null hypothesis is true. So negative values on the y-axis mean that the empirical estimates of the $p$-values are less than the nominal $p$-values, and positive values on the y-axis say that the empirical estimates of the $p$-values are larger than the nominal $p$-values. So the blue points correspond to a cumulative distribution function which bulges below the diagonal line that one would expect for a continuous standard uniform distribution. The corresponding histogram is shown below:
Is there a situation under which the distribution of p-values is skewed towards 1?
That can happen in a one-sided test when your "true" parameter is inside the region of the null hypothesis but not on the boundary. Consider the following example in Stata where the "true" parameter (
Is there a situation under which the distribution of p-values is skewed towards 1? That can happen in a one-sided test when your "true" parameter is inside the region of the null hypothesis but not on the boundary. Consider the following example in Stata where the "true" parameter (in this case mean) is 1: clear all program define sim, rclass drop _all set obs 100 gen x = rnormal(1,1) ttest x = 0.75 return scalar p1 = r(p_l) ttest x = 1 return scalar p2 = r(p_l) end simulate p1=r(p1) p2=r(p2) , reps(20000) : sim simpplot p1 p2, scheme(s2color) ylabel(,angle(horizontal)) /// legend(order( 2 "H0: {&mu} {&ge} .75" 3 "H0: {&mu} {&ge} 1")) I like this representation of the $p$-values. It shows on the y-axis the difference between the empirical estimate of the Cumulative Distribution Function (CDF) and the theoretical (continuous standard uniform) distribution. On the x-axis is the nominal $p$-value. The logic behind this graph is that for $p$-values in a simulation study in which the null hypothesis is true, the empirical CDF is an empirical estimate of the $p$-value. The empirical CDF gives for each nominal $p$-value an estimate of the probability of drawing a sample which deviates at least as much from the null hypothesis as the current sample (i.e. has a nominal $p$-value less than or equal to the current nominal $p$-value) if the null hypothesis is true. So negative values on the y-axis means that the emprical estimates of the $p$-value are less than the nominal $p$-values and positive values on the y-axis say that the empirical estimates of the $p$-values are larger than the nominal $p$-values. So the blue points correspond to a cumulative density function which bluges below the diagonal line which one would expect for a continuous standard uniform distribution. The corresponding histogram is shown below:
Is there a situation under which the distribution of p-values is skewed towards 1? That can happen in a one-sided test when your "true" parameter is inside the region of the null hypothesis but not on the boundary. Consider the following example in Stata where the "true" parameter (
39,275
Free econometrics textbooks
Hyndman and Athanasopoulos. Forecasting: principles and practice, OTexts is an introductory textbook on forecasting covering prediction using regression models and forecasting with univariate time series models such as ETS and ARIMA.
Free econometrics textbooks
Hyndman and Athanasopoulos. Forecasting: principles and practice, OTexts is an introductory textbook on forecasting covering prediction using regression models and forecasting with univariate time ser
Free econometrics textbooks Hyndman and Athanasopoulos. Forecasting: principles and practice, OTexts is an introductory textbook on forecasting covering prediction using regression models and forecasting with univariate time series models such as ETS and ARIMA.
Free econometrics textbooks Hyndman and Athanasopoulos. Forecasting: principles and practice, OTexts is an introductory textbook on forecasting covering prediction using regression models and forecasting with univariate time ser
39,276
Free econometrics textbooks
Graeme, I think Hansen is the best reference in that category; it is clearly and rigorously written. Imbens and Wooldridge produced a nice NBER summer course, see here (with video). It assumes some knowledge of basic econometrics, which you probably have already. They more or less repeated it at the UK Centre for Microdata Methods and Practice. The latter lectures are available at for(k=1;k<=18;k++) { http://www.cemmap.ac.uk/resources/imbens_wooldridge/lecture_k.pdf } They both are good teachers, this is a good set of materials to go through. I have a book on econometric analysis using Stata in RePEc in open access, but it is in Russian :).
Free econometrics textbooks
Graeme, I think Hansen is the best reference in that category; it is clearly and rigorously written. Imbens and Wooldridge produced a nice NBER summer course, see here (with video). It assumes some kn
Free econometrics textbooks Graeme, I think Hansen is the best reference in that category; it is clearly and rigorously written. Imbens and Wooldridge produced a nice NBER summer course, see here (with video). It assumes some knowledge of basic econometrics, which you probably have already. They more or less repeated it at the UK Centre for Microdata Methods and Practice. The latter lectures are available at for(k=1;k<=18;k++) { http://www.cemmap.ac.uk/resources/imbens_wooldridge/lecture_k.pdf } They both are good teachers, this is a good set of materials to go through. I have a book on econometric analysis using Stata in RePEc in open access, but it is in Russian :).
Free econometrics textbooks Graeme, I think Hansen is the best reference in that category; it is clearly and rigorously written. Imbens and Wooldridge produced a nice NBER summer course, see here (with video). It assumes some kn
39,277
Free econometrics textbooks
Econometrics by Michael Creel is a project to develop a document for teaching graduate econometrics that is "open source", specifically, licensed as GNU GPL. About the book: This document integrates lecture notes for a one year graduate level course with computer programs that illustrate and apply the methods that are studied. The immediate availability of executable (and modifiable) example programs (written using the GNU/Octave language) when using the PDF version of the document is a distinguishing feature of these notes. If printed, the document is a somewhat terse approximation to a textbook. From the Author's webpage (accessed 21 June 2013): You may be wondering why the notes are available in this form. It's simply because I use a lot of free software, and this is a means of contributing back to the community.
Free econometrics textbooks
Econometrics by Michael Creel is a project to develop a document for teaching graduate econometrics that is "open source", specifically, licensed as GNU GPL. About the book: This document integrates
Free econometrics textbooks Econometrics by Michael Creel is a project to develop a document for teaching graduate econometrics that is "open source", specifically, licensed as GNU GPL. About the book: This document integrates lecture notes for a one year graduate level course with computer programs that illustrate and apply the methods that are studied. The immediate availability of executable (and modifiable) example programs (written using the GNU/Octave language) when using the PDF version of the document is a distinguishing feature of these notes. If printed, the document is a somewhat terse approximation to a textbook. From the Author's webpage (accessed 21 June 2013): You may be wondering why the notes are available in this form. It's simply because I use a lot of free software, and this is a means of contributing back to the community.
Free econometrics textbooks Econometrics by Michael Creel is a project to develop a document for teaching graduate econometrics that is "open source", specifically, licensed as GNU GPL. About the book: This document integrates
39,278
Free econometrics textbooks
It is worth throwing into the mix that Greene (5th edition) is free online for self-study. Wonderful resource: https://spu.fem.uniag.sk/cvicenia/ksov/obtulovic/Mana%C5%BE.%20%C5%A1tatistika%20a%20ekonometria/EconometricsGREENE.pdf
Free econometrics textbooks
It is worth throwing into the mix that Greene (5th edition) is free online for self-study. Wonderful resource: https://spu.fem.uniag.sk/cvicenia/ksov/obtulovic/Mana%C5%BE.%20%C5%A1tatistika%20a%20eko
Free econometrics textbooks It is worth throwing into the mix that Greene (5th edition) is free online for self-study. Wonderful resource: https://spu.fem.uniag.sk/cvicenia/ksov/obtulovic/Mana%C5%BE.%20%C5%A1tatistika%20a%20ekonometria/EconometricsGREENE.pdf
Free econometrics textbooks It is worth throwing into the mix that Greene (5th edition) is free online for self-study. Wonderful resource: https://spu.fem.uniag.sk/cvicenia/ksov/obtulovic/Mana%C5%BE.%20%C5%A1tatistika%20a%20eko
39,279
Differences and relation between retrospective power analysis and a posteriori power analysis?
Assume for simplicity that your model is defined by only one parameter $\theta$. The power is the function $\theta \mapsto \Pr(\text{reject } H_0 \mid \theta)$, which depends on the sample size $n$. In Retrospective Power Analysis, you simply plug in your estimate $\hat\theta$: you look at the value $\Pr(\text{reject } H_0 \mid \hat\theta)$ of the power function at $\theta=\hat\theta$, with the same sample size $n$. It answers the question: "what would be the probability that I would obtain significant results if $\theta$ were $\hat\theta$?" As said in your text this question is rather useless because there is a one-to-one correspondence between the $p$-value and the retrospective power $\Pr(\text{reject } H_0 \mid \hat\theta)$. For instance consider a binomial experiment with proportion parameter $\theta \in [0,1]$ and the hypothesis $H_0\colon\{\theta=0\}$. Obviously the power increases when $\theta$ increases. And obviously the $p$-value decreases when $\hat\theta$ increases. Consequently the lower the $p$-value, the higher the RP (retrospective power). A couple of years ago I wrote some R code for the case of Fisher tests in classical Gaussian linear models. It is here. There's code using simulations for the one-way ANOVA example and code for the general model providing an exact calculation of RP as a function of the $p$-value and the design parameters. I called my function PAP() because "Puissance a posteriori" is the French translation of RP and PAP is also an acronym for "Power Approach Paradox". The cause of the decreasing correspondence between $p$ and RP for Gaussian linear models is intuitively the same as for the binomial experiment: if $\theta$ is "far from $H_0$" then the power at $\theta$ is high, and if $\hat\theta$ is "far from $H_0$" then the $p$-value is small. Theoretically this is a consequence of the fact that the noncentral Fisher distributions are stochastically increasing in the noncentrality parameter (see this discussion about noncentral $F$ distributions in Gaussian linear models). In fact here the noncentrality parameter plays the role of $\theta$ (is it the so-called effect size? I don't know). I claimed "RP is rather useless because of the correspondence with $p$" because this decreasing correspondence with $p$ means that having a high RP is equivalent to having a small $p$, and vice-versa. But the more serious problem is the misinterpretation of RP; for instance, I have found such claims in the literature: (1) $H_0$ is not rejected and RP is high, so the decision of the test is significant; (2) $H_0$ is not rejected, which is not surprising because RP is low; (3) $H_0$ is rejected (so the decision is significant) and RP is high, so the decision is even more significant. Respectively replace "RP is high" and "RP is low" with "$p$ is low" and "$p$ is high" in the three claims above and you will see that they are either useless, wrong, or puzzling. From a more "philosophical" perspective, RP is useless because why would we care about the probability that rejection of $H_0$ occurs once the experiment is done? See also here a funny but clever retrospective power online calculator ;-) The paragraph A Posteriori Power Analysis says nothing about the choice of $\theta$, but it emphasizes the main difference with the retrospective power: here the goal is to use the information from your first experiment to evaluate the power of a future experiment, focusing on the sample size. 
A sensible approach to evaluate this power is to consider your estimate $\hat\theta$ as a "guess" of the true $\theta$ and also to consider the uncertainty about this estimate. There is a natural way to do so in Bayesian statistics, namely the predictive power, which consists of averaging the values of $\Pr(\text{reject } H_0 \mid \theta)$ over various values of $\theta$, according to some distribution (the posterior distribution in Bayesian terms) representing the knowledge and the uncertainty about $\theta$ resulting from your first experiment. In the frequentist framework you could consider the values of the power evaluated at the bounds of your confidence interval about $\theta$.
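A minimal R sketch of the one-to-one, decreasing relationship between the $p$-value and retrospective power; the two-sided one-sample z-test setting and the 5% level are assumptions made for illustration, not part of the answer:
retro_power <- function(p, alpha = 0.05) {
  z_obs <- qnorm(1 - p / 2)              # observed z-statistic implied by the p-value
  z_a   <- qnorm(1 - alpha / 2)
  # power of the test if the true standardized effect equalled the observed one
  pnorm(z_obs - z_a) + pnorm(-z_obs - z_a)
}
retro_power(c(0.01, 0.05, 0.20, 0.80))   # decreases as p increases; about 0.5 at p = alpha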
Differences and relation between retrospective power analysis and a posteriori power analysis?
Assume for simplicity that your model is defined by only one parameter $\theta$. The power is the function $\theta \mapsto \Pr(\text{reject } H_0 \mid \theta)$, which depends on the sample size $n$. I
Differences and relation between retrospective power analysis and a posteriori power analysis? Assume for simplicity that your model is defined by only one parameter $\theta$. The power is the function $\theta \mapsto \Pr(\text{reject } H_0 \mid \theta)$, which depends on the sample size $n$. In Retrospective Power Analysis, you only plug in your estimate $\theta$: you look at the value $\Pr(\text{reject } H_0 \mid \hat\theta)$ at the power function at $\theta=\hat\theta$, with the same sample size $n$. It answers the question: "what would be the probability that I would obtain significant results if $\theta$ were $\hat\theta$" ? As said in your text this question is rather useless because there is a one-to-one correspondence between the $p$-value and the retrospective power $\Pr(\text{reject } H_0 \mid \hat\theta)$. For instance consider a binomial experiment with proportion parameter $\theta \in [0,1]$ and the hypothesis $H_0\colon\{\theta=0\}$. Obviously the power increases when $\theta$ increases. And obviously the $p$-value decreases when $\hat\theta$ increases. Consequently the lower $p$-value, the higher RP (retrospective power). A couple of years ago I wrote a R code for the case of Fisher tests in classical Gaussian linear models. It is here. There's a code using simulations for the one-way ANOVA example and a code for the general model providing an exact calculation of RP in function of the $p$-value and the design parameters. I called my function PAP() because "Puissance a posteriori" is the French translation of RP and PAP is also an acronym for "Power Approach Paradox". The cause of the decreasing correspondence between $p$ and RP for Gaussian linear models is intuitively the same as for the binomial experiment: if $\theta$ is "far from $H_0$" then the power at $\theta$ is high, and if $\hat\theta$ is "far from $H_0$" then the $p$-value is small. Theoretically this is a consequence of the fact that the noncentral Fisher distributions are stochastically increasing in the noncentrality parameter (see this discussion about noncentral $F$ distributions in Gaussian linear models). In fact here the noncentrality parameter plays the role of $\theta$ (is it the so-called effect size ? I don't know). I claimed "RS is rather useless because of the correspondence with $p$" because this decreasing correspondence with $p$ means that having a high RP is equivalent to having a small $p$, and vice-versa. But the more serious problem is the misinterpretation of RP; for instance, I have found such claims in the literature: $H_0$ is not rejected and RP is high, so the decision of the test is significant. $H_0$ is not rejected, it is not surprising because RP is low. $H_0$ is rejected (so the decision is significant) and RP is high, so the decision is even more significant. Respectively replace "RP is high" and "RP is low" with "$p$ is low" and "$p$ is high" in the three claims above and you will see that they are either useless, wrong, or puzzling. From a more "philosophical" perspective, RP is useless because why would we mind about the probability that rejection of $H_0$ occurs once the experiment is done ? See also here a funny but clever retrospective power online calculator ;-) The paragraph A Posteriori Power Analysis says nothing about the choice of $\theta$, but it emphasizes the main difference with the retrospective power: here the goal is to use the information issued from your first experiment to evaluate the power of a future experiment, focusing on the sample size. 
A sensible approach to evaluate this power is to consider your estimate $\hat\theta$ as a "guess" of the true $\theta$ and also to consider the uncertainty about this estimate. There is a natural way to do so in Bayesian statistics, namely the predictive power, which consists to average the possible values of $\Pr(\text{reject } H_0 \mid \theta)$ for various values of $\theta$, according to some distribution (the posterior distribution in Bayesian terms) representing the knowledge and the uncertainty about $\theta$ resulting from your first experiment. In the frequentist framework you could consider the values of the power evaluated at the bounds of your confidence interval about $\theta$.
Differences and relation between retrospective power analysis and a posteriori power analysis? Assume for simplicity that your model is defined by only one parameter $\theta$. The power is the function $\theta \mapsto \Pr(\text{reject } H_0 \mid \theta)$, which depends on the sample size $n$. I
39,280
Differences and relation between retrospective power analysis and a posteriori power analysis?
When you do an a posteriori power analysis you have to plug in an effect size and a variance. If you plug in the observed effect size and variance, as you do in an a posteriori power analysis, you are assuming that the true effect size is the same as the observed.
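For instance, a minimal R sketch of that plug-in step for a two-sample t-test; the "observed" numbers below are invented for illustration:
obs_diff <- 0.4   # observed difference in means from a hypothetical first experiment
obs_sd   <- 1.2   # observed (pooled) standard deviation

# Treats the observed effect size and variance as if they were the truth
power.t.test(n = 30, delta = obs_diff, sd = obs_sd, sig.level = 0.05)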
Differences and relation between retrospective power analysis and a posteriori power analysis?
When you do an a posteriori power analysis you have to plug in an effect size and a variance. If you plug in the observed effect size and variance, as you do in an a posteriori power analysis, you are
Differences and relation between retrospective power analysis and a posteriori power analysis? When you do an a posteriori power analysis you have to plug in an effect size and a variance. If you plug in the observed effect size and variance, as you do in an a posteriori power analysis, you are assuming that the true effect size is the same as the observed.
Differences and relation between retrospective power analysis and a posteriori power analysis? When you do an a posteriori power analysis you have to plug in an effect size and a variance. If you plug in the observed effect size and variance, as you do in an a posteriori power analysis, you are
39,281
Mean of log of cdf
No, you'll have to do the integration numerically for each $c$ value. Let $\Phi(x)$ be the Gaussian CDF. You want to evaluate $$ I=\int_{-\infty}^{\infty} \Phi'(x-c) \ln \Phi(x)\, dx $$ Idea 1: this looks like a convolution; the Fourier transform of $\Phi'$ is easy enough, but that of $\ln \Phi$ is not; and even if you could take the transform of the second factor, inverting the resulting expression is unlikely to be feasible. Idea 2: integrate by parts; this works out to $$ \Phi(x-c) \ln \Phi(x) \Big\vert^{\infty}_{-\infty} - \int_{-\infty}^{\infty} \frac{ \Phi(x-c)}{\Phi(x)} \Phi'(x)\, dx $$ the first term is conveniently zero, but the second is no better off. Idea 3: expand the $\ln \Phi$ term. Define $U=1-\Phi$ so that: $$ I=-\int_{-\infty}^{\infty} U'(x-c) \ln [ 1-U(x)]\, dx $$ and then expand the logarithm as though $U$ were small. You end up with terms like $U'(x-c) U^n(x)$, which are still not easily integrable due to the shift in the argument. At this point, it seems that the simplifications that arise in $E[ \ln \Phi(X)]$ aren't panning out, so numerical integration is a feasible solution.
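A minimal R sketch of the numerical route for a given $c$, using pnorm(log.p = TRUE) to compute $\ln \Phi$ stably in the lower tail:
E_log_cdf <- function(c) {
  integrate(function(x) dnorm(x, mean = c) * pnorm(x, log.p = TRUE),
            lower = -Inf, upper = Inf)$value
}
sapply(c(-1, 0, 1, 2), E_log_cdf)   # one numerical integral per value of c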
Mean of log of cdf
No,you'll have to do the integration numerically for each $c$ value. Let $\Phi(x)$ be the Gaussian CDF function. You want to evaluate $$ I=\int \Phi'(x-c) \ln \Phi(x) dx $$ Idea 1: this looks like a
Mean of log of cdf No,you'll have to do the integration numerically for each $c$ value. Let $\Phi(x)$ be the Gaussian CDF function. You want to evaluate $$ I=\int \Phi'(x-c) \ln \Phi(x) dx $$ Idea 1: this looks like a convolution; the Fourier transform of $\Phi'$ is easy enough, but that of $\ln \Phi$ is not; and even if you could take the transform of the second factor, inverting the resulting expression is unlikely to be feasible. Idea 2: do it by parts, this works out to $$ \Phi(x-c) \ln \Phi(x) \vert^{\infty}_{\infty} - \int_{\infty}^{\infty} \frac{ \Phi(x-c)}{\Phi(x)} \Phi'(x) dx $$ the first term is conveneiently zero, but the second is no better off. Idea 3: expand the $\ln \Phi$ term. Define $U=1-\Phi$ so that : $$ I=-\int U'(x-c) \ln [ 1-U(x)] dx $$ and then expand the logarithm as though $U$ were small. You end up with terms like $U'(x-c) U^n(x)$, which are still not easily integrable due to the shift in the argument. At this point, it seems that the simplifications that arise in $E[ \ln \Phi(x)]$ aren't panning out, so, numerical integration is a feasible solution.
Mean of log of cdf No,you'll have to do the integration numerically for each $c$ value. Let $\Phi(x)$ be the Gaussian CDF function. You want to evaluate $$ I=\int \Phi'(x-c) \ln \Phi(x) dx $$ Idea 1: this looks like a
39,282
Mean of log of cdf
In short there is no "easy" way to do this, and Dave has tried a few sensible approaches. One approach perhaps worth a try is to Taylor expand since your function is smooth and analytic, and we have a simple form for the moments of the standard normal: Notice that (using $\phi'(x) = -x\phi(x)$)
$L^* = \ln(\Phi(X))$
$L = \frac{\phi(x)}{\Phi(x)}$
$L' = -\frac{\phi(x)^2}{\Phi(x)^2} - x \frac{\phi(x)}{\Phi(x)} = -L(x)^2 - xL(x)$
$L'' = -2LL'-L-xL' = 2L^3 + 3xL^2 +(x^2-1) L$
$L''' = 6L^2L' + 6xLL'+3L^2+2xL+(x^2-1)L' = -6L^4-6xL^3-6xL^3-6x^2L^2+3L^2 + 2xL-(x^2-1)(L^2+xL)$
$ =-6L^4-12xL^3+(4-7x^2)L^2 + (3x-x^3)L$
I had hoped we could spot a pattern there involving Hermite polynomials or some-such, but I can't see one... however, because of the recurrence it is basic calculus and algebra that a computer could chunk through to arbitrary depth. Armed with these we can now notice that we can define $X = Z + c$ so that $X \sim N(c,1)$ and then Taylor expand $L^*$ around the point $c$: $E[\ln(\Phi(X))] = E[\ln(\Phi(c)) + L(c) Z + L'(c)\frac{Z^2}{2} + L''(c)\frac{Z^3}{3!}+\cdots]$ Now remember that $E[Z^p] = (p-1)!!$ for even $p$ and $0$ otherwise, so we only keep every other term: $E[\ln(\Phi(X))] = \ln(\Phi(c)) + L'(c)\frac{1}{2} + L'''(c)\frac{3!!}{4!}+ L^{(5)}(c)\frac{5!!}{6!} \cdots$ $E[\ln(\Phi(X))] = \ln(\Phi(c)) + L'(c)\frac{1}{2} + L'''(c)\frac{1}{8}+ L^{(5)}(c)\frac{1}{48} + L^{(7)}(c) \frac{1}{384} + \cdots + L^{(2k-1)}(c) \frac{1}{2^k k!} + \cdots$ (where we use that $(2k-1)!! = \frac{(2k)!}{2^k k!}$ ) So all that remains is to plug in the derivatives above and work out the total. If anyone spots a pattern then this should work exactly. Incidentally, if you hadn't asked about $\ln(\Phi(x))$ but simply $\Phi(x)$ then we have a much simpler expression. Carrying on from the expression above but using $\Phi$ in place of $L^*$ we have: $E[\Phi(X)]=\Phi(c) + \phi(c) \sum_{k=1}^\infty \frac{H_{2k-1}(c)}{2^k k!}$ where $H_n$ is the $n$th Hermite polynomial, and we have used the fact that $\frac{d^n \phi(x)}{dx^n}=H_n(x)\phi(x)$ Notice that inside the sum we have a polynomial over an exponential, so this is going to converge nicely. Notice also that since $H_{2k-1}(0) = 0$ for all k then in the case where $c=0$ this collapses back to 0.5 as expected.
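A rough R check of the first few terms against direct numerical integration; truncating after the $L'''$ term and taking $c = 1$ are arbitrary choices for illustration:
L  <- function(x) dnorm(x) / pnorm(x)           # L(x) = phi(x)/Phi(x)
L1 <- function(x) -L(x)^2 - x * L(x)            # L'
L3 <- function(x) {                             # L''' from the recurrence above
  l <- L(x)
  -6*l^4 - 12*x*l^3 + (4 - 7*x^2)*l^2 + (3*x - x^3)*l
}
c0 <- 1
series <- log(pnorm(c0)) + L1(c0)/2 + L3(c0)/8  # terms up to L'''(c)/8
exact  <- integrate(function(x) dnorm(x, mean = c0) * pnorm(x, log.p = TRUE),
                    -Inf, Inf)$value
c(series = series, exact = exact)               # the truncated series should land close to the integral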
Mean of log of cdf
In short there is no "easy" way to do this, and Dave has tried a few sensible approaches. One approach perhaps worth a try is to taylor expand since your function is smooth and analytic, and we have
Mean of log of cdf In short there is no "easy" way to do this, and Dave has tried a few sensible approaches. One approach perhaps worth a try is to taylor expand since your function is smooth and analytic, and we have a simple form for the moments of the standard normal: Notice that $L^* = \ln(\Phi(X))$ $L = \frac{\phi(x)}{\Phi(x)}$ $L' = \frac{\phi(x)^2}{\Phi(x)^2} - x \frac{\phi(x)}{\Phi(x)} = L(x)^2 - xL(x)$ $L'' = 2LL'-L-xL' = 2L^3 - 3xL^2 +(x^2-1) L$ $L''' = 6L^2L' - 6xLL'-3L^2+2xL+(x^2-1)L' = 6L^4-6xL^3-6xL^3+6x^2L^2-3L^2 + 2xL+(x^2-1)(L^2-xL)$ $ =6L^4-12xL^3+(7x^2-4)L^2 + (3x-x^3)L$ I had hoped we could spot a pattern there incolving Hermite polynomials or some-such, but I can't see one... however, because of the recurrence it is basic calculus and algebra that a computer could chunk through to arbitrary depth. Armed with these we can now notice that we can define $X = Z + c$ so that $X \sim N(c,1)$ and then Taylor expand $L^*$ around the point c $E[\ln(\Phi(X))] = E[\ln(\Phi(c)) + L(c) Z + L'(c)\frac{Z^2}{2} + L''(c)\frac{Z^3}{3!}+\cdots]$ Now remember that $E[Z^p] = (p-1)!!$ for even $p$ and $0$ otherwise, so we only keep every other term: $E[\ln(\Phi(X))] = \ln(\Phi(c)) + L'(c)\frac{1}{2} + L'''(c)\frac{3!!}{4!}+ L^{(5)}(c)\frac{5!!}{6!} \cdots$ $E[\ln(\Phi(X))] = \ln(\Phi(c)) + L'(c)\frac{1}{2} + L'''(c)\frac{1}{8}+ L^{(5)}(c)\frac{1}{48} + L^{(7)}(c) \frac{1}{384} + \cdots + L^{(2k-1)}(c) \frac{1}{2^k k!} + \cdots$ (where we use that $(2k-1)!! = \frac{(2k)!}{2^k k!}$ ) So all that remains is to plug in the derivatives above and work out the total. If anyone spots a pattern then this should work exactly. Incidentally, if you hadn't asked about $\ln(\Phi(x))$ but simply $\Phi(x)$ then we have a much simpler expression. Carrying on from the expression above but using $\Phi$ in place of $L^*$ we have: $E[\Phi(X)]=\Phi(c) + \phi(c) \sum_{k=1}^\infty \frac{H_{2k-1}(c)}{2^k k!}$ Where $H_n$ is the nth Hermite polynomial, and we have used the fact that $\frac{d^n \phi(x)}{dx^n}=H_n(x)\phi(x)$ Notice that inside the sum we have a polynomial over an exponential, so this is going to converge nicely. Notice also that since $H_{2k-1}(0) = 0$ for all k then in the case where $c=0$ this colapses back to 0.5 as expected.
Mean of log of cdf In short there is no "easy" way to do this, and Dave has tried a few sensible approaches. One approach perhaps worth a try is to taylor expand since your function is smooth and analytic, and we have
39,283
Fuzzy regression discontinuity design and exclusion restriction
I am not a fan of the Angrist and Pischke book, but they do have a flair for phrasing, and as they say, fuzzy RD is IV (Sec. 6.2). This is somewhat obscured by the fact that the instrument is essentially a nonlinear transformation (step function) of one of the included exogenous variables, which by virtue of the conditional exogeneity assumption, is a valid instrument. Assume that each subject is characterized by the tuple of random variables, $\{Y_{0i}, Y_{1i}, D_i, X_i\}$, where $Y_{0i}$ and $Y_{1i}$ are the potential outcomes under non-treatment and treatment respectively, $D_i$ is an indicator variable of whether treatment is administered (which governs which of the potential outcomes is observed for a subject), and $X_i$ is the so-called forcing variable which deterministically or stochastically determines treatment. Usually, the fuzzy RD [FRD] model is stated as the rather concise set of specifications $$ \begin{align} \lim_{x\downarrow x_0} \mathbb{E}(D_i\mid X_i = x) &\neq \lim_{x\uparrow x_0} \mathbb{E}(D_i\mid X_i = x)\\ \lim_{x\downarrow x_0} \mathbb{E}(Y_{0i}\mid X_i = x) &= \lim_{x\uparrow x_0} \mathbb{E}(Y_{0i}\mid X_i = x)\\ \end{align} $$ which are intuitively transparent, but are hard to work with. Potential outcomes framework We can use the familiar potential outcomes model to unpack these specifications, where, for the simplicity of exposition, we exclude all other exogenous variables, other than the forcing variable, $X_i$, which deterministically (in the case of RDD) or stochastically (in the case of FRD) determines the treatment assignment ($D_i=1$). The conditional mean of the outcome in terms of the observable variables is given by $$ \begin{align} \mathbb{E}(Y_i \mid X_i, D_i) &= \mathbb{E}(Y_{0i}\mid X_i, D_i) + D_i\left(\mathbb{E}(Y_{1i}\mid X_i, D_i)-\mathbb{E}(Y_{0i}\mid X_i, D_i)\right) \\ \end{align} $$ Here we make no parametric assumptions about the form of the conditional expectation functions. Note that all of these specifications are restricted to the locality of $x_0$, that is $X_i\in [x_0-\Delta_n, x_0+\Delta_n]$, where the indexing by the sample size is for pragmatic reasons (it becomes relevant when we define the estimator). Recall that in the sharp RD case, we can write $D_i=\mathbf{1}_{[X_i\geq x_0]}$, where $x_0$ is the point of discontinuity. In the FRD case, this relationship is no longer deterministic; instead, the conditional mean is modelled in terms of the discontinuity $$ \begin{align} \mathbb{E}(D_i\mid X_i) &= \mathbb{P}\left[D_i=1\mid X_i\right]\\ &=(1-\mathbf{1}_{[X_i\geq x_0]})\mathbb{P}\left[D_i=1\mid X_i< x_0\right] + \mathbf{1}_{[X_i\geq x_0]}\mathbb{P}\left[D_i=1\mid X_i\geq x_0\right] \end{align} $$ Note that since $X_i$ is exogenous in the system, so is the random variable $\mathbf{1}_{[X_i\geq x_0]}$ -- it acts as the excluded exogenous variable in the specification of the conditional mean of the endogenous variable $D_i$. Estimation This is then a valid just-identified IV model, with one endogenous variable $D_i$, and one excluded exogenous variable $\mathbf{1}_{[X_i\geq x_0]}$. A direct and general estimator with no further parametric assumptions is the nonparametric Wald estimator.
$$ \dfrac{\widehat{\mathbb{E}}\left(Y_i \mid x_0 \leq X_i\leq x_0+ \Delta_n \right)-\widehat{\mathbb{E}}\left(Y_i \mid x_0- \Delta_n \leq X_i< x_0\right)}{\widehat{\mathbb{P}}\left[D_i=1\mid x_0 \leq X_i\leq x_0+ \Delta_n \right]-\widehat{\mathbb{P}}\left[D_i=1\mid x_0- \Delta_n \leq X_i< x_0\right]} $$ Typically local smoothers, like the local linear smoother, are used to estimate the conditional mean functions. ATE interpretation Note that in order to interpret the given estimator as the average treatment effect [ATE] in the locality of $x_0$, we have used the implausible but routine conditional (on $X_i$) independence of $D_i$ and $Y_{1i}-Y_{0i}$. This allows us to remove the conditioning on $D_i$ in the conditional mean function of the outcome in a mathematically convenient way. For more details, see Hahn, Todd & van der Klaauw (2001), which is an excellent and readable reference for RD models. They also provide interpretations of the parameter being estimated under weaker assumptions.
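For concreteness, a minimal sketch of this Wald estimator using plain window means (the function and argument names are illustrative; in practice you would replace the means with local linear smoothers as noted above):
import numpy as np

def wald_frd(y, d, x, x0, delta):
    # observations within a +/- delta window around the cutoff x0
    above = (x >= x0) & (x <= x0 + delta)
    below = (x < x0) & (x >= x0 - delta)
    # jump in the mean outcome divided by jump in the treatment probability
    num = y[above].mean() - y[below].mean()
    den = d[above].mean() - d[below].mean()
    return num / den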
Fuzzy regression discontinuity design and exclusion restriction
I am not a fan of the Angrist and Pischke book, but they do have a flair for phrasing, and as they say, fuzzy RD is IV (Sec. 6.2). This fact is obscured by the fact that the instrument is essentially
Fuzzy regression discontinuity design and exclusion restriction I am not a fan of the Angrist and Pischke book, but they do have a flair for phrasing, and as they say, fuzzy RD is IV (Sec. 6.2). This fact is obscured by the fact that the instrument is essentially a nonlinear transformation (step function) of one of the included exogenous variables, which by virtue of the conditional exogeneity assumption, is a valid instrument. Assume that each subject is characterized by the tuple of random variables, $\{Y_{0i}, Y_{1i}, D_i, X_i\}$, where $Y_{0i}$ and $Y_{1i}$ are the potential outcomes under non-treatment and treatment respectively, $D_i$ is an indicator variable of whether treatment is administered (which governs which of the potential outcomes is observed for a subject), and $X_i$ is the so-called forcing variable which deterministically or stochastically determines treatment. Usually, the fuzzy RD [FRD] model is stated as the rather concise set of specifications $$ \begin{align} \lim_{x\downarrow x_0} \mathbb{E}(D_i\mid X_i = x) &\neq \lim_{x\uparrow x_0} \mathbb{E}(D_i\mid X_i = x)\\ \lim_{x\downarrow x_0} \mathbb{E}(Y_{0i}\mid X_i = x) &= \lim_{x\uparrow x_0} \mathbb{E}(Y_{0i}\mid X_i = x)\\ \end{align} $$ which are intuitively transparent, but are hard to work with. Potential outcomes framework We can use the familiar potential outcomes model to unpack these specifications, where, for the simplicity of exposition, we exclude all other exogenous variables, other than the forcing variable, $X_i$, which deterministically (in the case of RDD) or stochastically (in the case of FRD) determines the treatment assignment ($D_i=1$). The conditional mean of the outcome in terms of the observable variables is given by $$ \begin{align} \mathbb{E}(Y_i \mid X_i, D_i) &= \mathbb{E}(Y_{0i}\mid X_i, D_i) + D_i\left(\mathbb{E}(Y_{1i}\mid X_i, D_i)-\mathbb{E}(Y_{0i}\mid X_i, D_i)\right) \\ \end{align} $$ Here we make no parametric assumptions about the form of the conditional expectation functions. Note that all of these specifications are restricted to the locality of $x_0$, that is $X_i\in [x_0-\Delta_n, x_0+\Delta_n]$, where the indexing by the sample size is for pragmatic reasons (it becomes relevant when we define the estimator). Recall that in the sharp RD case, we can write $D_i=\mathbf{1}_{[X_i\geq x_0]}$, where $x_0$ is the point of discontinuity. In the FRD case, this relationship is no longer deterministic, instead we have that the conditional mean is modelled in terms of the discontinuity $$ \begin{align} \mathbb{E}(D_i\mid X_i) &= \mathbb{P}\left[D_i=1\mid X_i\right]\\ &=(1-\mathbf{1}_{[X_i\geq x_0]})\mathbb{P}\left[D_i=1\mid X_i< x_0\right] + \mathbf{1}_{[X_i\geq x_0]}\mathbb{P}\left[D_i=1\mid X_i\geq x_0\right] \end{align} $$ Note that since $X_i$ is exogenous in the system, so is the random variable $\mathbf{1}_{[X_\geq x_0]}$ -- it acts as the excluded exogenous variable in the specification of the conditinal mean of the endogenous variable $D_i$. Estimation This is then a valid just-identified IV model, with one endogenous variable $D_i$, and one excluded exogenous variable $\mathbf{1}_{[X_i\geq x_0]}$. A direct and general estimator with no further parametric assumptions is the nonparametric Wald estimator. 
$$ \dfrac{\widehat{\mathbb{E}}\left(Y_i \mid x_0 \leq X_i\leq x_0+ \Delta_n \right)-\widehat{\mathbb{E}}\left(Y_i \mid x_0- \Delta_n \leq X_i< x_0\right)}{\widehat{\mathbb{P}}\left[D_i=1\mid x_0 \leq X_i\leq x_0+ \Delta_n \right]-\widehat{\mathbb{P}}\left[D_i=1\mid x_0- \Delta_n \leq X_i< x_0\right]} $$ Typically local smoothers, like the local linear smoother are used to estimate the conditional mean functions. ATE interpretation Note that in order to interpret the given estimator as the average treatment effect [ATE] in the locality of $x_0$, we have used the implausible but routine conditional (on $X_i$) independence of $D_i$ and $Y_{1i}-Y_{0i}$. This allows us to remove the conditioning on $D_i$ in the conditional mean function of the outcome in a mathematically convenient way. For more details, see Hahn, Todd & van der Klauuw (2001), which is an excellent and readable reference for RD models. They also provide interpretations of the parameter being estimated under weaker assumptions.
Fuzzy regression discontinuity design and exclusion restriction I am not a fan of the Angrist and Pischke book, but they do have a flair for phrasing, and as they say, fuzzy RD is IV (Sec. 6.2). This fact is obscured by the fact that the instrument is essentially
39,284
Fuzzy regression discontinuity design and exclusion restriction
This is only a partial answer to the question I think you're asking. I think with RD we assume that conditional on treatment, the other variables are smooth functions of the assignment variable $z$. This means that the outcome variable $y$ should jump at the cutoff only because of the discontinuity in the level of treatment. (Well, technically only continuity at the cutoff is required, but a global assumption is somewhat easier to test.) This differs from IV, since the assignment variable $z$ can have a direct impact on the outcome $y$, not just on the probability of treatment $x$, but not a discontinuous impact.
Fuzzy regression discontinuity design and exclusion restriction
This is only a partial answer to the question I think you're asking. I think with RD we assume that conditional on treatment, the other variables are smooth functions of the assignment variable $z$. T
Fuzzy regression discontinuity design and exclusion restriction This is only a partial answer to the question I think you're asking. I think with RD we assume that conditional on treatment, the other variables are smooth functions of the assignment variable $z$. This means that the outcome variable $y$ should jump at the cutoff only because of the discontinuity in the level of treatment. (Well, technically only continuity at the cutoff is required, but a global assumption is somewhat easier to test.) This differs from IV, since the assignment variable $z$ can have a direct impact on the outcome $y$, not just on the probability of treatment $x$, but not a discontinuous impact.
Fuzzy regression discontinuity design and exclusion restriction This is only a partial answer to the question I think you're asking. I think with RD we assume that conditional on treatment, the other variables are smooth functions of the assignment variable $z$. T
39,285
Fuzzy regression discontinuity design and exclusion restriction
There is no need for an explicit exclusion restriction in the fuzzy regression discontinuity design. The assumption that the potential outcomes are continuous in the neighbourhood of the cutoff (or globally) together with the discontinuity of the probability to receive treatment at the cutoff implicitly acts as a (local) exclusion restriction. Any effect at the threshold can only be due to the treatment, and the effect of the forcing variable can only be through the effect on the probability to receive treatment. This is essentially the mechanism Dimitriy described. The typical global assumption is that $F_{Y(0)|X}(y,x)$ and $F_{Y(1)|X}(y,x)$ are continuous in $x$ for all $y$ (Imbens & Lemieux 2008).
Fuzzy regression discontinuity design and exclusion restriction
There is no need for an explicit exclusion restriction in the fuzzy regression discontinuity design. The assumption that the potential outcomes are continuous in the neighbourhood of the cutoff (or gl
Fuzzy regression discontinuity design and exclusion restriction There is no need for an explicit exclusion restriction in the fuzzy regression discontinuity design. The assumption that the potential outcomes are continuous in the neighbourhood of the cutoff (or globally) together with the discontinuity of the probability to receive treatment at the cutoff implicitly acts as a (local) exclusion restriction. Any effect at the threshold can only be due to the treatment, and the effect of the forcing variable can only be through the effect on the probability to receive treatment. This is essentially the mechanism Dimitriy described. The typical global assumption is that $F_{Y(0)|X}(y,x)$ and $F_{Y(1)|X}(y,x)$ are continuous in $x$ for all $y$ (Imbens & Lemieux 2008).
Fuzzy regression discontinuity design and exclusion restriction There is no need for an explicit exclusion restriction in the fuzzy regression discontinuity design. The assumption that the potential outcomes are continuous in the neighbourhood of the cutoff (or gl
39,286
Why is random assignment important in stratified sampling?
You have not correctly interpreted user697473's claim. He is not talking about failing to include any data from brand C. He was talking about giving a particular vector of assignments $0$ probability. He was not saying that you can magically determine the value of some variable while never testing it. He wants to be able to use a balanced random subset, so that each point is included in the random subset with the right probability, but not a uniformly random one. For example, if the set is $\lbrace x_1,x_2,x_3,x_4 \rbrace$, then the following random subsets of uniform size $2$ all have the property that the probability that $x_i$ is included is $1/2$: $S_1 = 1/6\lbrace x_1,x_2\rbrace + 1/6\lbrace x_1,x_3\rbrace + 1/6\lbrace x_1,x_4\rbrace + 1/6\lbrace x_2,x_3\rbrace + 1/6\lbrace x_2,x_4\rbrace + 1/6\lbrace x_3,x_4\rbrace $ $S_2 = 1/4\lbrace x_1,x_3\rbrace + 1/4\lbrace x_1,x_4\rbrace + 1/4\lbrace x_2,x_3\rbrace + 1/4\lbrace x_2,x_4\rbrace $ $S_3 = 1/2\lbrace x_1,x_2\rbrace + 1/2\lbrace x_3,x_4\rbrace $ These are all balanced in the sense that if you compute the average value of some function $f$ over the random set, the expected value is $1/4(f(x_1) + f(x_2) + f(x_3)+f(x_4))$. In the third random subset, the probability of the subset $\lbrace x_1,x_3 \rbrace$ is $0$. That said, the point of the experiment is not only to produce an unbiased estimate; that is just one consideration. Another goal is to provide useful information. If you know that you may want to estimate $f$ on a subset $T$ (say $\lbrace x_1,x_2 \rbrace$) and its complement and to subtract the two, the quality of your estimate depends on $\#(T \cap S)$ and $\#(T^c \cap S)$. Then not all balanced subsets have the same quality. For that task, $S_3$ is worse than random assignment ($S_1$), while $S_2$ is better than random assignment.
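A small script (with arbitrary made-up values for $f$) that checks the balance claim: all three designs give the same expected value for the subset average of $f$:
import itertools

f = {1: 2.0, 2: 5.0, 3: 7.0, 4: 11.0}   # arbitrary values of f(x_i)
designs = {
    "S1": [set(c) for c in itertools.combinations([1, 2, 3, 4], 2)],  # all 6 pairs, prob 1/6 each
    "S2": [{1, 3}, {1, 4}, {2, 3}, {2, 4}],                           # 4 pairs, prob 1/4 each
    "S3": [{1, 2}, {3, 4}],                                           # 2 pairs, prob 1/2 each
}
for name, subsets in designs.items():
    # expected subset mean = average of the subset means over the equally likely subsets
    expected_avg = sum(sum(f[i] for i in s) / 2 for s in subsets) / len(subsets)
    print(name, expected_avg)   # all three print 6.25 = (2 + 5 + 7 + 11) / 4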
Why is random assignment important in stratified sampling?
You have not correctly interpreted user697473's claim. He is not talking about failing to include any data from brand C. He was talking about giving a particular vector of assignemnts $0$ probability.
Why is random assignment important in stratified sampling? You have not correctly interpreted user697473's claim. He is not talking about failing to include any data from brand C. He was talking about giving a particular vector of assignemnts $0$ probability. He was not saying that you can magically determine the value of some variable while never testing it. He wants to be able to use a balanced random subset, so that each point is included in the random subset with the right probability, but not a uniformly random one. For example, if the set is $\lbrace x_1,x_2,x_3,x_4 \rbrace$, then the following random subsets of uniform size $2$ all have the property that the probability that $x_i$ is included is $1/2$: $S_1 = 1/6\lbrace x_1,x_2\rbrace + 1/6\lbrace x_1,x_3\rbrace + 1/6\lbrace x_1,x_4\rbrace + 1/6\lbrace x_2,x_3\rbrace + 1/6\lbrace x_2,x_4\rbrace + 1/6\lbrace x_3,x_4\rbrace $ $S_2 = 1/4\lbrace x_1,x_3\rbrace + 1/4\lbrace x_1,x_4\rbrace + 1/4\lbrace x_2,x_3\rbrace + 1/4\lbrace x_2,x_4\rbrace $ $S_3 = 1/2\lbrace x_1,x_2\rbrace + 1/2\lbrace x_3,x_4\rbrace $ These are all balanced in the sense that if you compute the average value of some function $f$ over the random set, the expected value is $1/4(f(x_1) + f(x_2) + f(x_3)+f(x_4))$. In the third random subset, the probability of the subset $\lbrace x_1,x_3 \rbrace$ is $0$. That said, the point of the experiment should not be to produce an unbiased estimate. That is just one consideration. Another goal is to provide useful information. If you know that you may want to estimate $f$ on a subset $T$ (say $\lbrace x_1,x_2 \rbrace$) and its complement and to subtract the two, the quality of your estimate depends on $\#(T \cap S)$ and $\#(T^c \cap S)$. Then not all balanced subsets have the same quality. For that task, $S_3$ is worse than random assignment ($S_1$), while $S_2$ is better than random assignment.
Why is random assignment important in stratified sampling? You have not correctly interpreted user697473's claim. He is not talking about failing to include any data from brand C. He was talking about giving a particular vector of assignemnts $0$ probability.
39,287
Why is random assignment important in stratified sampling?
Michael, I did not say that when you exclude a group from randomization you can still get unbiased estimates, and I would never argue in favor of this idea; this is certainly not true. What is true is that you don't have to have the same sampling rates, and that's the imbalance that you can correct with weights. You illustrate this point with your Lady-Tasting-Tea example, which works unless one of the sample sizes is zero. The example I gave is aimed at a different subtle point. It uses all units in the population, but it's not a full design (which would be ${5 \choose 2}=10$ possible assignments); yet every unit is assigned to treatment twice, and to control, three times. So with respect to randomization to these assignments, the difference of the treatment group means is an unbiased estimator of the mean population difference (in potential outcomes). This example shows that you don't have to fully randomize, in the sense of tossing a coin independently for every unit (which may put you to an awkward situation of having no controls or no treatments with probability 1/16 for sample this small, by the way). However, for this example to work out, you need to have a sample size that was fixed in advance, and you had to toss a five-sided coin in advance before the very first unit enters the study. So each approach has its pros and cons; I doubt that the balanced sampling example that I gave is hugely practical, although cube sampling appears to be picking up in survey statistics. I am not a causal inference statistician, nor am I terribly good with experimental designs. But I know my randomization inference from survey statistics, so the way I've been viewing this problem is from the perspective of Horvitz-Thompson estimator.
Why is random assignment important in stratified sampling?
Michael, I did not say that when you exclude a group from randomization you can still get unbiased estimates, and I would never argue in favor of this idea; this is certainly not true. What is true is
Why is random assignment important in stratified sampling? Michael, I did not say that when you exclude a group from randomization you can still get unbiased estimates, and I would never argue in favor of this idea; this is certainly not true. What is true is that you don't have to have the same sampling rates, and that's the imbalance that you can correct with weights. You illustrate this point with your Lady-Tasting-Tea example, which works unless one of the sample sizes is zero. The example I gave is aimed at a different subtle point. It uses all units in the population, but it's not a full design (which would be ${5 \choose 2}=10$ possible assignments); yet every unit is assigned to treatment twice, and to control, three times. So with respect to randomization to these assignments, the difference of the treatment group means is an unbiased estimator of the mean population difference (in potential outcomes). This example shows that you don't have to fully randomize, in the sense of tossing a coin independently for every unit (which may put you to an awkward situation of having no controls or no treatments with probability 1/16 for sample this small, by the way). However, for this example to work out, you need to have a sample size that was fixed in advance, and you had to toss a five-sided coin in advance before the very first unit enters the study. So each approach has its pros and cons; I doubt that the balanced sampling example that I gave is hugely practical, although cube sampling appears to be picking up in survey statistics. I am not a causal inference statistician, nor am I terribly good with experimental designs. But I know my randomization inference from survey statistics, so the way I've been viewing this problem is from the perspective of Horvitz-Thompson estimator.
Why is random assignment important in stratified sampling? Michael, I did not say that when you exclude a group from randomization you can still get unbiased estimates, and I would never argue in favor of this idea; this is certainly not true. What is true is
39,288
How to interpret model diagnostics when doing linear regression in R?
This is a long and rambling question, so you are getting a long and rambling answer. Apologies. Using the example from the ?lm() call, ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14) trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69) group <- gl(2,10,20, labels=c("Ctl","Trt")) weight <- c(ctl, trt) lm.D9 <- lm(weight ~ group) summary(lm.D9) #output# Call: lm(formula = weight ~ group) Residuals: Min 1Q Median 3Q Max -1.0710 -0.4938 0.0685 0.2462 1.3690 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 5.0320 0.2202 22.850 9.55e-15 *** groupTrt -0.3710 0.3114 -1.191 0.249 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Residual standard error: 0.6964 on 18 degrees of freedom Multiple R-squared: 0.07308, Adjusted R-squared: 0.02158 F-statistic: 1.419 on 1 and 18 DF, p-value: 0.249 I don't entirely understand your confusion on the "coefficients." The table simply presents the OLS estimate of $\beta$, standard error of the estimate $SE(\beta)$, the "distance" that $\beta$ is from 0 on the Normal$(0, SE(\beta))$ distribution, and the probability of observing a $\beta$ that far away from 0. Forgive me for the basic statistics review; I can't tell if this is what you are asking for. Proper OLS-estimated regression modeling (which is what the lm command runs) requires several assumptions, and these diagnostic plots are designed to test them. The "Residuals vs Fitted" and "Scale-Location" charts are essentially the same, and show if there is a trend to the residuals. OLS models require that the residuals be "identically and independently distributed," that their distribution does not change substantially for different values of $x$. None of your charts is really satisfactory on this regard. If this assumption is not met, your $\beta$ estimates will still be good, but your $t$-statistics, and corresponding $p$-values, are garbage. Another assumption is that the errors are approximately normally distributed, which is what the Q-Q plot allows you to see. Again, none of your plots really satisfies me in this regard. The consequences of this assumption not being met are the same as above ($\beta$'s good, $t$'s worthless). The "outliers" principle is actually not an assumption of OLS regression. But if you have outliers in certain locations, your $\beta$ parameters will be unduly influenced by them. In this case, both your $\beta$ and $t$ measurements are useless. You can remove an influential observation from a data frame by identifying its row number and issuing the command data <- data[-offending.row,] Where offending.row is the number of the row you want to eliminate. The R diagnostic plots label the row numbers of potential outliers. I don't know what kind of data you have, but you should be very careful about eliminating observations that you don't like. You should instead ask yourself how that observation became this way. If it is due to measurement error, by all means discard it. If not, then is this observation a part of the system you are trying to model? If so, you should keep it in and adapt for it in other ways. I have two suggestions for your analysis. First, try to use GLS estimators. This method assigns weights to your observations to correct for heteroskedasticity, outliers, and some degree of non-normality. The R command for this is gls(). But it seems from your plots that your data are restricted in some ways. In particular Test-P seems like a variable that is either 1 or 0, or restricted to that range. 
For such a variable, you may want to look at binary logit or probit models, available with the command glm(model, family=binomial(link="logit")) If your data are censored at 0 but not on the upper end, a tobit model is what you want, tobit() from the AER package looks like the right thing (I've never run a tobit model, I have just looked at it theoretically). Finally, predictions are done with the predict() function. If you want to perturb your data afterwards (to create a distribution of possible predictions), the best way I know of is to add a random number to the prediction. Using the example above, #base prediction pred.values <- predict(lm.D9) # get the residual standard error SER <- summary(lm.D9)$sigma #perturbations pert <- rnorm(length(pred.values), mean=0, sd=SER) SIMULATION.VALUES <- pred.values + pert You can get multiple alternate simulations by repeating the last two steps. Good luck.
How to interpret model diagnostics when doing linear regression in R?
This is a long and rambling question, so you are getting a long and rambling answer. Apologies. Using the example from the ?lm() call, ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14) trt <
How to interpret model diagnostics when doing linear regression in R? This is a long and rambling question, so you are getting a long and rambling answer. Apologies. Using the example from the ?lm() call, ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14) trt <- c(4.81,4.17,4.41,3.59,5.87,3.83,6.03,4.89,4.32,4.69) group <- gl(2,10,20, labels=c("Ctl","Trt")) weight <- c(ctl, trt) lm.D9 <- lm(weight ~ group) summary(lm.D9) #output# Call: lm(formula = weight ~ group) Residuals: Min 1Q Median 3Q Max -1.0710 -0.4938 0.0685 0.2462 1.3690 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 5.0320 0.2202 22.850 9.55e-15 *** groupTrt -0.3710 0.3114 -1.191 0.249 --- Signif. codes: 0 β€˜***’ 0.001 β€˜**’ 0.01 β€˜*’ 0.05 β€˜.’ 0.1 β€˜ ’ 1 Residual standard error: 0.6964 on 18 degrees of freedom Multiple R-squared: 0.07308, Adjusted R-squared: 0.02158 F-statistic: 1.419 on 1 and 18 DF, p-value: 0.249 I don't entirely understand your confusion on the "coefficients." The table simply presents the OLS estimate of $\beta$, standard error of the estimate $SE(\beta)$, the "distance" that $\beta$ is from 0 on the Normal$(0, SE(\beta))$ distribution, and the probability of observing a $\beta$ that far away from 0. Forgive me for the basic statistics review; I can't tell if this is what you are asking for. Proper OLS-estimated regression modeling (which is what the lm command runs) requires several assumptions, and these diagnostic plots are designed to test them. The "Residuals vs Fitted" and "Scale-Location" charts are essentially the same, and show if there is a trend to the residuals. OLS models require that the residuals be "identically and independently distributed," that their distribution does not change substantially for different values of $x$. None of your charts is really satisfactory on this regard. If this assumption is not met, your $\beta$ estimates will still be good, but your $t$-statistics, and corresponding $p$-values, are garbage. Another assumption is that the errors are approximately normally distributed, which is what the Q-Q plot allows you to see. Again, none of your plots really satisfies me in this regard. The consequences of this assumption not being met are the same as above ($\beta$'s good, $t$'s worthless). The "outliers" principle is actually not an assumption of OLS regression. But if you have outliers in certain locations, your $\beta$ parameters will be unduly influenced by them. In this case, both your $\beta$ and $t$ measurements are useless. You can remove an influential observation from a data frame by identifying its row number and issuing the command data <- data[-offending.row,] Where offending.row is the number of the row you want to eliminate. The R diagnostic plots label the row numbers of potential outliers. I don't know what kind of data you have, but you should be very careful about eliminating observations that you don't like. You should instead ask yourself how that observation became this way. If it is due to measurement error, by all means discard it. If not, then is this observation a part of the system you are trying to model? If so, you should keep it in and adapt for it in other ways. I have two suggestions for your analysis. First, try to use GLS estimators. This method assigns weights to your observations to correct for heteroskedasticity, outliers, and some degree of non-normality. The R command for this is gls(). But it seems from your plots that your data are restricted in some ways. 
In particular Test-P seems like a variable that is either 1 or 0, or restricted to that range. For such a variable, you may want to look at binary logit or probit models, available with the command glm(model, family=binomial(link="logit")) If your data is censored at 0 but not on the upper end, a tobit model is what you want, tobit() from the AER package looks like the right thing (I've never run a tobit model, I have just looked at it theoretically). Finally, predictions are done with the predict() function. If you want to perturb your data afterwards (to create a distribution of possible predictions), the best way I know of it to add a random number to the prediction. Using the example above, #base prediction pred.values <- predict(lm.D9) # get standard error of residuals SER <- (summary(lm.D9)$sigma)^2 #perturbations pert <- rnorm(length(pred.values), mean=0, sd=SER) SIMULATION.VALUES <- pred.values + pert You can get multiple alternate simulations by repeating the last two steps. Good luck.
How to interpret model diagnostics when doing linear regression in R? This is a long and rambling question, so you are getting a long and rambling answer. Apologies. Using the example from the ?lm() call, ctl <- c(4.17,5.58,5.18,6.11,4.50,4.61,5.17,4.53,5.33,5.14) trt <
39,289
Bayesion priors in ridge regression with scikit learn's linear model
Ridge regression looks like: $$ \min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta||^2 $$ If you want to instead compute $$ \beta^* = \arg\min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta - \beta_0||^2 $$ I guess you could just turn this into shrinking towards zero using the new variable $$\theta = \beta - \beta_0.$$ So you'd solve: $$ \theta^* := \arg\min_{\theta}||Y-X\beta_0-X \theta||^2 + \lambda_1 ||\theta||^2 $$ Then apply the change of variables again (i.e., $\beta^* := \theta^* + \beta_0$). So to recap, if I have some black box function $\text{RidgeRegression}(Y,X, \lambda)$, I can use it to solve for an arbitrary prior $\beta_0$ simply by calling $\text{RidgeRegression}(Y-X\beta_0, X, \lambda)$.
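A minimal sketch of this recipe with scikit-learn (the function name ridge_with_prior and the argument beta0 are illustrative; it assumes no intercept and a prior vector of the right length):
import numpy as np
from sklearn.linear_model import Ridge

def ridge_with_prior(X, y, alpha, beta0):
    # change of variables: fit theta = beta - beta0, which is shrunk toward zero
    model = Ridge(alpha=alpha, fit_intercept=False)
    model.fit(X, y - X @ beta0)
    # map back: beta* = theta* + beta0
    return model.coef_ + beta0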
Bayesion priors in ridge regression with scikit learn's linear model
Ridge regression looks like: $$ \min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta||^2 $$ If you want to instead compute $$ \beta^* = \arg\min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta - \beta_0||^2 $$ I
Bayesion priors in ridge regression with scikit learn's linear model Ridge regression looks like: $$ \min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta||^2 $$ If you want to instead compute $$ \beta^* = \arg\min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta - \beta_0||^2 $$ I guess you could just turn this into shrinking towards zero using the new variable $$\theta = \beta - \beta_0.$$ So you'd solve: $$ \theta^* := \arg\min_{\theta}||Y-X\beta_0-X \theta||^2 + \lambda_1 ||\theta||^2 $$ Then apply the change of variables again (i.e., $\beta^* := \theta^* + \beta_0$). So to recap, if I have some black box function $\text{RidgeRegression}(Y,X, \lambda)$, I can use it to solve for an arbitrary prior $\beta_0$ simply by calling $\text{RidgeRegression}(Y-X\beta_0, X, \lambda)$.
Bayesion priors in ridge regression with scikit learn's linear model Ridge regression looks like: $$ \min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta||^2 $$ If you want to instead compute $$ \beta^* = \arg\min_{\beta}||Y-X\beta||^2 + \lambda_1 ||\beta - \beta_0||^2 $$ I
39,290
Bayesion priors in ridge regression with scikit learn's linear model
What's posted in the only answer by Dapz does not do what it's supposed to do. If I choose a value > 0 for any of the $\beta_0$, say the "i-th", the corresponding $\beta^*$ of "i" will be lower than with standard ridge regression, instead of higher as it should be (because we penalize for moving away from something > 0, instead of moving away from 0).
Bayesion priors in ridge regression with scikit learn's linear model
What's posted in the only answer by Dapz does not do what it's supposed to do. If I choose a value > 0 for any of the $\beta_0$, say the "i-th", the corresponding $\beta^*$ of "i" will be lower than w
Bayesion priors in ridge regression with scikit learn's linear model What's posted in the only answer by Dapz does not do what it's supposed to do. If I choose a value > 0 for any of the $\beta_0$, say the "i-th", the corresponding $\beta^*$ of "i" will be lower than with standard ridge regression, instead of higher as it should be (because we penalize for moving away from something > 0, instead of moving away from 0).
Bayesion priors in ridge regression with scikit learn's linear model What's posted in the only answer by Dapz does not do what it's supposed to do. If I choose a value > 0 for any of the $\beta_0$, say the "i-th", the corresponding $\beta^*$ of "i" will be lower than w
39,291
Bayesion priors in ridge regression with scikit learn's linear model
I think this code should work, implementing the solution suggested by others above: def fit_with_prior(model, X, y, sample_weight=None, prior=None): """Fit a regularized model with a nonzero prior""" assert prior is not None, "you need to specify a prior" new_y = y - np.sum(prior * X, axis=1) model.fit(X, new_y, sample_weight=sample_weight) model.coef_ += prior # modifying underlying model's coefficients # what about the intercept? Initialize the Ridge model object as you normally would: my_ridge = Ridge(alpha,...), and instead of calling my_ridge.fit(X, y), call fit_with_prior(my_ridge, X, y, prior=prior), where prior is a vector of length equal to the number of columns in x, being the prior values toward which you want to regularize. I think the intercept term is probably not penalized, and can thus be ignored for the purposes of this transformation (unless you explicitly added a column of constants to X, in which case it will be treated just like the other coefficients). I think it should also work for other regularized linear models, e.g. ElasticNet, as long as they use their coef_ attribute to do prediction and scoring, as Ridge seems to based on my testing.
Bayesion priors in ridge regression with scikit learn's linear model
I think this code should work, implementing the solution suggested by others above: def fit_with_prior(model, X, y, sample_weight=None, prior=None): """Fit a regularized model with a nonzero prior
Bayesion priors in ridge regression with scikit learn's linear model I think this code should work, implementing the solution suggested by others above: def fit_with_prior(model, X, y, sample_weight=None, prior=None): """Fit a regularized model with a nonzero prior""" assert prior is not None, "you need to specify a prior" new_y = y - np.sum(prior * X, axis=1) model.fit(X, new_y, sample_weight=sample_weight) model.coef_ += prior # modifying underlying model's coefficients # what about the intercept? Initialize the Ridge model object as you normally would: my_ridge = Ridge(alpha,...), and instead of calling my_ridge.fit(X, y), call fit_with_prior(my_ridge, X, y, prior=prior), where prior is a vector of length equal to the number of columns in x, being the prior values toward which you want to regularize. I think the intercept term is probably not penalized, and can thus be ignored for the purposes of this transformation (unless you explicitly added a column of constants to X, in which case it will be treated just like the other coefficients). I think it should also work for other regularized linear models, e.g. ElasticNet, as long as they use their coef_ attribute to do prediction and scoring, as Ridge seems to based on my testing.
Bayesion priors in ridge regression with scikit learn's linear model I think this code should work, implementing the solution suggested by others above: def fit_with_prior(model, X, y, sample_weight=None, prior=None): """Fit a regularized model with a nonzero prior
39,292
How to correlate two time series, with possible time differences
Apply a lag operator on one time series, with the other fixed, and calculate the coherence of the cross-spectrum achieved against each lag. Find the lag that gives you the maximum coherence and interpret it. Coherence is computed at each frequency, and hence is a vector. Hence, a weighted sum of the coherences would be a good measure. You would typically want to weight the coherences at frequencies that have a high energy in the power spectral density. That way, you would be measuring similarity at the frequencies that dominate the time series, instead of giving the coherence a large weight at frequencies whose content in the time series is negligible. http://www.stat.rutgers.edu/home/rebecka/Stat565/lab5-2007.pdf is a good link to look at to get started and http://www.atmos.washington.edu/~dennis/552_Notes_6c.pdf is an excellent introduction to cross-spectral analysis.
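A rough sketch of one way to implement this with scipy (assumptions: equally spaced samples, default Welch settings, and power-spectral-density weights taken from the first series; the function name is illustrative):
import numpy as np
from scipy.signal import coherence, welch

def weighted_coherence_at_lag(x, y, lag, fs=1.0):
    # align the two series at the given lag
    if lag > 0:
        xa, ya = x[:-lag], y[lag:]
    elif lag < 0:
        xa, ya = x[-lag:], y[:lag]
    else:
        xa, ya = x, y
    f, cxy = coherence(xa, ya, fs=fs)
    _, pxx = welch(xa, fs=fs)
    # average coherence weighted by the spectral power of x at each frequency
    return np.sum(cxy * pxx) / np.sum(pxx)

# scan a range of lags and keep the one with the largest weighted coherence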
How to correlate two time series, with possible time differences
Apply a lag operator on one time series, with the other fixed, and calculate the coherence of the cross-spectrum achieved against each lag. Find the lag that gives you the maximum coherence and interp
How to correlate two time series, with possible time differences Apply a lag operator on one time series, with the other fixed, and calculate the coherence of the cross-spectrum achieved against each lag. Find the lag that gives you the maximum coherence and interpret it. Coherence is computed at each frequency-and hence is a vector. Hence, a sum of a weighted coherence would be a good measure. You would typically want to weight the coherences at frequencies that have a high energy in the power spectral density. That way, you would be measuring the similarities at the frequencies that dominate the time series instead of weighting the coherence with a large weight, when the content of that frequency in the time series is negligible. http://www.stat.rutgers.edu/home/rebecka/Stat565/lab5-2007.pdf is a good link to look at to get started and http://www.atmos.washington.edu/~dennis/552_Notes_6c.pdf is an excellent introduction to cross-spectral analysis.
How to correlate two time series, with possible time differences Apply a lag operator on one time series, with the other fixed, and calculate the coherence of the cross-spectrum achieved against each lag. Find the lag that gives you the maximum coherence and interp
39,293
How to correlate two time series, with possible time differences
The cross correlation function will give you the Pearson correlation for 2 time-series at different time lags. The R function is ccf(). For further study, a Granger causality test tries to determine a cause-effect relationship between the 2 correlated series by first removing the serial correlation in TS1 (the stock price series in this case).
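If you are not working in R, the same idea is easy to code directly. Below is a sketch that computes a plain per-lag Pearson correlation (ccf() uses a slightly different normalization, but the picture is the same):
import numpy as np

def cross_correlations(x, y, max_lag):
    # correlation between x[t] and y[t + lag] for each lag
    out = {}
    for lag in range(-max_lag, max_lag + 1):
        if lag > 0:
            a, b = x[:-lag], y[lag:]
        elif lag < 0:
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        out[lag] = np.corrcoef(a, b)[0, 1]
    return out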
How to correlate two time series, with possible time differences
The cross correlation function will give you the Pearson correlation for 2 time-series at different time lags. The R function is ccf(). For further study, a Granger causality test tries to determine a
How to correlate two time series, with possible time differences The cross correlation function will give you the Pearson correlation for 2 time-series at different time lags. The R function is ccf(). For further study, a Granger causality test tries to determine a cause-effect relationship between the 2 correlated series by first removing the serial correlation in TS1 (the stock price series in this case).
How to correlate two time series, with possible time differences The cross correlation function will give you the Pearson correlation for 2 time-series at different time lags. The R function is ccf(). For further study, a Granger causality test tries to determine a
39,294
Estimating kappa of von Mises distribution
According to Banerjee et al., Clustering on the Unit Hypersphere using von Mises-Fisher Distributions (J. Mach. Learning Res. 6 (2005)), you can estimate the von Mises-Fisher parameters $\mu$ and $\kappa$ with maximum likelihood. Let $x_i$ be the $n$ points in dimension $d$ from your sample. Let $r = \sum_i x_i$. Let $\overline{r} = \frac{||r||_2}{n}$ (the Euclidean distance from the barycenter to the origin). Then $$\hat{\mu} = \frac{r}{||r||_2}$$ (the unit vector in the direction of the barycenter) and $$\hat{\kappa} \approx \frac{\overline{r}d - \overline{r}^3}{1 - \overline{r}^2}$$ approximates the maximum likelihood estimate; the exact estimate is obtained (numerically) as the solution to $$I_{d/2}(\kappa) / I_{d/2-1}(\kappa) = \overline{r}.$$ $I_m$ is the modified Bessel function of the first kind of order $m$. The approximation can be used as the starting point for Newton-Raphson iteration--but the authors point out that for "high-dimensional" data this can be "quite slow" compared to the cost of computing just the approximate value.
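In code, the approximation is only a few lines (a sketch assuming the rows of X are already unit vectors; the function name is illustrative):
import numpy as np

def vmf_fit(X):
    # X: n x d array of unit vectors
    n, d = X.shape
    r = X.sum(axis=0)
    mu = r / np.linalg.norm(r)
    rbar = np.linalg.norm(r) / n
    # Banerjee et al. approximation to the kappa MLE
    kappa = (rbar * d - rbar**3) / (1 - rbar**2)
    # optionally refine kappa numerically so that I_{d/2}(kappa) / I_{d/2-1}(kappa) = rbar
    return mu, kappa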
Estimating kappa of von Mises distribution
According to Banerjee et al., Clustering on the Unit Hypersphere using von Mises-Fisher Distributions (J. Mach. Learning Res. 6 (2005)), you can estimate the von Mises-Fisher parameters $\mu$ and $\ka
Estimating kappa of von Mises distribution According to Banerjee et al., Clustering on the Unit Hypersphere using von Mises-Fisher Distributions (J. Mach. Learning Res. 6 (2005)), you can estimate the von Mises-Fisher parameters $\mu$ and $\kappa$ with maximum likelihood. Let $x_i$ be the $n$ points in dimension $d$ from your sample. Let $r = \sum_i x_i$. Let $\overline{r} = \frac{||r||_2}{n}$ (the Euclidean distance from the barycenter to the origin). Then $$\hat{\mu} = \frac{r}{||r||_2}$$ (the unit vector in the direction of the barycenter) and $$\hat{\kappa} \approx \frac{\overline{r}d - \overline{r}^3}{1 - \overline{r}^2}$$ approximates the Maximum Likelihood estimate, which to be found exactly is obtained (numerically) as the solution to $$I_{d/2}(\kappa) / I_{d/2-1}(\kappa) = \overline{r}.$$ $I_m$ is the modified Bessel function of the first kind of order $m$. The approximation can be used as the starting point for Newton-Raphson iteration--but the authors point out that for "high-dimensional" data this can be "quite slow" compared to the cost of computing just the approximate value.
Estimating kappa of von Mises distribution According to Banerjee et al., Clustering on the Unit Hypersphere using von Mises-Fisher Distributions (J. Mach. Learning Res. 6 (2005)), you can estimate the von Mises-Fisher parameters $\mu$ and $\ka
39,295
Estimating kappa of von Mises distribution
Check out the est.kappa() function in the CircStats package for R: Computes the maximum likelihood estimate of kappa, the concentration parameter of a von Mises distribution, given a set of angular measurements.
Estimating kappa of von Mises distribution
Check out the est.kappa() function in the CircStats package for R: Computes the maximum likelihood estimate of kappa, the concentration parameter of a von Mises distribution, given a set of angular m
Estimating kappa of von Mises distribution Check out the est.kappa() function in the CircStats package for R: Computes the maximum likelihood estimate of kappa, the concentration parameter of a von Mises distribution, given a set of angular measurements.
Estimating kappa of von Mises distribution Check out the est.kappa() function in the CircStats package for R: Computes the maximum likelihood estimate of kappa, the concentration parameter of a von Mises distribution, given a set of angular m
39,296
Estimating kappa of von Mises distribution
Yes, the Von-Mises distribution family is an exponential family, so you can find the maximum likelihood estimate of its parameters as you would for any exponential family: set the expectation parameters to the average of the sufficient statistics $T(x) = x$ whose magnitude we'll call $\bar r$. You'll have to convert these parameters to your parametrization after to get $\kappa$. See @mic's answer for the equation. Just in case you're wondering how you implement @mic's solution in Python: I would use scipy.optimize to find the root of your function: the ratio of Bessel functions minus $\bar r$. Edit: Since this answer was posted, I've written an exponential family library for Python: import jax.numpy as jnp from efax import VonMisesFisherEP mean_r = jnp.asarray([0.2, 0.4, -0.3]) # The mean observation. x = VonMisesFisherEP(mean_r) print(x.to_nat().kappa()) # 2.007
Estimating kappa of von Mises distribution
Yes, the Von-Mises distribution family is an exponential family, so you can find the maximum likelihood estimate of its parameters as you would for any exponential family: set the expectation paramete
Estimating kappa of von Mises distribution Yes, the Von-Mises distribution family is an exponential family, so you can find the maximum likelihood estimate of its parameters as you would for any exponential family: set the expectation parameters to the average of the sufficient statistics $T(x) = x$ whose magnitude we'll call $\bar r$. You'll have to convert these parameters to your parametrization after to get $\kappa$. See @mic's answer for the equation. Just in case you're wondering how you implement @mic's solution in Python: I would use scipy.optimize to find the root of your function: the ratio of Bessel functions minus $\bar r$. Edit: Since this answer was posted, I've written an exponential family library for Python: import jax.numpy as jnp from efax import VonMisesFisherEP mean_r = jnp.asarray([0.2, 0.4, -0.3]) # The mean observation. x = VonMisesFisherEP(mean_r) print(x.to_nat().kappa()) # 2.007
Estimating kappa of von Mises distribution Yes, the Von-Mises distribution family is an exponential family, so you can find the maximum likelihood estimate of its parameters as you would for any exponential family: set the expectation paramete
39,297
Plot of the estimated log hazard ratio in R
Saw this question while browsing through the survival questions and thought I would just post some of my personal favorites. I like using the rms package for survival functions and you could do a forestplot type of output: library(survival) library(rms) ddist <- datadist(ovarian) options(datadist='ddist') ovarian$rx <- factor(ovarian$rx) fit1 = cph(Surv(futime, fustat) ~ rx + rcs(age, 3), ovarian, x=T, y=T) # The plot.summary.rms plot(summary(fit1, age=c(50,60)), q=c(.6, .8, .95), log=T, col=c("orange", "gold", "blue")) gives you: I also like the termplot that I've updated slightly: par(mfrow=c(1,2)) termplot2(fit1, se=T, rug.type="density", rug=T, density.proportion=.05, se.type="polygon", ylab=rep("Hazard Ratio", times=2), main=rep("cph() plot", times=2), col.se=rgb(.2,.2,1,.4), col.term="black") that gives you this plot: and last but not least for a log-log plot if you wanted to look for the hazard over time as the comment suggested: f <- survfit(Surv(futime, fustat) ~ rx, data=ovarian) survplot(f, loglog=T, logt=T, xlab="log(Years)") that gives: another efficient way of looking at the hazard over time is the Schoenfeld residuals: layout(matrix(c(1,1,2,3), 2, 2, byrow = TRUE)) f <- cox.zph(fit1) plot(f, resid=F) that gives:
Plot of the estimated log hazard ratio in R
Saw this question while browsing through the survival questions and though I just post some of my personal favorites. I like using the rms package for survival functions and you could do a forestplot
Plot of the estimated log hazard ratio in R Saw this question while browsing through the survival questions and though I just post some of my personal favorites. I like using the rms package for survival functions and you could do a forestplot type of output: library(survival) library(rms) ddist <- datadist(ovarian) options(datadist='ddist') ovarian$rx <- factor(ovarian$rx) fit1 = cph(Surv(futime, fustat) ~ rx + rcs(age, 3), ovarian, x=T, y=T) # The plot.summary.rms plot(summary(fit1, age=c(50,60)), q=c(.6, .8, .95), log=T, col=c("orange", "gold", "blue")) gives you: I also like the termplot that I've updated slightly: par(mfrow=c(1,2)) termplot2(fit1, se=T, rug.type="density", rug=T, density.proportion=.05, se.type="polygon", ylab=rep("Hazard Ratio", times=2), main=rep("cph() plot", times=2), col.se=rgb(.2,.2,1,.4), col.term="black") that gives you this plot: and last but not least for a log-log plot if you wanted to look for the hazard over time as the comment suggested: f <- survfit(Surv(futime, fustat) ~ rx, data=ovarian) survplot(f, loglog=T, logt=T, xlab="log(Years)") that gives: another efficient way of looking at the hazard of time is the Schoenfeld residuals: layout(matrix(c(1,1,2,3), 2, 2, byrow = TRUE)) f <- cox.zph(fit1) plot(f, resid=F) that gives:
Plot of the estimated log hazard ratio in R Saw this question while browsing through the survival questions and though I just post some of my personal favorites. I like using the rms package for survival functions and you could do a forestplot
39,298
Plot of the estimated log hazard ratio in R
Not having a copy of Applied Survival Analysis, I'm guessing you're looking for something like this: http://rgm2.lab.nig.ac.jp/RGM2/func.php?rd_id=Design:hazard.ratio.plot
Plot of the estimated log hazard ratio in R
Not having a copy of Applied Survival Analysis, I'm guessing you're looking for something like this: http://rgm2.lab.nig.ac.jp/RGM2/func.php?rd_id=Design:hazard.ratio.plot
Plot of the estimated log hazard ratio in R Not having a copy of Applied Survival Analysis, I'm guessing you're looking for something like this: http://rgm2.lab.nig.ac.jp/RGM2/func.php?rd_id=Design:hazard.ratio.plot
Plot of the estimated log hazard ratio in R Not having a copy of Applied Survival Analysis, I'm guessing you're looking for something like this: http://rgm2.lab.nig.ac.jp/RGM2/func.php?rd_id=Design:hazard.ratio.plot
39,299
What "more" does differencing (d>0) do in ARIMA than detrend?
Differencing isn't actually the preferred way of removing a trend---detrending is. Detrending involves estimating the trend and calculating the deviation from the estimated trend in any particular period. The main use of differencing is to remove the problem of unit roots. A unit root arises, for example, when $\rho=1$ in the simple AR(1) model $y_{t} = \rho y_{t-1} + \nu_t$. In this case, differencing yields a stationary white noise process $\nu_t$ that is appropriate for analysis. Differencing a process without a unit root, but with a trend, can actually produce bad results (the new, differenced error term can have a strange distribution that contains autocorrelation, but of a tricky process). Similarly, detrending a process without a trend, but with a unit root can fail to eliminate the problem of non-stationarity (that is, it doesn't fix the unit root problem).
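A tiny simulation of the "bad results" case: differencing a trend-stationary series (no unit root) turns a white-noise error into an over-differenced MA(1) error with lag-1 autocorrelation near -0.5 (the slope and sample size here are made up):
import numpy as np

rng = np.random.default_rng(1)
e = rng.normal(size=5000)
y = 0.5 * np.arange(5000) + e     # trend-stationary: deterministic trend, no unit root
dy = np.diff(y)                   # differencing removes the trend...
lag1 = np.corrcoef(dy[1:], dy[:-1])[0, 1]
print(lag1)                       # ...but injects autocorrelation of about -0.5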
What "more" does differencing (d>0) do in ARIMA than detrend?
Differencing isn't actually the preferred way of removing a trend---detrending is. Detrending involves estimating the trend and calculating the deviation from the estimated trend in any particular per
What "more" does differencing (d>0) do in ARIMA than detrend? Differencing isn't actually the preferred way of removing a trend---detrending is. Detrending involves estimating the trend and calculating the deviation from the estimated trend in any particular period. The main use of differencing is to remove the problem of unit roots. A unit root arises, for example, when $\rho=1$ in the simple AR(1) model $y_{t} = \rho y_{t-1} + \nu_t$. In this case, differencing yields a stationary white noise process $\nu_t$ that is appropriate for analysis. Differencing a process without a unit root, but with a trend, can actually produce bad results (the new, differenced error term can have a strange distribution that contains autocorrelation, but of a tricky process). Similarly, detrending a process without a trend, but with a unit root can fail to eliminate the problem of non-stationarity (that is, it doesn't fix the unit root problem).
What "more" does differencing (d>0) do in ARIMA than detrend? Differencing isn't actually the preferred way of removing a trend---detrending is. Detrending involves estimating the trend and calculating the deviation from the estimated trend in any particular per
39,300
What "more" does differencing (d>0) do in ARIMA than detrend?
Unnecessary differencing or filtering can inject structure (see Slutsky Effect: http://mathworld.wolfram.com/Slutzky-YuleEffect.html, https://www.minneapolisfed.org/publications/the-region/the-meaning-of-slutsky, https://blog.minitab.com/blog/understanding-statistics/the-ghost-pattern-a-haunting-cautionary-tale-about-moving-averages, http://www.sef.hku.hk/~wsuen/ls/immortal/y2c.html). Sometimes a series can have a shift in the mean causing "non-stationarity" ... the correct remedy is to neither difference nor de-trend but to "de-mean" or use a Level Shift variable/filter to render the residual series stationary. Sometimes there is more than 1 trend, requiring a number of trend variables/filters ... none of which have to start at the beginning of the series. Analysis will tell you which of these three approaches (differencing, de-meaning, de-trending) is suitable for your data.
What "more" does differencing (d>0) do in ARIMA than detrend?
Unnecessary differencing or filtering can inject structure (see Slutsky Effect: http://mathworld.wolfram.com/Slutzky-YuleEffect.html, https://www.minneapolisfed.org/publications/the-region/the-mean
What "more" does differencing (d>0) do in ARIMA than detrend? Unnecessary differencing or filtering can inject structure (see Slutsky Effect: http://mathworld.wolfram.com/Slutzky-YuleEffect.html, https://www.minneapolisfed.org/publications/the-region/the-meaning-of-slutsky, https://blog.minitab.com/blog/understanding-statistics/the-ghost-pattern-a-haunting-cautionary-tale-about-moving-averages, http://www.sef.hku.hk/~wsuen/ls/immortal/y2c.html) . Sometimes a series can have a shift in the mean causing "non-statioanarity" ... the correct remedy is to neither difference or de-trend but to "de-mean" or use a Level Shift variable/filter to render the residual series stationary. Sometimes there is more than 1 trend requiring a number of trend variables/filters ... none of which have to start at the beginning if the series. Analysis will tell you which of these three approaches differencing de-meaning de-trending are suitable for your data.
What "more" does differencing (d>0) do in ARIMA than detrend? Unnecessary differencing or filtering can inject structure (see Slutsky Effect: http://mathworld.wolfram.com/Slutzky-YuleEffect.html, https://www.minneapolisfed.org/publications/the-region/the-mean