Dataset preview schema (field: type, min-max length):

idx: int64, 1-56k
question: string, 15-155
answer: string, 2-29.2k
question_cut: string, 15-100
answer_cut: string, 2-200
conversation: string, 47-29.3k
conversation_cut: string, 47-301
10,701
Intuition (geometric or other) of $Var(X) = E[X^2] - (E[X])^2$
You can rearrange as follows: $$ \begin{eqnarray} Var(X) &=& E[X^2] - (E[X])^2\\ E[X^2] &=& (E[X])^2 + Var(X) \end{eqnarray} $$ Then, interpret as follows: the expected square of a random variable is equal to the square of its mean plus the expected squared deviation from its mean.
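The rearranged identity can be verified numerically; here is a short Python sketch (the sample is made up purely for illustration):

```python
import random

# Hypothetical sample, used only to illustrate the identity.
random.seed(0)
xs = [random.gauss(10, 3) for _ in range(100_000)]
n = len(xs)

mean = sum(xs) / n                           # E[X]
mean_sq = sum(x * x for x in xs) / n         # E[X^2]
var = sum((x - mean) ** 2 for x in xs) / n   # Var(X), population form

# The identity: E[X^2] = (E[X])^2 + Var(X), exact up to float rounding.
assert abs(mean_sq - (mean ** 2 + var)) < 1e-6
```

The identity is algebraic, so it holds for any sample, not just this one.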
10,702
Intuition (geometric or other) of $Var(X) = E[X^2] - (E[X])^2$
Sorry for not having the skill to elaborate and provide a proper answer, but I think the answer lies in the classical-mechanics concept of moments, especially the conversion between zero-centred "raw" moments and mean-centred central moments. Bear in mind that variance is the second-order central moment of a random variable.
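The raw-to-central conversion mentioned above can be made concrete with a short Python sketch (made-up data; the conversion formulas for the second and third moments are standard):

```python
import random

# Illustrative sample only.
random.seed(1)
xs = [random.gauss(2.0, 1.5) for _ in range(50_000)]
n = len(xs)

m1 = sum(xs) / n                  # first raw moment (the mean)
m2 = sum(x ** 2 for x in xs) / n  # second raw moment
m3 = sum(x ** 3 for x in xs) / n  # third raw moment

# Central moments computed directly...
c2 = sum((x - m1) ** 2 for x in xs) / n   # variance
c3 = sum((x - m1) ** 3 for x in xs) / n

# ...match the standard raw-to-central conversion formulas.
assert abs(c2 - (m2 - m1 ** 2)) < 1e-6
assert abs(c3 - (m3 - 3 * m1 * m2 + 2 * m1 ** 3)) < 1e-6
```

The first assertion is exactly the identity in the question: the variance is the second raw moment minus the squared mean.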
10,703
Intuition (geometric or other) of $Var(X) = E[X^2] - (E[X])^2$
The general intuition is that you can relate these moments using the Pythagorean Theorem (PT) in a suitably defined vector space, by showing that two of the moments are perpendicular and the third is the hypotenuse. The only algebra needed is to show that the two legs are indeed orthogonal. For the sake of the following I'll assume you meant sample means and variances for computation purposes rather than moments for full distributions. That is: $$ \begin{array}{rcll} E[X] &=& \frac{1}{n}\sum x_i,& \rm{mean,\ first\ raw\ sample\ moment}\\ E[X^2] &=& \frac{1}{n}\sum x^2_i,& \rm{second\ sample\ moment\ (non-central)}\\ Var(X) &=& \frac{1}{n}\sum (x_i - E[X])^2,& \rm{variance,\ second\ central\ sample\ moment} \end{array} $$ (where all sums are over $n$ items). For reference, the elementary proof of $Var(X) = E[X^2] - E[X]^2$ is just symbol pushing: $$ \begin{eqnarray} Var(X) &=& \frac{1}{n}\sum (x_i - E[X])^2\\ &=& \frac{1}{n}\sum (x^2_i - 2 E[X]x_i + E[X]^2)\\ &=& \frac{1}{n}\sum x^2_i - \frac{2}{n} E[X] \sum x_i + \frac{1}{n}\sum E[X]^2\\ &=& E[X^2] - 2 E[X]^2 + \frac{1}{n} n E[X]^2\\ &=& E[X^2] - E[X]^2\\ \end{eqnarray} $$ There's little meaning here, just elementary manipulation of algebra. One might notice that $E[X]$ is a constant inside the summation, but that is about it. Now in the vector-space/geometrical interpretation, what we'll show is the slightly rearranged equation that corresponds to PT: $$ \begin{eqnarray} Var(X) + E[X]^2 &=& E[X^2] \end{eqnarray} $$ So consider $X$, the sample of $n$ items, as a vector in $\mathbb{R}^n$, and let's create two vectors $E[X]{\bf 1}$ and $X-E[X]{\bf 1}$. The vector $E[X]{\bf 1}$ has the mean of the sample as every one of its coordinates. The vector $X-E[X]{\bf 1}$ is $\langle x_1-E[X], \dots, x_n-E[X]\rangle$. 
These two vectors are perpendicular because their dot product turns out to be 0: $$ \begin{eqnarray} E[X]{\bf 1}\cdot(X-E[X]{\bf 1}) &=& \sum E[X](x_i-E[X])\\ &=& \sum (E[X]x_i-E[X]^2)\\ &=& E[X]\sum x_i - \sum E[X]^2\\ &=& n E[X]E[X] - n E[X]^2\\ &=& 0\\ \end{eqnarray} $$ So the two vectors are perpendicular, which means they are the two legs of a right triangle. Then by PT (which holds in $\mathbb{R}^n$), the sum of the squares of the lengths of the two legs equals the square of the hypotenuse. The hypotenuse is $X$ itself, since $E[X]{\bf 1} + (X-E[X]{\bf 1}) = X$. By the same algebra used in the boring algebraic proof at the top, the squared lengths (where squaring is the dot product) are $n\,Var(X)$ for the leg $X-E[X]{\bf 1}$, $n\,E[X]^2$ for the leg $E[X]{\bf 1}$, and $n\,E[X^2]$ for the hypotenuse $X$; dividing by $n$ gives $$Var(X) + E[X]^2 = E[X^2].$$ The interesting part about this interpretation is the conversion from a sample of $n$ items from a univariate distribution to a vector in a space of $n$ dimensions. This is similar to $n$ bivariate samples being interpreted as two samples in $n$ variables. In one sense that is enough: the right triangle comes from the vectors, and $E[X^2]$ pops out as the hypotenuse. We gave an interpretation (vectors) for these values and showed they correspond. That's cool enough, but not particularly enlightening, either statistically or geometrically. It wouldn't really say why, and it would be a lot of extra conceptual machinery to, in the end, mostly reproduce the purely algebraic proof we already had at the beginning. Another interesting part is that the mean and variance, though they intuitively measure center and spread in one dimension, are orthogonal in $n$ dimensions. What does that mean, that they're orthogonal? I don't know! Are there other moments that are orthogonal? Is there a larger system of relations that includes this orthogonality? Central moments vs. non-central moments? I don't know!
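Both facts, the orthogonality of the two legs and the resulting Pythagorean identity, can be checked numerically; here is a Python sketch with made-up data:

```python
import random

# Hypothetical sample X, viewed as a vector in R^n.
random.seed(2)
X = [random.uniform(-5, 5) for _ in range(1000)]
n = len(X)
m = sum(X) / n

mean_vec = [m] * n               # E[X]*1
dev_vec = [x - m for x in X]     # X - E[X]*1

# The two legs are perpendicular: their dot product is (numerically) zero.
dot = sum(a * b for a, b in zip(mean_vec, dev_vec))
assert abs(dot) < 1e-9

# Squared lengths, scaled by 1/n, give Var(X) + E[X]^2 = E[X^2].
hyp2 = sum(x * x for x in X) / n           # E[X^2]
leg1 = sum(v * v for v in dev_vec) / n     # Var(X)
leg2 = sum(v * v for v in mean_vec) / n    # E[X]^2
assert abs(hyp2 - (leg1 + leg2)) < 1e-9
```

The same check works for any sample, since both facts are algebraic identities.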
10,704
Detecting changes in time series (R example)
You could use time series outlier detection to detect changes in a time series. Tsay's and Chen and Liu's procedures are popular time series outlier detection methods; see my earlier question on this site. R's tsoutliers package uses Chen and Liu's method for detecting outliers. SAS/SPSS/Autobox can also do this. See below for the R code to detect changes in the time series:

    library("tsoutliers")
    dat.ts <- ts(dat.change, frequency = 1)
    data.ts.outliers <- tso(dat.ts)
    data.ts.outliers
    plot(data.ts.outliers)

The tso function in the tsoutliers package identifies the following outliers (you can read the documentation to find out the outlier types):

    Outliers:
      type ind time coefhat   tstat
    1   TC  42   42 -2.9462 -10.068
    2   AO  43   43  1.0733   4.322
    3   AO  45   45 -1.2113  -4.849
    4   TC  47   47  1.0143   3.387
    5   AO  51   51  0.9002   3.433
    6   AO  52   52 -1.3455  -5.165
    7   AO  56   56  0.9074   3.710
    8   LS  62   62  1.1284   3.717
    9   AO  67   67 -1.3503  -5.502

The package also provides nice plots (see below): the plot shows where the outliers are, and also what would have happened if there were no outliers. I have also used the R package strucchange to detect level shifts. As an example on your data:

    library("strucchange")
    breakpoints(dat.ts ~ 1)

The program correctly identifies breakpoints, or structural changes:

    Optimal 4-segment partition:
    Call:
    breakpoints.formula(formula = dat.ts ~ 1)
    Breakpoints at observation number:
    17 41 87
    Corresponding to breakdates:
    17 41 87

Hope this helps.
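The idea behind the breakpoints(dat.ts ~ 1) call, fitting piecewise-constant means, can be sketched in Python for readers without R. This is a deliberately crude single-breakpoint version on synthetic data, not the full dynamic-programming algorithm strucchange implements:

```python
# Sketch: choose the split point that minimizes the total within-segment
# squared error when each segment is fit by its own mean.

def one_breakpoint(y):
    """Return the index that best splits y into two constant-mean segments."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)
    best_k, best_cost = None, float("inf")
    for k in range(2, len(y) - 2):   # keep both segments non-trivial
        cost = sse(y[:k]) + sse(y[k:])
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

# Synthetic series with a single level shift at index 30.
y = [0.0] * 30 + [3.0] * 30
assert one_breakpoint(y) == 30
```

Extending this to multiple breakpoints is what makes the real algorithms (and their penalty/segment-count selection) non-trivial.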
10,705
Detecting changes in time series (R example)
My response using AUTOBOX is quite similar to @forecaster's, but with a much simpler model. Box, Einstein and others have reflected on keeping solutions simple, but not too simple. The automatically developed model is shown here [model image]. The actual-and-cleansed plot is very similar [plot image]. A plot of the residuals (which should always be shown) is here [plot image], along with the mandatory ACF of the residuals [plot image]. The statistics of the residuals are always useful in making comparisons between "dueling models" [table image]. The Actual/Fit/Forecast graph is here [plot image].
10,706
Detecting changes in time series (R example)
I would approach this problem from the following perspectives. These are just some ideas off the top of my head, so please take them with a grain of salt. Nevertheless, I hope that this will be useful.

1. Time series clustering. For example, by using the popular dynamic time warping (DTW) or alternative approaches. Please see my related answers on DTW for classification/clustering and on DTW or alternatives for uneven time series. The idea is to cluster time series into categories such as "normal" and "abnormal".
2. Entropy measures. See my relevant answer on time series entropy measures. The idea is to determine the entropy of a "normal" time series and then compare it with that of other time series (this idea assumes that entropy deviates in case of deviation from "normality").
3. Anomaly detection. See my relevant answer on anomaly detection (it includes R resources). The idea is to directly detect anomalies via various methods (please see the references). The Early Warning Signals (EWS) Toolbox and the R package earlywarnings seem especially promising.
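To make the clustering idea concrete, here is a minimal DTW distance in Python. This is only a sketch (real work would use a dedicated DTW/clustering library), and the toy series below are made up:

```python
# Classic O(len(a)*len(b)) dynamic time warping with absolute-difference cost.

def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

# Identical series are at distance 0, and a time-shifted copy stays close,
# which is why DTW suits clustering into "normal" vs "abnormal" shapes.
normal = [0, 1, 2, 3, 2, 1, 0]
shifted = [0, 0, 1, 2, 3, 2, 1]
abnormal = [0, 5, 0, 5, 0, 5, 0]
assert dtw(normal, normal) == 0.0
assert dtw(normal, shifted) < dtw(normal, abnormal)
```

A clustering step would then feed the pairwise DTW distances into any distance-based clusterer (hierarchical, k-medoids, etc.).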
10,707
Detecting changes in time series (R example)
Lots of excellent answers have been given here. Apparently, the results depend largely on the model chosen. With that said, allow me to add one more possibility to this old question, based on a Bayesian time series decomposition model I developed, available in the R package Rbeast (https://github.com/zhaokg/Rbeast). It assumes a model of the form Y = seasonality + trend + outliers + error; for each component, changepoints are also detected, if any. Here is a test on the given example:

    library(Rbeast)
    out = beast(dat.change, season='none', ocp=8)  # no seasonality, with a max of 8 outlier points
    # out = beast(dat.change, season='none')       # no outlier component is assumed
    plot(out)

Below, the Pr(tcp) subplot gives the probability of having sudden changes (i.e., changepoints) over time. bcp is another great R package for Bayesian change detection.
10,708
Detecting changes in time series (R example)
It would seem that your problem would be greatly simplified if you detrended your data. It appears to decline linearly. Once you detrend the data, you could apply a wide variety of tests for non-stationarity.
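The detrend-first suggestion can be sketched in Python with illustrative data: fit a least-squares line in time and work with the residuals.

```python
# Least-squares linear detrend: fit y = a + b*t and return the residuals.

def detrend(y):
    n = len(y)
    t = list(range(n))
    tbar, ybar = sum(t) / n, sum(y) / n
    b = sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y)) / \
        sum((ti - tbar) ** 2 for ti in t)
    a = ybar - b * tbar
    return [yi - (a + b * ti) for ti, yi in zip(t, y)]

# A purely linear decline detrends to (numerically) zero residuals;
# on real data the residuals are what you would test for non-stationarity.
resid = detrend([10.0 - 0.5 * i for i in range(20)])
assert max(abs(r) for r in resid) < 1e-9
```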
10,709
Detecting changes in time series (R example)
All fine answers, but here is a simple one, as suggested by @MrMeritology, which appears to work well for the time series in question, and likely for many other "similar" data sets. Here is an R snippet producing the self-explanatory graphs below:

    outl = rep(NA, length(dat.change))
    detr = c(0, diff(dat.change))
    ix = abs(detr) > 2 * IQR(detr)
    outl[ix] = dat.change[ix]
    plot(dat.change, t='l', lwd=2, main="dat.change TS")
    points(outl, col=2, pch=18)
    plot(detr, col=4, main="detrended TS", t='l', lwd=2)
    acf(detr, main="ACF of detrended TS")
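As a cross-check, here is a rough Python port of the same diff-then-IQR rule (a sketch only: the quantile helper is a simple linear-interpolation version, not R's default, and the series is made up):

```python
# Flag points whose first difference is extreme relative to the IQR
# of all first differences, mirroring abs(detr) > 2*IQR(detr).

def iqr(v):
    s = sorted(v)
    def q(p):  # simple linear-interpolation quantile
        k = p * (len(s) - 1)
        f = int(k)
        return s[f] + (k - f) * (s[min(f + 1, len(s) - 1)] - s[f])
    return q(0.75) - q(0.25)

def flag_changes(y):
    detr = [0.0] + [b - a for a, b in zip(y, y[1:])]   # first differences
    cut = 2 * iqr(detr)
    return [i for i, d in enumerate(detr) if abs(d) > cut]

# A clean level shift at index 20 is the only flagged point.
y = [0.0] * 20 + [4.0] * 20
assert flag_changes(y) == [20]
```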
10,710
Intuitive explanation of the $(X^TX)^{-1}$ term in the variance of least square estimator
Consider a simple regression without a constant term, and where the single regressor is centered on its sample mean. Then $X'X$ is ($n$ times) its sample variance, and $(X'X)^{-1}$ its reciprocal. So the higher the variance (i.e., variability) of the regressor, the lower the variance of the coefficient estimator: the more variability we have in the explanatory variable, the more accurately we can estimate the unknown coefficient. Why? Because the more varying a regressor is, the more information it contains. When there are many regressors, this generalizes to the inverse of their variance-covariance matrix, which also takes into account the co-variability of the regressors. In the extreme case where $X'X$ is diagonal, the precision for each estimated coefficient depends only on the variance/variability of the associated regressor (given the variance of the error term).
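The scalar case described above can be checked numerically. In this Python sketch (made-up data), $X'X$ for a single centred regressor is $n$ times the sample variance, and doubling the regressor's spread quarters the coefficient variance:

```python
import random

# Hypothetical regressor values.
random.seed(3)
raw = [random.uniform(0, 10) for _ in range(500)]
n = len(raw)
xbar = sum(raw) / n
x = [v - xbar for v in raw]            # centred regressor

xtx = sum(v * v for v in x)            # X'X, here a scalar
sample_var = xtx / n
assert abs(xtx - n * sample_var) < 1e-8   # X'X = n * Var(x)

sigma2 = 1.0
var_beta = sigma2 / xtx                # sigma^2 (X'X)^{-1}

# Doubling the spread of the regressor quarters Var(beta_hat).
x2 = [2 * v for v in x]
var_beta2 = sigma2 / sum(v * v for v in x2)
assert abs(var_beta2 * 4 - var_beta) < 1e-12
```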
10,711
Intuitive explanation of the $(X^TX)^{-1}$ term in the variance of least square estimator
A simple way of viewing $\sigma^2 \left(\mathbf{X}^{T} \mathbf{X} \right)^{-1}$ is as the matrix (multivariate) analogue of $\frac{\sigma^2}{\sum_{i=1}^n \left(X_i-\bar{X}\right)^2}$, which is the variance of the slope coefficient in simple OLS regression. One can even get $\frac{\sigma^2}{\sum_{i=1}^n X_i^2}$ for that variance by omitting the intercept in the model, i.e. by performing regression through the origin. From either one of these formulas it may be seen that larger variability of the predictor variable will in general lead to more precise estimation of its coefficient. This is the idea often exploited in the design of experiments, where by choosing values for the (non-random) predictors, one tries to make the determinant of $\left(\mathbf{X}^{T} \mathbf{X} \right)$ as large as possible, the determinant being a measure of variability.
10,712
Intuitive explanation of the $(X^TX)^{-1}$ term in the variance of least square estimator
I'll take a different approach towards developing the intuition that underlies the formula $\text{Var}\,\hat{\beta}=\sigma^2 (X'X)^{-1}$. When developing intuition for the multiple regression model, it's helpful to consider the bivariate linear regression model, viz., $$y_i=\alpha+\beta x_i + \varepsilon_i, \quad i=1,\ldots,n.$$ $\alpha+\beta x_i$ is frequently called the deterministic contribution to $y_i$, and $\varepsilon_i$ is called the stochastic contribution. Expressed in terms of deviations from the sample means $(\bar{x},\bar{y})$, this model may also be written as $$(y_i-\bar{y}) = \beta(x_i-\bar{x})+(\varepsilon_i-\bar{\varepsilon}), \quad i=1,\ldots,n.$$ To help develop the intuition, we will assume that the simplest Gauss-Markov assumptions are satisfied: $x_i$ nonstochastic, $\sum_{i=1}^n(x_i-\bar{x})^2>0$ for all $n$, and $\varepsilon_i \sim \text{iid}(0,\sigma^2)$ for all $i=1,\ldots,n$. As you already know very well, these conditions guarantee that $$\text{Var}\,\hat{\beta}=\tfrac{1}{n}\sigma^2(\text{Var}\,x)^{-1}\text{,}$$ where $\text{Var}\,x$ is the sample variance of $x$. In words, this formula makes three claims: "The variance of $\hat{\beta}$ is inversely proportional to the sample size $n$, it is directly proportional to the variance of $\varepsilon$, and it is inversely proportional to the variance of $x$." Why should doubling the sample size, ceteris paribus, cause the variance of $\hat{\beta}$ to be cut in half? This result is intimately linked to the iid assumption applied to $\varepsilon$: Since the individual errors are assumed to be iid, each observation should be treated ex ante as being equally informative. And, doubling the number of observations doubles the amount of information about the parameters that describe the (assumed linear) relationship between $x$ and $y$. Having twice as much information cuts the uncertainty about the parameters in half. 
Similarly, it should be straightforward to develop one's intuition as to why doubling $\sigma^2$ also doubles the variance of $\hat{\beta}$. Let's turn, then, to your main question, which is about developing intuition for the claim that the variance of $\hat{\beta}$ is inversely proportional to the variance of $x$. To formalize notions, let us consider two separate bivariate linear regression models, called Model $(1)$ and Model $(2)$ from now on. We will assume that both models satisfy the assumptions of the simplest form of the Gauss-Markov theorem and that the models share the exact same values of $\alpha$, $\beta$, $n$, and $\sigma^2$. Under these assumptions, it is easy to show that $\text{E}\,\hat{\beta}{}^{(1)}=\text{E}\,\hat{\beta}{}^{(2)}=\beta$; in words, both estimators are unbiased. Crucially, we will also assume that whereas $\bar{x}^{(1)}=\bar{x}^{(2)}=\bar{x}$, $\text{Var}\,x^{(1)}\ne \text{Var}\,x^{(2)}$. Without loss of generality, let us assume that $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$. Which estimator of $\hat{\beta}$ will have the smaller variance? Put differently, will $\hat{\beta}{}^{(1)}$ or $\hat{\beta}{}^{(2)}$ be closer, on average, to $\beta$? From the earlier discussion, we have $\text{Var}\,\hat{\beta}{}^{(k)} =\tfrac{1}{n}\sigma^2(\text{Var}\,x^{(k)})^{-1}$ for $k=1,2$. Because $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$ by assumption, it follows that $\text{Var}\,\hat{\beta}{}^{(1)} <\text{Var}\,\hat{\beta}{}^{(2)}$. What, then, is the intuition behind this result? Because by assumption $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$, on average each $x_i^{(1)}$ will be farther away from $\bar{x}$ than is the case, on average, for $x_i^{(2)}$. Let us denote the expected average absolute difference between $x_i$ and $\bar{x}$ by $d_x$. The assumption that $\text{Var}\,x^{(1)}>\text{Var}\,x^{(2)}$ implies that $d_x^{(1)} >d_x^{(2)}$. 
The bivariate linear regression model, expressed in deviations from means, states that $d_y = \beta d_x^{(1)}$ for Model $(1)$ and $d_y = \beta d_x^{(2)}$ for Model $(2)$. If $\beta\ne0$, this means that the deterministic component of Model $(1)$, $\beta d_x^{(1)}$, has a greater influence on $d_y$ than does the deterministic component of Model $(2)$, $\beta d_x^{(2)}$. Recall that both models are assumed to satisfy the Gauss-Markov assumptions, that the error variances are the same in both models, and that $\beta^{(1)}=\beta^{(2)}=\beta$. Since Model $(1)$ imparts more information about the contribution of the deterministic component of $y$ than does Model $(2)$, it follows that the precision with which the deterministic contribution can be estimated is greater for Model $(1)$ than is the case for Model $(2)$. The converse of greater precision is a lower variance of the point estimate of $\beta$. It is reasonably straightforward to generalize the intuition obtained from studying the simple regression model to the general multiple linear regression model. The main complication is that instead of comparing scalar variances, it is necessary to compare the "size" of variance-covariance matrices. Having a good working knowledge of determinants, traces and eigenvalues of real symmetric matrices comes in very handy at this point :-)
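Both claims — the variance of $\hat{\beta}$ halving when $n$ doubles, and shrinking as the spread of $x$ grows — are easy to check by simulation. A minimal NumPy sketch (the sample sizes, scales, true coefficients and replication count are arbitrary illustrative choices):

```python
import numpy as np

def slope_variance(n, x_scale, sigma=1.0, reps=2000, seed=0):
    """Monte Carlo variance of the OLS slope in y = a + b*x + eps."""
    rng = np.random.default_rng(seed)
    x = x_scale * np.linspace(-1, 1, n)      # fixed (nonstochastic) design
    slopes = []
    for _ in range(reps):
        y = 2.0 + 3.0 * x + rng.normal(0, sigma, n)
        b_hat = np.cov(x, y, bias=True)[0, 1] / np.var(x)
        slopes.append(b_hat)
    return np.var(slopes)

v1 = slope_variance(n=50, x_scale=1.0)
v2 = slope_variance(n=100, x_scale=1.0)   # doubling n roughly halves the variance
v3 = slope_variance(n=50, x_scale=2.0)    # doubling the spread of x cuts it by ~4
print(v1, v2, v3)
```

All three estimates track the formula $\text{Var}\,\hat{\beta}=\tfrac{1}{n}\sigma^2(\text{Var}\,x)^{-1}$ up to Monte Carlo noise.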
10,713
Intuitive explanation of the $(X^TX)^{-1}$ term in the variance of least square estimator
Does a linear transformation of a Gaussian random variable help? Using the rule that if $x \sim \mathcal{N}(\mu,\Sigma)$, then $Ax + b \sim \mathcal{N}(A\mu + b,A\Sigma A^T)$. Assuming that $Y = X\beta + \epsilon$ is the underlying model and $\epsilon \sim \mathcal{N}(0, \sigma^2 I)$, $$ \therefore Y \sim \mathcal{N}(X\beta,\sigma^2 I)\\ X^TY \sim \mathcal{N}(X^TX\beta, \sigma^2 X^TX)\\ (X^TX)^{-1}X^TY \sim \mathcal{N}[\beta,\sigma^2(X^TX)^{-1}] $$ So $(X^TX)^{-1}X^T$ is just a complicated scaling matrix that transforms the distribution of $Y$. Hope that was helpful.
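The last line can be checked numerically: simulate many draws of $Y$, apply the linear map $(X^TX)^{-1}X^T$, and compare the sample covariance of the result with $\sigma^2(X^TX)^{-1}$. A small NumPy sketch (the design matrix, $\beta$, $\sigma$ and draw count are arbitrary examples):

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 200, 0.5
X = np.column_stack([np.ones(n), np.linspace(0, 1, n)])  # intercept + one regressor
beta = np.array([1.0, -2.0])
A = np.linalg.solve(X.T @ X, X.T)                        # (X'X)^{-1} X'

# Draw many Y's, apply the linear map, and estimate the covariance of beta_hat.
draws = np.array([A @ (X @ beta + rng.normal(0, sigma, n)) for _ in range(5000)])
empirical = np.cov(draws, rowvar=False)
theoretical = sigma**2 * np.linalg.inv(X.T @ X)
print(np.max(np.abs(empirical - theoretical)))  # small
```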
10,714
Intuitive explanation of the $(X^TX)^{-1}$ term in the variance of least square estimator
This builds on @Alecos Papadopoulos' answer. Recall that the result of a least-squares regression doesn't depend on the units of measurement of your variables. Suppose your X-variable is a length measurement, given in inches. Then rescaling X, say by multiplying by 2.54 to change the unit to centimeters, doesn't materially affect things. If you refit the model, the new regression estimate will be the old estimate divided by 2.54. The $X'X$ matrix reflects the scale of measurement of X (it is, up to centering and a factor of $n$, the variance of X). If you change the scale, you have to reflect this in your estimate of $\beta$, and this is done by multiplying by the inverse of $X'X$.
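The rescaling argument is easy to verify: refitting after multiplying $x$ by 2.54 divides the slope estimate by exactly 2.54. A quick NumPy sketch (the data are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
x_inches = rng.uniform(10, 40, 100)
y = 5 + 0.8 * x_inches + rng.normal(0, 1, 100)

def ols_slope(x, y):
    """Slope from a least-squares fit of y on an intercept and x."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

b_in = ols_slope(x_inches, y)
b_cm = ols_slope(2.54 * x_inches, y)      # same data, measured in centimeters
print(b_in, b_cm, b_in / b_cm)            # the ratio is 2.54
```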
10,715
Intuitive explanation of the $(X^TX)^{-1}$ term in the variance of least square estimator
Say we have $n$ observations (or sample size) and $p$ parameters. The covariance matrix $\operatorname{Var}(\hat{\beta})$ of the estimated parameters $\hat{\beta}_1,\hat{\beta}_2$ etc. is a representation of the accuracy of the estimated parameters. If in an ideal world the data could be perfectly described by the model, then the noise will be $\sigma^2= 0$. Now, the diagonal entries of $\operatorname{Var}(\hat{\beta})$ correspond to $\operatorname{Var}(\hat{\beta_1}),\operatorname{Var}(\hat{\beta_2})$ etc. The derived formula for the variance agrees with the intuition that if the noise is lower, the estimates will be more accurate. In addition, as the number of measurements gets larger, the variance of the estimated parameters will decrease. So, overall the absolute value of the entries of $X^TX$ will be higher, as the number of columns of $X^T$ is $n$ and the number of rows of $X$ is $n$, and each entry of $X^TX$ is a sum of $n$ product pairs. The absolute value of the entries of the inverse $(X^TX)^{-1}$ will be lower. Hence, even if there is a lot of noise, we can still reach good estimates $\hat{\beta_i}$ of the parameters if we increase the sample size $n$. I hope this helps. Reference: Section 7.3 on Least squares: Cosentino, Carlo, and Declan Bates. Feedback control in systems biology. Crc Press, 2011.
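The claim that the entries of $(X^TX)^{-1}$ shrink as the sample size grows can be seen directly. A small sketch, using an arbitrary one-regressor design drawn uniformly on $[0,1]$:

```python
import numpy as np

def inv_xtx_diag(n, seed=0):
    """Diagonal of (X'X)^{-1} for an intercept plus one uniform regressor."""
    rng = np.random.default_rng(seed)
    X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])
    return np.diag(np.linalg.inv(X.T @ X))

d_small = inv_xtx_diag(100)
d_large = inv_xtx_diag(1000)
print(d_small, d_large)   # each diagonal entry shrinks roughly tenfold
```

Since $\operatorname{Var}(\hat{\beta})=\sigma^2(X^TX)^{-1}$, larger $n$ means more precise estimates even when the noise $\sigma^2$ is fixed.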
10,716
Real-life examples of Markov Decision Processes
A Markovian Decision Process indeed has to do with going from one state to another and is mainly used for planning and decision making. The theory Just repeating the theory quickly, an MDP is: $$\text{MDP} = \langle S,A,T,R,\gamma \rangle$$ where $S$ are the states, $A$ the actions, $T$ the transition probabilities (i.e. the probabilities $Pr(s'|s, a)$ to go from one state to another given an action), $R$ the rewards (given a certain state, and possibly action), and $\gamma$ is a discount factor that is used to reduce the importance of future rewards. So in order to use it, you need to have predefined: States: these can refer to for example grid maps in robotics, or for example door open and door closed. Actions: a fixed set of actions, such as for example going north, south, east, etc for a robot, or opening and closing a door. Transition probabilities: the probability of going from one state to another given an action. For example, what is the probability of an open door if the action is open. In a perfect world the latter could be 1.0, but if it is a robot, it could have failed in handling the doorknob correctly. Another example in the case of a moving robot would be the action north, which in most cases would bring it in the grid cell north of it, but in some cases could have moved too much and reached the next cell for example. Rewards: these are used to guide the planning. In the case of the grid example, we might want to go to a certain cell, and the reward will be higher if we get closer. In the case of the door example, an open door might give a high reward. Once the MDP is defined, a policy can be learned by doing Value Iteration or Policy Iteration which calculates the expected reward for each of the states. The policy then gives per state the best (given the MDP model) action to do. In summary, an MDP is useful when you want to plan an efficient sequence of actions in which your actions may not always be 100% effective. 
Your questions Can it be used to predict things? I would call it planning, not predicting like regression for example. If so what types of things? See examples. Can it find patterns among infinite amounts of data? MDPs are used to do Reinforcement Learning, to find patterns you need Unsupervised Learning. And no, you cannot handle an infinite amount of data. Actually, the complexity of finding a policy grows exponentially with the number of states $|S|$. What can this algorithm do for me. See examples. Examples of Applications of MDPs White, D.J. (1993) mentions a large list of applications: Harvesting: how many members of a population have to be left for breeding. Agriculture: how much to plant based on weather and soil state. Water resources: keep the correct water level at reservoirs. Inspection, maintenance and repair: when to replace/inspect based on age, condition, etc. Purchase and production: how much to produce based on demand. Queues: reduce waiting time. ... Finance: deciding how much to invest in stock. Robotics: A dialogue system to interact with people. Robot bartender. Robot exploration for navigation. .. And there are quite a few more models. An even more interesting model is the Partially Observable Markovian Decision Process in which states are not completely visible, and instead, observations are used to get an idea of the current state, but this is out of the scope of this question. Additional Information A stochastic process is Markovian (or has the Markov property) if the conditional probability distribution of future states depends only on the current state, and not on previous ones (i.e. not on a list of previous states).
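The Value Iteration step mentioned above can be sketched on a tiny MDP. Below is a hypothetical two-state "door" example — the states, actions, transition probabilities, rewards and discount factor are all made up for illustration:

```python
import numpy as np

# States: 0 = door closed, 1 = door open. Actions: 0 = push, 1 = wait.
# T[s, a, s'] = Pr(s' | s, a); pushing a closed door succeeds 80% of the time.
T = np.array([
    [[0.2, 0.8], [1.0, 0.0]],   # from "closed"
    [[0.0, 1.0], [0.0, 1.0]],   # from "open": stays open
])
R = np.array([
    [-1.0, 0.0],                # pushing costs effort; waiting is free
    [10.0, 10.0],               # any action taken in the open state is rewarded
])
gamma = 0.9

V = np.zeros(2)
for _ in range(100):                # value iteration
    Q = R + gamma * T @ V           # Q[s, a] = R[s, a] + gamma * sum_s' T[s,a,s'] * V[s']
    V = Q.max(axis=1)

policy = Q.argmax(axis=1)
print(V, policy)   # the policy pushes when the door is closed
```

Once converged, `policy` gives per state the best action under this (made-up) model, exactly as described above.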
10,717
Real-life examples of Markov Decision Processes
Bonus: It also feels like MDP's is all about getting from one state to another, is this true? Since MDP is about making future decisions by taking an action at present, yes! It's about going from the present state to a more rewarding future state (one that yields more reward). To answer the comment by @Suhail Gupta: So any process that has the states, actions, transition probabilities and rewards defined would be termed as Markovian? The process to be called Markovian should also follow the Markov property along with what you have mentioned; the property says, "the future state depends upon the action taken in the present state and is not affected by the past states."
10,718
Singular gradient error in nls with correct starting values
I've got bitten by this recently. My intentions were the same, make some artificial model and test it. The main reason is the one given by @whuber and @marco. Such a model is not identified. To see that, remember that NLS minimizes the function: $$\sum_{i=1}^n(y_i-a-br^{x_i-m}-cx_i)^2$$ Say it is minimized by the set of parameters $(a,b,m,r,c)$. It is not hard to see that the set of parameters $(a,br^{-m},0,r,c)$ will give the same value of the function to be minimized. Hence the model is not identified, i.e. there is no unique solution. It is also not hard to see why the gradient is singular. Denote $$f(a,b,r,m,c,x)=a+br^{x-m}+cx$$ Then $$\frac{\partial f}{\partial b}=r^{x-m}$$ $$\frac{\partial f}{\partial m}=-b(\ln r)r^{x-m}$$ and we get that for all $x$ $$b\ln r\frac{\partial f}{\partial b}+\frac{\partial f}{\partial m}=0.$$ Hence the matrix \begin{align} \begin{pmatrix} \nabla f(x_1)\\\\ \vdots\\\\ \nabla f(x_n) \end{pmatrix} \end{align} will not be of full rank and this is why nls will give the singular gradient message. I've spent over a week looking for bugs in my code elsewhere till I noticed that the main bug was in the model :)
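The non-identifiability can be confirmed numerically: the parameter sets $(a,b,m,r,c)$ and $(a,br^{-m},0,r,c)$ produce identical fitted values, and the Jacobian of $f$ with respect to the parameters is rank-deficient. A short NumPy sketch (the parameter values and grid are arbitrary):

```python
import numpy as np

def f(x, a, b, r, m, c):
    return a + b * r**(x - m) + c * x

x = np.linspace(0, 5, 50)
a, b, r, m, c = 1.0, 2.0, 0.8, 3.0, 0.5

y1 = f(x, a, b, r, m, c)
y2 = f(x, a, b * r**(-m), r, 0.0, c)       # reparameterized: same curve
print(np.allclose(y1, y2))                  # True

# Jacobian columns: df/da, df/db, df/dr, df/dm, df/dc
J = np.column_stack([
    np.ones_like(x),
    r**(x - m),
    b * (x - m) * r**(x - m - 1),
    -b * np.log(r) * r**(x - m),
    x,
])
print(np.linalg.matrix_rank(J))             # 4, not 5: the gradient is singular
```

The df/dm column is a constant multiple of the df/db column, which is exactly the linear dependence derived above.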
10,719
Singular gradient error in nls with correct starting values
The answers above are, of course, correct. For what it's worth, in addition to the explanations given, if you are trying this on an artificial data set, according to the nls help page found at: http://stat.ethz.ch/R-manual/R-patched/library/stats/html/nls.html R's nls won't be able to handle it. The help page specifically states: Warning Do not use nls on artificial "zero-residual" data. The nls function uses a relative-offset convergence criterion that compares the numerical imprecision at the current parameter estimates to the residual sum-of-squares. This performs well on data of the form y = f(x, θ) + eps (with var(eps) > 0). It fails to indicate convergence on data of the form y = f(x, θ) because the criterion amounts to comparing two components of the round-off error. If you wish to test nls on artificial data please add a noise component, as shown in the example below. So, no noise == no good for R's nls.
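The remedy the help page prescribes — add a noise component before testing on artificial data — is just this pattern. A sketch of the data-generation step (shown here in Python rather than R; the model and parameter values are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(0, 10, 100)
theta = (2.0, 0.3)                       # hypothetical "true" parameters

f = lambda x, a, b: a * np.exp(b * x)    # some nonlinear model
y_exact = f(x, *theta)                   # zero-residual data: do NOT fit this
y_noisy = y_exact + rng.normal(0, 0.1 * y_exact.std(), x.size)  # fit this instead
print(np.allclose(y_exact, y_noisy))     # False: var(eps) > 0, as required
```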
10,720
What are the implications of scaling the features to xgboost?
XGBoost is not sensitive to monotonic transformations of its features for the same reason that decision trees and random forests are not: the model only needs to pick "cut points" on features to split a node. Splits are not sensitive to monotonic transformations: defining a split on one scale has a corresponding split on the transformed scale. Your confusion stems from misunderstanding $w$. In the section "Model Complexity," the author writes Here $w$ is the vector of scores on leaves... The score measures the weight of the leaf. See the diagram in the "Tree Ensemble" section; the author labels the number below the leaf as the "score." The score is also defined more precisely in the paragraph preceding your expression for $\Omega(f)$: We need to define the complexity of the tree $\Omega(f)$. In order to do so, let us first refine the definition of the tree $f(x)$ as $$f_t(x)=w_{q(x)}, w \in R^T, q:R^d \to \{1,2,\dots,T\}.$$ Here $w$ is the vector of scores on leaves, $q$ is a function assigning each data point to the corresponding leaf, and $T$ is the number of leaves. What this expression is saying is that $q$ is a partitioning function of $R^d$, and $w$ is the weight associated with each partition. Partitioning $R^d$ can be done with coordinate-aligned splits, and coordinate-aligned splits are decision trees. The meaning of $w$ is that it is a "weight" chosen so that the loss of the ensemble with the new tree is lower than the loss of the ensemble without the new tree. This is described in "The Structure Score" section of the documentation. The score for a leaf $j$ is given by $$ w_j^* = -\frac{G_j}{H_j + \lambda} $$ where $G_j$ and $H_j$ are the sums, over the samples in the $j$th leaf, of the first and second partial derivatives of the loss function wrt the prediction for tree $t-1$. (See "Additive Training" for details.)
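The invariance claim in the first paragraph can be demonstrated with a bare-bones squared-error decision stump: the partition chosen by the best split is identical on $x$ and on any monotonic transform of $x$, because the sort order of the feature — and hence the set of candidate partitions — is unchanged. A sketch (the stump and data are simplified stand-ins, not XGBoost's actual split finder):

```python
import numpy as np

def best_split_partition(x, y):
    """Return the left-group sample indices of the best squared-error stump split."""
    order = np.argsort(x)
    x_s, y_s = x[order], y[order]
    best_sse, best_mask = np.inf, None
    for i in range(1, len(x)):                       # candidate split points
        left, right = y_s[:i], y_s[i:]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if sse < best_sse:
            best_sse, best_mask = sse, order[:i]
    return set(best_mask.tolist())

rng = np.random.default_rng(0)
x = rng.uniform(1, 10, 30)
y = (x > 5).astype(float) + rng.normal(0, 0.1, 30)

# The chosen partition is identical on x and on a monotonic transform of x.
print(best_split_partition(x, y) == best_split_partition(np.log(x), y))  # True
```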
What are the implications of scaling the features to xgboost?
Why Levene test of equality of variances rather than F ratio?

You could use an F test to assess the variance of two groups, but using the F test for differences in variance strictly requires that the distributions are normal. Using Levene's test (i.e., absolute values of the deviations from the mean) is more robust, and using the Brown-Forsythe test (i.e., absolute values of the deviations from the median) is even more robust. SPSS is using a good approach here.

Update

In response to the comment below, I want to clarify what I'm trying to say here. The question asks about using "a simple F ratio of the ratio of the variances of the two groups". From this, I understood the alternative to be what is sometimes known as Hartley's test, which is a very intuitive approach to assessing heterogeneity of variance. Although this does use a ratio of variances, it is not the same as that used in Levene's test. Because sometimes it is hard to understand what is meant when it is only stated in words, I will give equations to make this clearer.

Hartley's test: $$ F=\frac{s^2_2}{s^2_1} $$

Levene's test / Brown-Forsythe test: $$ F=\frac{MS_{b/t-levels}}{MS_{w/i-levels}} $$

In all three cases, we have ratios of variances, but the specific variances used differ between them. What makes Levene's test and the Brown-Forsythe test more robust (and also distinct from any other ANOVA) is that they are performed over transformed data, whereas the F ratio of group variances (Hartley's test) uses the raw data. The transformed data in question are the absolute values of the deviations (from the mean, in the case of Levene's test, and from the median, in the case of the Brown-Forsythe test). There are other tests for heterogeneity of variance, but I'm restricting my discussion to these, as I understood them to be the focus of the original question.

The rationale for choosing amongst them is based on their performance if the original data are not truly normal: the F test is sufficiently non-robust that it is not recommended; Levene's test is slightly more powerful than Brown-Forsythe if the data really are normal, but not quite as robust if they aren't. The key citation here is O'Brien (1981), although I could not find an available version on the internet. I apologize if I misunderstood the question or was unclear.
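As a sketch of how the Levene / Brown-Forsythe statistic is built from transformed data (stdlib Python only; the two groups are made up for illustration): each observation is replaced by its absolute deviation from the group's center, and an ordinary one-way ANOVA F ratio is then computed on those deviations.

```python
from statistics import mean, median

def levene_stat(groups, center=mean):
    # transform: absolute deviation from each group's center
    # (center=mean -> Levene; center=median -> Brown-Forsythe)
    z = [[abs(x - center(g)) for x in g] for g in groups]
    k = len(z)                              # number of groups
    n = sum(len(g) for g in z)              # total sample size
    grand = sum(x for g in z for x in g) / n
    zbar = [mean(g) for g in z]
    ss_between = sum(len(g) * (zb - grand) ** 2 for g, zb in zip(z, zbar))
    ss_within = sum((x - zb) ** 2 for g, zb in zip(z, zbar) for x in g)
    # F = MS between levels / MS within levels
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# groups with visibly different spread give a large F
g1 = [1, 2, 3, 4, 5]
g2 = [10, 20, 30, 40, 50]
print(levene_stat([g1, g2]))            # Levene (mean-centered)
print(levene_stat([g1, g2], median))    # Brown-Forsythe (median-centered)
```

Because these toy groups are symmetric, the mean- and median-centered versions agree here; on skewed data they generally differ, which is exactly where the Brown-Forsythe variant earns its extra robustness.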
Bias of moment estimator of lognormal distribution

There is something puzzling in those results, since

the first method provides an unbiased estimator of $\mathbb{E}[X^2]$, namely $$\frac{1}{N}\sum_{i=1}^N X_i^2$$ has $\mathbb{E}[X^2]$ as its mean. Hence the blue dots should be around the expected value (orange curve);

the second method provides a biased estimator of $\mathbb{E}[X^2]$, namely $$\mathbb{E}[\exp(n \hat\mu + n^2 \hat{\sigma}^2/2)]>\exp(n \mu + (n \sigma)^2/2)$$ when $\hat\mu$ and $\hat{\sigma}^2$ are unbiased estimators of $\mu$ and $\sigma^2$ respectively, and it is thus strange that the green dots are aligned with the orange curve.

but they are due to the problem and not to the numerical computations: I repeated the experiment in R and got the following picture with the same colour code and the same sequence of $\mu_T$'s and $\sigma_T$'s, which represents each estimator divided by the true expectation.

Here is the corresponding R code:

    moy1=moy2=rep(0,200)
    mus=0.14*(1:200)
    sigs=sqrt(0.13*(1:200))
    tru=exp(2*mus+2*sigs^2)
    for (t in 1:200){
      x=rnorm(1e5)
      moy1[t]=mean(exp(2*sigs[t]*x+2*mus[t]))
      moy2[t]=exp(2*mean(sigs[t]*x+mus[t])+2*var(sigs[t]*x+mus[t]))}
    plot(moy1/tru,col="blue",ylab="relative mean",xlab="T",cex=.4,pch=19)
    abline(h=1,col="orange")
    lines((moy2/tru),col="green",cex=.4,pch=19)

Hence there is indeed a collapse of the second empirical moment as $\mu$ and $\sigma$ increase, which I would attribute to the enormous increase in the variance of the said second empirical moment as $\mu$ and $\sigma$ increase.

My explanation of this curious phenomenon is that, while $\mathbb{E}[X^2]$ obviously is the mean of $X^2$, it is not a central value: actually the median of $X^2$ is equal to $e^{2\mu}$. When representing the random variable $X^2$ as $\exp\{2\mu+2\sigma\epsilon\}$ where $\epsilon\sim\mathcal{N}(0,1)$, it is clear that, when $\sigma$ is large enough, the random variable $\sigma\epsilon$ is almost never of the magnitude of $\sigma^2$. In other words, if $X$ is $\mathcal{LN}(\mu,\sigma)$,
$$\begin{align*}\mathbb{P}(X^2>\mathbb{E}[X^2])&=\mathbb{P}(\log\{X^2\}>2\mu+2\sigma^2)\\&=\mathbb{P}(\mu+\sigma\epsilon>\mu+\sigma^2)\\&=\mathbb{P}(\epsilon>\sigma)\\ &=1-\Phi(\sigma)\end{align*}$$
which can be arbitrarily small.
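The closing identity $\mathbb{P}(X^2>\mathbb{E}[X^2])=1-\Phi(\sigma)$ is easy to check by simulation in stdlib Python (the parameters $\mu=1$, $\sigma=2$ are chosen arbitrarily for illustration):

```python
import math
import random

random.seed(42)

mu, sigma = 1.0, 2.0
n = 200_000

# E[X^2] for X ~ LogNormal(mu, sigma) is exp(2*mu + 2*sigma^2)
ex2 = math.exp(2 * mu + 2 * sigma ** 2)

# fraction of draws whose square exceeds its own expectation
hits = 0
for _ in range(n):
    x = math.exp(mu + sigma * random.gauss(0.0, 1.0))
    hits += (x ** 2 > ex2)
frac = hits / n

# theoretical value 1 - Phi(sigma), with Phi via the error function
phi = 0.5 * (1.0 + math.erf(sigma / math.sqrt(2.0)))
print(frac, 1.0 - phi)
```

With $\sigma=2$ the theoretical probability is about 0.023, i.e. fewer than 1 draw in 40 lands above the mean of $X^2$, which is why the empirical second moment so often undershoots.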
Bias of moment estimator of lognormal distribution

I thought I'd throw up some figs showing that user29918's and Xi'an's plots are consistent. Fig 1 plots what user29918 did, and Fig 2 (based on the same data) does what Xi'an did for his plot. Same result, different presentation.

What's happening is that as T increases, the variance becomes huge and the estimator $\frac{1}{n} \sum_i x_i^2$ becomes like trying to estimate the population mean of the Powerball Lotto by buying Lotto tickets! A large percentage of the time, you will underestimate the payoff (because no sample observation hits the jackpot), and a tiny percentage of the time, you will massively overestimate the payoff (because there's a jackpot winner in the sample). The sample mean is an unbiased estimate, but it's not expected to be precise, even with thousands and thousands of draws! In fact, as it becomes harder and harder to win the lotto, your sample mean will be below the population mean the vast majority of the time.

Further comments:

An unbiased estimator does not mean the estimator is expected to be close! The blue dots need not be near the expectation. E.g. a single observation chosen at random gives an unbiased estimate of the population mean, but that estimator would not be expected to be close.

The issue is coming up because the variance is becoming absolutely astronomical. As the variance explodes, the estimate for the first method is being driven by just a few observations. You also start having a tiny, tiny probability of an insanely big number...

This is an intuitive explanation. Xi'an has a more formal derivation. His result $P(X^2 > E[X^2]) = 1 - \Phi(\sigma)$ implies that as $\sigma$ gets large, it becomes incredibly unlikely to ever draw an observation above the mean, even with thousands of observations. My language of "winning the lotto" refers to an event where $X^2 > E[X^2]$.
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]

It's unusual to not fit an intercept and generally inadvisable - one should only do so if you know it's 0, but I think that (and the fact that you can't compare the $R^2$ for fits with and without intercept) is well and truly covered already (if possibly a little overstated in the case of the 0 intercept); I want to focus on your main issue, which is that you need the fitted function to be positive, though I do return to the 0-intercept issue in part of my answer.

The best way to get an always-positive fit is to fit something that will always be positive; in part that depends on what functions you need to fit.

If your linear model was largely one of convenience (rather than coming from a known functional relationship that might stem from a physical model, say), then you might instead work with log-time; the fitted model is then guaranteed to be positive in $t$. As an alternative, you might work with speed rather than time - but then with linear fits you may get a problem with small speeds (long times) instead.

If you know your response is linear in the predictors, you can attempt to fit a constrained regression, but with multiple regression the exact form you need will depend on your particular x's (there's no one linear constraint that will work for all $x$'s), so it's a bit ad hoc.

You can also look at GLMs, which can be used to fit models that have non-negative fitted values and can (if required) even have $E(Y)=X\beta$. For example, one can fit a gamma GLM with identity link. You should not end up with a negative fitted value for any of your x's (but you might perhaps have convergence issues in some cases if you force the identity link where it really won't fit).

Here's an example: the cars data set in R, which records speed and stopping distances (the response). One might say "oh, but the distance for speed 0 is guaranteed to be 0, so we should omit the intercept", but the problem with that reasoning is that the model is misspecified in several ways, and that argument only works well enough when the model is not misspecified - a linear model with 0 intercept doesn't fit at all well in this case, while one with an intercept is actually a half-decent approximation even though it's not actually "correct". The problem is, if you fit an ordinary linear regression, the fitted intercept is quite a way negative, which causes the fitted values to be negative.

The blue line is the OLS fit; the fitted values for the smallest x-values in the data set are negative. The red line is the gamma GLM with identity link -- while having a negative intercept, it only has positive fitted values. This model has variance proportional to mean, so if you find your data are more spread out as the expected time grows, it may be especially suitable.

So that's one possible alternative approach that may be worth a try. It's almost as easy as fitting a regression in R. If you don't need the identity link, you might consider other link functions, like the log link and the inverse link, which relate to the transformations already discussed, but without the need for actual transformation.

Since people usually ask for it, here's the code for my plot:

    plot(dist~speed,data=cars,xlim=c(0,30),ylim=c(-5,120))
    abline(h=0,v=0,col=8)
    abline(glm(dist~speed,data=cars,family=Gamma(link=identity)),col=2,lty=2)
    abline(lm(dist~speed,data=cars),col=4,lty=2)

(The ellipse was added by hand afterward, though it's easy enough to do in R as well.)
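The log-time suggestion can be sketched in a few lines of stdlib Python (the completion times below are made up purely for illustration): fitting ordinary least squares to $\log t$ and back-transforming guarantees strictly positive predictions for every input, with no intercept constraint needed.

```python
import math

# hypothetical completion-time data: times shrink toward zero
x = [1, 2, 3, 4, 5, 6, 7, 8]
t = [40.0, 22.0, 13.0, 7.5, 4.0, 2.3, 1.2, 0.7]

# ordinary least squares on log(t)
logt = [math.log(v) for v in t]
n = len(x)
xbar = sum(x) / n
ybar = sum(logt) / n
b = sum((xi - yi_x) * (yi - ybar) for xi, yi_x, yi in
        ((xi, xbar, yi) for xi, yi in zip(x, logt))) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar

def predict(xi):
    # exp() of a real number is always strictly positive
    return math.exp(a + b * xi)

preds = [predict(xi) for xi in range(0, 20)]
print(min(preds) > 0)  # every prediction is positive
```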
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]

Short answer to the question in the title: (almost) NEVER.

In the linear regression model $$ y = \alpha + \beta x + \epsilon, $$ if you set $\alpha=0$, then you say that you KNOW that the expected value of $y$ given $x=0$ is zero. You almost never know that.

$R^2$ becomes higher without an intercept, not because the model is better, but because the definition of $R^2$ used is another one! $R^2$ is an expression of a comparison of the estimated model with some standard model, expressed as the reduction in sum of squares compared to the sum of squares with the standard model. In the model with an intercept, the comparison sum of squares is around the mean. Without an intercept, it is around zero! The latter is usually much higher, so it is easier to get a large reduction in sum of squares.

Conclusion: DO NOT LEAVE THE INTERCEPT OUT OF THE MODEL (unless you really, really know what you are doing).

EDIT (from the comments below): One exception is mentioned elsewhere in the comments (but that is only seemingly an exception: the constant vector 1 is in the column space of the design matrix $X$). Otherwise there are cases such as physical relationships $s=v t$ where there is no constant. But even then, if the model is only approximate (speed is not really constant), it might be better to leave in a constant even if it cannot be interpreted. With non-linear models this becomes more of an issue.
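The definitional difference can be demonstrated numerically (stdlib Python; the toy data are chosen so that $y$ hovers around a large constant): the through-origin fit is far worse, yet its conventional no-intercept "$R^2$" comes out much higher, because its baseline sum of squares is taken around zero rather than around the mean.

```python
x = [1, 2, 3, 4, 5]
y = [101, 99, 102, 98, 100]   # essentially flat around 100
n = len(x)
xbar, ybar = sum(x) / n, sum(y) / n

# OLS with intercept
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
sse = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
tss_mean = sum((yi - ybar) ** 2 for yi in y)   # baseline: around the mean
r2_with = 1 - sse / tss_mean

# OLS through the origin
b0 = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi ** 2 for xi in x)
sse0 = sum((yi - b0 * xi) ** 2 for xi, yi in zip(x, y))
tss_zero = sum(yi ** 2 for yi in y)            # baseline: around zero
r2_without = 1 - sse0 / tss_zero

print(r2_with, r2_without)  # the much worse fit reports the higher "R^2"
```

The through-origin model has a residual sum of squares roughly a thousand times larger, yet its "$R^2$" is far higher, which is exactly why the two numbers cannot be compared.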
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]

1) It is never acceptable to suppress an intercept, except in very rare types of DiD models where the outcome and predictors are actually computed differences between groups (this isn't the case for you).

2) Heck no it doesn't. What it means is that you may have a higher degree of internal validity (e.g. the model fits the data) but probably a low degree of external validity (e.g. the model would be poor at fitting experimental data obtained under similar conditions). This is generally a bad thing.

3) Suppressing the intercept will not necessarily do that, but I assume the predictor was continuous-valued. In many situations, process completion times are analyzed using an inverse transform, e.g. $x = 1/t$ where $t$ is the time taken to complete a process. The inverse of the mean of the inverse-transformed data is called a harmonic mean and represents the average completion time for a task: $$\mbox{HM} = \frac{1}{\mathbb{E}(x)} = \frac{1}{\mathbb{E}(1/t)}. $$

You can also use parametric exponential, gamma, or Weibull time-to-event models, which are types of models built specifically for predicting completion times. These will give results very similar to the inverse-transformed outcomes.
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]

1) Forcing a $0$ intercept is advisable if you know for a fact that it is 0. Anything you know a priori, you should use in your model. One example is the Hubble model for the expansion of the Universe (used in Statistical Sleuth): $$\mbox{Galaxy Speed} = k (\mbox{Distance from Earth}) $$ This model is rather crude, but uses a 0 intercept as a consequence of the Big Bang theory: at time $0$ all the matter is in one place. On the other hand, the model you're describing will likely need an intercept term.

2) You might or might not get a better $R^2_{adj}$, or you may fail to reject the null hypothesis that the intercept is 0, but neither of these is a reason to remove the intercept term.

3) To ensure positivity of answers, you can sometimes transform the response variable. Log or sqrt might work depending on your data; of course you will need to check the residuals.
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]
It makes sense (actually, is necessary) to leave out the intercept in the second stage of the Engle/Granger cointegration test. The test first estimates a candidate cointegrating relationship via a regression of some dependent variable on a constant (plus sometimes a trend) and the other nonstationary variables. In the second stage, the residuals of that regression are tested for a unit root to test whether the error actually represents an equilibrium relationship. As the first stage regression contains a constant, the residuals are mean zero by construction. Hence, the second stage unit root test does not need a constant and in fact, the limiting distribution for that unit root test is derived assuming that this constant indeed has not been fitted.
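The key fact used above — that first-stage residuals are mean zero by construction when a constant is fitted — is easy to check numerically. A minimal Python sketch with simulated random walks (not an implementation of the full Engle-Granger test, just the first-stage regression):

```python
import numpy as np

# First stage of an Engle-Granger-style analysis: regress y on a constant
# and x, then look at the residuals.  Because the regression includes a
# constant, the residuals have mean zero by construction, so the
# second-stage unit root test omits the constant.
rng = np.random.default_rng(1)
x = np.cumsum(rng.normal(size=200))       # simulated random walk
y = 2.0 + 0.5 * x + rng.normal(size=200)  # cointegrated with x

X = np.column_stack([np.ones_like(x), x])  # constant included
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

print(resid.mean())  # ~0 up to floating point error
```

In practice the second stage would pass `resid` to a unit root test configured without a constant (e.g. the `regression='n'` option of statsmodels' `adfuller`, if that library is available).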
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]
The only way that I know to constrain all fitted values to be greater than zero is to use a linear programming approach and specify that as a constraint.
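One way to sketch this constrained approach in Python is via SciPy's general-purpose optimizer rather than a dedicated LP solver: minimize the squared error subject to all fitted values being nonnegative. The data and the choice of constrained least squares (instead of a pure linear program) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Constrained least squares: minimize ||y - X b||^2 subject to X b >= 0,
# i.e. every fitted value must be nonnegative.  Toy data below.
rng = np.random.default_rng(2)
X = np.column_stack([np.ones(30), rng.uniform(0, 10, 30)])
y = 1.0 + 0.5 * X[:, 1] + rng.normal(0, 1, 30)

res = minimize(
    lambda b: np.sum((y - X @ b) ** 2),
    x0=np.zeros(2),
    constraints=[{"type": "ineq", "fun": lambda b: X @ b}],  # X @ b >= 0
)
print(res.x, (X @ res.x >= -1e-8).all())
```

With an absolute-error loss instead of squared error, the same problem becomes a linear program and could be handed to `scipy.optimize.linprog`.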
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]
The actual problem is that a linear regression forcing the intercept to 0 is a mathematical inconsistency that should never be done: It is clear that if y = a + bx, then average(y) = a + b·average(x), and indeed we can easily verify that when we estimate a and b using linear estimation in Excel, the fitted coefficients satisfy this relation. However, if we arbitrarily set a = 0, then necessarily b = average(y)/average(x). But this is inconsistent with the minimum squares algorithm. Indeed, you can easily verify that when you estimate b using linear estimation in Excel with the intercept suppressed, the above relation is not satisfied.
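The claimed inconsistency is easy to check numerically (toy data, not from the question): with an intercept, the OLS fit satisfies mean(y) = a + b·mean(x) exactly, while the through-the-origin least squares slope is b = Σxy/Σx², which generally breaks that relation.

```python
import numpy as np

# With an intercept, OLS residuals sum to zero, so mean(y) = a + b*mean(x).
rng = np.random.default_rng(3)
x = rng.uniform(1, 10, 40)
y = 3.0 + 2.0 * x + rng.normal(0, 1, 40)

X = np.column_stack([np.ones_like(x), x])
a, b = np.linalg.lstsq(X, y, rcond=None)[0]
print(y.mean() - (a + b * x.mean()))   # ~0: the relation holds exactly

# Forcing the intercept to 0 gives the least squares slope sum(xy)/sum(x^2),
# which does NOT generally pass through (mean(x), mean(y)).
b0 = np.sum(x * y) / np.sum(x * x)
print(y.mean() - b0 * x.mean())        # generally nonzero
```

So the fitted line through the origin need not pass through the point of means, which is the relation the answer is appealing to.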
When forcing intercept of 0 in linear regression is acceptable/advisable [duplicate]
It does make pretty much sense in models with a categorical covariate. In this case the removal of the intercept results in an equivalent model with just a different parametrization:

    > data(mtcars)
    > mtcars$cyl_factor <- as.factor(mtcars$cyl)
    > summary(lm(mpg ~ cyl_factor, data = mtcars))

    Call:
    lm(formula = mpg ~ cyl_factor, data = mtcars)

    Residuals:
        Min      1Q  Median      3Q     Max
    -5.2636 -1.8357  0.0286  1.3893  7.2364

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    (Intercept)  26.6636     0.9718  27.437  < 2e-16 ***
    cyl_factor6  -6.9208     1.5583  -4.441 0.000119 ***
    cyl_factor8 -11.5636     1.2986  -8.905 8.57e-10 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 3.223 on 29 degrees of freedom
    Multiple R-squared:  0.7325, Adjusted R-squared:  0.714
    F-statistic:  39.7 on 2 and 29 DF,  p-value: 4.979e-09

    > summary(lm(mpg ~ 0 + cyl_factor, data = mtcars))

    Call:
    lm(formula = mpg ~ 0 + cyl_factor, data = mtcars)

    Residuals:
        Min      1Q  Median      3Q     Max
    -5.2636 -1.8357  0.0286  1.3893  7.2364

    Coefficients:
                Estimate Std. Error t value Pr(>|t|)
    cyl_factor4  26.6636     0.9718   27.44  < 2e-16 ***
    cyl_factor6  19.7429     1.2182   16.21 4.49e-16 ***
    cyl_factor8  15.1000     0.8614   17.53  < 2e-16 ***
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

    Residual standard error: 3.223 on 29 degrees of freedom
    Multiple R-squared:  0.9785, Adjusted R-squared:  0.9763
    F-statistic: 440.9 on 3 and 29 DF,  p-value: < 2.2e-16

The second example in fact results in the categorical variable being a category-specific intercept, so in reality the intercept isn't actually removed, it just seems so.
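The same reparametrization point can be sketched outside R; here is a Python illustration with made-up group data, where numpy's `lstsq` stands in for `lm`. Dummy coding with an intercept and cell-means coding without one give identical fitted values.

```python
import numpy as np

# Toy data: three groups of observations.
y = np.array([21.0, 22.0, 20.0, 17.0, 18.0, 14.0, 15.0, 16.0])
g = np.array([0, 0, 0, 1, 1, 2, 2, 2])

# With intercept: columns are [1, I(g==1), I(g==2)]  (dummy coding)
X1 = np.column_stack([np.ones_like(y), g == 1, g == 2]).astype(float)
# Without intercept: one indicator per group  (cell-means coding)
X2 = np.column_stack([g == 0, g == 1, g == 2]).astype(float)

b1 = np.linalg.lstsq(X1, y, rcond=None)[0]
b2 = np.linalg.lstsq(X2, y, rcond=None)[0]

print(np.allclose(X1 @ b1, X2 @ b2))  # True: identical fitted values
# b2 holds the three group means; b1 holds the first mean plus offsets
print(b2, [b1[0], b1[0] + b1[1], b1[0] + b1[2]])
```

This is the sense in which "removing" the intercept for a factor merely swaps one parametrization of the same model for another.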
Why is the formula for standard error the way it is?
This comes from the fact that $\newcommand{\Var}{\operatorname{Var}}\newcommand{\Cov}{\operatorname{Cov}}\Var(X+Y) = \Var(X) + \Var(Y) + 2\cdot\Cov(X,Y)$ and for a constant $a$, $\Var( a X ) = a^2 \Var(X)$. Since we are assuming that the individual observations are independent the $\Cov(X,Y)$ term is $0$ and since we assume that the observations are identically distributed all the variances are $\sigma^2$. So $\Var( \frac{1}{n} \sum X_i ) = \frac{1}{n^2} \sum \Var(X_i) = \frac{1}{n^2} \times \sum_{i=1}^n \sigma^2= \frac{n}{n^2} \sigma^2 = \frac{\sigma^2}{n}$ And when we take the square root of that (because it is harder to think on the variance scale) we get $\dfrac{\sigma}{\sqrt{n}}$. More intuitively, think of 2 statistics classes: in the first the teacher assigns each of the students to draw a sample of size 10 from a set of tiles with numbers on them (the teacher knows the true mean of this population, but the students don't) and compute the mean of their sample. The second teacher assigns each of his/her students to take samples of size 100 from the same set of tiles and compute the mean. Would you expect every sample mean to exactly match the population mean? or to vary about it? Would you expect the spread of the sample means to be the same in both classes? or would the 2nd class tend to be closer to the population? That's why it makes sense to divide by a function of the sample size. The square root means we have a law of diminishing returns, to halve the standard error you need to quadruple the sample size. As for the name, the full name is "The estimated standard deviation of the sampling distribution of x-bar"; it only takes saying that a few times before you appreciate having a shortened form. I don't know who first substituted "error" for "deviation" this way, but it stuck. The standard deviation measures variability of individual observations; the standard error measures variability in estimates of parameters (based on observations).
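The two-classroom intuition is easy to simulate: draw many samples, compute each sample mean, and compare the standard deviation of those means to $\sigma/\sqrt{n}$. The population and sample sizes below are arbitrary.

```python
import numpy as np

# Simulate the sampling distribution of the mean and check its spread
# against the sigma / sqrt(n) formula.
rng = np.random.default_rng(4)
sigma, n, reps = 5.0, 25, 20000

samples = rng.normal(0, sigma, size=(reps, n))
means = samples.mean(axis=1)

print(means.std(), sigma / np.sqrt(n))  # both close to 1.0
```

Rerunning with n = 100 instead of 25 halves the spread, which is the "quadruple the sample size to halve the standard error" law of diminishing returns.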
Why is the formula for standard error the way it is?
Why √n?

There is a theorem called the central limit theorem that tells us that as sample size increases, sampling distributions of means become normally distributed regardless of the parent distribution. In other words, given a sufficiently large sample size, the mean of the sample means from a population will match the population mean. We know that this really happens. Let’s put that in the bank and revisit it later.

Now, let's look at what happens when we have a small sample size (n=5), and then observe what happens if we increase the sample size and leave all the population parameters the same.

population mean: 15
standard deviation: 5
N=5

In the above case, our standard error of the mean (S.E.M.) will be: 5/√5 = 2.236

Now let's consider a case with a sample size of 10:

population mean: 15
standard deviation: 5
N=10

S.E.M. will be: 5/√10 = 1.581

Increase the sample to 100: S.E.M. = 5/√100 = .5
Increase the sample to 1000: S.E.M. = 5/√1000 = .158

...You see the pattern. As sample size increases, the standard error of the mean decreases and will continue to approach zero as your sample size increases infinitely.

Why is this? One way to think about it is that if you keep increasing your sample size, you will eventually sample the entire population, at which point your sample mean is your population mean. That is, any given sample mean will probably not be exactly equal to the true population mean, but as your sample size increases toward the size of the entire population, the amount that a given sample mean is likely to be off by (the standard error) becomes smaller and smaller.

Now, let's go back to the conceptual definition of the standard error of the mean. One way to look at it is as the "standard deviation of sample means", or, alternatively, "on average, a sample of size N will deviate from the population mean by this amount". Therefore, your S.E.M.
statistic is giving you an idea of how well your sample mean is likely to approximate the population mean. So we know that the S.E.M. tells us how well a sample mean of size N approximates the population mean, and we also know that as our sample size increases, any given sample mean will more closely approximate the population mean. The mathematical expression of those two ideas is the formula for S.E.M. By dividing by the square root of N, you are paying a “penalty” for using a sample instead of the entire population (sampling allows us to make guesses, or inferences, about a population. The smaller the sample, the less confidence you might have in those inferences; that’s the origin of the “penalty”). That penalty is relatively large when your sample is very small. As the sample size increases, however, that penalty rapidly diminishes, infinitely approaching the point where your sample is equivalent to the population itself. How fast does a sample mean approach equivalence with the population mean (i.e., how fast does the S.E.M. approach 0) as the sample size increases? That will depend on the numerator in the formula: standard deviation. The standard deviation is a measure of how predictable any given observation is in a population, or how far from the mean any one observation is likely to be. The less predictability, the higher the standard deviation. By the same token, the higher the standard deviation, the less quickly your sample mean will approach the population mean as sample size N increases. The power of the central limit theorem, however, shows us that as sample sizes become very large, the S.E.M. becomes very small, regardless of the standard deviation. That is, having a sufficiently large sample size will lead to a very small S.E.M. in just about all cases. 
The main differences in this trend between large and small standard deviations appear most notably when sample sizes are small – you pay a large penalty for having a small sample from a population that has a lot of variability!
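The S.E.M. values quoted above can be recomputed directly from the formula σ/√n (with σ = 5, as in the example):

```python
import math

# sigma / sqrt(n) for the sample sizes used in the example above
sigma = 5.0
for n in (5, 10, 100, 1000):
    print(n, sigma / math.sqrt(n))
# n=5 gives ~2.236, n=10 gives ~1.581, n=100 gives 0.5, n=1000 gives ~0.158
```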
Visually plotting multi dimensional cluster data
There's no single right visualization. It depends on what aspect of the clusters you want to see or emphasize. Do you want to see how each variable contributes? Consider a parallel coordinates plot. Do you want to see how clusters are distributed along the principal components? Consider a biplot (in 2D or 3D). Do you want to look for cluster outliers over all dimensions? Consider a scatterplot of distance from cluster 1's center against distance from cluster 2's center. (By definition of K Means, each cluster will fall on one side of the diagonal line.) Do you want to see pairwise relations compared to the clustering? Consider a scatterplot matrix colored by cluster. Do you want to see a summary view of the cluster distances? Consider a comparison of any distribution visualization, such as histograms, violin plots, or box plots.
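The "distance to center 1 vs. distance to center 2" idea can be sketched with the assignment step of k-means written by hand (toy 2D data and fixed centers, purely illustrative):

```python
import numpy as np

# Toy data: two clouds of points and two cluster centers.
rng = np.random.default_rng(5)
pts = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
c1, c2 = np.array([0.0, 0.0]), np.array([5.0, 5.0])

d1 = np.linalg.norm(pts - c1, axis=1)  # distance to center 1
d2 = np.linalg.norm(pts - c2, axis=1)  # distance to center 2
labels = np.where(d1 <= d2, 1, 2)      # k-means assigns each point to the nearest center

# Each cluster sits entirely on one side of the d1 = d2 diagonal:
print((d1[labels == 1] <= d2[labels == 1]).all())  # True
print((d2[labels == 2] <  d1[labels == 2]).all())  # True
```

Plotting `d1` against `d2` colored by `labels` gives exactly the outlier-hunting scatterplot described above, with the diagonal separating the clusters by construction.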
Visually plotting multi dimensional cluster data
Multivariate displays are tricky, especially with that number of variables. I have two suggestions. If there are certain variables that are particularly important to the clustering, or substantively interesting, you can use a scatterplot matrix and display the bivariate relationships between your interesting variables. You could even use enhanced scatterplots (e.g. use shapes with size proportional to a third variable) to add in some more dimensionality. Alternatively, you could use a springplot, which was developed for displaying high dimensional data that exhibits clustering. Note, I have never seen this in the literature I am familiar with, but I think it is a very interesting way of displaying multivariate data. The following citation is where the plot was originally proposed. Hoffman, P.E. et al. (1997) DNA visual and analytic data mining. In the Proceedings of the IEEE Visualization. Phoenix, AZ, pp. 437-441. And here is where I originally found mention of it. Now, fair warning, I haven't been able to find an implementation of springplots outside of Orange. Then again, I haven't searched that hard! I am assuming that your data is real valued and continuous; if it is discrete or non-interval, and so forth, I don't think either plot would be helpful.
Visually plotting multi dimensional cluster data
You can use the fviz_cluster function from the factoextra package in R. It will show a scatter plot of your data, with the colors of the points indicating the clusters. To the best of my understanding, this function performs PCA and then plots the data on the top two principal components in 2D. Any suggestion/improvement on my answer is most welcome.
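What such a PCA-based cluster plot does under the hood can be sketched in a few lines (here in Python rather than R, with random stand-in data): center the data, take the top two principal components via SVD, and scatter the resulting 2D scores colored by cluster label.

```python
import numpy as np

# Project multi-dimensional data onto its top two principal components,
# as a PCA-based cluster plot does before drawing the 2D scatter.
rng = np.random.default_rng(6)
X = rng.normal(size=(100, 7))  # 100 points, 7 variables (stand-in data)

Xc = X - X.mean(axis=0)                      # center each variable
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores_2d = Xc @ Vt[:2].T                    # coordinates on the top two PCs

print(scores_2d.shape)  # (100, 2) -- ready for a 2D scatter colored by cluster
```

The first component captures at least as much variance as the second, so this 2D view is the best planar projection of the point cloud in the least-squares sense.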
Question about standardizing in ridge regression
Ridge regression regularizes the linear regression by imposing a penalty on the size of the coefficients. Thus the coefficients are shrunk toward zero and toward each other. But when this happens and the independent variables do not have the same scale, the shrinking is not fair. Two independent variables with different scales will contribute differently to the penalized term, because the penalized term is the sum of squares of all the coefficients. To avoid this kind of problem, very often the independent variables are centered and scaled to have variance 1.

[Later edit to answer the comment]

Suppose now that you have an independent variable, height. Human height might be measured in inches, meters, or kilometers. If measured in kilometers, standard linear regression will give a much bigger coefficient than if it is measured in millimeters.

The penalization term with lambda is equivalent to minimizing the squared loss subject to the sum of squared coefficients being less than or equal to a given constant: a bigger lambda corresponds to a smaller allowed space for the squared sum of coefficients, a smaller lambda to a bigger one. Bigger or smaller space means bigger or smaller absolute values of the coefficients.

Without standardization, fitting the model might require big absolute values of the coefficients. Of course, a coefficient might be big naturally, due to the role of the variable in the model; the point is that its value might be artificially inflated simply because the variable was not scaled. Scaling therefore decreases the need for big coefficient values, so the model can be fit well with a smaller sum of squared coefficient values.
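The unit-dependence can be demonstrated with the closed-form ridge solution $\beta = (X^\top X + \lambda I)^{-1} X^\top y$ (toy data; the factor of 1000 stands in for a meters-to-kilometers rescaling):

```python
import numpy as np

# Closed-form ridge estimator.
def ridge(X, y, lam):
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(7)
X = rng.normal(size=(60, 2))
y = X @ np.array([1.0, 1.0]) + rng.normal(0, 0.1, 60)

X2 = X.copy()
X2[:, 1] /= 1000.0  # same variable expressed in different units

# Without a penalty the two fits are equivalent: the coefficient simply
# rescales by 1000 to compensate for the change of units.
b_ols, b_ols2 = ridge(X, y, 0.0), ridge(X2, y, 0.0)
print(b_ols2[1], 1000 * b_ols[1])  # these match

# With the ridge penalty they are NOT equivalent: the rescaled column's
# coefficient would need to be ~1000x larger, so the penalty crushes it.
b, b2 = ridge(X, y, 10.0), ridge(X2, y, 10.0)
print(b2[1], 1000 * b[1])  # these differ badly
```

This is exactly why the columns are standardized before choosing lambda: otherwise the penalty's effect on each variable depends on its units.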
10,738
Question about standardizing in ridge regression
Though four years late, I hope someone will benefit from this. The way I understand it, a coefficient is how much the target variable changes for a unit change in the independent variable ($dy/dx$). Let us assume we are studying the relation between weight and height, and weight is measured in kg. When we use kilometers for height, you can imagine most of the data points (for human heights) packed closely together. Thus, for a small fractional change in height there will be a huge change in weight (assuming weight increases with height): the ratio $dy/dx$ will be huge. On the other hand, if height is measured in millimeters, the data will be spread far and wide along the height axis. A unit change in height will produce no significant change in weight, so $dy/dx$ will be very small, almost close to 0. Lambda will therefore have to be higher when height is in km than when height is in millimeters.
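A quick numerical sketch of this unit effect (hypothetical heights and weights; plain OLS slope): expressing the same predictor in smaller units spreads its values out and shrinks the slope by exactly the unit factor.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical heights (meters) and weights (kg).
height_m = rng.uniform(1.5, 2.0, size=100)
weight = 50 + 40 * (height_m - 1.5) + rng.normal(0, 2, size=100)

def slope(x, y):
    """OLS slope dy/dx for a single predictor."""
    xc = x - x.mean()
    return float(xc @ (y - y.mean()) / (xc @ xc))

b_km = slope(height_m / 1000, weight)   # height expressed in kilometers
b_mm = slope(height_m * 1000, weight)   # height expressed in millimeters

# The slope scales inversely with the unit: the km coefficient is
# exactly 10**6 times the mm coefficient.
print(b_km, b_mm)
```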
10,739
How does the inverse transform method work?
The method is very simple, so I'll describe it in simple words. First, take the cumulative distribution function $F_X$ of some distribution that you want to sample from. The function takes as input some value $x$ and tells you what is the probability of obtaining $X \leq x$. So

$$ F_X(x) = \Pr(X \leq x) = p $$

The inverse of such a function, $F_X^{-1}$, would take $p$ as input and return $x$. Notice that the $p$'s are uniformly distributed -- this can be used for sampling from any $F_X$ if you know $F_X^{-1}$. The method is called inverse transform sampling. The idea is very simple: it is easy to sample values uniformly from $U(0, 1)$, so if you want to sample from some $F_X$, just take values $u \sim U(0, 1)$ and pass $u$ through $F_X^{-1}$ to obtain $x$'s:

$$ F_X^{-1}(u) = x $$

or in R (for the normal distribution)

U <- runif(1e6)
X <- qnorm(U)

To visualize it, look at a plot of the CDF: generally, we think of distributions in terms of looking at the $y$-axis for probabilities of values from the $x$-axis. With this sampling method we do the opposite and start with "probabilities", using them to pick the values that are related to them. With discrete distributions you treat $U$ as a line from $0$ to $1$ and assign values based on where some point $u$ lies on this line (e.g. $0$ if $0 \leq u < 0.5$ or $1$ if $0.5 \leq u \leq 1$ for sampling from $\mathrm{Bernoulli}(0.5)$). Unfortunately, this is not always possible, since not every function has an inverse, e.g. you cannot use this method with bivariate distributions. It also does not have to be the most efficient method in all situations; in many cases better algorithms exist. You also ask what is the distribution of $F_X^{-1}(u)$. Since $F_X^{-1}$ is an inverse of $F_X$, then $F_X(F_X^{-1}(u)) = u$ and $F_X^{-1}(F_X(x)) = x$, so yes, values obtained using such a method have the same distribution as $X$. You can check this by a simple simulation:

U <- runif(1e6)
all.equal(pnorm(qnorm(U)), U)
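As an additional illustrative sketch in Python (exponential distribution, chosen here because its inverse CDF $F^{-1}(u) = -\ln(1-u)/\lambda$ is known in closed form):

```python
import numpy as np

rng = np.random.default_rng(42)
lam = 2.0   # rate of the target Exp(lam) distribution

u = rng.uniform(size=100_000)    # U ~ Unif(0, 1)
x = -np.log(1 - u) / lam         # F^{-1}(u) = -ln(1 - u) / lam

# Sample moments should be close to the Exp(lam) moments 1/lam and 1/lam^2.
print(x.mean(), x.var())
```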
10,740
How does the inverse transform method work?
Yes, $U^\theta$ has the distribution of $X$. Two additional points on the intuition behind the inverse transform method might be useful: (1) In order to understand what $F^{-1}$ actually means, please refer to the graph in Tim's answer to Help me understand the quantile (inverse CDF) function. (2) [Please simply ignore the following if it brings more confusion instead of clarity.] Let $X$ be any random variable (r.v.) with continuous and strictly increasing cdf $F$. Then $$F(X) \sim \text{Unif}(0,1)$$ Note on notation: $X$ is a r.v. Therefore, the function of the r.v. $X$, $F(X)$, is a r.v. itself. For example, if you flipped the question, so that you had access to $X$ and wanted to generate a standard uniform, then $X^{1/\theta} \sim \text{Unif}(0,1)$. Let us call this random variable $U$. So $$U = X^{1/\theta}$$ Coming back to your question, you have the opposite task: to generate $X$ out of $U$. So, indeed, $$X=U^\theta$$ PS. Alternative names for the method are probability integral transform, inverse transform sampling, the quantile transformation, and, in some sources, "the fundamental theorem of simulation".
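A quick numerical sanity check (assuming, as in the question, the cdf $F_X(x) = x^{1/\theta}$ on $[0,1]$, so that $F_X^{-1}(u) = u^\theta$; the choice $\theta = 3$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
theta = 3.0

u = rng.uniform(size=200_000)
x = u ** theta                   # X = U**theta

# Empirical CDF vs the target F(x) = x**(1/theta) at a few points.
for q in (0.1, 0.5, 0.9):
    print(q, (x <= q).mean(), q ** (1 / theta))
```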
10,741
Should data be centered+scaled before applying t-SNE?
Centering shouldn't matter, since the algorithm only operates on distances between points; however, rescaling is necessary if you want the different dimensions to be treated with equal importance, since the 2-norm is more heavily influenced by dimensions with large variance.
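A minimal sketch of the rescaling step (plain numpy on hypothetical data; `sklearn.preprocessing.StandardScaler` performs the same centering and unit-variance scaling) before handing the matrix to t-SNE:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: two features on wildly different scales.
X = np.column_stack([
    rng.normal(0, 1, size=500),       # unit-scale feature
    rng.normal(0, 1000, size=500),    # feature with huge variance
])

# Without rescaling, pairwise distances (and hence t-SNE's similarities)
# would be dominated by the second column. Center and scale to unit variance:
Z = (X - X.mean(axis=0)) / X.std(axis=0)

print(Z.mean(axis=0))   # ~0 in each column
print(Z.std(axis=0))    # 1 in each column
```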
10,742
Use of circular predictors in linear regression
Wind direction (here measured in degrees, presumably as a compass direction clockwise from North) is a circular variable. The test is that the conventional beginning of the scale is the same as the end, i.e. $0^\circ = 360^\circ$. When treated as a predictor it is probably best mapped to sine and cosine. Whatever your software, it is likely to expect angles to be measured in radians, so the conversion will be some equivalent of $\sin(\pi\ \text{direction} / 180), \cos(\pi\ \text{direction} / 180)$, given that $2\pi$ radians $= 360^\circ$. Similarly, time of day measured in hours from midnight can be mapped to sine and cosine using $\sin(\pi\ \text{time} / 12), \cos(\pi\ \text{time} / 12)$ or $\sin(\pi(\text{time} + 0.5) / 12), \cos(\pi(\text{time} + 0.5) / 12)$, depending on exactly how time was recorded or should be interpreted. Sometimes nature or society is obliging and dependence on the circular variable takes the form of some direction being optimal for the response and the opposite direction (half the circle away) being pessimal. In that case a single sine and cosine term may suffice; for more complicated patterns you may need other terms. For much more detail, a tutorial on this technique of circular, Fourier, periodic, trigonometric regression may be found here, with in turn further references. The good news is that once you have created sine and cosine terms they are just extra predictors in your regression. There is a large literature on circular statistics, itself seen as part of directional statistics. Oddly, this technique is often not mentioned, as the focus in that literature is commonly on circular response variables. Summarising circular variables by their vector means is a standard descriptive method but is not required or directly helpful for regression.

Some details on terminology. Wind direction and time of day are, in statistical terms, variables, not parameters, whatever the usage in your branch of science. Linear regression is defined by linearity in parameters, i.e. for a vector $y$ predicted by $X\beta$ it is the vector of parameters $\beta$, not the matrix of predictors $X$, that is more crucial. So, in this case, the fact that predictors such as sine and cosine are measured on circular scales and also restricted to $[-1, 1]$ is no barrier to their appearing in linear regression.

Incidental comment. For a response variable such as particle concentration I'd expect to use a generalised linear model with logarithmic link to ensure positive predictions.
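A sketch of the sine/cosine encoding in practice (hypothetical simulated data; plain least squares via numpy): the circular predictor becomes two ordinary columns, and the fitted coefficients recover the underlying sinusoidal dependence.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical wind directions (degrees clockwise from North) and a
# response that peaks when the wind blows from 90 degrees.
direction = rng.uniform(0, 360, size=300)
rad = np.pi * direction / 180
y = 5 + 2 * np.sin(rad) + rng.normal(0, 0.3, size=300)

# The circular predictor becomes two ordinary regression columns.
X = np.column_stack([np.ones_like(rad), np.sin(rad), np.cos(rad)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)   # roughly [5, 2, 0]
```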
10,743
If k-means clustering is a form of Gaussian mixture modeling, can it be used when the data are not normal?
In typical EM GMM situations, one does take variance and covariance into account. This is not done in k-means. But indeed, one of the popular heuristics for k-means (note: k-means is a problem, not an algorithm) - the Lloyd algorithm - is essentially an EM algorithm, using a centroid model (without variance) and hard assignments. When doing k-means style clustering (i.e. variance minimization), you

- coincidentally minimize squared Euclidean distance, because the WCSS (within-cluster sum of squares) variance contribution equals squared Euclidean distance
- coincidentally assign objects to the nearest cluster by Euclidean distance, because the sqrt function is monotone (note that the mean does not optimize Euclidean distances, but the WCSS function)
- represent clusters using a centroid only
- get Voronoi cell shaped clusters, i.e. polygons
- and it works best with spherical clusters

The k-means objective function can be formalized as this:
$$\text{argmin}_S \sum_{i=1}^{k} \sum_{x_j \in S_i} \sum_{d=1}^{D} \left(x_{jd} - \mu_{id} \right)^2$$
where $S=\{S_1 \ldots S_k\}$ ranges over all possible partitionings of the data set into $k$ partitions, $D$ is the data set dimensionality, and e.g. $x_{jd}$ is the coordinate of the $j$th instance in dimension $d$.

It is commonly said that k-means assumes spherical clusters. It is also commonly acknowledged that k-means clusters are Voronoi cells, i.e. not spherical. Both are correct, and both are wrong. First of all, the clusters are not complete Voronoi cells, but only the known objects therein. There is no need to consider the dead space in-between the clusters to be part of either cluster, as having an object there would affect the algorithm result. But it is not much better to call them "spherical" either, just because the Euclidean distance is spherical. K-means doesn't care about Euclidean distance. All it is, is a heuristic to minimize the variances. And that is actually what you should consider k-means to be: variance minimization.
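A minimal Lloyd-style sketch (hypothetical toy data; deterministic initialization chosen for reproducibility) that makes the EM analogy and the variance-minimization view explicit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated hypothetical blobs of 50 points each.
X = np.vstack([
    rng.normal([0.0, 0.0], 0.3, size=(50, 2)),
    rng.normal([5.0, 5.0], 0.3, size=(50, 2)),
])

def lloyd(X, centers, iters=20):
    """Lloyd's heuristic: alternate hard assignment and mean update."""
    for _ in range(iters):
        # E-step analogue: hard-assign each point to the nearest center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        # M-step analogue: each center becomes the mean of its points;
        # the mean is exactly the minimizer of the within-cluster
        # sum of squares (WCSS), i.e. the variance contribution.
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return labels, centers

# Deterministic init: one seed point from each blob.
labels, centers = lloyd(X, X[[0, 50]].copy())
print(np.sort(centers[:, 0]))   # roughly [0, 5]
```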
10,744
If k-means clustering is a form of Gaussian mixture modeling, can it be used when the data are not normal?
GMM uses overlapping hills that stretch to infinity (but practically only count for 3 sigma). Each point gets all the hills' probability scores. Also, the hills are "egg-shaped" [okay, they're symmetric ellipses] and, using the full covariance matrix, may be tilted. K-means hard-assigns a point to a single cluster, so the scores of the other cluster centers get ignored (are implicitly reset to zero/don't care). The hills are spherical soap bubbles. Where two soap bubbles touch, the boundary between them becomes a flat (hyper-)plane. Just as when you blow a foam of many soap bubbles, the bubbles on the inside are not flat but are boxy, so the boundaries between many (hyper-)spheres actually form a Voronoi partition of the space. In 2D, this tends to look vaguely like hexagonal close-packing; think of a bee-hive (although of course Voronoi cells are not guaranteed to be hexagons). A K-means hill is round and does not get tilted, so it has less representational power; but it is much faster to compute, especially in higher dimensions. Because K-means uses the Euclidean distance metric, it assumes that the dimensions are comparable and of equal weight. So if dimension X has units of miles per hour, varying from 0 to 80, and dimension Y has units of pounds, varying from 0 to 400, and you're fitting circles in this XY space, then one dimension (and its spread) is going to be more powerful than the other dimension and will overshadow the results. This is why it's customary to normalize the data when applying K-means. Both GMM and K-means model the data by fitting best approximations to what's given. GMM fits tilted eggs, and K-means fits untilted spheres. But the underlying data could be shaped like anything: it could be a spiral or a Picasso painting, and each algorithm would still run and take its best shot. Whether the resulting model looks anything like the actual data depends on the underlying physical process generating the data. (For instance, time-delay measurements are one-sided; is a Gaussian a good fit? Maybe.) However, both GMM and K-means implicitly assume data axes/domains coming from the field of real numbers $\mathbb{R}^n$. This matters based on what kind of data axis/domain you are trying to cluster. Ordered integer counts map nicely onto reals. Ordered symbols, such as colors in a spectrum, not so nicely. Binary symbols, ehn. Unordered symbols do not map onto reals at all (unless you're using creative new mathematics since 2000). Thus your 8x8 binary image is going to be construed as the vertices of a 64-dimensional hypercube in the first hyperquadrant. The algorithms then use geometric analogies to find clusters. Distance, with K-means, shows up as Euclidean distance in 64-dimensional space. It's one way to do it.
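The mph-vs-pounds point above can be sketched with a small deterministic example (entirely hypothetical values): in raw units the large-range pounds axis dominates Euclidean distance, and dividing each axis by its range flips the nearest-centroid assignment.

```python
import numpy as np

# Hypothetical cluster centers: (speed in mph 0-80, weight in pounds 0-400).
centers = np.array([[10.0, 195.0],    # slow and light
                    [70.0, 280.0]])   # fast and heavy
point = np.array([68.0, 195.0])       # fast and light

def nearest(p, cs):
    """Index of the center closest to p in Euclidean distance."""
    return int(np.argmin(((cs - p) ** 2).sum(axis=1)))

# Raw units: the pounds axis has the bigger spread, so the weight match
# pulls the point to the slow cluster despite the 58 mph speed gap.
print(nearest(point, centers))                      # 0

# After dividing each axis by its range, both dimensions count equally
# and the point joins the fast cluster.
scale = np.array([80.0, 400.0])
print(nearest(point / scale, centers / scale))      # 1
```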
10,745
What function could be a kernel?
Generally, a function $k(x,y)$ is a valid kernel function (in the sense of the kernel trick) if it satisfies two key properties:

- symmetry: $k(x,y) = k(y,x)$
- positive semi-definiteness

Reference: Page 4 of http://www.cs.berkeley.edu/~jordan/courses/281B-spring04/lectures/lec3.pdf

Checking symmetry is usually straightforward by inspection. Verifying positive semi-definiteness analytically can be quite hairy sometimes. I can think of two strategies for checking this fact:

(1) Inspecting for an "inner-product" representation. Consider $k(x,y) = e^{x+y}$. Can we find some $\phi(a)$ such that $k(x,y) = \phi(x)^T \phi(y)$? A little math shows that $e^{x+y} = e^x e^y$, so let $\phi(a)=e^a$ and we're done. If you get lucky, your $k()$ will be amenable to this analysis. If not, you can resort to option (2):

(2) Checking positive semi-definiteness by random simulation. Consider the function on $D$-dimensional vectors $k(\vec{x},\vec{y}) = \sum_{d=1}^D \min( x_d, y_d)$, where each vector $\vec{x}, \vec{y}$ must be non-negative and sum to one. Is this a valid kernel? We can check this by simulation. Draw a set of $N$ random vectors $\{\vec{x}_i\}_{i=1}^N$ and build a Gram matrix $K$ where $K_{ij} = k( \vec{x}_i , \vec{x}_j )$. Then check whether $K$ is positive (semi-)definite. The best way to do this numerically is to find the eigenvalues of the matrix (using good existing numerical libraries like scipy or matlab), and verify that the smallest eigenvalue is larger than or equal to 0. If yes, the matrix $K$ is p.s.d. Otherwise, you do not have a valid kernel. Sample MATLAB/Octave code:

D = 5; N = 100;
X = zeros(N, D);
for n = 1:N
  xcur = rand(1, D);
  X(n,:) = xcur / sum(xcur);
end
K = zeros(N, N);
for n = 1:N
  for m = 1:N
    K(n,m) = sum( min( X(n,:), X(m,:) ) );
  end
end
disp( min( eig(K) ) );

This is a very simple test, but be careful. If the test fails, you can be sure the kernel is not valid, but if it passes the kernel still might not be valid. I find that no matter how many random matrices I generate and regardless of $N$ and $D$, this kernel passes the test, so it is probably positive semi-definite (in fact, this is the well-known histogram intersection kernel, and it has been proven valid). However, the same test on $k(\vec{x},\vec{y}) = \sum_{d=1}^D \max( x_d, y_d)$ fails on every try I've given it (at least 20). So it is most definitely invalid, and quite easy to verify. I really like this second option because it's quite rapid and much easier to debug than complicated formal proofs. According to Jitendra Malik's slide 19, the intersection kernel was introduced in 1991 but not proven correct until 2005. Formal proofs can be very challenging!
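A Python/numpy sketch of the same simulation check (tolerances here are my own choice), plus a tiny deterministic counterexample for the $\max$ candidate: already for the two points $(1,0)$ and $(0,1)$ the Gram matrix is $[[1,2],[2,1]]$ with eigenvalues $-1$ and $3$, and by eigenvalue interlacing any such 2x2 principal submatrix forces a negative eigenvalue in the full Gram matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N = 5, 100

# Random points on the probability simplex (non-negative, summing to one).
X = rng.random((N, D))
X /= X.sum(axis=1, keepdims=True)

def gram(points, k):
    return np.array([[k(x, y) for y in points] for x in points])

def k_min(x, y):                    # histogram intersection kernel
    return np.minimum(x, y).sum()

def k_max(x, y):                    # the "max" candidate from the text
    return np.maximum(x, y).sum()

print(np.linalg.eigvalsh(gram(X, k_min)).min())   # >= 0 up to round-off
print(np.linalg.eigvalsh(gram(X, k_max)).min())   # clearly negative

# Tiny deterministic counterexample for the max candidate:
K2 = gram(np.array([[1.0, 0.0], [0.0, 1.0]]), k_max)
print(np.linalg.eigvalsh(K2))   # eigenvalues -1 and 3
```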
What function could be a kernel?
10,746
What is the curse of dimensionality?
Following up on richiemorrisroe, here is the relevant image from the Elements of Statistical Learning, chapter 2 (pp22-27): As you can see in the upper right pane, there are more neighbors 1 unit away in 1 dimension than there are neighbors 1 unit away in 2 dimensions. 3 dimensions would be even worse!
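A quick Monte Carlo sketch of the same effect (my illustration, not part of the answer): the fraction of the cube $[-1,1]^d$ that lies within unit distance of the centre collapses as $d$ grows, so fewer and fewer points count as "neighbors":

```python
import numpy as np

rng = np.random.default_rng(42)

def frac_within_unit_ball(d, n=20000):
    """Fraction of uniform samples from [-1, 1]^d with Euclidean norm <= 1."""
    pts = rng.uniform(-1.0, 1.0, size=(n, d))
    return float(np.mean(np.linalg.norm(pts, axis=1) <= 1.0))

for d in (1, 2, 5, 10):
    print(d, frac_within_unit_ball(d))
# 1.0 in 1-D, ~0.79 in 2-D (pi/4), ~0.16 in 5-D, ~0.002 in 10-D
```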
10,747
What is the curse of dimensionality?
This doesn't answer your question directly, but David Donoho has a nice article on High-Dimensional Data Analysis: The Curses and Blessings of Dimensionality (associated slides are here), in which he mentions three curses:

Optimization by Exhaustive Search: "If we must approximately optimize a function of $D$ variables and we know only that it is Lipschitz, say, then we need order $(1/\epsilon)^D$ evaluations on a grid in order to obtain an approximate minimizer within error $\epsilon$."

Integration over product domains: "If we must integrate a function of $D$ variables and we know only that it is Lipschitz, say, then we need order $(1/\epsilon)^D$ evaluations on a grid in order to obtain an integration scheme with error $\epsilon$."

Approximation over high-dimensional domains: "If we must approximate a function of $D$ variables and we know only that it is Lipschitz, say, then we need order $(1/\epsilon)^D$ evaluations on a grid in order to obtain an approximation scheme with uniform approximation error $\epsilon$."
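All three curses share the same $(1/\epsilon)^D$ grid count; a tiny arithmetic illustration (mine, not Donoho's) of how fast it explodes:

```python
# Number of grid evaluations needed for accuracy eps in D dimensions.
def grid_points(eps, D):
    return round((1.0 / eps) ** D)

for D in (1, 3, 10):
    print(D, grid_points(0.1, D))
# 10 evaluations suffice in 1-D, but 10 billion are needed in 10-D
# at the same per-axis accuracy of 0.1
```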
10,748
What is the curse of dimensionality?
I know that I keep referring to it, but there's a great explanation of this in the Elements of Statistical Learning, chapter 2 (pp. 22-27). They basically note that as dimensions increase, the amount of data needs to increase (exponentially) with it or there will not be enough points in the larger sample space for any useful analysis to be carried out. They refer to a paper by Bellman (1961) as their source, which appears to be his book Adaptive Control Processes, available from Amazon here
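The same ESL chapter makes this concrete with the formula $e_p(r) = r^{1/p}$: the expected edge length of a hypercubical neighborhood that captures a fraction $r$ of data distributed uniformly in the unit hypercube $[0,1]^p$. A small sketch of that calculation (the code is mine; the formula and numbers match the book's example):

```python
def edge_length(r, p):
    """Edge of a sub-cube of [0, 1]^p that contains a fraction r of uniform data."""
    return r ** (1.0 / p)

print(edge_length(0.01, 10))   # ~0.63: covering just 1% of the data in 10-D
print(edge_length(0.10, 10))   # ~0.80: such "neighborhoods" are no longer local
```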
10,749
What is the curse of dimensionality?
Maybe the most notorious impact is captured by the following limit (illustrated (indirectly) in the picture above): $$\lim_{dim\rightarrow\infty}\frac{dist_{max}-dist_{min}}{dist_{min}}$$ The distance in the picture is the $L_2$-based Euclidean distance. This ratio tends to zero, which expresses that the notion of distance captures less and less information about similarity as dimensionality increases. That impacts algorithms like k-NN. By allowing fractional values of $k$ in $L_k$-norms, the described effect can be mitigated. Impact of Dimensionality on Data in Pictures
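The limit is easy to watch numerically; in this sketch (mine, not from the answer) the relative contrast $(dist_{max}-dist_{min})/dist_{min}$ of distances from a query point shrinks as the dimension grows:

```python
import numpy as np

rng = np.random.default_rng(0)

def relative_contrast(dim, n=500):
    """(max - min) / min over Euclidean distances from the origin
    to n points drawn uniformly from the unit cube."""
    pts = rng.random((n, dim))
    d = np.linalg.norm(pts, axis=1)
    return float((d.max() - d.min()) / d.min())

for dim in (2, 20, 200, 2000):
    print(dim, relative_contrast(dim))
# the contrast shrinks steadily toward 0 as the dimension grows
```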
10,750
Why do lme and aov return different results for repeated measures ANOVA in R?
They are different because the lme model is forcing the variance component of id to be greater than zero. Looking at the raw anova table for all terms, we see that the mean squared error for id is less than that for the residuals.

```r
> anova(lm1 <- lm(value ~ factor + id, data = tau.base))
          Df  Sum Sq Mean Sq F value Pr(>F)
factor     3  0.6484 0.21614  1.3399 0.2694
id        21  3.1609 0.15052  0.9331 0.5526
Residuals 63 10.1628 0.16131
```

When we compute the variance components, this means that the variance due to id will be negative. My memory of expected mean squares is shaky, but the calculation is something like (0.15052-0.16131)/3 = -0.003597. This sounds odd but can happen. What it means is that the averages for each id are closer to each other than you would expect given the amount of residual variation in the model. In contrast, using lme forces this variance to be greater than zero.

```r
> summary(lme1 <- lme(value ~ factor, data = tau.base, random = ~1|id))
...
Random effects:
 Formula: ~1 | id
        (Intercept)  Residual
StdDev: 3.09076e-05 0.3982667
```

This reports standard deviations; squaring to get the variances yields 9.553e-10 for the id variance and 0.1586164 for the residual variance.

Now, you should know that using aov for repeated measures is only appropriate if you believe that the correlation between all pairs of repeated measures is identical; this is called compound symmetry. (Technically, sphericity is required but this is sufficient for now.) One reason to use lme over aov is that it can handle different kinds of correlation structures.

In this particular data set, the estimate for this correlation is negative; this helps explain how the mean squared error for id was less than the residual mean squared error.
A negative correlation means that if an individual's first measurement was below average, on average, their second would be above average, making the averages for the individuals less variable than we would expect if there were zero or positive correlation. Using lme with a random effect is equivalent to fitting a compound symmetry model where that correlation is forced to be non-negative; we can fit a model where the correlation is allowed to be negative using gls:

```r
> anova(gls1 <- gls(value ~ factor, correlation = corCompSymm(form = ~1|id),
+                   data = tau.base))
Denom. DF: 84
            numDF   F-value p-value
(Intercept)     1 199.55223  <.0001
factor          3   1.33985   0.267
```

This ANOVA table agrees with the table from the aov fit and from the lm fit.

OK, so what? Well, if you believe that the variance from id and the correlation between observations should be non-negative, the lme fit is actually more appropriate than the fit using aov or lm, as its estimate of the residual variance is slightly better. However, if you believe the correlation between observations could be negative, aov or lm or gls is better.

You may also be interested in exploring the correlation structure further; to look at a general correlation structure, you'd do something like

```r
gls2 <- gls(value ~ factor,
            correlation = corSymm(form = ~unclass(factor)|id),
            data = tau.base)
```

Here I only limit the output to the correlation structure. The values 1 to 4 represent the four levels of factor; we see that factor 1 and factor 4 have a fairly strong negative correlation:

```r
> summary(gls2)
...
Correlation Structure: General
 Formula: ~unclass(factor) | id
 Parameter estimate(s):
 Correlation:
  1      2      3
2  0.049
3 -0.127  0.208
4 -0.400  0.146 -0.024
```

One way to choose between these models is with a likelihood ratio test; this shows that the random effects model and the general correlation structure model aren't statistically significantly different; when that happens the simpler model is usually preferred.
```r
> anova(lme1, gls2)
     Model df      AIC      BIC    logLik   Test  L.Ratio p-value
lme1     1  6 108.0794 122.6643 -48.03972
gls2     2 11 111.9787 138.7177 -44.98936 1 vs 2 6.100725  0.2965
```
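The back-of-envelope variance-component calculation quoted above can be replayed directly (a sketch of the arithmetic only, using the mean squares and divisor from the answer):

```python
# Mean squares from the anova(lm1) table above.
ms_id, ms_resid = 0.15052, 0.16131

# Method-of-moments estimate of the id variance component,
# with the divisor used in the answer's calculation.
var_id = (ms_id - ms_resid) / 3
print(var_id)              # ~ -0.0036: negative, which lme will not allow

# lme instead reports StdDev 3.09076e-05 for id, i.e. a variance of ~9.55e-10,
# effectively zero.
print(3.09076e-05 ** 2)
```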
10,751
Why do lme and aov return different results for repeated measures ANOVA in R?
aov() fits the model via lm() using least squares; lme fits via maximum likelihood. That difference in how the parameters of the linear model are estimated likely accounts for the (very small) difference in your F-values. In practice (e.g. for hypothesis testing) these estimates are the same, so I don't see how one could be considered 'more credible' than the other. They come from different model-fitting paradigms. For contrasts, you need to set up a contrast matrix for your factors. Venables and Ripley show how to do this on pp. 143, 146 and 293-294 of the 4th edition.
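For a flavour of what such a contrast matrix looks like (my illustration in Python; Venables and Ripley set this up in R), here are Helmert contrasts for a four-level factor, the matrix R's contr.helmert(4) gives:

```python
import numpy as np

# Helmert contrasts for a 4-level factor: column j compares level j+1
# with the mean of the levels before it.
helmert = np.array([
    [-1., -1., -1.],
    [ 1., -1., -1.],
    [ 0.,  2., -1.],
    [ 0.,  0.,  3.],
])

print(helmert.sum(axis=0))    # each contrast sums to zero over the levels
print(helmert.T @ helmert)    # off-diagonals are zero: mutually orthogonal
```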
10,752
Statistics collaboration
My answer is from the point of view of a UK academic statistician. In particular, as an academic that gets judged on advances in statistical methodology.

What would make me (or any other scientist) a better collaborator?

To be blunt - money. My time isn't free and I (as an academic) don't get employed to carry out standard statistical analysis. Even being first/last author on a paper that uses standard methodology is worth very little to me (in terms of promotion and my personal research). Paying for my time will buy me out of administrative or teaching duties. Payment could be through a joint grant.

In the UK, every five or so years academics have to submit their four best papers. My papers are judged on their contribution to the statistical literature. It sucks, but that's the way it is. Now it may well be that you have a very interesting problem which would lead to advances in statistical techniques. However, just think about the size of your statistics department compared to the rest of the Uni. There probably won't be enough statisticians to go around.

In saying that, I do try and do some "statistical consultancy" once a year to broaden my interests and to help for teaching purposes. This year I did some survival analysis. However, I've never advertised this fact and I still get half a dozen requests each year for help! Sorry for being so negative :(

Specifically, what is one statistics concept you wish all of your scientist collaborators already understood?

That statisticians do statistical research. As one of my collaborators said: Surely there's nothing left to solve in statistics?
10,753
Statistics collaboration
I think the concept that few scientists grasp is this: A statistical result can really only be taken at face value when the statistical methods were chosen in advance, while the experiment was being planned (or while preliminary data were collected to polish methods). You are likely to be misled if you first analyze the data this way, then that way, then try something else, then analyze only a subset of data, then analyze only that subset after removing an obvious outlier..... and only stop when the results match your preconceptions or have lots of asterisks. That is a fine way to generate an hypothesis, but not an appropriate way to test one.
10,754
Statistics collaboration
To get a good answer, you must write a good question. Answering a statistics question without context is like boxing blindfolded. You might knock your opponent out, or you might break your hand on the ring post.

What goes into a good question?

Tell us the PROBLEM you are trying to solve. That is, the substantive problem, not the statistical aspects.

Tell us what math and statistics you know. If you’ve had one course in Introductory Stat, then it won’t make sense for us to give you an answer full of mixed model theory and matrix algebra. On the other hand, if you’ve got several courses or lots of experience, then we can assume you know some basics.

Tell us what data you have, where it came from, what is missing, how many variables, what are the Dependent Variables (DVs) and Independent Variables (IVs) – if any, and anything else we need to know about the data. Also tell us which (if any) statistical software you use.

Are you thinking of hiring a consultant, or do you just want pointers in some direction?

THEN, and ONLY THEN, tell us what you’ve tried, why you aren’t happy, and so on.
10,755
Statistics collaboration
Having no preconceived ideas about the method you should use based solely on papers. Their ideas, logic or methods may be faulty. You want to think about your problem and use the most appropriate set of tools. This reminds me of reproducing cited information without checking the source. On the other hand, a paper with methods (or logic) that differ from the rest of the literature may hinder or stall the review process because "it's not the norm".
10,756
Why does the continuity correction (say, the normal approximation to the binomial distribution) work?
In fact it doesn't always "work" (in the sense of always improving the approximation of the binomial cdf by the normal at any $x$). If the binomial $p$ is 0.5 I think it always helps, except perhaps for the most extreme tail. If $p$ is not too far from 0.5, for reasonably large $n$ it generally works very well except in the far tail, but if $p$ is near 0 or 1 it might not help at all (see point 6. below).

One thing to keep in mind (in spite of illustrations almost always involving pmfs and pdfs) is that the thing we're trying to approximate is the cdf. It can be useful to ponder what's going on with the cdf of the binomial and the approximating normal (e.g. here's $n=20, p=0.5$):

In the limit the cdf of a standardized binomial will go to a standard normal (note that standardizing affects the scale on the x-axis but not the y-axis); along the way to increasingly large $n$ the binomial cdf's jumps tend to more evenly straddle the normal cdf. Let's zoom in and look at this in the above simple example:

Notice that because the approximating normal passes close to the middle of the vertical jumps*, and in the limit the normal cdf is locally approximately linear (as is the progression of the binomial cdf at the tops of the jumps), the cdf tends to cross the horizontal steps near $x+\frac{1}{2}$. If you want to approximate the value of the binomial cdf, $F(x)$, at integer $x$, the normal cdf reaches that height near to $x+\frac{1}{2}$.

* If we apply Berry-Esseen to mean-corrected Bernoulli variables, the Berry-Esseen bounds allow for very little wiggle room when $p$ is near $\frac12$ and $x$ is near $\mu$ -- the normal cdf must pass reasonably close to the middle of the jumps there, because otherwise the absolute difference in cdfs will exceed the best Berry-Esseen bound on one side or the other. This in turn relates to how far from $x+\frac{1}{2}$ the normal cdf can cross the horizontal part of the binomial cdf's step function.
Expanding on the motivation in 1., let's consider how we'd use a normal approximation to the binomial cdf to work out $P(X=k)$. E.g. $n=20$, $p=0.5$, $k=9$ (see the second diagram above). So our normal with the same mean and sd is $N(10,(\sqrt{5})^2)$. Note that we would approximate the jump in the cdf at 9 by the change in the normal cdf between about 8.5 and 9.5.

Doing the same thing under the less formal but more "usual" textbook motivation (which is perhaps more intuitive, especially for beginning students), we're trying to approximate a discrete variable by a continuous one. We can make a continuous version of the binomial by replacing each probability spike of height $p(x)$ with a rectangle of width 1 centered at $x$, giving it height $p(x)$ (see the blue rectangle below; imagine one for every x-value), and then approximating that by the normal density with the same mean and sd as the original binomial. The area under each box is approximated by the normal between $x-\frac12$ and $x+\frac12$; the two almost-triangular parts that lie above and below the horizontal step are close together in area. Any sum of binomial probabilities over an interval then reduces to a collection of these approximations. (Drawing a diagram like this is often very useful if it's not instantly clear whether you need to go up or down by 0.5 for a particular calculation ... work out which binomial values you want in your calculation and go either side by $\frac12$ for each one.)

One can motivate this approach algebraically using a derivation [along the lines of de Moivre's -- see here or here for example] to derive the normal approximation (though it can be performed somewhat more directly than de Moivre's approach).
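Numerically (my own sketch, not part of the answer), the rectangle motivation works out like this for the $n=20$, $p=0.5$, $k=9$ example: the exact pmf value and the area under the normal between 8.5 and 9.5 agree closely.

```python
from scipy.stats import binom, norm

# Approximate P(X = 9) for Binomial(20, 0.5) by the area under the
# matching normal between 8.5 and 9.5 -- the "rectangle" idea above.
n, p, k = 20, 0.5, 9
mu, sd = n * p, (n * p * (1 - p)) ** 0.5

exact_pmf = binom.pmf(k, n, p)
approx = norm.cdf(k + 0.5, mu, sd) - norm.cdf(k - 0.5, mu, sd)
print(exact_pmf, approx)
```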
That essentially proceeds via several approximations, including using Stirling's approximation on the ${n \choose x}$ term and using $\log(1+x)\approx x-x^2/2$, to obtain $$P(X=x)\approx \frac{1}{\sqrt{2\pi np(1-p)}}\exp\left(-\frac{(x-np)^2}{2np(1-p)}\right)$$ which is to say that the density of a normal with mean $\mu=np$ and variance $\sigma^2 = np(1-p)$ at $x$ is approximately the height of the binomial pmf at $x$. This is essentially where de Moivre got to.

So now consider that we have a midpoint-rule approximation for normal areas in terms of binomial heights ... that is, for $Y\sim N(np,np(1-p))$, the midpoint rule says that $F(y+\frac12)-F(y-\frac12) = \int_{y-\frac12}^{y+\frac12}f_Y(u)\,du\approx f_Y(y)$, and we have from de Moivre that $f_Y(x)\approx P(X=x)$. Flipping that about, $P(X=x)\approx F(x+\frac12)-F(x-\frac12)$. [A similar "midpoint rule" type of approximation can be used to motivate other such approximations of pmfs by continuous densities using a continuity correction, but one must always be careful to pay attention to where it makes sense to invoke that approximation.]

Historical note: the continuity correction seems to have originated with Augustus De Morgan in 1838 as an improvement of de Moivre's approximation. See, for example, Hald (2007)[1]. From Hald's description, his reasoning was along the lines of item 4. above (i.e. essentially in terms of trying to approximate the pmf by replacing the probability spike with a "block" of width 1 centered at the x-value).

An illustration of a situation where the continuity correction doesn't help: In the plot on the left (where as before, $X$ is the binomial, $Y$ is the normal approximation), $F_X(x)\approx F_Y(x+\frac12)$ and so $p(x) \approx F_Y(x+\frac12)-F_Y(x-\frac12)$. In the plot on the right (the same binomial but further into the tail), $F_X(x)\approx F_Y(x)$ and so $p(x) \approx F_Y(x)-F_Y(x-1)$ -- which is to say that ignoring the continuity correction is better than using it in this region.
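A numerical version of the "doesn't help" situation (my own check, with $p=0.5$ but far enough into the tail): at $x=2$ for a Binomial(20, 0.5), the uncorrected normal cdf is actually the closer approximation.

```python
from scipy.stats import binom, norm

# Far tail of Binomial(20, 0.5): compare the exact cdf at x = 2 with
# the normal approximation, with and without continuity correction.
n, p, x = 20, 0.5, 2
mu, sd = n * p, (n * p * (1 - p)) ** 0.5

exact_t = binom.cdf(x, n, p)
plain_t = norm.cdf(x, mu, sd)          # no continuity correction
corr_t = norm.cdf(x + 0.5, mu, sd)     # with continuity correction
print(exact_t, plain_t, corr_t)
```

Here the corrected value roughly doubles the exact probability, while the uncorrected one is within about 15% of it.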
[1]: Hald, Anders (2007), "A History of Parametric Statistical Inference from Bernoulli to Fisher, 1713-1935", Sources and Studies in the History of Mathematics and Physical Sciences, Springer-Verlag New York
Why does the continuity correction (say, the normal approximation to the binomial distribution) work?
I believe the factor of 0.5 arises from the fact that we are comparing a continuous distribution to a discrete one. We thus need to translate what each discrete value means in the continuous distribution. We could choose another value, but that would be unbalanced about a given integer (i.e. you would weight the probability of being at 6 more toward 7 than toward 5). I found a useful link here: link
Ziliak (2011) opposes the use of p-values and mentions some alternatives; what are they?
This sounds like another strident paper by a confused individual. Fisher didn't fall into any such trap, though many students of statistics do. Hypothesis testing is a decision theoretic problem. Generally, you end up with a test with a given threshold between the two decisions (hypothesis true or hypothesis false). If you have a hypothesis which corresponds to a single point, such as $\theta=0$, then you can calculate the probability of your data resulting when it's true. But what do you do if it's not a single point? You get a function of $\theta$. The hypothesis $\theta\not= 0$ is such a hypothesis, and you get such a function for the probability of producing your observed data given that it's true. That function is the power function. It's very classical. Fisher knew all about it. The expected loss is a part of the basic machinery of decision theory. You have various states of nature, and various possible data resulting from them, and some possible decisions you can make, and you want to find a good function from data to decision. How do you define good? Given a particular state of nature underlying the data you have obtained, and the decision made by that procedure, what is your expected loss? This is most simply understood in business problems (if I do this based on the sales I observed in the past three quarters, what is the expected monetary loss?). Bayesian procedures are a subset of decision theoretic procedures. The expected loss is insufficient to specify uniquely best procedures in all but trivial cases. If one procedure is better than another in both state A and B, obviously you'll prefer it, but if one is better in state A and one is better in state B, which do you choose? This is where ancillary ideas like Bayes procedures, minimaxity, and unbiasedness enter. The t-test is actually a perfectly good solution to a decision theoretic problem. The question is how you choose the cutoff on the $t$ you calculate. 
A given value of $t$ corresponds to a given value of $\alpha$, the probability of type I error, and to a given set of powers $\beta$, depending on the size of the underlying parameter you are estimating. Is it an approximation to use a point null hypothesis? Yes. Is it usually a problem in practice? No, just like using Bernoulli's approximate theory for beam deflection is usually just fine in structural engineering. Is having the $p$-value useless? No. Another person looking at your data may want to use a different $\alpha$ than you, and the $p$-value accommodates that use. I'm also a little confused on why he names Student and Jeffreys together, considering that Fisher was responsible for the wide dissemination of Student's work. Basically, the blind use of p-values is a bad idea, and they are a rather subtle concept, but that doesn't make them useless. Should we object to their misuse by researchers with poor mathematical backgrounds? Absolutely, but let's remember what it looked like before Fisher tried to distill something down for the man in the field to use.
Ziliak (2011) opposes the use of p-values and mentions some alternatives; what are they?
I recommend focusing on things like confidence intervals and model-checking. Andrew Gelman has done great work on this. I recommend his textbooks but also check out the stuff he's put online, e.g. http://andrewgelman.com/2011/06/the_holes_in_my/
Ziliak (2011) opposes the use of p-values and mentions some alternatives; what are they?
The ez package provides likelihood ratios when you use the ezMixed() function to do mixed effects modelling. Likelihood ratios aim to quantify evidence for a phenomenon by comparing the likelihood (given the observed data) of two models: a "restricted" model that restricts the influence of the phenomenon to zero and an "unrestricted" model that permits non-zero influence of the phenomenon. After correcting the observed likelihoods for the models' differential complexity (via Akaike's Information Criterion, which is asymptotically equivalent to cross-validation), the ratio quantifies the evidence for the phenomenon.
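A rough sketch of that idea (mine, in Python rather than the ez package's R code; the linear model and simulated data are made up for illustration): fit a restricted and an unrestricted model, then turn the AIC difference into an evidence ratio.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 1, n)   # true slope is 2

def gaussian_loglik(resid):
    # Gaussian profile log-likelihood at the MLE of sigma^2
    s2 = np.mean(resid**2)
    return -0.5 * n * (np.log(2 * np.pi * s2) + 1)

# restricted model: intercept only (2 params: intercept, sigma)
ll_r = gaussian_loglik(y - y.mean())
# unrestricted model: intercept + slope (3 params)
b, a = np.polyfit(x, y, 1)                # returns [slope, intercept]
ll_u = gaussian_loglik(y - (a + b * x))

aic_r = 2 * 2 - 2 * ll_r
aic_u = 2 * 3 - 2 * ll_u
evidence = np.exp((aic_r - aic_u) / 2)    # > 1 favours the slope
print(evidence)
```

The complexity penalty means the unrestricted model only "wins" when the extra parameter buys enough likelihood, which is the point of the AIC correction the answer mentions.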
Ziliak (2011) opposes the use of p-values and mentions some alternatives; what are they?
All those techniques are available in R in the same sense that all of algebra is available in your pencil. Even p-values are available through many, many different functions in R; deciding which function to use to get a p-value or a Bayesian posterior is more complex than a pointer to a single function or package. Once you learn about those techniques and decide what question you actually want the answer to, then you can see (or we can provide more help on) how to do it using R (or other tools). Just saying that you want to minimize your loss function, or to get a posterior distribution, is about as useful as replying "food" when asked what you want to eat for dinner.
Hamiltonian Monte Carlo vs. Sequential Monte Carlo
Hamiltonian Monte Carlo performs well with continuous target distributions with "weird" shapes. It requires the target distribution to be differentiable, as it basically uses the slope of the target distribution to know where to go. The perfect example is a banana-shaped function.

Here is a standard Metropolis-Hastings sampler on a banana function: acceptance rate of 66% and very poor coverage. Here is HMC on the same target: 99% acceptance with good coverage.

SMC (the method behind particle filtering) is almost unbeatable when the target distribution is multimodal, especially if there are several separate areas with mass. Instead of having one Markov chain trapped within a mode, you have several Markov chains running in parallel. Note that you use it to estimate a sequence of distributions, usually of increasing sharpness. You can generate the increasing sharpness using something like simulated annealing (put a progressively increasing exponent on the target). Or, typically in a Bayesian context, the sequence of distributions is the sequence of posteriors: $$ P(\theta|y_1) \;,\; P(\theta|y_1,y_2)\;,\;... \;,\; P(\theta|y_1,y_2,...,y_N) $$ For instance, this sequence is an excellent target for SMC. The parallel nature of SMC makes it particularly well suited for distributed/parallel computing.

Summary:

HMC: good for elongated, weird targets. Does not work with non-continuous functions.

SMC: good for multimodal and non-continuous cases. Might converge more slowly or use more computing power for high-dimensional weird shapes.

Source: most of the images come from a paper I wrote combining the two methods (Hamiltonian Sequential Monte Carlo). This combination can simulate pretty much any distribution we can throw at it, even at very high dimensions.
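A minimal 1-D illustration of the HMC mechanics (my own sketch, not the paper's code; the target is just a standard normal so the output is easy to check -- note the sampler only needs the gradient of the log-density):

```python
import numpy as np

def logp(q):              # log-density of N(0, 1), up to a constant
    return -0.5 * q**2

def grad_logp(q):         # its gradient -- all HMC needs from the target
    return -q

def hmc(n_samples=5000, eps=0.2, steps=10, seed=0):
    rng = np.random.default_rng(seed)
    q, out = 0.0, []
    for _ in range(n_samples):
        p = rng.normal()                      # resample momentum
        q_new, p_new = q, p
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * eps * grad_logp(q_new)
        for _ in range(steps - 1):
            q_new += eps * p_new
            p_new += eps * grad_logp(q_new)
        q_new += eps * p_new
        p_new += 0.5 * eps * grad_logp(q_new)
        # Metropolis accept/reject on the change in the Hamiltonian
        h_old = -logp(q) + 0.5 * p**2
        h_new = -logp(q_new) + 0.5 * p_new**2
        if rng.random() < np.exp(h_old - h_new):
            q = q_new
        out.append(q)
    return np.array(out)

samples = hmc()
print(samples.mean(), samples.var())
```

With the leapfrog integrator nearly conserving the Hamiltonian, almost every proposal is accepted, which is the high-acceptance behaviour described above.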
Machine learning algorithms to handle missing data
It depends on the model you use. If you are using a generative model, then there is a principled way to deal with missing values. For example, in models like Naive Bayes or Gaussian processes you would integrate out the missing variables and choose the best option with the remaining variables.

For discriminative models it is more elaborate, since that is not possible. There are a number of approaches. Ghahramani and Jordan describe a principled approach, where missing values are treated like hidden variables and a variant of the EM algorithm is used to estimate them. In a similar fashion, Smola et al. describe a variant of the SVM algorithm which explicitly tackles the problem.

Note that it is often recommended to substitute the missing values by the mean value of the variable. This is problematic, as described in the first paper. Sometimes I have come across papers that do regression on the other variables to estimate missing values, but I cannot say whether that applies to your case.
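A toy sketch of the "integrate out missing variables" idea for a generative classifier (my own illustration; it assumes class-conditional Gaussians with known parameters, in which case marginalising a missing dimension just means dropping it from the mean and covariance):

```python
import numpy as np
from scipy.stats import multivariate_normal

# Two classes with known 2-D Gaussian class-conditionals (equal priors).
means = {0: np.array([0.0, 0.0]), 1: np.array([4.0, 4.0])}
cov = np.eye(2)

def classify(x):
    """Classify x (NaN marks a missing feature) via the marginal
    likelihood over the observed coordinates only."""
    obs = ~np.isnan(x)
    scores = {
        c: multivariate_normal(mu[obs], cov[np.ix_(obs, obs)]).pdf(x[obs])
        for c, mu in means.items()
    }
    return max(scores, key=scores.get)

print(classify(np.array([0.5, np.nan])))   # decided from x0 alone
```

No imputation is needed: the missing coordinate is simply integrated out of the class-conditional density.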
Machine learning algorithms to handle missing data
The R-package randomForestSRC, which implements Breiman's random forests, handles missing data for a wide class of analyses (regression, classification, survival, competing risk, unsupervised, multivariate). See the following post: Why doesn't Random Forest handle missing values in predictors?
Machine learning algorithms to handle missing data
Try imputation using nearest neighbours to get rid of missing data. Additionally, the Caret package has interfaces to a wide variety of algorithms and they all come with predict methods in R that can be used to predict novel data. Performance metrics can also be estimated using k-fold cross validation using the same package.
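As a sketch of the idea behind nearest-neighbour imputation (in Python rather than R, and without the feature scaling a real implementation such as caret's knnImpute applies): for each incomplete row, find the k nearest complete rows using the features that are observed, and average their values.

```python
import math

def knn_impute(rows, k=2):
    """Fill None entries with the mean of the k nearest complete rows.

    Distance is Euclidean over the features observed in the incomplete row.
    Toy sketch only: a real implementation would scale features first.
    """
    complete = [r for r in rows if None not in r]
    filled = []
    for r in rows:
        if None not in r:
            filled.append(list(r))
            continue
        obs = [j for j, v in enumerate(r) if v is not None]
        nearest = sorted(complete,
                         key=lambda c: math.dist([r[j] for j in obs],
                                                 [c[j] for j in obs]))[:k]
        filled.append([v if v is not None
                       else sum(c[j] for c in nearest) / k
                       for j, v in enumerate(r)])
    return filled

data = [[1.0, 2.0], [1.1, 2.1], [5.0, 6.0], [1.05, None]]
print(knn_impute(data))  # the None is filled from the two nearby rows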
Machine learning algorithms to handle missing data
There are also algorithms that can treat the missing value as a distinct value when building the predictive model, such as classification and regression trees, e.g. xgboost.
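A toy sketch of the idea behind such tree splits (Python, illustrative only; xgboost's actual sparsity-aware implementation is more involved): when scoring a candidate split, try routing the missing values down each branch and keep whichever default direction gives the lower loss.

```python
def best_split(xs, ys, threshold):
    """For one numeric feature with missing values (None), learn which
    branch missing values should default to, by squared-error loss.
    Toy version of the 'learned default direction' used by xgboost."""
    def sse(group):
        if not group:
            return 0.0
        m = sum(group) / len(group)
        return sum((y - m) ** 2 for y in group)

    def loss(missing_left):
        left, right = [], []
        for x, y in zip(xs, ys):
            if x is None:
                (left if missing_left else right).append(y)
            elif x < threshold:
                left.append(y)
            else:
                right.append(y)
        return sse(left) + sse(right)

    return "left" if loss(True) <= loss(False) else "right"

xs = [0.1, 0.2, 0.9, 1.0, None]
ys = [0.0, 0.0, 1.0, 1.0, 0.05]  # the missing row behaves like the left group
print(best_split(xs, ys, threshold=0.5))
```

Because the direction is chosen from the data, the model learns where missing values "belong" rather than requiring imputation up front.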
Machine learning algorithms to handle missing data
lightgbm can handle NaNs out of the box (http://lightgbm.readthedocs.io/en/latest/).
Import stock price from Yahoo Finance into R?
This really isn't a statistics question (perhaps this could be moved to SO?), but there's a nice function in quantmod that does what Dirk has done by hand. See getQuote() and yahooQF(). Typing yahooQF() will bring up a menu of all the possible quote formats you can use. > require(quantmod) > getQuote("QQQQ;SPY", what=yahooQF("Last Trade (Price Only)")) Trade Time Last QQQQ 2011-03-17 12:33:00 55.14 SPY 2011-03-17 12:33:00 128.17
Import stock price from Yahoo Finance into R?
That is pretty easy given that R can read directly off a given URL. The key is simply to know how to form the URL. Here is a quick and dirty example based on code Dj Padzensky wrote in the late 1990s and which I have been maintaining in the Perl module Yahoo-FinanceQuote (which is of course also on CPAN here) for almost as long. If you know a little R, the code should be self-explanatory. Getting documentation for the format string is a little trickier but e.g. the Perl module has some. R> syms <- c("^GSPC", "^IXIC") R> baseURL <- "http://download.finance.yahoo.com/d/quotes.csvr?e=.csv&f=" R> formatURL <- "snl1d1t1c1p2va2bapomwerr1dyj1x" R> endURL <- "&s=" R> url <- paste(baseURL, formatURL, endURL, paste(syms, collapse="+"), sep="") R> read.csv(url, header=FALSE) V1 V2 V3 V4 V5 V6 V7 1 ^GSPC S&P 500 INDEX,RTH 1256.88 3/16/2011 4:04pm 0 0.00% 2 ^IXIC NASDAQ Composite 2616.82 3/16/2011 5:30pm 0 0.00% V8 V9 V10 V11 V12 V13 V14 1 4282084608 0 N/A N/A 1256.88 1279.46 1249.05 - 1280.91 2 0 0 N/A N/A 2616.82 0.00 0.00 - 0.00 V15 V16 V17 V18 V19 V20 V21 V22 1 1010.91 - 1344.07 N/A N/A N/A N/A N/A N/A SNP 2 2061.14 - 2840.51 N/A N/A N/A N/A N/A N/A NasdaqSC R> Column three is your last trade. During open market hours you will get fewer NAs and more data variability. But note though that most prices are 15 or 20 minute delayed---but some indices are real-time. Real-time data is a big business and major revenue for exchanges so they tend not to give it away. Also, and if I remember correctly, the newer and more real-time displays on the Finance pages at Google and Yahoo use something more AJAXy that is harder to milk from the outside.
Import stock price from Yahoo Finance into R?
Here's a little function I wrote to gather and chart "pseudo-real time" data from yahoo: require(quantmod) Times <- NULL Prices <- NULL while(1) { tryCatch({ #Load current quote Year <- 1970 currentYear <- as.numeric(format(Sys.time(),'%Y')) while (Year != currentYear) { #Sometimes yahoo returns bad quotes currentQuote <- getQuote('SPY') Year <- as.numeric(format(currentQuote['Trade Time'],'%Y')) } #Add current quote to the dataset if (is.null(Times)) { Times <- Sys.time()-15*60 #Quotes are delayed 15 minutes Prices <- currentQuote['Last'] } else { Times <- c(Times,Sys.time()) Prices <- rbind(Prices,currentQuote['Last']) } #Convert to 1-minute bars Data <- xts(Prices,order.by=Times) Data <- na.omit(to.minutes(Data,indexAt='endof')) #Plot the data when we have enough if (nrow(Data)>5) { chartSeries(Data,theme='white',TA='addRSI(n=5);addBBands(n=5)') } #Wait 1 second to avoid overwhelming the server Sys.sleep(1) #On errors, sleep 10 seconds and hope it goes away },error=function(e) {print(e);Sys.sleep(10)}) } It produces charts like this: You can also use the data for other purposes.
Import stock price from Yahoo Finance into R?
library(quantmod)
getSymbols("LT.NS", src = "yahoo")
Replacing Variables by WoE (Weight of Evidence) in Logistic Regression
The WoE method consists of two steps: (1) split a continuous variable into a few categories, or group a discrete variable into a few categories (in both cases you assume that all observations in one category have the "same" effect on the dependent variable); (2) calculate the WoE value for each category (the original x values are then replaced by the WoE values). The WoE transformation has (at least) three positive effects: It can transform an independent variable so that it has a monotonic relationship to the dependent variable. Actually it does more than this - to secure a monotonic relationship it would be enough to "recode" it to any ordered measure (for example 1, 2, 3, 4...), but the WoE transformation actually places the categories on a "logistic" scale, which is natural for logistic regression. For variables with too many (sparsely populated) discrete values, these can be grouped into (densely populated) categories, and the WoE can be used to express information for the whole category. The (univariate) effect of each category on the dependent variable can be simply compared across categories and across variables, because WoE is a standardized value (for example, you can compare the WoE of married people to the WoE of manual workers). It also has (at least) three drawbacks: loss of information (variation) due to binning into few categories; it is a "univariate" measure, so it does not take into account correlation between independent variables; and it is easy to manipulate (overfit) the effect of variables according to how the categories are created. Conventionally, the betas of the regression (where x has been replaced by WoE) are not interpreted per se but are multiplied with WoE to obtain a "score" (for example, the beta for variable "marital status" can be multiplied with the WoE of the "married people" group to see the score of married people; the beta for variable "occupation" can be multiplied by the WoE of "manual workers" to see the score of manual workers. Then, if you are interested in the score of married manual workers, you sum these two scores to see how large the effect on the outcome is). The higher the score, the greater the probability of an outcome equal to 1.
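For a concrete picture of the WoE calculation, here is a minimal version in Python (illustrative; the good/bad coding convention and the sign convention vary across the credit-scoring literature):

```python
import math

def woe_table(categories, outcomes):
    """WoE per category: ln(%good / %bad), coding good = 0 and bad = 1.

    (Sign conventions vary in the credit-scoring literature.)
    """
    goods = sum(1 for y in outcomes if y == 0)
    bads = sum(1 for y in outcomes if y == 1)
    table = {}
    for c in set(categories):
        g = sum(1 for cat, y in zip(categories, outcomes) if cat == c and y == 0)
        b = sum(1 for cat, y in zip(categories, outcomes) if cat == c and y == 1)
        table[c] = math.log((g / goods) / (b / bads))
    return table

cats = ["married"] * 4 + ["single"] * 4
ys = [0, 0, 0, 1] + [0, 1, 1, 1]
woe = woe_table(cats, ys)
print(woe)

# Step 2 of the method: replace the original x values by their WoE values.
x_woe = [woe[c] for c in cats]
```

Here "married" holds a larger share of the goods than of the bads, so its WoE is positive; a category with WoE 0 carries no evidence either way.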
Replacing Variables by WoE (Weight of Evidence) in Logistic Regression
The rationale for using WoE in logistic regression is to generate what is sometimes called the Semi-Naive Bayesian Classifier (SNBC). The beginning of this blog post explains things pretty well: http://multithreaded.stitchfix.com/blog/2015/08/13/weight-of-evidence/ The beta parameters in the model are the linear bias of each naive effect (a.k.a. weight of evidence) due to the presence of other predictors, and they can be interpreted as the linear change in log odds of the particular predictors due to the presence of other predictors.
Replacing Variables by WoE (Weight of Evidence) in Logistic Regression
Weight of Evidence (WoE) is a powerful technique for variable transformation and selection. It is widely used in credit scoring to measure the separation of good vs. bad customers. Advantages: handles missing values; handles outliers; the transformation is based on the logarithmic value of the distribution; no need for dummy variables; by using a proper binning technique it can establish a monotonic relationship between the independent and dependent variables. mono_bin() is used for numeric variables; char_bin() is used for character variables.
Why is the normality of residuals "barely important at all" for the purpose of estimating the regression line?
For estimation normality isn't exactly an assumption, but a major consideration would be efficiency; in many cases a good linear estimator will do fine and in that case (by Gauss-Markov) the LS estimate would be the best of those things-that-would-be-okay. (If your tails are quite heavy, or very light, it may make sense to consider something else) In the case of tests and CIs, while normality is assumed, it's usually not all that critical (again, as long as tails are not really heavy or light, or perhaps one of each), in that, at least in not-very-small samples the tests and typical CIs tend to have close to their nominal properties (not-too-far from claimed significance level or coverage) and perform well (reasonable power for typical situations or CIs not too much wider than alternatives) - as you move further from the normal case power can be more of an issue, and in that case large samples won't generally improve relative efficiency, so where effect sizes are such that power is middling in a test with relatively good power, it may be very poor for the tests which assume normality. This tendency to have close to the nominal properties for CIs and significance levels in tests is because of several factors operating together (one of which is the tendency of linear combinations of variables to have close to normal distribution as long as there's lots of values involved and none of them contribute a large fraction of the total variance). However, in the case of a prediction interval based on the normal assumption, normality is relatively more critical, since the width of the interval is strongly dependent on the distribution of a single value. However, even there, for the most common interval size (95% interval), the fact that many unimodal distributions have very close to 95% of their distribution within about 2sds of the mean tends to result in reasonable performance of a normal prediction interval even when the distribution isn't normal. 
[This doesn't carry over quite so well to much narrower or wider intervals -- say a 50% interval or a 99.9% interval -- though.]
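The "about 95% within roughly 2 standard deviations" point can be checked exactly for, e.g., the Laplace distribution, a unimodal but heavier-tailed case (Python sketch):

```python
import math

# Laplace(0, b): P(|X| <= t) = 1 - exp(-t / b), and sd = sqrt(2) * b.
b = 1.0
sd = math.sqrt(2) * b
coverage = 1 - math.exp(-2 * sd / b)  # mass within mean +/- 2 sd
print(coverage)  # about 0.941, vs about 0.9545 for the normal distribution
```

So a nominal 95% normal prediction interval would still cover roughly 94% of Laplace-distributed values, illustrating why the 95% interval is fairly robust while, say, a 99.9% interval is not.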
Why is the normality of residuals "barely important at all" for the purpose of estimating the regression line?
2: When predicting individual data points, the confidence interval around that prediction assumes that the residuals are normally distributed. This isn't much different than the general assumption about confidence intervals -- to be valid, we need to understand the distribution, and the most common assumption is normality. For example, a standard confidence interval around a mean works because the distribution of sample means approaches normality, so we can use a z or t distribution
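As a sketch of how that normality assumption enters a prediction interval for a single new observation (Python; large-sample version using a z rather than a t quantile, with made-up data):

```python
import statistics

# Large-sample 95% prediction interval for one new observation:
# ybar +/- z * s * sqrt(1 + 1/n).  (A t quantile would be used for small n.)
data = [9.8, 10.2, 10.1, 9.9, 10.0, 10.3, 9.7, 10.1, 9.9, 10.0]  # made-up sample
n = len(data)
ybar = statistics.mean(data)
s = statistics.stdev(data)
z = statistics.NormalDist().inv_cdf(0.975)
half = z * s * (1 + 1 / n) ** 0.5
print(ybar - half, ybar + half)
```

The `sqrt(1 + 1/n)` term is what distinguishes this from a confidence interval for the mean: the interval's width is dominated by the spread of a single value, which is why the distributional assumption matters more here.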
How can an improper prior lead to a proper posterior distribution?
We generally accept posteriors from improper priors $\pi(\theta)$ if $$ \frac{\pi(X \mid \theta) \pi(\theta)}{\pi(X)} $$ exists and is a valid probability distribution (i.e., it integrates exactly to 1 over the support). Essentially this boils down to $\pi(X) = \int \pi(X \mid \theta) \pi(\theta) \,d\theta$ being finite. If this is the case, then we call this quantity $\pi(\theta \mid X)$ and accept it as the posterior distribution that we want. However, it is important to note that this is NOT a posterior distribution, nor is it a conditional probability distribution (these two terms are synonymous in the context here). Now, I said we accept 'posterior' distributions from improper priors given the above. The reason they are accepted is because the prior $\pi(\theta)$ will still give us relative 'scores' on the parameter space; i.e., the ratio $\frac{\pi(\theta_1)}{\pi(\theta_2)}$ brings meaning to our analysis. The meaning we get from improper priors in some cases may not be available in proper priors. This is a potential justification for using them. See Sergio's answer for a more thorough examination of the practical motivation for improper priors. It's worth noting that this quantity $\pi(\theta \mid X)$ does have desirable theoretical properties as well, Degroot & Schervish: Improper priors are not true probability distributions, but if we pretend that they are, we will compute posterior distributions that approximate the posteriors that we would have obtained using proper conjugate priors with extreme values of the prior hyperparameters.
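The finiteness condition $\pi(X) = \int \pi(X \mid \theta)\pi(\theta)\,d\theta < \infty$ can be checked numerically for a simple case: one observation $x \sim \mathcal{N}(\theta, 1)$ with the improper flat prior $\pi(\theta) = 1$ (Python sketch with a crude Riemann sum):

```python
import math

# One observation x ~ N(theta, 1), improper flat prior pi(theta) = 1.
x = 2.0

def lik(theta):
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

# pi(X) = integral of lik(theta) d theta, approximated on a wide grid.
h = 0.01
thetas = [x + i * h for i in range(-1000, 1001)]  # +/- 10 around x
marginal = h * sum(lik(t) for t in thetas)
print(marginal)  # finite (about 1, since the density integrates to 1 in theta)

# Normalising by pi(X) gives a quantity that integrates to 1: a proper posterior.
posterior_total = h * sum(lik(t) / marginal for t in thetas)
print(posterior_total)
```

Had the integral diverged (as it can with a flat prior and a likelihood that does not decay), no normalisation would have been possible and no posterior would exist.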
How can an improper prior lead to a proper posterior distribution?
There are a "theoretical" answer and a "pragmatic" one. From a theoretical point of view, when a prior is improper the posterior does not exist (well, look at Matthew's answer for a sounder statement), but may be approximated by a limiting form. If the data comprise a conditionally i.i.d. sample from the Bernoulli distribution with parameter $\theta$, and $\theta$ has the beta distribution with parameters $\alpha$ and $\beta$, the posterior distribution of $\theta$ is the beta distribution with parameters $\alpha + s, \beta+n-s$ ($n$ observations, $s$ successes) and its mean is $(\alpha+s)/(\alpha+\beta+n)$. If we use the improper (and unreal) beta distribution prior with prior hyperparameters $\alpha=\beta=0$, and pretend that $\pi(\theta)\propto\theta^{-1}(1-\theta)^{-1}$, we obtain a proper posterior proportional to $\theta^{s-1}(1-\theta)^{n-s-1}$, i.e. the p.d.f. of the beta distribution with parameters $s$ and $n-s$ except for a constant factor. This is the limiting form of the posterior for a beta prior with parameters $\alpha\to 0$ and $\beta\to 0$ (Degroot & Schervish, Example 7.3.13). In a normal model with mean $\theta$, known variance $\sigma^2$, and a $\mathcal{N}(\mu_0,\tau^2_0)$ prior distribution for $\theta$, if the prior precision, $1/\tau^2_0$, is small relative to the data precision, $n/\sigma^2$, then the posterior distribution is approximately as if $\tau^2_0=\infty$: $$p(\theta\mid x)\approx \mathcal{N}(\theta\mid\bar{x},\sigma^2/n)$$ i.e. the posterior distribution is approximately that which would result from assuming $p(\theta)$ is proportional to a constant for $\theta\in(-\infty,\infty)$, a distribution that is not strictly possible, but the limiting form of the posterior as $\tau^2_0$ approaches $\infty$ does exist (Gelman et al., p. 52).
From a "pragmatic" point of view, $p(x\mid\theta)p(\theta)=0$ when $p(x\mid\theta)=0$ whatever $p(\theta)$ is, so if $p(x\mid\theta)\ne 0$ in $(a,b)$, then $\int_{-\infty}^{\infty}p(x\mid\theta)p(\theta)d\theta=\int_a^b p(x\mid\theta)p(\theta)d\theta$. Improper priors may be employed to represent the local behavior of the prior distribution in the region where the likelihood is appreciable, say $(a,b)$. By supposing that to a sufficient approximation a prior follows forms such as $f(x)=k, x\in(-\infty,\infty)$ or $f(x)=kx^{-1}, x\in(0,\infty)$ only over $(a,b)$, that it suitably tails to zero outside that range, we ensure the priors actually used are proper (Box and Tiao, p. 21). So if the prior distribution of $\theta$ is $\mathcal{U}(-\infty,\infty)$ but $(a,b)$ is bounded, it is as if $\theta\sim\mathcal{U}(a,b)$, i.e. $p(x\mid\theta)p(\theta)=p(x\mid\theta)k\propto p(x\mid\theta)$. For a concrete example, this is what happens in Stan: if no prior is specified for a parameter, it is implicitly given a uniform prior on its support and this is handled as a multiplication of the likelihood by a constant.
10,779
How can an improper prior lead to a proper posterior distribution?
However, in the case of an improper prior, how do you know that the posterior distribution actually exists? The posterior might not be proper either. If the prior is improper and the likelihood is flat (because there are no meaningful observations), then the posterior equals the prior and is also improper. Usually you have some observations, and usually the likelihood is not flat, so the posterior is proper.
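In practice, propriety can often be checked directly: the posterior is proper exactly when the normalising constant $\int p(x\mid\theta)\,\pi(\theta)\,d\theta$ is finite. A minimal numerical sketch (the single observation $x = 1.3$ is made up) for a normal likelihood under the improper flat prior $\pi(\theta)=1$:

```python
import math
from scipy.integrate import quad

x = 1.3  # a single hypothetical observation from N(theta, 1)

def likelihood(theta):
    # N(theta, 1) density evaluated at the observation x
    return math.exp(-0.5 * (x - theta) ** 2) / math.sqrt(2 * math.pi)

# Under the flat improper prior pi(theta) = 1, the normalising constant
# is the integral of the likelihood over theta.  Here it is finite
# (it equals 1), so the posterior is a proper N(x, 1) distribution.
Z, _ = quad(likelihood, -50.0, 50.0)
print(Z)
```

With no data the "likelihood" is constant, the integral diverges, and the posterior is as improper as the prior, matching the statement above.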
10,780
How to deal with high correlation among predictors in multiple regression?
The key problem is not correlation but collinearity (see works by Belsley, for instance). This is best tested using condition indexes (available in R, SAS, and probably other programs as well). Correlation is neither a necessary nor a sufficient condition for collinearity. Condition indexes over 10 (per Belsley) indicate moderate collinearity, over 30 severe, but it also depends on which variables are involved in the collinearity. If you do find high collinearity, it means that your parameter estimates are unstable. That is, small changes (sometimes in the 4th significant figure) in your data can cause big changes in your parameter estimates (sometimes even reversing their sign). This is a bad thing. Remedies are: getting more data; dropping one variable; combining the variables (e.g. with partial least squares); and performing ridge regression, which gives biased results but reduces the variance on the estimates.
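A minimal sketch of Belsley-style condition indexes (the data are simulated purely for illustration): each column of the design matrix is scaled to unit length, and the indexes are ratios of the largest singular value to each singular value.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.01, size=n)  # nearly collinear with x1
X = np.column_stack([np.ones(n), x1, x2])

# Belsley-style condition indexes: scale columns to unit length,
# then take the ratio of the largest singular value to each one.
Xs = X / np.linalg.norm(X, axis=0)
sv = np.linalg.svd(Xs, compute_uv=False)
cond_idx = sv[0] / sv
print(cond_idx)  # the largest index far exceeds 30: severe collinearity
```

Replacing the `scale=0.01` noise with `scale=1.0` drops the largest index to a harmless level, even though `x1` and `x2` remain positively correlated, illustrating that correlation and collinearity are not the same thing.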
10,781
Best approach for model selection Bayesian or cross-validation?
Are these approaches suitable for solving this problem (deciding how many parameters to include in your model, or selecting among a number of models)? Either one could be, yes. If you're interested in obtaining a model that predicts best, out of the list of models you consider, the splitting/cross-validation approach can do that well. If you are interested in knowing which of the models (in your list of putative models) is actually the one generating your data, then the second approach (evaluating the posterior probability of the models) is what you want. Are they equivalent? Probably not. Will they give the same optimal model under certain assumptions or in practice? No, they are not in general equivalent. For example, using AIC (An Information Criterion, by Akaike) to choose the 'best' model corresponds to cross-validation, approximately. Use of BIC (Bayesian Information Criterion) corresponds to using the posterior probabilities, again approximately. These are not the same criterion, so one should expect them to lead to different choices, in general. They can give the same answers - whenever the model that predicts best also happens to be the truth - but in many situations the model that fits best is actually one that overfits, which leads to disagreement between the approaches. Do they agree in practice? It depends on what your 'practice' involves. Try it both ways and find out. Other than the usual philosophical difference of specifying prior knowledge in Bayesian models etc., what are the pros and cons of each approach? Which one would you choose? It's typically a lot easier to do the calculations for cross-validation, rather than compute posterior probabilities. It's often hard to make a convincing case that the 'true' model is among the list from which you are choosing.
This is a problem for use of posterior probabilities, but not cross-validation. Both methods tend to involve use of fairly arbitrary constants; how much is an extra unit of prediction worth, in terms of numbers of variables? How much do we believe each of the models, a priori? I'd probably choose cross-validation. But before committing, I'd want to know a lot about why this model-selection was being done, i.e. what the chosen model was to be used for. Neither form of model-selection may be appropriate if, e.g., causal inference is required.
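The AIC/BIC contrast mentioned above can be seen in a small sketch (simulated data; Gaussian AIC/BIC up to an additive constant): the two criteria share the same fit term and differ only in the penalty, $2k$ versus $k\log n$, so they need not select the same model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = np.linspace(-2.0, 2.0, n)
y = 1.0 + 0.5 * x + rng.normal(scale=0.3, size=n)  # the truth is linear

def aic_bic(degree):
    """Gaussian AIC and BIC (up to an additive constant) for a
    least-squares polynomial fit of the given degree."""
    X = np.vander(x, degree + 1)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    k = degree + 2  # polynomial coefficients plus the error variance
    return n * np.log(rss / n) + 2 * k, n * np.log(rss / n) + k * np.log(n)

scores = {d: aic_bic(d) for d in range(1, 6)}
for d, (aic, bic) in scores.items():
    print(d, aic, bic)
# BIC's penalty k*log(n) exceeds AIC's 2k once n >= 8, so BIC tends to
# prefer smaller models than AIC; the two need not pick the same degree.
```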
10,782
Best approach for model selection Bayesian or cross-validation?
Optimisation is the root of all evil in statistics! ;o) Anytime you try to select a model based on a criterion that is evaluated on a finite sample of data, you introduce a risk of over-fitting the model selection criterion and end up with a worse model than you started with. Both cross-validation and marginal likelihood are sensible model selection criteria, but they are both dependent on a finite sample of data (as are AIC and BIC - the complexity penalty can help, but doesn't solve this problem). I have found this to be a substantial issue in machine learning; see G. C. Cawley and N. L. C. Talbot, "Over-fitting in model selection and subsequent selection bias in performance evaluation," Journal of Machine Learning Research, vol. 11, pp. 2079-2107, July 2010. From a Bayesian point of view, it is better to integrate over all model choices and parameters. If you don't optimise or choose anything then it becomes harder to over-fit. The downside is you end up with difficult integrals, which often need to be solved with MCMC. If you want best predictive performance, then I would suggest a fully Bayesian approach; if you want to understand the data then choosing a best model is often helpful. However, if you resample the data and end up with a different model each time, it means the fitting procedure is unstable and none of the models are reliable for understanding the data. Note that one important difference between cross-validation and evidence is that the value of the marginal likelihood assumes that the model is not misspecified (essentially the basic form of the model is appropriate) and can give misleading results if it is. Cross-validation makes no such assumption, which means it can be a little more robust.
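Over-fitting a selection criterion is easy to reproduce: a hedged sketch (all data here are pure noise, generated for illustration) in which picking the "best" of many candidate features by apparent accuracy yields a score far above what any feature can achieve on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 200
X = rng.normal(size=(n, p))   # pure-noise features
y = rng.normal(size=n) > 0    # labels independent of every feature

# "Select" the single feature with the best apparent accuracy of the
# trivial rule (feature > 0) on the same data used for selection.
accs = [((X[:, j] > 0) == y).mean() for j in range(p)]
best = max(accs)
print(best)

# The winning score over-fits the selection criterion: on fresh data
# the selected feature would score about 0.5, like any other.
```

The same mechanism bites whether the criterion is cross-validation error or marginal likelihood, which is the point of the Cawley & Talbot paper cited above.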
10,783
What does "node size" refer to in the Random Forest?
A decision tree works by recursive partitioning of the training set: every node $t$ of a decision tree is associated with a set of $n_t$ data points from the training set. You might find the parameter nodesize in some random forest packages, e.g. in R: this is the minimum node size, a single value such as 10. If splitting a node would generate a child node smaller than nodesize, then the node is not split and it becomes a leaf node. It is actually a stopping criterion. This parameter implicitly sets the depth of your trees: setting it to bigger values causes smaller trees to be grown (and thus takes less time). Note that the default values are different for classification (default 1 in R) and regression (default 5 in R). In other packages you directly find the parameter depth instead, e.g. -depth in the WEKA random forest package: "The maximum depth of the trees, 0 for unlimited. (default 0)"
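A toy sketch of the stopping criterion (this is not randomForest's actual splitting code; it simply halves each node at its median purely to show how a minimum node size bounds tree depth):

```python
import numpy as np

def tree_depth(values, min_node_size, depth=0):
    """Depth reached by repeatedly halving a node, refusing any split
    that would create a child smaller than min_node_size -- the same
    stopping criterion as R's nodesize parameter."""
    if len(values) < 2 * min_node_size:
        return depth            # an undersized child would result
    values = np.sort(values)
    m = len(values) // 2        # toy split at the median
    return max(tree_depth(values[:m], min_node_size, depth + 1),
               tree_depth(values[m:], min_node_size, depth + 1))

node = np.random.default_rng(0).normal(size=256)
print(tree_depth(node, 1), tree_depth(node, 32))
# a larger minimum node size stops splitting sooner: shallower trees
```

With 256 points, a minimum node size of 1 lets the tree split all the way down to singleton leaves, while a minimum of 32 halts three levels in, which is exactly the sense in which nodesize implicitly sets tree depth.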
What does "node size" refer to in the Random Forest?
A decision tree works by recursive partition of the training set. Every node $t$ of a decision tree is associated with a set of $n_t$ data points from the training set: You might find the parameter n
What does "node size" refer to in the Random Forest? A decision tree works by recursive partition of the training set. Every node $t$ of a decision tree is associated with a set of $n_t$ data points from the training set: You might find the parameter nodesize in some random forests packages, e.g. R: This is the minimum node size, in the example above the minimum node size is 10. The minimum node size is a single value: e.g. 10. If splitting a node generates two nodes for which one is smaller than nodesize then the node is not split, and it becomes a leaf node. It is actually a stopping criterion. This parameter implicitly sets the depth of your trees. Setting this number to bigger values causes smaller trees to be grown (and thus take less time). Note that the default values are different for classification (default 1 in R) and regression (default 5 in R). In other packages you directly find the parameter depth, e.g. WEKA: -depth from WEKA random forest package The maximum depth of the trees, 0 for unlimited. (default 0)
What does "node size" refer to in the Random Forest? A decision tree works by recursive partition of the training set. Every node $t$ of a decision tree is associated with a set of $n_t$ data points from the training set: You might find the parameter n
10,784
What does "node size" refer to in the Random Forest?
It is not clear whether nodesize applies to the "in-bag" sample or to the "out-of-bag" sample. If it applies to the "out-of-bag" sample, it is slightly more restrictive.
What does "node size" refer to in the Random Forest?
It is not clear if the nodesize is on the "in-bag" sampling or on the "out-of-bag" error. If it is on the "out-of-bag" sampling, it is slightly more restrictive.
What does "node size" refer to in the Random Forest? It is not clear if the nodesize is on the "in-bag" sampling or on the "out-of-bag" error. If it is on the "out-of-bag" sampling, it is slightly more restrictive.
What does "node size" refer to in the Random Forest? It is not clear if the nodesize is on the "in-bag" sampling or on the "out-of-bag" error. If it is on the "out-of-bag" sampling, it is slightly more restrictive.
10,785
What are the main differences between Granger's and Pearl's causality frameworks?
Granger causality is essentially usefulness for forecasting: X is said to Granger-cause Y if Y can be better predicted using the histories of both X and Y than it can by using the history of Y alone. GC has very little to do with causality in Pearl's counterfactual sense, which involves comparisons of different states of the world that could have occurred. So Peeps Granger-cause Easter, but they do not cause it. Of course, the two will overlap in a world where there are no potential causes other than X, but that is not a very likely setting and a fundamentally untestable one. Another less restrictive way they can coincide is if, conditional on the realised history of Y and X, the next realisation of X is independent of the potential outcomes. This point is made in Lechner, M. (2010), "The Relation of Different Concepts of Causality Used in Time Series and Microeconometrics," Econometric Reviews, 30, 109-127 (WP link), which is written in the potential outcomes framework, rather than Pearl's DAGs. Addendum: Let me make an implicit assumption more explicit. The crucial ingredient for my claim is that Easter does not have a fixed date. Suppose you knew nothing about Easter and wanted to forecast its date next year. From historical data (history of Y), you can see that Easter takes place in the spring. But can we do better than that? Using Peeps sales or marketing data (X) from near the holiday, we can see that Peeps do Granger-cause it, since that data is useful for forecasting Easter more precisely. The corollary is that Christmas tree sales do not Granger-cause Christmas, since if you know that Christmas took place on December 25th for centuries (adjusting for various calendar reforms and church schisms), tree sales do not help.
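The forecasting definition can be sketched directly; a minimal illustration (simulated data, not a formal test with significance levels) comparing the residual sum of squares of an autoregression of Y on its own history with and without X's history:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = np.zeros(T)
for t in range(1, T):
    # by construction, lagged x drives y
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()

def rss(design):
    """Residual sum of squares of regressing y[1:] on the design."""
    beta, *_ = np.linalg.lstsq(design, y[1:], rcond=None)
    return float(np.sum((y[1:] - design @ beta) ** 2))

own = np.column_stack([np.ones(T - 1), y[:-1]])      # y's own history
both = np.column_stack([own, x[:-1]])                # plus x's history
print(rss(own), rss(both))
# adding x's history shrinks the residuals substantially, so x
# Granger-causes y here (which it does, by construction)
```

A proper Granger test would compare the two fits with an F statistic over several lags, but the core idea is exactly this nested-regression comparison.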
10,786
What are the main differences between Granger's and Pearl's causality frameworks?
Pearl provides a calculus for reasoning about causality, Granger provides a method for discovering potential causal relations. I will elaborate: Pearl's work is based on what he has termed "Structural Causal Models", which is a triple M = (U, V, F). In this model U is the collection of the exogenous (background, or driving) unobserved variables, V is the collection of endogenous (determined in some way by variables from U and V) variables, and F is a collection of functions f1, f2, ..., one for each Vi in V. The variable Vi is fully determined as Vi = fi(U, V \ Vi), that is, the arguments to fi are some of the variables in U and some of the variables in V, but not Vi itself. In order to turn this into a probabilistic model, U is augmented with a probability distribution. An example is given where U1 is a court order for a man's execution, and V comprises the actions of a captain (V1) and two riflemen (V2, V3) in a firing squad, as well as the living/dead state of the person to whom the court order pertains (V4). If the judge orders the man shot (U1 = 'execute'), then this causes the captain to issue the order to fire, which causes the riflemen to shoot the prisoner, hence causing his death. If the court order is not given, the captain remains silent, the riflemen don't shoot, and the prisoner is left alive. Pearl shows how his model can be used to reason about causation, design experiments, predict the effects of intervention, and answer counter-factual questions. Intervention is distinct from anything in probability theory. In doing intervention we interact with the model and hold a variable constant (which is more than merely observing that the variable is in a particular state, as with probabilistic conditioning), and Pearl describes how to "perform surgery" on the model in order to predict the outcome of this intervention.
Counter-factuals are even more difficult to answer, as we want to know what would have been the outcome of an experiment had something not been the case, even though it was. This is what Pearl's models are about. Granger causality, on the other hand, is a statistical method, and makes no attempt to "prove" causation. If we have a whole bunch of processes, we can use Granger causality to obtain a graph of "plausible causal relations", which may be interpreted as potentially genuine causes, or to provide measures of their interconnectedness, or to detect the flow of energy or information amongst the processes. In the case of literal causation, you can imagine a situation in which experiments (necessary for the methods of Pearl) are very costly. In which case, you may still be able to observe the system and apply Granger causality to narrow things down to potential causes. After doing this, you can have some sense of where to appropriate additional resources. One question that immediately comes to mind when reading about Pearl's causal models is "how does one build the model in the first place?". This would be accomplished through a combination of domain expertise and hypothesizing, but Granger causality could potentially provide some more information about how to construct the Pearl causal model as well. Since I don't have enough reputation to comment, I will add here a criticism of Dimitriy V. Masterov's answer: Peeps do not Granger-cause Easter. Easter occurs periodically: even though the occurrence of Peeps is closely correlated with that of Easter, the history of Easter's occurrences is enough to predict its future occurrence. Information about Peeps does not add any additional information about Easter. I think this is a key point: Granger causality is much more than mere correlation. Processes that are correlated may not have any Granger-causal relation, and processes with a Granger-causal relation may not be correlated.
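The firing-squad example can be sketched as code: a toy structural causal model (the function and argument names are my own, not Pearl's notation) in which passing a value for an endogenous variable implements the "surgery" of an intervention, severing the arrows from that variable's parents.

```python
def firing_squad(court_order, captain=None, rifleman_a=None):
    """Toy SCM of Pearl's firing-squad example.  Each endogenous
    variable is a function of its parents; supplying `captain` (or
    `rifleman_a`) models the intervention do(captain = value),
    cutting the arrow from the court order to the captain."""
    c = court_order if captain is None else captain
    a = c if rifleman_a is None else rifleman_a  # riflemen obey the captain
    b = c
    dead = a or b
    return dead

# Observation: with no court order, the prisoner lives.
assert firing_squad(court_order=False) is False
# Intervention do(captain = True): the prisoner dies regardless of the
# court order -- something conditioning alone cannot express.
assert firing_squad(court_order=False, captain=True) is True
# Intervening on a single rifleman still kills the prisoner, since
# dead = A or B.
assert firing_squad(court_order=False, rifleman_a=True) is True
```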
What are the main differences between Granger's and Pearl's causality frameworks?
Pearl provides a calculus for reasoning about causality, Granger provides a method for discovering potential causal relations. I will elaborate: Pearl's work is based on what he has termed "Structura
What are the main differences between Granger's and Pearl's causality frameworks? Pearl provides a calculus for reasoning about causality, Granger provides a method for discovering potential causal relations. I will elaborate: Pearl's work is based on what he has termed "Structural Causal Models", which is a triple M = (U, V, F). In this model U is the collection of the exogenous (background, or driving) unobserved variables, V is the collection of endogenous (determined in some way by variables from U and V) variables, and F is a collection of functions f1, f2, ..., for each Vi in V. The variable Vi is fully determined as Vi = fi(U, V \ Vi), that is the arguments to fi are some of the variables in U, and some of the variables in V, but not Vi itself. In order to turn this into a probabilistic model, U is augmented with a probability distribution. An example is given where U1 is a court order for a man's execution, V are the actions of a captain (V1) and two riflemen (V2,V3) in a firing squad as well as the living/dead state of the person to whom the court order pertains (V3). If the judge orders the man shot (U1 = 'execute'), then this causes the captain to issue the order to fire, which causes the riflemen to shoot the prisoner, and hence causing his death. If the court order is not given, the captain remains silent, the riflemen don't shoot, and the prisoner is left alive. Pearl argues how his model can be used to reason about causation, design experiments, predict the effects of intervention, and answer counter-factual questions. Intervention is distinct from anything in probability theory. In doing intervention we interact with the model and hold a variable constant (which is more than merely observing that the variable is in a particular state, as with probabilistic conditioning), and Pearl describes how to "perform surgery" on the model in order to predict the outcome of this intervention. 
Counter-factuals are even more difficult to answer, as we want to know what would have been the outcome of an experiment had something not been the case, even though it was. This is what Pearl's models are about. Granger causality, on the other hand, is a statistical method, and makes no attempt to "prove" causation. If we have a whole bunch of processes, we can use Granger causality to obtain a graph of "plausible causal relations", which may be interpreted as potentially genuine causes, or to provide measures of their interconnectedness, or to detect the flow of energy or information amongst the processes. In the case of literal causation, you can imagine a situation in which experiments (necessary for the methods of Pearl) are very costly. In which case, you may still be able to observe the system and apply Granger causality to narrow things down to potential causes. After doing this, you can have some sense of where to appropriate additional resources. One question that immediately comes to mind when reading about Pearl's causal models is "how does one build the model in the first place?". This would be accomplished through a combination of domain expertise and hypothesizing, but Granger causality could potentially provide some more information about how to construct the Pearl causal model as well. Since I don't have enough reputation to comment, I will add here a criticism of Dimitriy V. Masterov's answer: Peeps do not Granger-cause Easter. Easter occurs periodically; even though the occurrence of Peeps is closely correlated with that of Easter, the history of the occurrences of Easter is enough to predict its future occurrence. Information about Peeps does not add any additional information about Easter. I think this is a key point: Granger causality is much more than mere correlation. Processes that are correlated may not have any Granger-causal relation, and processes with a Granger-causal relation may not be correlated.
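To make the Granger side of this concrete, here is a small Python sketch (the coefficients, sample size, and seed are arbitrary illustration choices, not part of either framework). It simulates a pair of processes where x drives y with a one-step lag, then compares an autoregression of y on its own past with one that also includes x's past; a large F-type statistic indicates that x's history improves prediction of y, i.e. x Granger-causes y:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()  # x drives y with a lag

def rss(X, z):
    """Residual sum of squares from an OLS fit of z on X."""
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    r = z - X @ beta
    return r @ r

Y = y[1:]
X_restricted = np.column_stack([np.ones(n - 1), y[:-1]])    # y's own lag only
X_full = np.column_stack([np.ones(n - 1), y[:-1], x[:-1]])  # plus x's lag

rss_r, rss_f = rss(X_restricted, Y), rss(X_full, Y)
# F-type statistic for the single extra lag of x: large values suggest that
# x's history helps predict y beyond y's own history
F = (rss_r - rss_f) / (rss_f / (len(Y) - X_full.shape[1]))
print(F)
```

Note this only establishes predictive usefulness of x's past, not intervention-level causation in Pearl's sense.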
P-value in a two-tail test with asymmetric null distribution
If we look at the 2x2 exact test, and take that to be our approach, what's "more extreme" might be directly measured by 'lower likelihood'. (Agresti[1] mentions a number of approaches by various authors to computing two tailed p-values just for this case of the 2x2 Fisher exact test, of which this approach is one of the three specifically discussed as 'most popular'.) For a continuous (unimodal) distribution, you just find the point in the other tail with the same density as your sample value, and everything with equal or lower likelihood in the other tail is counted in your computation of p-value. For discrete distributions which are monotonically nonincreasing in the tails, it's just about as simple. You just count everything with equal or lower likelihood than your sample, which given the assumptions I added (to make the term "tails" fit with the idea), gives a way to work it out. If you're familiar with HPD intervals (and again, we're dealing with unimodality), it's basically like taking everything outside an open HPD interval that's bounded in one tail by your sample statistic. [To reiterate -- this is likelihood under the null we're equating here.] So at least in the unimodal case, it seems simple enough to emulate Fisher's exact test and still talk about the two tails. However, you may not have intended to invoke the spirit of Fisher's exact test in quite this way. So thinking outside that idea of what makes something 'as, or more extreme' for a moment, let's head just slightly more toward the Neyman-Pearson end of things. It can help (before you test!) to set about defining a rejection region for a test conducted at some generic level $\alpha$ (I don't mean you have to literally compute one, just how you would compute one). As soon as you do, the way to compute two tailed p-values for your case should become obvious. This approach can be valuable even if one is conducting a test outside the usual likelihood ratio test.
For some applications, it can be tricky to figure out how to compute p-values in asymmetric permutation tests... but it often becomes substantially simpler if you think about a rejection rule first. With F-tests of variance, I've noticed that the "double one tail p-value" can give quite different p-values to what I see as the right approach. [I insist, for example, that it shouldn't matter which group you call "sample 1", or whether you put the larger or the smaller variance in the numerator - yet with some common approaches these apparently reasonable conditions are violated.] [1]: Agresti, A. (1992), A Survey of Exact Inference for Contingency Tables Statistical Science, Vol. 7, No. 1. (Feb.), pp. 131-153.
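The equal-density construction for a continuous, unimodal, asymmetric null can be sketched in a few lines of Python with scipy. The F(5, 10) null and the observed value 3.5 below are arbitrary illustration choices; the code finds the lower-tail point with the same density as the observed statistic and compares the resulting p-value with the "double one tail" version:

```python
import numpy as np
from scipy import stats, optimize

null = stats.f(5, 10)              # an asymmetric, unimodal null distribution
s = 3.5                            # observed statistic, lying in the upper tail

# "double one tail": twice the smaller one-tailed p-value
p_double = 2 * min(null.cdf(s), null.sf(s))

# equal-density construction: find the lower-tail point with the same density,
# then add its tail probability to that of the observed statistic
mode = (5 - 2) / 5 * 10 / (10 + 2)  # mode of F(d1, d2): (d1-2)/d1 * d2/(d2+2)
s_low = optimize.brentq(lambda u: null.pdf(u) - null.pdf(s), 1e-9, mode)
p_equal_density = null.cdf(s_low) + null.sf(s)
print(p_double, p_equal_density)
```

The two answers generally differ for asymmetric nulls, which is exactly why it pays to decide on the rejection-region logic before computing a p-value.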
P-value in a two-tail test with asymmetric null distribution
A p-value's well-defined once you create a test statistic that partitions the sample space & orders the partitions according to your notions of increasing discrepancy with the null hypothesis. (Or, equivalently, once you create a set of nested rejection regions of decreasing size.) So what R. & S. are getting at is that if you consider either high or low values of a statistic $S$ to be interestingly discrepant with your null hypothesis you still have a little work to do to get a proper test statistic $T$ from it. When $S$ has a symmetric distribution around nought they seem to leap to $T=|S|$ without much thought, & therefore regard the asymmetric case as presenting a puzzle. Doubling the lowest one-tailed p-value can be seen as a multiple-comparisons correction for carrying out two one-tailed tests. After all, following a two-tailed test, we're usually very much inclined to regard any doubt cast on the truth of the null as favouring another hypothesis whose direction is determined by the observed data. A proper test statistic is then $t=\min(\Pr_{H_0}(S<s),\Pr_{H_0}(S>s))$, & when $S$ has a continuous distribution the p-value is given by $2t$.† When $S$ has a continuous distribution, the approach to forming a two-tailed test shown by @Glen_b—defining the density of $S$ as the test statistic: $T=f_S(S)$—will of course produce valid p-values; but I'm not sure that it was ever recommended by Fisher, or that it's currently recommended by neo-Fisherians. If at first glance it appears more principled somehow than doubling the one-tailed p-value, note that having to deal with probability density rather than mass means that the two-tailed p-value thus calculated may change when the test-statistic is transformed by an order-preserving function.
For example, if to test the null that a Gaussian mean is equal to nought, you take a single observation $X$ & obtain $1.66$, the value with equal density at the other tail is $-1.66$, & the p-value therefore $$p=\Pr(X > 1.66) +\Pr(X<-1.66)=0.048457+0.048457=0.09691.$$ But if you consider it as testing the null that a log-Gaussian geometric mean is equal to one & take a single observation $Y$ & obtain $\mathrm{e}^{1.66}=5.2593$, the value with equal density at the other tail is $0.025732$($=\mathrm{e}^{-3.66}$), & the p-value therefore $$p=\Pr(Y>5.2593) +\Pr(Y<0.025732)=0.048457+0.00012611=0.04858.$$ Note that cumulative distribution functions are invariant to order-preserving transformations, so in the example above doubling the lowest p-value gives \begin{align}p=2t&=2\min(\Pr(X<1.66),\Pr(X>1.66))\\&=2\min(\Pr(Y<5.2593),\Pr(Y>5.2593))\\&=2\min(0.048457,0.951543)\\&=2\times 0.048457=0.09691.\end{align} A kind of sequel to this answer, discussing some principles of test construction in which the alternative hypothesis is explicitly stated, can be found here. † When $S$ has a discrete distribution, writing $$p_\mathrm{L} = \Pr_{H_0}(S\leq s)$$ $$p_\mathrm{U} = \Pr_{H_0}(S\geq s)$$ for the lower & upper one-tailed p-values, the two-tailed p-value is given by $$ \Pr(T\leq t) = \begin{cases} p_\mathrm{L} + \Pr_{H_0}(P_\mathrm{U} \leq p_\mathrm{L}) & \text{when}\ p_\mathrm{L} \leq p_\mathrm{U}\\ p_\mathrm{U} + \Pr_{H_0}(P_\mathrm{L} \leq p_\mathrm{U}) & \text{otherwise} \end{cases} $$ ; i.e. by adding to the smaller one-tailed p-value the largest achievable p-value in the other tail that does not exceed it. Note that $2t$ is still an upper bound.
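The three p-values in the example can be verified numerically; a short Python sketch with scipy (using the fact that for the standard lognormal, $\Pr(Y<\mathrm{e}^{t})=\Phi(t)$, so all tail probabilities reduce to Gaussian ones):

```python
from scipy import stats

x = 1.66
norm = stats.norm()

# equal-density two-tailed p-value on the Gaussian scale (symmetric about 0,
# so the equal-density point is simply -x)
p_gauss = norm.sf(x) + norm.cdf(-x)

# on the lognormal scale, the point with the same density as e^1.66 is e^(-3.66),
# so the equal-density p-value changes under the transformation:
p_lognorm = norm.sf(1.66) + norm.cdf(-3.66)

# doubling the smaller one-tailed p-value is invariant to the transformation
p_double = 2 * min(norm.cdf(x), norm.sf(x))
print(round(p_gauss, 5), round(p_lognorm, 5), round(p_double, 5))
```

This reproduces 0.09691 and 0.04858 for the two density-based answers, and confirms the doubled p-value agrees with the Gaussian-scale answer on both scales.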
Optimal construction of day feature in neural networks
Your second representation is more traditional for categorical variables like day of week. This is also known as creating dummy variables and is a widely used method for encoding categorical variables. If you used 1-7 encoding you're telling the model that days 4 and 5 are very similar, while days 1 and 7 are very dissimilar. In fact, days 1 and 7 are just as similar as days 4 and 5. The same logic holds up for 0-30 encoding for days of the month. Day of the month is a little trickier, because while every week has the same 7 days, not every month has the same 30 days: some months have 31 days, and some months have 28 days. Since both weeks and months are cyclical, you could use Fourier transformations to convert them to smooth continuous variables. For example (using R, my programming language of choice):

day_of_month = c(1:31, 1:28, 1:30)
day_of_year <- 1:length(day_of_month)
s = sin((2*pi)/30*day_of_month)
c = cos((2*pi)/30*day_of_month)
plot(day_of_month ~ day_of_year)
lines(15*s+15 ~ day_of_year, col='blue')
lines(15*c+15 ~ day_of_year, col='red')
legend(10, 30, c('raw', 'sin', 'cos'), c('black', 'blue', 'red'))

(I scaled the sine/cosine variables to be 0/30, rather than -1/1, so the graph looks better.) As you can see, while the raw "day of month" variable jumps back to zero at the end of each month, the sine and cosine transformations make a smooth transition that lets the model know days at the end of one month are similar to days at the beginning of the next month.
You can add the rest of the Fourier terms as follows:

for(i in 1:3){
  s = sin((2*pi)/30*day_of_month + 30 * i/4)
  c = cos((2*pi)/30*day_of_month + 30 * i/4)
  lines(15*s+15 ~ day_of_year, col='blue')
  lines(15*c+15 ~ day_of_year, col='red')
}
legend(10, 30, c('raw', 'sin', 'cos'), c('black', 'blue', 'red'))

Each pair of sine/cosine waves makes a circle:

m <- lapply(1:4, function(i){
  as.matrix(
    data.frame(
      s = sin((2*pi)/30*day_of_month + 30 * i/4),
      c = cos((2*pi)/30*day_of_month + 30 * i/4)
    )
  )
})
m <- do.call(cbind, m)
pairs(m)

This page has a really handy explanation of how to manipulate sine and cosine waves.
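The circle idea can also be checked numerically for day of week. A minimal Python/numpy sketch (assuming a 7-day cycle mapped onto angles) showing that the sine/cosine encoding puts days 1 and 7 exactly as close together as days 4 and 5, unlike the raw 1-7 scale:

```python
import numpy as np

days = np.arange(1, 8)                                  # day of week, 1..7
theta = 2 * np.pi * (days - 1) / 7
enc = np.column_stack([np.sin(theta), np.cos(theta)])   # one point per day on a circle

def d(a, b):
    """Euclidean distance between the encodings of two days."""
    return float(np.linalg.norm(enc[a - 1] - enc[b - 1]))

# On the raw 1..7 scale, days 1 and 7 look maximally far apart;
# on the circle they are adjacent, exactly like days 4 and 5.
print(d(1, 7), d(4, 5), d(1, 4))
```

The first two distances are identical (adjacent days), while day 1 to day 4 is genuinely farther, which is the geometry you want the model to see.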
How robust is Pearson's correlation coefficient to violations of normality?
Short answer: Very non-robust. The correlation is a measure of linear dependence, and when one variable can't be written as a linear function of the other (and still have the given marginal distribution), you can't have perfect (positive or negative) correlation. In fact, the possible correlation values can be severely restricted. The problem is that while the population correlation is always between $-1$ and $1$, the exact range attainable heavily depends on the marginal distributions. A quick proof and demonstration:

Attainable range of the correlation

If $(X,Y)$ has the distribution function $H$ and marginal distribution functions $F$ and $G$, there exist some rather nice upper and lower bounds for $H$, $$ H_-(x,y) \leq H(x,y) \leq H_+(x,y), $$ called Fréchet bounds. These are $$ \begin{aligned} H_-(x,y) &= \max(F(x) + G(y)-1, 0)\\ H_+(x,y) &= \min(F(x), G(y)). \end{aligned} $$ (Try to prove it; it's not very difficult.) The bounds are themselves distribution functions. Let $U$ have a uniform distribution. The upper bound is the distribution function of $(X,Y)=(F^-(U), G^-(U))$ and the lower bound is the distribution function of $(F^-(U), G^-(1-U))$. Now, using this variant on the formula for the covariance, $$ \mathop{\textrm{Cov}}(X,Y)=\iint H(x,y)-F(x)G(y) \mathop{\mathrm d\!}x \mathop{\mathrm d\!}y, $$ we see that we obtain the maximum and minimum correlation when $H$ is equal to $H_+$ and $H_-$, respectively, i.e., when $Y$ is a (positively or negatively, respectively) monotone function of $X$.

Examples

Here are a few examples (without proofs): When $X$ and $Y$ are normally distributed, we obtain the maximum and minimum when $(X,Y)$ has the usual bivariate normal distribution where $Y$ is written as a linear function of $X$. That is, we get the maximum for $$Y=\mu_Y+\sigma_Y \frac{X-\mu_X}{\sigma_X}.$$ Here the bounds are (of course) $-1$ and $1$, no matter what means and variances $X$ and $Y$ have.
When $X$ and $Y$ have lognormal distributions, the lower bound is never attainable, as that would imply that $Y$ could be written $Y=a-bX$ for some $a$ and positive $b$, and $Y$ can never be negative. There exist (slightly ugly) formulas for the exact bounds, but let me just give a special case. When $X$ and $Y$ have standard lognormal distributions (meaning that when log-transformed, they are standard normal), the attainable range is $[-1/e, 1]\approx [-0.37, 1]$. (In general, the upper bound is also restricted.) When $X$ has a standard normal distribution and $Y$ has a standard lognormal distribution, the correlation bounds are $$\pm \frac{1}{\sqrt{e-1}} \approx 0.76.$$ Note that all bounds are for the population correlation. The sample correlation can easily extend outside the bounds, especially for small samples (quick example: sample size of 2).

Estimating the correlation bounds

It's actually quite easy to estimate the upper and lower bounds on the correlation if you can simulate from the marginal distributions. For the last example above, we can use this R code:

> n = 10^5  # Sample size: 100,000 observations
> x = rnorm(n)   # From the standard normal distribution
> y = rlnorm(n)  # From the standard lognormal distribution
>
> # Estimated maximum correlation
> cor( sort(x), sort(y) )
0.772
>
> # Estimated minimum correlation
> cor( sort(x), sort(y, decreasing=TRUE) )
−0.769

If we only have actual data and don't know the marginal distributions, we can still use the above method. It's not a problem that the variables are dependent, as long as the observation pairs are independent. But it helps to have many observation pairs.

Transforming the data

It is of course possible to transform the data to be (marginally) normally distributed and then calculate the correlation on the transformed data. The problem is one of interpretability. (And why use the normal distribution instead of any other distribution where $Y$ can be a linear function of $X$?)
For data that are bivariate normally distributed, the correlation has a nice interpretation (its square is the variance of one variable explained by the other). This is not the case here. What you're really doing here is creating a new measure of dependence that does not depend on the marginal distributions; i.e., you are creating a copula-based measure of dependence. There already exist several such measures, Spearman's ρ and Kendall's τ being the most well-known. (If you're really interested in dependence concepts, it's not a bad idea to look into copulas.)

In conclusion

Some final thoughts and advice: Just looking at the correlation has one big problem: it makes you stop thinking. Looking at scatter plots, on the other hand, often makes you start thinking. My main advice would therefore be to examine scatter plots and try to model dependence explicitly. That said, if you need a simple correlation-like measure, I would just use Spearman's ρ (and associated confidence interval and tests). Its range is not restricted. But be very aware of non-monotone dependence. The Wikipedia article on correlation has a couple of nice plots illustrating potential problems.
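The same sorting trick translates directly to Python. The sketch below is a numpy restatement of the normal/lognormal simulation above (seed and sample size are arbitrary), comparing the sorted-sample estimate to the theoretical bound $1/\sqrt{e-1}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**5
x = rng.normal(size=n)             # standard normal marginal
y = np.exp(rng.normal(size=n))     # standard lognormal marginal

# Sorting both samples pairs them comonotonically, which approximately
# attains the maximum possible correlation for these two marginals.
r_max = np.corrcoef(np.sort(x), np.sort(y))[0, 1]
bound = 1 / np.sqrt(np.e - 1)      # theoretical bound, about 0.7628
print(r_max, bound)
```

The estimate lands close to the bound, well short of 1, illustrating that these marginals simply cannot be perfectly correlated.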
How robust is Pearson's correlation coefficient to violations of normality?
What do the distributions of these variables look like (beyond being skewed)? If the only non-normality is skewness, then a transformation of some sort must help. But if these variables have a lot of lumping, then no transformation will bring them to normality. If the variable isn't continuous, the same is true. How robust is correlation to violations? Take a look at the Anscombe Quartet. It illustrates several problems quite well. As for other types of analysis, it depends on the analysis. If the skewed variables are independent variables in a regression, for example, there may not be a problem at all - you need to look at the residuals.
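A quick Python sketch of the "transform when the only problem is skewness" point (the lognormal relationship here is an arbitrary illustration, not any specific dataset): with y a monotone but heavily skewed function of x, the Pearson correlation sits well below 1, while a log transformation, or a rank-based measure such as Spearman's, recovers the perfect monotone association:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(size=2000)
y = np.exp(x)                    # heavily right-skewed, but a monotone function of x

r_raw = stats.pearsonr(x, y)[0]           # Pearson on the skewed scale: well below 1
r_log = stats.pearsonr(x, np.log(y))[0]   # log transform restores exact linearity
rho = stats.spearmanr(x, y)[0]            # rank correlation is unaffected by the skew
print(r_raw, r_log, rho)
```

If instead the variables were lumpy or discrete, as noted above, no such transformation would rescue the Pearson coefficient.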
10,792
General Linear Model vs. Generalized Linear Model (with an identity link function?)
A generalized linear model specifying an identity link function and a normal family distribution is exactly equivalent to a (general) linear model. If you're getting noticeably different results from each, you're doing something wrong. Note that specifying an identity link is not the same thing as specifying a normal distribution. The distribution and the link function are two different components of the generalized linear model, and each can be chosen independently of the other (although certain links work better with certain distributions, so most software packages specify the choice of links allowed for each distribution). Some software packages may report noticeably different $p$-values when the residual degrees of freedom are small if they calculate these using the asymptotic normal and chi-square distributions for all generalized linear models. All software will report $p$-values based on Student's $t$- and Fisher's $F$-distributions for general linear models, as these are more accurate for small residual degrees of freedom because they do not rely on asymptotics. Student's $t$- and Fisher's $F$-distributions are strictly valid for the normal family only, although some software for generalized linear models may also use them as approximations when fitting other families with a scale parameter that is estimated from the data.
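A minimal sketch of the equivalence (pure Python, made-up data): maximizing the normal log-likelihood under an identity link is the same optimization as least squares, so gradient ascent on the log-likelihood lands on the closed-form OLS solution.

```python
# Made-up data for a simple linear model y = b0 + b1*x + normal error.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [1.1, 2.9, 5.2, 7.1, 8.8, 11.2]
n = len(x)

# (a) General linear model: closed-form least-squares solution.
mx, my = sum(x) / n, sum(y) / n
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
b0 = my - b1 * mx

# (b) GLM view: maximize the normal log-likelihood with an identity link.
# Its gradient in (g0, g1) is proportional to the least-squares gradient,
# so gradient ascent converges to the same coefficients.
g0, g1 = 0.0, 0.0
lr = 0.01
for _ in range(20000):
    r = [yi - (g0 + g1 * xi) for xi, yi in zip(x, y)]
    g0 += lr * sum(r) / n
    g1 += lr * sum(ri * xi for ri, xi in zip(r, x)) / n

print(b0, b1)  # closed-form least squares
print(g0, g1)  # agrees to numerical precision
```

Any residual discrepancy between a software package's two fits therefore points to different inference defaults (asymptotic vs $t$/$F$), not to different point estimates.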
10,793
General Linear Model vs. Generalized Linear Model (with an identity link function?)
I would like to include my experience in this discussion. I have seen that a generalized linear model (specifying an identity link function and a normal family distribution) is identical to a general linear model only when you use the maximum likelihood estimate as the scale parameter method. Otherwise, if "fixed value = 1" is chosen as the scale parameter method, you get very different p-values. My experience suggests that usually "fixed value = 1" should be avoided. I'm curious to know if someone knows when it is appropriate to choose fixed value = 1 as the scale parameter method. Thanks in advance. Mark
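A hypothetical illustration of why the scale parameter method matters (made-up data, pure Python): Wald standard errors scale with the square root of the dispersion, so fixing the scale at 1 instead of estimating it rescales every z-statistic and hence every p-value.

```python
import math

# Toy simple regression (made-up data) with residual variance well below 1.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.5, 2.1, 3.0, 6.5, 7.2, 10.9]
n, p = len(x), 2
mx, my = sum(x) / n, sum(y) / n
sxx = sum((a - mx) ** 2 for a in x)
b1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sxx
b0 = my - b1 * mx
rss = sum((yi - b0 - b1 * xi) ** 2 for xi, yi in zip(x, y))

phi_hat = rss / (n - p)                # estimated dispersion (scale)
se_est = math.sqrt(phi_hat / sxx)      # usual standard error of the slope
se_fixed = math.sqrt(1.0 / sxx)        # standard error with scale fixed at 1

print(se_est / se_fixed)  # equals sqrt(phi_hat); z and p-values shift accordingly
```

Whenever the estimated dispersion is far from 1, the "fixed value = 1" setting will produce standard errors, and therefore p-values, that disagree with the ordinary linear-model output.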
10,794
Can (should?) regularization techniques be used in a random effects model?
There are a few papers that deal with this question. I would look up, in no special order: Pen.LME: Howard D. Bondell, Arun Krishna, and Sujit K. Ghosh. Joint variable selection for fixed and random effects in linear mixed-effects models. Biometrics, 66(4):1069-1077, 2010. GLMMLASSO: Jurg Schelldorfer, Peter Buhlmann, Sara van de Geer. Estimation for high-dimensional linear mixed-effects models using L1-penalization. Scandinavian Journal of Statistics, 38(2):197-214, 2011. Both can be found online. I happen to be finishing up a paper on applying an elastic net penalty to the mixed model (LMMEN) now and plan to send it for journal review in the coming month. LMMEN: Sidi, Ritov, Unger. Regularization and Classification of Linear Mixed Models via the Elastic Net Penalty. Overall, if you are modeling data that is either not normal or does not have an identity link, I would go with GLMMLASSO (but beware that it cannot handle many random effects). Otherwise Pen.LME is good, given that you do not have highly correlated data, be it in the fixed or random effects. In the latter case you can mail me and I would be happy to send you code/paper (I will put it on CRAN in the near future). I uploaded to CRAN today - lmmen. It solves the linear mixed model problem with an elastic-net type penalty on the fixed and random effects simultaneously. The package also includes cross-validation functions for the lmmlasso and glmmLasso packages.
10,795
Can (should?) regularization techniques be used in a random effects model?
I always viewed ridge regression as just empirical random effects models not limited to a single categorical variable (and no fancy correlation matrices). You can almost always get the same predictions from cross-validating a ridge penalty and fitting/estimating a simple random effect. In your example, you could get fancy and have a separate ridge penalty on the demo/diag features and another one on the patient indicators (using something like the penalty scaling factor in glmnet). Alternatively, you could include a fancy random effect that has time-correlated effects by person. None of these possibilities are right or wrong; they're just useful.
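A sketch of that equivalence in the simplest possible setting (one grouping factor, no intercept or other covariates, penalty λ treated as known; made-up numbers): ridge on the group dummies shrinks each group mean by n_j/(n_j + λ), which is exactly the random-effect (BLUP) shrinkage with λ = σ²/τ².

```python
# One categorical predictor ("patient"), no intercept: effects shrink toward 0.
# Dummy columns are orthogonal, so X'X is diagonal and each ridge coordinate
# solves (n_j + lam) * u_j = n_j * ybar_j in closed form.
groups = {"a": [1.2, 0.8, 1.5], "b": [-0.4, -1.1], "c": [0.3, 0.1, -0.2, 0.6]}
lam = 2.0  # arbitrary penalty; plays the role of sigma^2 / tau^2

for g, ys in groups.items():
    n_j = len(ys)
    ridge_u = sum(ys) / (n_j + lam)                   # ridge coefficient
    blup_u = (n_j / (n_j + lam)) * (sum(ys) / n_j)    # shrunken group mean
    print(g, ridge_u, blup_u)  # identical for every group
```

The only practical difference is how λ is picked: cross-validation for ridge versus variance-component estimation (REML/ML) for the mixed model.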
10,796
Can (should?) regularization techniques be used in a random effects model?
I am currently thinking about a similar question. I think in application you can do it if it works and you believe using it is reasonable. If it is a usual setting for random effects (that is, you have repeated measurements for each group), then it is just a matter of estimation technique, which is less controversial. If you actually don't have many repeated measurements for most groups, then it might lie on the borderline of the usual random effects model, and you might want to carefully justify its validity (from a methodology perspective) if you want to propose it as a general method.
10,797
Why do people use $\mathcal{L}(\theta|x)$ for likelihood instead of $P(x|\theta)$?
Likelihood is a function of $\theta$, given $x$, while $P$ is a function of $x$, given $\theta$. Roughly like so (excuse the quick effort in MS paint): In this sketch we have a single $x$ as our observation. Densities (functions of $x$ at some $\theta$) are in black running left to right and the likelihood functions (functions of $\theta$ at some $x$) are in red, running front to back (or rather back to front, since the $\theta$ axis comes 'forward' and somewhat to the left). The red curves are what you get when you 'slice' across the set of black densities, evaluating each at a given $x$. When we have some observation, it will 'pick out' a single red curve at $x=x_\text{obs}$. The likelihood function is not a density (or pmf) -- it doesn't integrate (/sum) to 1. Indeed, $\mathcal L$ may be continuous while $P$ is discrete (e.g. likelihood for a binomial parameter) or vice-versa (e.g. likelihood for an Erlang distribution with unit rate parameter but unspecified shape) Imagine a bivariate function of a single potential observation $x$ (say a Poisson count) and a single parameter (e.g. $\lambda$) -- in this example discrete in $x$ and continuous in $\lambda$ -- then when you slice that bivariate function of $(x,\lambda)$ one way you get $p_\lambda(x)$ (each slice gives a different pmf) and when you slice it the other way you get $\mathcal L_x(\lambda)$ (each a different likelihood function). (That bivariate function simply expresses the way $x$ and $\lambda$ are related via your model) [Alternatively, consider a discrete $\theta$ and a continuous $x$; here the likelihood is discrete and the density continuous.] As soon as you specify $x$, you identify a particular $\mathcal L$, that we call the likelihood function of that sample. It tells you about $\theta$ for that sample -- in particular what values had more or less likelihood of giving that sample. 
Likelihood is a function that tells you about the relative chance (in that ratios of likelihoods can be thought of as ratios of probabilities of being in $x+dx$) that this value of $\theta$ could produce your data.
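The point that $\mathcal L$ is not a density can be checked numerically in the binomial case: summing over $x$ with $\theta$ fixed gives 1, while integrating over $\theta$ with $x$ fixed gives $1/(n+1)$, not 1.

```python
import math

n_trials = 10

def binom_pmf(x, theta, n=n_trials):
    return math.comb(n, x) * theta ** x * (1 - theta) ** (n - x)

# As a function of x with theta fixed: a pmf, so it sums to 1.
total = sum(binom_pmf(x, 0.3) for x in range(n_trials + 1))
print(total)  # approximately 1.0

# As a function of theta with x fixed: the likelihood. Its integral over
# [0, 1] is 1/(n+1), so it is not a density in theta.
x_obs = 3
steps = 100000  # midpoint rule
integral = sum(binom_pmf(x_obs, (k + 0.5) / steps) / steps for k in range(steps))
print(integral)  # about 1/11, roughly 0.0909
```

The same slicing picture applies: fixing $\theta$ and varying $x$ gives the black pmf; fixing $x$ and varying $\theta$ gives the red likelihood curve.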
10,798
Why do people use $\mathcal{L}(\theta|x)$ for likelihood instead of $P(x|\theta)$?
According to Bayes' theorem, $f(\theta|x_1,...,x_n) = \frac{f(x_1,...,x_n|\theta) \, f(\theta)}{f(x_1,...,x_n)}$ holds, that is, $\text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{evidence}}$. Notice that the maximum likelihood estimate omits the prior beliefs (or defaults the prior to a zero-mean Gaussian, which acts as L2 regularization or weight decay) and treats the evidence as constant (when calculating the partial derivative with respect to $\theta$). It tries to maximize the likelihood by adjusting $\theta$, treating $f(\theta|x_1,...,x_n)$ as proportional to $f(x_1,...,x_n|\theta)$, which we can easily compute (usually via the loss), and writes the likelihood as $\mathcal{L}(\theta|\mathbf x)$. The true posterior $\frac{f(x_1,...,x_n|\theta) \, f(\theta)}{f(x_1,...,x_n)}$ can hardly be worked out because the evidence (the denominator), $\int_{\theta} f(x_1, ...,x_n, \theta)\,d\theta$, is generally intractable. Hope this helps.
10,799
Why do people use $\mathcal{L}(\theta|x)$ for likelihood instead of $P(x|\theta)$?
I agree with @Big Agnes. Here is what my professor taught in class: One way is to think of the likelihood function $L(\theta | \mathbf{x})$ as a random function which depends on the data. Different data give us different likelihood functions, so you may say we are conditioning on the data. Given a realization of the data, we want to find a $\hat{\theta}$ such that $L(\theta | \mathbf{x})$ is maximized, or you can say $\hat{\theta}$ is most consistent with the data. This is the same as saying we maximize the "observed probability" $P(\mathbf{x} | \theta)$. We use $P(\mathbf{x} | \theta)$ to do the calculation, but it is different from $P(\mathbf{X} | \theta)$. Small $\mathbf{x}$ stands for observed values, while $\mathbf{X}$ stands for the random variable. If you know $\theta$, then $P(\mathbf{x} | \theta)$ is the probability/density of observing $\mathbf{x}$.
10,800
Why do people use $\mathcal{L}(\theta|x)$ for likelihood instead of $P(x|\theta)$?
I think the other answers given by jwyao and Glen_b are quite good. I just wanted to add a very simple example which is too long for a comment. Consider one observation $X$ from a Bernoulli distribution with probability of success $\theta$. With $\theta$ fixed (known or unknown), the distribution of $X$ is given by $P(x|\theta)$. $$P(x|\theta) = \theta^x(1-\theta)^{1-x}$$ In other words, we know that $P(X=1) = 1 - P(X=0) = \theta$. Alternatively, we could instead treat the observation as fixed and view this as a function of $\theta$. $$L(\theta | x) = \theta^x(1-\theta)^{1-x}$$ In a maximum likelihood setting, we seek the $\theta$ which maximizes the likelihood as a function of $\theta$. For example, if we observe $X = 1$, then the likelihood becomes $$L(\theta | x) = \begin{cases} \theta, & 0 \leq \theta \leq 1 \\ 0, & \text{else} \end{cases}$$ and we see that the MLE would be $\hat\theta = 1$. Not sure that I've really added any value to the discussion, but I just wanted to give a simple example of the different ways of viewing the same function.
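The maximization at the end can also be checked numerically with a simple grid search over $\theta$:

```python
# Numerical check of the Bernoulli example: with x = 1 observed,
# L(theta | x) = theta, so the grid maximum sits at theta = 1.
thetas = [k / 1000 for k in range(1001)]

def likelihood(theta, x):
    return theta ** x * (1 - theta) ** (1 - x)

best = max(thetas, key=lambda t: likelihood(t, 1))
print(best)  # 1.0

# And with x = 0 observed, L(theta | x) = 1 - theta, so the MLE flips to 0:
print(max(thetas, key=lambda t: likelihood(t, 0)))  # 0.0
```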