idx | question | answer |
|---|---|---|
38,001 | Are there useful applications of SVD that use only the smallest singular values? | Slow feature analysis (SFA) uses the smallest eigenvalues of the covariance matrix of temporal differences to find the slowest features in a time series,
Minor component analysis (MCA) uses the smallest components in a probabilistic setting -- here, not directions of variation are found but constraints,
Extreme component analysis (XCA) is a combination of probabilistic PCA and MCA,
In Canonical Correlation Analysis (where you analyse the correlation between two different data sets), the smaller components of the correlation matrix correspond to so-called "private" spaces. These represent the subspaces of each variable which do not correlate linearly with each other. |
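The SFA idea described in this answer can be sketched in base R. This is an illustrative toy, not a full SFA implementation; the sinusoidal sources and the mixing matrix are made up for the demonstration. The recipe: center and whiten the data, then take the eigenvector belonging to the smallest eigenvalue of the covariance of the temporal differences.

```r
set.seed(1)
tt <- seq(0, 10, by = 0.01)
S  <- cbind(sin(0.5 * tt), sin(20 * tt))            # slow and fast sources
X  <- S %*% matrix(c(1, 2, 2, 1), 2, 2)             # unknown linear mixture
X  <- scale(X, scale = FALSE)                       # center
E  <- eigen(cov(X))
Xw <- X %*% E$vectors %*% diag(1 / sqrt(E$values))  # whiten
w  <- eigen(cov(diff(Xw)))$vectors[, 2]             # eigenvector of the SMALLEST
                                                    # eigenvalue of the diff covariance
slow <- Xw %*% w                                    # recovered slowest feature
abs(cor(slow, S[, 1]))                              # should be close to 1 here
```

On this toy mixture the recovered feature is highly correlated (up to sign) with the slow source, which is exactly the "smallest eigenvalue" direction the answer refers to.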
38,002 | Are there useful applications of SVD that use only the smallest singular values? | Total Least Squares regression (aka Orthogonal Distance regression) uses the singular vector corresponding to the smallest singular value of the augmented predictor/criterion matrix.
When there is only one dependent variable (i.e., when $k = 1$), both equation 12.3-5 in my Golub & Van Loan (first edition), and the final equation and Octave code in the "Algebraic point of view" section of the Standard account, use only the singular vector corresponding to the smallest singular value to get the vector of regression coefficients. |
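For a single predictor, the TLS recipe in this answer can be sketched in R. The simulated data and variable names are illustrative, not from the answer itself:

```r
set.seed(1)
x <- rnorm(50)
y <- 2 * x + rnorm(50, sd = 0.1)        # true slope is 2
Z <- scale(cbind(x, y), scale = FALSE)  # centered augmented [predictor, criterion] matrix
v <- svd(Z)$v[, ncol(Z)]                # right singular vector of the smallest singular value
b_tls <- -v[1] / v[2]                   # TLS slope estimate; near 2 for this data
```

The slope comes entirely from the singular vector associated with the smallest singular value, as the answer states.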
38,003 | Are there useful applications of SVD that use only the smallest singular values? | Yes, there are. I'm currently working with a professor on a research project where we try to predict very short-term stock market changes based on real-time tweets from Twitter. Unfortunately, the majority of what people say on Twitter about the companies we are tracking is useless rambling. In other words, the largest singular values are useless.
Our plan is to use the biggest singular values to flag vast amounts of garbage tweets so that we can delete them. The remaining tweets and their text content (the small singular values) are candidates for a variable selection process.
We are trying to find the needles in a haystack, and dropping the largest singular values is like setting the hay on fire. |
38,004 | Are there useful applications of SVD that use only the smallest singular values? | It's a bit of a stretch, but consider the portfolio optimization problem: minimize $w^{\top} \Sigma w$ subject to $w^{\top} w \ge 1$. You can think of this as the minimum variance portfolio with an $\ell_2$ constraint. After applying the Lagrange multiplier method, you find that $w$ should be the eigenvector associated with the smallest eigenvalue of $\Sigma$. Since $\Sigma$ is typically the sample covariance $(1/N) \sum_{1\le i\le N} X_i X_i^{\top}$, where the $X_i$ have been centered, you can view this problem as an SVD computation where the singular vector associated with the smallest singular value is of importance. Like I said, it's a bit of a stretch. |
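A minimal numerical check of this claim in base R (the simulated "returns" are illustrative): among unit vectors, the eigenvector of the smallest eigenvalue attains the minimum of $w^{\top} \Sigma w$.

```r
set.seed(1)
X <- matrix(rnorm(500 * 4), 500, 4)  # toy centered returns
Sigma <- cov(X)
E <- eigen(Sigma)
w <- E$vectors[, ncol(Sigma)]        # eigenvector of the smallest eigenvalue
drop(t(w) %*% Sigma %*% w)           # equals min(E$values)
```

Any other unit vector gives a strictly larger quadratic form, which is why the minimum-variance direction is the "smallest" singular/eigen direction.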
38,005 | Are there useful applications of SVD that use only the smallest singular values? | There is an interesting LSA-related paper which concludes that discarding (a lot of) the first SVD features can improve results on semantic tests like TOEFL - "Extracting Semantic Representations from Word Cooccurrence Statistics: Stop-lists, Stemming and SVD" (Bullinaria and Levy, 2012) |
38,006 | Are there useful applications of SVD that use only the smallest singular values? | I'm not aware of any. The smallest singular values correspond to modes that don't contribute much to the reconstruction of the original matrix, or to use the PCA interpretation, don't describe much of the variance in the data. Typically, the modes with smaller singular values are just noise. This doesn't rule out the possibility that some meaning could be found in them, but I think it would be highly dependent on the data which make up the original matrix and, honestly, pretty unlikely. |
38,007 | How to test my data against a specific normal distribution? | ks.test in R allows one to adjust the mean and sd of the distribution to be tested against, e.g.
x <- rnorm(1000, 4, 10)
ks.test(x, "pnorm", mean = 4, sd = 10) |
38,008 | How to test my data against a specific normal distribution? | In R, you can just use the function ks.test with the following arguments:
ks.test(your_data, "pnorm", mean=test_mu, sd=test_sd)
Where your_data is your data vector, test_mu is the specific mean of the theoretical normal distribution and test_sd its standard deviation.
To inspect your data graphically, you can use the function qqPlot from the car package. Just use it with the following arguments:
qqPlot(your_data, "norm", mean=test_mu, sd=test_sd)
This produces a Q-Q plot with a comparison line and a 95% point-wise confidence envelope (by default).
Hope that helps. |
38,009 | How to test my data against a specific normal distribution? | So don't use the default! As already noted, R lets you specify the population mean and standard deviation.
Here is how to do it in MATLAB or anything else that doesn't give you the option to specify:
Standardize your variable by the population parameters: $z_i = \frac{x_i-\mu_0}{\sigma_0}$
... and then test against standard normal.
(However, if those $\mu$ and $\sigma$ values come from a sample... don't do this!) |
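The standardize-then-test recipe looks like this in R (the hypothesized $\mu_0 = 4$ and $\sigma_0 = 10$ are illustrative values, not from the answer). It gives exactly the same result as passing the parameters to ks.test directly:

```r
set.seed(1)
x <- rnorm(200, mean = 4, sd = 10)   # data; null hypothesis: N(4, 10^2)
mu0 <- 4; sigma0 <- 10               # hypothesized *population* parameters
z <- (x - mu0) / sigma0              # standardize by the population parameters
ks.test(z, "pnorm")                  # test against the standard normal
```

The KS statistic is invariant under this monotone rescaling, so the p-value matches ks.test(x, "pnorm", mean = mu0, sd = sigma0) exactly.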
38,010 | Regression coefficients by group in R? | data.table also has great tools for solving problems such as this:
library(data.table)
set.seed(1)
dat <- data.table(x=runif(100), y=runif(100), grp=rep(1:2,50))
dat[,coef(lm(y~x)),by=grp]
The first row in each group is the intercept, and the second row is the coefficient:
grp V1
[1,] 1 0.5991761
[2,] 1 -0.1350489
[3,] 2 0.4401174
[4,] 2 0.1400153
If you'd rather have a wide data.frame, that just takes a little more specification:
dat[,list(intercept=coef(lm(y~x))[1], coef=coef(lm(y~x))[2]),by=grp]
grp intercept coef
[1,] 1 0.5991761 -0.1350489
[2,] 2 0.4401174 0.1400153
Or you could put it even more succinctly as:
dat[,as.list(coef(lm(y~x))),by=grp]
(Intercept) x
1: 1 0.5991761 -0.1350489
2: 2 0.4401174  0.1400153 |
38,011 | Regression coefficients by group in R? | The responses by @Henry and @Zach both work, but I think the most straight-forward way to do what you want is to use lmList in the nlme package:
dat <- data.frame(
GRP = sample(c("A","B","C"), 100, replace=TRUE),
X = runif(100),
Y = runif(100)
)
require(nlme)
lmList(Y ~ X | GRP, data=dat) |
38,012 | Regression coefficients by group in R? | Adapting from help("by"), this example may meet your needs
mydf <- data.frame( GRP = rep(c("A","B","C"), each=100), X = rep(1:100,3),
Y = rep(c(2,4,8),each=100) +
rep(c(4,2,1),each=100) * rep(1:100,3) + rnorm(300))
by(mydf, mydf$GRP, function(z) lm(Y ~ X, data = z)) |
38,013 | Regression coefficients by group in R? | If you use package "tidyr" you could do the following
library(data.table)
library(tidyr)
set.seed(1)
dat <- data.table(x=runif(100), y=runif(100), grp=rep(1:2,50))
ncoefs <- 1
dat <- dat[, coef( lm(y ~ x) ), by = grp]
dat[, est := rep( c("intercept", "coef"), .N/(ncoefs + 1)) ]
dat <- dat %>% spread(est, V1)
The result is
grp coef intercept
1: 1 -0.1350489 0.5991761
2: 2 0.1400153 0.4401174
This method is easy to scale up and must be faster than estimating the regression for each coefficient. |
38,014 | Creating a uniform prior on the logarithmic scale | It's just a standard change of variables; the (monotone & 1-1) transformation is $y = \exp(x)$ with inverse $x=\log(y)$ and Jacobian $\frac{dx}{dy} = \frac{1}{y}$.
With a uniform prior $p_y(y) \propto 1$ on $\mathbb{R}$ we get $p_x(x) = p_y(x(y)) |\frac{dx}{dy}| \propto \frac{1}{y}$ on $(0, \infty)$.
Edit: Wikipedia has a bit on transformations of random variables: http://en.wikipedia.org/wiki/Probability_density_function#Dependent_variables_and_change_of_variables. Similar material will be in any intro probability book. Jim Pitman's "Probability" presents the material in a pretty distinctive way as well IIRC. |
38,015 | Creating a uniform prior on the logarithmic scale | We are told that the scale parameter is uniform on the logarithmic scale. That means that if $x$ is the scale parameter, then $y=\log(x)$ and the distribution function for $y$ is the one uniform on the logarithmic scale, $p_Y(y) \propto 1$.
Then, applying the Jacobian transformation, which comes from the fact that the probability contained in a differential area must be invariant under change of variables, we must have that $p_X(x)=p_Y(y(x))|\frac{dy}{dx}|$. Since $\frac{dy}{dx} \propto \frac{1}{x}$, we obtain $p_X(x) \propto \frac{1}{x}$.
Note: I tried posting this as a comment but I do not have privileges to post comments because I am a new user. The currently accepted answer to the question (given by @JMS) has errors in it. I tried to edit the answer given by @JMS to make the minimum necessary changes but my edit was rejected because people wanted me to put this as a comment or as an answer. Firstly, $p_X(x)$ should end up being a function of $x$, not a function of $y$. The way @JMS's answer is phrased right now gives $p_X(x)\propto\frac{1}{y}$. Secondly, there is an error in the Jacobian formulation, it should be $p_X(x)=p_Y(y(x))|\frac{dy}{dx}|$; right now it is given as $p_X(x)=p_Y(x(y))|\frac{dx}{dy}|$. Thirdly, $y=\log(x)$, not $y=\exp(x)$, due to the reason explained in this answer. |
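A quick Monte Carlo sanity check of this result in R (the bounds 0 and 5 for the log-scale uniform are arbitrary choices of mine): a density $p(x) \propto 1/x$ puts equal mass on intervals of equal log-length, e.g. $[1,2)$ and $[2,4)$.

```r
set.seed(1)
y <- runif(1e6, 0, 5)        # uniform on the log scale
x <- exp(y)                  # implied scale parameter
p12 <- mean(x >= 1 & x < 2)  # mass on [1, 2)
p24 <- mean(x >= 2 & x < 4)  # mass on [2, 4)
c(p12, p24)                  # both approximately log(2)/5
```

Equal mass per octave is the hallmark of the $1/x$ density derived above.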
38,016 | Creating a uniform prior on the logarithmic scale | @JMS's answer is adequate for the nuts and bolts of changing variables. However, this question may help you a bit with why it is uniform on that scale.
My answer to this question goes through a slightly longer derivation of the "jacobian rule" result given in @JMS's answer. It may help with understanding why the rule applies. |
38,017 | Determine if three is statistically different than ten for a very large number of observations (1,000,000) | I think a simple chi-squared test will do the trick. Do you have 1,000,000 observations for both control and test? If so, your table of observations will be (in R code)
Edit: Woops! Left off a zero!
m <- rbind(c(3, 1000000-3), c(10, 1000000-10))
# [,1] [,2]
# [1,] 3 999997
# [2,] 10 999990
And chi-squared test will be
chisq.test(m)
Which returns chi-squared = 2.7692, df = 1, p-value = 0.0961, which is not statistically significant at the p < 0.05 level. I'd be surprised if these could be clinically significant anyway. |
38,018 | Determine if three is statistically different than ten for a very large number of observations (1,000,000) | The huge denominators throw off one's intuition. Since the sample sizes are identical, and the proportions low, the problem can be recast: 13 events occurred, and were expected (by null hypothesis) to occur equally in both groups. In fact the split was 3 in one group and 10 in the other. How rare is that? The binomial test answers.
Enter this line into R:
binom.test(3,13,0.5,alternative="two.sided")
The two-tail P value is 0.09229, identical to four digits to the results of Fisher's test.
Looked at that way, the results are not surprising. The problem is equivalent to this one: If you flipped a coin 13 times, how surprising would it be to see three or fewer, or ten or more, heads? One of those outcomes would occur 9.23% of the time. |
38,019 | Determine if three is statistically different than ten for a very large number of observations (1,000,000) | A (two-sided) Fisher's Exact test gives p-value = 0.092284.
function p = fexact(k, x, m, n)
%FEXACT Fisher's Exact test.
% Y = FEXACT(K, X, M, N) calculates the P-value for Fisher's
% Exact Test.
% K, X, M and N must be nonnegative integer vectors of the same
% length. The following must also hold:
% X <= N <= M, X <= K <= M and K + N - M <= X. Here:
% K is the number of items in the group,
% X is the number of items in the group with the feature,
% M is the total number of items,
% N is the total number of items with the feature,
if nargin < 4
help(mfilename);
return;
end
nr = length(k);
if nr ~= length(x) || nr ~= length(m) || nr ~= length(n)
help(mfilename);
return;
end
v = nan(nr, 1);
mi = max(0, k + n - m);
ma = min(k, n);
d = hygepdf(x, m, k, n) * (1 + 5.8e-11);
for i = 1:nr
y = hygepdf(mi(i):ma(i), m(i), k(i), n(i));
v(i) = sum(y(y <= d(i)));
end
p = max(min(v, 1), 0);
p(isnan(v)) = nan;
For your example, try fexact(1e6, 3, 2e6, 13).
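For readers without MATLAB, the same tail-summing logic can be sketched in Python using only the standard library (my translation of the routine above; the function and helper names are mine). The hypergeometric pmf is computed via log-gamma, and the tie tolerance is relaxed relative to the MATLAB constant because lgamma differences carry a little extra rounding noise at this scale:

```python
from math import exp, lgamma

def log_comb(n, k):
    # log of the binomial coefficient, via log-gamma (handles n ~ 2e6)
    return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

def fisher_two_sided(k, x, m, n):
    """Two-sided Fisher exact p-value with the same conventions as fexact:
    k group size, x hits in the group, m total items, n total hits."""
    def pmf(i):
        # hypergeometric pmf: C(n, i) * C(m - n, k - i) / C(m, k)
        return exp(log_comb(n, i) + log_comb(m - n, k - i) - log_comb(m, k))
    lo, hi = max(0, k + n - m), min(k, n)
    observed = pmf(x)
    # tolerance is looser than MATLAB's 5.8e-11 (see comment above)
    return sum(pmf(i) for i in range(lo, hi + 1)
               if pmf(i) <= observed * (1 + 1e-7))

p = fisher_two_sided(10**6, 3, 2 * 10**6, 13)
```

This reproduces the quoted two-sided p-value of about 0.0923.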
38,020 | Determine if three is statistically different than ten for a very large number of observations (1,000,000) | In this case the Poisson distribution is a good approximation for the number of cases.
There is a simple formula to approximate the variance of log RR (the delta method):
log RR = log(10/3) = 1.2,
se log RR = sqrt(1/3+1/10) = 0.66, so the 95% CI is (-0.09; 2.5).
The difference is not significant at the 0.05 level using a two-sided test.
An LR-based chi-square test for the Poisson model gives p=0.046 and the Wald test p=0.067.
These results are similar to the Pearson chi-square test without continuity correction (chi-square with correction: p=0.096).
Another possibility is chisq.test with the option simulate.p.value=T; in this case p=0.092 (for 100,000 simulations).
In this case the test statistic is rather discrete, so the Fisher test can be conservative.
There is some evidence that the difference could be significant. Before a final conclusion, the data-collecting process should be taken into account.
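The delta-method arithmetic above can be checked in a few lines; this Python sketch (standard library only, my own rendering of the answer's calculation) reproduces the interval:

```python
from math import log, sqrt

a, b = 10, 3                       # Poisson-distributed event counts
log_rr = log(a / b)                # log relative risk, log(10/3) ~ 1.20
se = sqrt(1 / a + 1 / b)           # delta-method standard error ~ 0.66
ci = (log_rr - 1.96 * se, log_rr + 1.96 * se)
```

The interval is about (-0.09, 2.49); it contains 0, so the difference is not significant at the 0.05 level, as the answer states.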
38,021 | Determine if three is statistically different than ten for a very large number of observations (1,000,000) | If you wanted to check non-parametrically for significance, you could bootstrap the confidence intervals on the ratio, or you could do a permutation test on the two classes. For example, to do the bootstrap, create two arrays: one with 3 ones and 999,997 zeros, and one with 10 ones and 999,990 zeros. Then draw with replacement a sample of 1m items from the first population and a sample of 1m items from the second population. The ratio we're interested in is the ratio of "hits" in the first group to the ratio of "hits" in the second group, or: (proportion of ones in the first sample) / (proportion of ones in the second sample). We do this 1000 times. I don't have matlab handy but here's the R code to do it:
# generate the test data to sample from
v1 <- c(rep(1,3),rep(0,999997))
v2 <- c(rep(1,10),rep(0,999990))
# set up the vectors that will hold our proportions
t1 <- vector()
t2 <- vector()
# loop 1000 times each time sample with replacement from the test data and
# record the proportion of 1's from each sample
# note: this step takes a few minutes. There are ways to write it such that
# it will go faster in R (e.g. the apply family of functions), but it's more obvious what's going on this way:
for(i in 1:1000) {
t1[i] <- length(which(sample(v1,1000000,replace=TRUE)==1)) / 1000000
t2[i] <- length(which(sample(v2,1000000,replace=TRUE)==1)) / 1000000
}
# what was the ratio of the proportion of 1's between each group for each random draw?
ratios <- t1 / t2
# grab the 95% confidence interval over the bootstrapped samples
quantile(ratios,c(.05,.95))
# and the 99% confidence interval
quantile(ratios,c(.01,.99))
The output is:
5% 95%
0.0000000 0.8333333
and:
1% 99%
0.00 1.25
Since the 95% confidence interval doesn't contain the null value (1), but the 99% confidence interval does, I believe that it would be correct to say that this is significant at an alpha of .05 but not at .01.
Another way to look at it is with a permutation test to estimate the distribution of ratios given the null hypothesis. In this case you'd mix the two samples together and randomly divide them into two 1,000,000-item groups. Then you'd see what the distribution of ratios under the null hypothesis looks like, and your empirical p-value is how extreme the true ratio is given this distribution of null ratios. Again, the R code:
# generate the test data to sample from
v1 <- c(rep(1,3),rep(0,999997))
v2 <- c(rep(1,10),rep(0,999990))
v3 <- c(v1,v2)
# vectors to hold the null hypothesis ratios
t1 <- vector()
t2 <- vector()
# loop 1000 times; each time randomly divide the samples
# into 2 groups and see what those two random groups' proportions are
for(i in 1:1000) {
idxs <- sample(1:2000000,1000000,replace=FALSE)
s1 <- v3[idxs]
s2 <- v3[-idxs]
t1[i] <- length(which(s1==1)) / 1000000
t2[i] <- length(which(s2==1)) / 1000000
}
# vector of the ratios
ratios <- t1 / t2
# take a look at the distribution
plot(density(ratios))
# calculate the sampled ratio of proportions
sample.ratio <- ((3/1000000)/(10/1000000))
# where does this fall on the distribution of null proportions?
abline(v=sample.ratio)
# this ratio (r+1)/(n+1) gives the p-value of the true sample
(length(which(ratios <= sample.ratio)) + 1) / (1001)
The output is ~ .0412 (of course this will vary run to run since it's based on random draws). So again, you could potentially call this significant at the .05 level.
I should issue the caveats: the right analysis also depends on how your data were collected and the type of study, and I'm just a grad student, so don't take my word as gold. If anyone has any criticism of my methods I'd love to hear it, since I'm doing this stuff for my own work as well and I'd rather find out the methods are flawed here than in peer review. For more on this, check out Efron & Tibshirani 1993, or chapter 14 of Introduction to the Practice of Statistics by David Moore (a good general textbook for practitioners).
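A much faster variant of the permutation test avoids materialising two million-element vectors: with equal group sizes the ratio of proportions reduces to the ratio of event counts, and with groups this large each pooled event lands in either group with probability essentially 1/2. A hedged Python sketch (standard library only, mine, not the original R):

```python
import random

random.seed(0)
n_perm = 20_000
observed_ratio = 3 / 10

count_le = 0
for _ in range(n_perm):
    # each of the 13 pooled events goes to group 1 with probability 1/2;
    # with two groups of 1,000,000 this is essentially the exact permutation law
    c1 = sum(random.random() < 0.5 for _ in range(13))
    c2 = 13 - c1
    ratio = c1 / c2 if c2 > 0 else float("inf")
    if ratio <= observed_ratio:
        count_le += 1

p_value = (count_le + 1) / (n_perm + 1)   # same (r+1)/(n+1) estimate as the R code
```

This runs in a fraction of a second and lands in the same region (p in the 0.04-0.05 neighbourhood) as the full R permutation loop.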
38,022 | Determine if three is statistically different than ten for a very large number of observations (1,000,000) | I would be really surprised if you find the difference statistically significant. Having said that you may want to use a test for a difference of proportions (3 out of 1M vs 10 out of 1M).
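The suggested difference-of-proportions test can be sketched with a pooled two-sample z statistic; this Python illustration (standard library only, mine, not code from the answer) lands in the same borderline region as the other answers:

```python
from math import erfc, sqrt

x1, n1 = 3, 1_000_000
x2, n2 = 10, 1_000_000
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                     # pooled proportion
se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se                                 # about -1.94
p_value = erfc(abs(z) / sqrt(2))                   # two-sided normal p-value
```

Here z is roughly -1.94 and the two-sided p is roughly 0.052, close to the chi-square results quoted elsewhere in the thread.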
38,023 | Determine if three is statistically different than ten for a very large number of observations (1,000,000) | In addition to the other answers:
If you have 1,000,000 observations and your event comes up only a few times, you are likely to want to look at a lot of different events.
If you look at 100 different events, you will run into problems if you use p<0.05 as the criterion for significance.
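The point can be made concrete with the family-wise error rate: at alpha = 0.05, 100 independent null tests almost guarantee at least one false positive, which is what a Bonferroni-style correction guards against. A minimal Python illustration (mine, not part of the answer):

```python
alpha, m = 0.05, 100

# probability of at least one false positive across m independent null tests
fwer = 1 - (1 - alpha) ** m            # about 0.994
# Bonferroni: test each event at alpha / m to keep the family-wise rate <= alpha
bonferroni_threshold = alpha / m       # 0.0005
```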
38,024 | Expectation of $\ln(1 + e^x)$, where $x$ is normally distributed | Gauss-Hermite (GH) quadrature is a very efficient method compared to the Monte Carlo (MC) method. If you care about the trade-off between precision and speed (here measured by the number of function evaluations), GH is far superior to MC sampling for the specific integral under consideration.
The software I use for solving the integral by Gauss-Hermite procedure approximates the integral
$$(1) \ \ \int_{-\infty}^\infty \exp\left(-z^2\right)g(z) dz$$
as a weighted sum
$$\approx \sum_i w_i\, g(z_i)$$
The procedure is to select the number of points $N$, here referred to as the sample size. The software will then return weights $w_1,...,w_N$ and nodes $z_1,...,z_N$. These nodes are plugged into the function $g()$ to get $g_i = g(z_i)$, and the user then computes $$\sum_i w_i g_i.$$
To apply the procedure, the first step is therefore to relate the integral $\int_{-\infty}^\infty \exp\left(-z^2\right)g(z) dz$ to the integral under consideration - up to a known constant of proportionality - which can be done by letting $$z = (x-\mu)/\sqrt{2v}$$ and by letting $$g(z) = \log(1+\exp(\mu +\sqrt{2v}z)).$$ By substitution (so that $dx = \sqrt{2v}\,dz$) it follows that
$$(2) \ \ \int_{-\infty}^\infty \exp\left(-z^2\right)g(z) dz = \frac{1}{\sqrt{2v}}\int_{-\infty}^\infty \exp\left(- \frac{(x-\mu)^2}{2v} \right)\log(1+\exp(x))dx$$
Since the R code approximates the integral on the left-hand side, the result has to be multiplied by the constant $\sqrt{2v}$ to get the result the OP is looking for.
The integral is then solved in R by the following code
library(statmod)
g <- function(z) log(1+exp(mu + sqrt(2*v)*z))
hermite <- gauss.quad(50,kind="hermite")
sum(hermite$weights * g(hermite$nodes))*sqrt(2*v)
In this respect the method is no different from the MC method, because the MC method also presupposes that the integral under consideration can be written in a certain form, more precisely
$$\int_{-\infty}^{\infty}h(x)f(x) dx$$
where $h(x)$ is a density that it is possible to sample from. In the current case this is particularly easy because the OP's problem can be written as
$$\sqrt{2\pi v}\int_{-\infty}^\infty\frac{1}{\sqrt{2\pi v}} \exp\left(- \frac{(x-\mu)^2}{2v} \right)\log(1+\exp(x))dx$$
The OP's problem is then given as
$$\sqrt{2\pi v} \ \mathbb E[\log(1+\exp(x))] \approx \sqrt{2\pi v} \frac{1}{N} \sum_i \log(1+\exp(x_i))$$
for a sample $\{x_i\}_{i=1}^N$ where $x_i$ is a random normal draw of mean $\mu$ and variance $v$. This can be done using the following R code
f <- function(x) log(1+exp(x))
w <- rnorm(1000,mean=mu,sd=sqrt(v))
sqrt(2*pi*v)*mean(f(w))
Rough Comparison
To make a rough comparison, first note that both the MC method and the GH method make an approximation of the form
$$\sum_{i=1}^N \text{weight}_i \cdot F(s_i)$$
it therefore seems reasonable to compare the methods on the precision they achieve for a given sample size $N$. In order to do this I use the R function integrate() to find the "TRUE" value of the integral and then calculate the deviation from this assumed true value for different sample sizes $N$, for both the MC method and the GH method. The result is displayed in the plot, which clearly shows how the GH method gets very close even for a small number of function evaluations (the red line quickly goes to 0), whereas the MC error goes to zero very slowly.
The plot is generated using the following code
library(statmod)
mu <- 0
v <- 10
f <- function(x) log(1+exp(x))
g <- function(z) log(1+exp(mu + sqrt(2*v)*z))
h <- function(x) exp(-(x-mu)^2/(2*v))*ifelse(x<100, log(1+exp(x)), x) # guard against overflow of exp(x)
# Monte Carlo integration using function f
w <- rnorm(1000,mean=mu,sd=sqrt(v))
sqrt(2*pi*v)*mean(f(w))
# Integration using built-in R function
int.sol <- integrate(h,-500,500)
int.sol$value
# Integration using Gauss-Hermite
hermite <- gauss.quad(50,kind="hermite")
sum(hermite$weights * g(hermite$nodes))*sqrt(2*v)
# Now solve for different sample sizes
# Set up the sample sizes
sample_size <- 1:300
error <- matrix(0,ncol=length(sample_size),nrow=2)
for (i in 1:length(sample_size))
{
MC <- rep(0,1000)
for (j in 1:1000)
{
w <- rnorm(sample_size[i],mean=mu,sd=sqrt(v))
MC[j] <- sqrt(2*pi*v)*mean(f(w))
}
error[1,i] <- mean(abs(MC - int.sol$value))
hermite <- gauss.quad(sample_size[i],kind="hermite")
her.sol <- sum(hermite$weights * g(hermite$nodes))*sqrt(2*v)
error[2,i] <- (abs(her.sol - int.sol$value))
}
plot(sample_size,error[1,],ylim=c(0,10),type="l",ylab="error")
points(sample_size,error[2,],type="l",lwd=2,col="red")
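The same Gauss-Hermite construction is available outside the statmod package, for example via NumPy's hermgauss. The sketch below is my translation (assuming NumPy is available); note it returns the plain expectation E[log(1+e^x)], which differs from the R code's rescaled integral by the normalising factor sqrt(2*pi*v):

```python
import numpy as np

def gh_expect_softplus(mu, v, n_points=50):
    """E[log(1 + e^X)] for X ~ N(mu, v), via Gauss-Hermite quadrature:
    E[g(X)] = (1/sqrt(pi)) * sum_i w_i * g(mu + sqrt(2 v) * z_i)."""
    z, w = np.polynomial.hermite.hermgauss(n_points)
    x = mu + np.sqrt(2.0 * v) * z
    # np.logaddexp(0, x) evaluates log(1 + e^x) without overflow
    return float(np.sum(w * np.logaddexp(0.0, x)) / np.sqrt(np.pi))

# sanity check via the identity log(1+e^x) - log(1+e^{-x}) = x: the two
# expectations below must differ by exactly mu = 1
e_pos = gh_expect_softplus(1.0, 10.0)
e_neg = gh_expect_softplus(-1.0, 10.0)   # equals E[log(1+e^{-X})] for X ~ N(1, 10)
```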
38,025 | Expectation of $\ln(1 + e^x)$, where $x$ is normally distributed | Like you mentioned, this is just the average value of $\ln(1+e^x)$ when $x$ is normally distributed with mean $\mu$ and variance $\nu$. So all you have to do is:
1) Draw N (large number) of $X_i \sim N(\mu, \nu)$
2) Your estimate $\hat\theta$ is then:
$ \hat\theta = \sqrt{2\pi\nu}\,\frac{1}{N}\sum^N_{i=1}\ln(1+e^{X_i}) $
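As a concrete sketch of the recipe (standard-library Python, my illustration; the sqrt(2*pi*nu) factor is the one in the formula above, and log(1+e^x) is evaluated in a numerically stable form):

```python
import random
from math import exp, log1p, pi, sqrt

random.seed(42)
mu, nu, N = 1.0, 1.0, 100_000

total = 0.0
for _ in range(N):
    x = random.gauss(mu, sqrt(nu))
    # stable evaluation of log(1 + e^x): avoids overflow for large x
    total += (x + log1p(exp(-x))) if x > 0 else log1p(exp(x))

theta_hat = sqrt(2 * pi * nu) * total / N
```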
38,026 | Expectation of $\ln(1 + e^x)$, where $x$ is normally distributed | Out of curiosity, I have tried the suggestions of both answers with the following R-code (note that the integrand $\log(1+e^x)$ must be approximated by $x$ for large values to avoid overflow):
f <- function(x, mu, s) {
# approximate integrand for large values
log.part <- ifelse(x<100, log(1+exp(x)), x)
return(exp(-(x-mu)^2/(2*s*s)) * log.part)
}
m <- 1; s <- 1
# Monte Carlo integration
N <- 10^6
x <- rnorm(N, mean=m, sd=s)
int.mc <- sqrt(2*pi)*s * mean(log(1+exp(x)))
int.mc.sd <- sqrt(2*pi)*s * sd(log(1+exp(x))) / sqrt(N)
cat(sprintf("Monte Carlo: %f +/- %f\n", int.mc, 2*int.mc.sd))
# numerical integration
int.num <- integrate(f, lower=-Inf, upper=Inf, mu=m, s=s)
cat(sprintf("Numerical: %f +/- %f\n", int.num$value, int.num$abs.error))
It turns out that, for this integral, the results are close to each other, but the numerical integration (R uses a Gauss-Kronrod quadrature in connection with extrapolation by Wynn's Epsilon algorithm) is both faster and more accurate (the Monte Carlo "accuracy" is a 95% confidence interval):
Monte Carlo: 3.525610 +/- 0.003548
Numerical: 3.526466 +/- 0.000001
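The same cross-check can be sketched in pure Python; here I use a plain composite trapezoidal rule on a wide truncated interval (my illustration, not the original code; the overflow guard is the same idea as the R ifelse):

```python
from math import exp, log1p

mu, s = 1.0, 1.0

def integrand(x):
    # stable log(1 + e^x); switching forms at 0 avoids overflow of exp(x)
    softplus = (x + log1p(exp(-x))) if x > 0 else log1p(exp(x))
    return exp(-(x - mu) ** 2 / (2 * s * s)) * softplus

# composite trapezoidal rule; the Gaussian factor makes the truncation
# error negligible far from mu
a, b, n = -40.0, 40.0, 20_000
h = (b - a) / n
value = h * ((integrand(a) + integrand(b)) / 2
             + sum(integrand(a + i * h) for i in range(1, n)))
```

For mu = 1, s = 1 this agrees with the quoted numerical value of about 3.5265.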
38,027 | Expectation of $\ln(1 + e^x)$, where $x$ is normally distributed | The following diagram compares the desired expectation $E[\ln \left(1+e^x\right)]$ to the 45 degree asymptote $\mu$ (when the variance is 1), and the $x$-axis asymptote when $\mu$ is negative.
That suggests a very simple approximation when $\mu$ is either relatively large and negative (use $0$) or relatively large and positive (use $\mu$). More generally, increasing the variance will push outwards the range of contact / excellent approximation. If you play around with it a bit, I suspect it will be possible to come up with some fairly simple rule like $|\frac{\mu}{2\sqrt{\sigma}}| > 3$ or similar for when the asymptote approximation will work nicely.
The following implements Becko's suggestion (see comment below) of using $E[\max(0,X)]$ as the approximation (see dashed line), for which a closed-form solution exists. This is equivalent to the value of an option in a world where the underlying random variable is Normal (as opposed to Lognormal).
It works remarkably well when $\sigma$ is large.
38,028 | Why does lasso not converge on a penalization parameter? | I don't know python very well, but I did find one problem with your R code.
You have the 2 lines:
residuals = sum(y_true - y_preds)
mse=residuals^2
That sums the residuals and then squares the sum, which is very different from squaring the residuals and then summing them (which it appears that the python code does correctly). I would suspect that this may be a big part of the difference between the R code and the python code. Fix the R code and run it again to see if it behaves more like the python code.
I would also suggest that instead of just saving the "best" alpha and the corresponding mse that you store all of them and plot the relationship. It could be that for your setup there is a region that is quite flat so that the difference between the mse at different points is not very big. If this is the case, then very minor changes to the data (even the order in the cross-validation) can change which point, among many that are essentially the same, gives the minimum. Having a situation that results in a flat region around the optimum will often lead to what you are seeing and the plot of all the alpha values with the corresponding mse values could be enlightening.
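The sum-then-square mistake is easy to demonstrate with a toy residual vector (numbers made up for illustration):

```python
residuals = [2.0, -2.0, 1.0]

# the R code's version: sum first, then square -- opposite-signed errors cancel
wrong_mse = sum(residuals) ** 2

# the intended sum of squared residuals -- every error contributes
right_sse = sum(r * r for r in residuals)

print(wrong_mse, right_sse)  # 1.0 9.0
```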
38,029 | Why does lasso not converge on a penalization parameter? | sklearn has an example that is almost identical to what you're trying to do here: http://scikit-learn.org/stable/auto_examples/exercises/plot_cv_diabetes.html
Indeed this example shows that you do get wildly varying results for alpha for each of the three folds done in that example. This means that you cannot trust the selection of alpha because it clearly is highly dependent on what portion of your data you are using to train and select alpha.
I don't think you should think of cross validation as something that will 'converge' to give you a perfect answer. Actually, I think that conceptually it is almost the opposite of converging. You are separating your data and for each fold you are going in a 'separate direction'. The fact that you get different results depending on how you partition your testing and training data should tell you that converging on one perfect result is impossible - and also not desirable. The only way you would get a consistent alpha value all the time is if you were to use all your data for training. However, if you were to do this you would get the best learning result but the worst validation result.
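Why the selected alpha jumps around can be seen with a toy sketch: when the CV error curve is nearly flat, split-dependent noise decides the argmin. The curve and noise here are entirely made up for illustration.

```python
import random

ALPHAS = [10.0 ** k for k in range(-3, 3)]

def best_alpha(fold_seed):
    """Pretend CV error: a very flat bowl plus split-dependent noise.
    The noise stands in for 'which rows landed in which fold'."""
    rng = random.Random(fold_seed)
    scores = [1.0 + 0.01 * abs(i - 2) + rng.gauss(0.0, 0.05)
              for i in range(len(ALPHAS))]
    return ALPHAS[scores.index(min(scores))]

# different partitions pick different "optimal" alphas,
# even though nothing about the underlying problem changed
winners = {best_alpha(seed) for seed in range(10)}
```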
38,030 | Why does lasso not converge on a penalization parameter? | The multi-collinearity in x1 and x2 is what makes the $\alpha$ value unstable in the Python code. The noise variance of the distributions that generate these variables is so small that the variance of the coefficients is inflated. The variance inflation factor (VIF) could be computed to illustrate this. After the variance is increased from
x1 = range(n) + norm.rvs(0, 1, n) + 50
x2 = map(lambda aval: aval*x1x2corr, x1) + norm.rvs(0, 2, n) + 500
....to....
x1 = range(n) + norm.rvs(0, 100, n) + 50
x2 = map(lambda aval: aval*x1x2corr, x1) + norm.rvs(0, 200, n) + 500
then the $\alpha$ value stabilizes.
The issue with the R code being different from the Python code is still a mystery however...
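For two predictors the VIF mentioned above reduces to $1/(1-r^2)$, with $r$ the sample correlation between them. A plain-Python sketch (function names are my own) mirroring the two noise levels:

```python
import random

def vif_two(x1, x2):
    """VIF for either of two predictors: 1 / (1 - r^2)."""
    n = len(x1)
    m1, m2 = sum(x1) / n, sum(x2) / n
    cov = sum((a - m1) * (b - m2) for a, b in zip(x1, x2))
    v1 = sum((a - m1) ** 2 for a in x1)
    v2 = sum((b - m2) ** 2 for b in x2)
    r2 = cov * cov / (v1 * v2)
    return 1.0 / (1.0 - r2)

def make_xs(noise1, noise2, n=100, seed=1):
    """Generate x1, x2 in the spirit of the question's setup."""
    rng = random.Random(seed)
    x1 = [i + rng.gauss(0, noise1) + 50 for i in range(n)]
    x2 = [1.1 * a + rng.gauss(0, noise2) + 500 for a in x1]
    return x1, x2

vif_tight = vif_two(*make_xs(1, 2))      # nearly collinear -> very large VIF
vif_loose = vif_two(*make_xs(100, 200))  # much weaker collinearity -> small VIF
```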
38,031 | Why does lasso not converge on a penalization parameter? | I am going to comment on the R code:
You are resetting variables in the wrong places, i.e., the variables min_mse should be initialized as Inf outside the for loop and
optimal_alpha should be initialized as NULL there. This becomes:
library(glmnet)
library(lars)
library(pracma)
set.seed(1)
k = 2 # number of features selected
n = 100
x1x2corr = 1.1
x1 = seq(n) + rnorm(n, 0, 1) + 50
x2 = x1*x1x2corr + rnorm(n, 0, 2) + 500
y = x1 + x2 +rnorm(n,0,0.5)
df = data.frame(x1 = x1, x2 = x2, y = y)
filter_out_label <- function(col) {col!="y"}
alphas = logspace(-5, 6, 50)
###
# INITIALIZE here before loop
###
min_mse = Inf
optimal_alpha = NULL
# Let's store the mse values for good measure
my_mse = c()
for (alpha in alphas){
k = 10
folds <- cut(seq(1, nrow(df)), breaks=k, labels=FALSE)
# DO NOT INITIALIZE min_mse and optimal_alpha here,
# then you cannot find them...
total_mse = 0
for(i in 1:k){
# Segment your data by fold using the which() function
testIndexes <- which(folds==i, arr.ind=TRUE)
testData <- df[testIndexes, ]
trainData <- df[-testIndexes, ]
fit <- lars(as.matrix(trainData[Filter(filter_out_label, names(df))]),
trainData$y,
type="lasso")
# predict
y_preds <- predict(fit, as.matrix(testData[Filter(filter_out_label,
names(df))]),
s=alpha, type="fit", mode="lambda")$fit
y_true = testData$y
residuals = (y_true - y_preds)
mse=sum(residuals^2)
total_mse = total_mse + mse
}
# Let's store the MSE to see the effect
my_mse <- c(my_mse, total_mse)
if (total_mse < min_mse){
min_mse = total_mse
optimal_alpha = alpha
# Let's observe the output
print(min_mse)
}
}
print(paste("the optimal alpha is ", optimal_alpha))
# Plot the effect of MSE with varying alphas
plot(my_mse)
The output should now consistently be the smallest value of alpha, because there is strong collinearity in the predictors and the response is built only from the available predictors, i.e. there are no redundant variables that we want the LASSO to set to zero. In this case we do not want to perform regularization, so the smallest alpha should be the best. You can see the effect on the MSE here:
Note that I am using 50 alphas on the same scale as you. Around alpha indexed 35 both variables are slammed to zero, meaning that the model is always doing the same thing and the mse stagnates.
A better problem to study MSE, CV and the LASSO
The problem above is not very interesting for the LASSO. The LASSO performs model selection, so we want to see it actually pick out the parameters of interest. It is more impressive to see that the model is actually picking out an alpha that actually lowers the MSE, i.e. gives us better predictions by throwing out some variables. Here is a better example, where I add a bunch of redundant predictors.
set.seed(1)
k = 100 # number of features selected
n = 100
x1x2corr = 1.1
x1 = seq(n) + rnorm(n, 0, 1) + 50
x2 = x1*x1x2corr + rnorm(n, 0, 2) + 500
# Rest of the variables are just noise
x3 = matrix(rnorm(k-2,0,(k-2)*n),n,k-2)
y = x1 + x2 +rnorm(n,0,0.5)
df = data.frame(x1 = x1, x2 = x2, y = y)
df <- cbind(df,x3)
filter_out_label <- function(col) {col!="y"}
alphas = logspace(-5, 1.5, 100)
min_mse = Inf
optimal_alpha = NULL
my_mse = c()
Then you just run the for loop like in the code above! Note that I put the max of the alphas down to 1.5 from 6, just to see the effect in the plot below. Now the best alpha value is not the lowest one, but you can see in the plot that the cross-validation MSE is taking a drop and spikes up again in the end. The lowest point on that graph corresponds to the alpha index with the lowest CV error.
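The structural fix (tracking the minimum outside the loop over alphas) can be distilled into a few lines, sketched in Python for brevity; `cv_mse` is a stand-in for the inner cross-validation loop:

```python
import math

def pick_alpha(alphas, cv_mse):
    """Track the running minimum OUTSIDE the loop over alphas;
    resetting it inside the loop (as in the original R code)
    would make every alpha look 'optimal'."""
    best_alpha, best_mse = None, math.inf
    for alpha in alphas:
        mse = cv_mse(alpha)
        if mse < best_mse:
            best_mse, best_alpha = mse, alpha
    return best_alpha, best_mse
```

For example, with a toy error curve `lambda a: (a - 1.0) ** 2` over `[0.1, 1.0, 10.0]`, it returns `(1.0, 0.0)`.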
38,032 | Normal Distribution or not? | For two reasons you picked the wrong kind of plot for visualizing your sample. First, you assume that your data is continuous, so there is no point in counting distinct values. Second, your sample is very small, so even with discrete numbers, in most cases you can expect small counts per value that result with a flat barplot.
Recall that for a continuous random variable $\Pr(X=x)=0$, so assuming that we are talking about a continuous random variable we would rather not expect distinct values to appear in your sample multiple times -- so counting their occurrences is misleading. That is why, for continuous random variables, we use probability densities, i.e. probabilities "per foot". Instead of counting how many times each of the numbers appeared, you should count their occurrences in intervals. That is why, for visualizing your data, rather than using a bar plot you should use a histogram or a density plot.
Since your sample is very small, a histogram could be misleading because there is a limited number of bars that can be used and a small number of cases that will fall into each of the bars (no matter if your variable is discrete or continuous). In this case, a density plot (see below) could be more informative.
As a counter-example, below you can see barplot of values generated from normal distribution using pseudo-random numbers generator (black bars) and density plot (red line).
As you can see, barplot would "suggest" that this perfectly normal data is almost uniformly distributed...
As to whether your sample is normally distributed -- it seems that the data consists of integers rather than real numbers, so obviously it is not perfectly normal. Moreover, the distribution is skewed rather than symmetric. However, in most cases this is not a problem because we are interested in approximate normality. See: Is normality testing 'essentially useless'?
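The point about counting distinct values versus counting in intervals can be seen without any plotting (a small Python sketch with a simulated sample):

```python
import math
import random
from collections import Counter

rng = random.Random(0)
sample = [rng.gauss(0.0, 1.0) for _ in range(50)]

# "barplot" view: one count per distinct value -- every count is 1,
# so the plot is completely flat and says nothing about shape
value_counts = Counter(sample)
flat = max(value_counts.values()) == 1

# "histogram" view: counts per unit-width interval -- the bell shape appears
bin_counts = Counter(math.floor(v) for v in sample)
center = bin_counts.get(-1, 0) + bin_counts.get(0, 0)   # bins around 0
tails = bin_counts.get(-3, 0) + bin_counts.get(2, 0)    # far bins
```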
38,033 | Normal Distribution or not? | Are the following set of values normally distributed? 26, 33, 65, 28, 34, 55, 25, 44, 50, 36, 26, 37, 43, 62, 35, 38, 45, 32, 28, 34
Clearly not; they're integers.
[More properly, it's not a set of observed values that's normally distributed (the ECDF of a set of $n$ known values is discrete, the values themselves are bounded and so on); normality is an attribute of a population distribution from which an observed sample might have been drawn. But not this one.]
However, while it's often clear we cannot have a sample from a normal distribution for one reason or another, rarely is it interesting to ask whether the sample came from a normal distribution. A more relevant question is whether it might be a suitable approximation -- but to answer that question you need to know more about what you're doing, what impact the non-normality you have might have on it, and what your tolerance for that impact might be (or your audience's tolerance, perhaps).
(One thing worth noting about the shape can be seen from a QQ-plot -- or any number of other displays, depending on what you're used to using to investigate distributional shape. You should show a suitable display and interpret it. The display you show -- which is not a histogram in spite of being labelled as one -- is not really suitable, since it disguises the relative gaps in the data. It appears to be treating the x-axis values as a set of ordered category labels rather than something where the number indicates position.)
Q-Q plot of the data indicates skewness
We know that a normally distributed set of observations has no skewness at all
I sure don't know that; in fact I know it's untrue -- a sample from a normal distribution can certainly be somewhat skewed, just by random variation. It's the population that has no skewness at all.
But your conclusion -- that the data indicate skewness -- is correct, it's just much harder to see in that chart in your question.
Here's a dotplot, which does a better job than the bar chart. An actual histogram should be adequate. (If there was more data, I'd look at something else -- with separate thin bars representing relative frequency, as your display has, but with the x-position representing the values, akin to a histogram. In R you get this with plot(table(x)), but for very small samples like this with few repeated values I prefer the dotplot.)
Do we need to transform the data-set into normally distributed values before calculating the mean, standard deviation and the z scores?
What could you conclude from the mean (etc) of transformed data?
...since in real world situations, data-sets may not be normally distributed
In real world situations, you don't really have normal distributions, except in a few special situations.
then how do we go ahead to perform statistical tests on them.
Not all tests assume normality
Even for those that do, the assumption of normality is not always very important (sometimes it may matter only a little, sometimes it might matter a lot -- it can depend on the test and on the sample size).
Transformation is frequently not the first thing you should think about doing. You should first really pay attention to what questions you need to ask of the data (what do you need to find out?). Then you can worry about what might be suitable ways to do that. It might involve transformation, but it might much better involve something else.
What are you interested in finding out from these data? If you don't know, why would you transform first? It might have no value in answering the questions of interest.
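The skewness the Q-Q plot hints at can also be checked numerically; here is a quick computation on the 20 values using the simple moment-based estimator $m_3/m_2^{3/2}$ (a sketch, not part of the original answer):

```python
data = [26, 33, 65, 28, 34, 55, 25, 44, 50, 36,
        26, 37, 43, 62, 35, 38, 45, 32, 28, 34]
n = len(data)
mean = sum(data) / n                      # 38.8
m2 = sum((x - mean) ** 2 for x in data) / n
m3 = sum((x - mean) ** 3 for x in data) / n
skew = m3 / m2 ** 1.5                     # clearly positive: right tail (55, 62, 65)
```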
Clearly not; they're integers.
[More properly, it's not a set of o | Normal Distribution or not?
Are the following set of values normally distributed? 26, 33, 65, 28, 34, 55, 25, 44, 50, 36, 26, 37, 43, 62, 35, 38, 45, 32, 28, 34
Clearly not; they're integers.
[More properly, it's not a set of observed values that's normally distributed (the ECDF of a set of $n$ known values is discrete, the values themselves are bounded and so on); normality is an attribute of a population distribution from which an observed sample might have been drawn. But not this one.]
However, while it's often clear we cannot have a sample from a normal distribution for one reason or another, rarely is it interesting to ask whether the sample came from a normal distribution. A more relevant question is whether it might be a suitable approximation -- but to answer that question you need to know more about what you're doing, what impact the non-normality you have might have on it, and what your tolerance for that impact might be (or your audience's tolerance, perhaps).
(One thing worth noting about the shape can be seen from a QQ-plot -- or any number of other displays, depending on what you're used to using to investigate distributional shape. You should show a suitable display and interpret it. The display you show -- which is not a histogram in spite of being labelled as one -- is not really suitable, since it disguises the relative gaps in the data. It appears to be treating the x-axis values as a set of ordered category labels rather than something where the number indicates position.)
Q-Q plot of the data indicates skewness
We know that a normally distributed set of observations has no skewness at all
I sure don't know that; in fact I know it's untrue -- a sample from a normal distribution can certainly be somewhat skewed, just by random variation. It's the population that has no skewness at all.
But your conclusion -- that the data indicate skewness -- is correct, it's just much harder to see in in that chart in your question.
Here's a dotplot, which does a better job than the bar chart. An actual histogram should be adequate. (If there was more data, I'd look at something else -- with separate thin bars representing relative frequency, as your display has, but with the x-position representing the values, akin to a histogram. In R you get this with plot(table(x)), but for very small samples like this with few repeated values I prefer the dotplot.)
Do we need to transform the data-set into normally distributed values before calculating the mean, standard deviation and the z scores?
What could you conclude from the mean (etc) of transformed data?
...since in real world situations, data-sets may not be normally distributed
In real world situations, you don't really have normal distributions, except in a few special situations.
then how do we go ahead to perform statistical tests on them.
Not all tests assume normality
Even for those that do, the assumption of normality is not always very important (sometimes it may matter only a little, sometimes it might matter a lot -- it can depend on the test and on the sample size).
Transformation is frequently not the first thing you should think about doing. You should first really pay attention to what questions you need to ask of the data (what do you need to find out?). Then you can worry about what might be suitable ways to do that. It might involve transformation, but it might much better involve something else.
What are you interested in finding out from these data? If you don't know, why would you transform first? It might have no value in answering the questions of interest. | Normal Distribution or not?
Are the following set of values normally distributed? 26, 33, 65, 28, 34, 55, 25, 44, 50, 36, 26, 37, 43, 62, 35, 38, 45, 32, 28, 34
Clearly not; they're integers.
[More properly, it's not a set of o |
38,034 | What are the formulas for exponential, logarithmic, and polynomial trendlines? | I've come to the conclusion that you're probably trying to reproduce what Excel does.
This is not necessarily sensible, but at least it's pretty straightforward.
Here's what Excel does:
linear trendline: ordinary simple regression, fitted by least squares
logarithmic trendline: $y \sim a + b \ln(x)$ -- fitted by taking $x'=\ln(x)$ & using ordinary linear least squares on the new $x$-variable (i.e. fitting $E(y)=a+bx'$ using least squares as above). (Here the symbol $'$ simply denotes the new, transformed variable; it's not intended to indicate a derivative or anything)
This would be reasonably appropriate if the spread about the curve was roughly constant:
Transforming the $x$-variable doesn't alter the spread at each $x$.
exponential trendline: $y \sim ae^{bx}$ -- fitted by taking $y'=\ln(y)$ and using least squares $E(y')=a'+bx$, then exponentiating the intercept to obtain $a=\exp(a')$, and exponentiating the log-scale fit to obtain fitted values.
By transforming $y$, we change the spread about the curved relationship. On the log-scale, where we're fitting a straight line by least squares, the spread is assumed constant on that scale, which implies it's proportional to the mean of $y$ on the scale of the original data:
Note that the fitted values on the original scale will be biased for the mean (if the distribution is symmetric on the log-scale the curve will instead estimate the median).
power: $y \sim ax^b$ -- fitted by taking logs of both x and y and using least squares to fit a straight line $E(y')=a'+bx'$, then exponentiating the fitted intercept to obtain $a$, and exponentiating the log-scale fit to obtain fitted values. (You didn't ask for this one, you can have it for free.)
As with the exponential trend, by transforming $y$ we change the spread about the curved relationship. On the log-scale we're fitting a straight line by least squares, so the model is best suited to when the spread about the curve is proportional to the mean of $y$ on the scale of the original data. The picture for that case looks broadly similar to the above situation where the spread is wider when the mean is larger. (This also has the same mean-bias issue as the exponential fit.)
polynomial: least squares polynomial fit using ordinary multiple regression on powers of $x$. (For better numerical behavior, use orthogonal polynomials and convert back.)
Somewhat similar to the log-trend case, this is most suitable when the spread is constant about the curved relationship, but unlike the log-trend case this can deal with a wider range of curved relationships.
In the case of polynomial regression it's important to avoid too high an order of polynomial; polynomial fits can be quite unstable, in some cases they can be heavily affected by small changes in a few points.
So in fact, aside from the polynomial fit (which is itself simply a multiple regression problem), if you can fit a simple least squares line, you can emulate Excel by doing a simple transformation of one or both of $x$ and $y$ and using least squares (and possibly doing some simple transformation back the other way in some cases). As such, you already have the formulas for those cases.
(If you want something more sensible than what Excel does, you may need to ask a different question, and then issues like those I raised in comments may become important.)
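Since each of these reduces to a simple least-squares line on transformed variables, the Excel-style exponential trendline, for example, can be reproduced in a few lines (a Python sketch with made-up, noise-free data, so the fit is exact):

```python
import math

def linfit(x, y):
    """Ordinary least squares for E(y) = a + b x; returns (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def exp_trend(x, y):
    """Excel-style exponential trendline y ~ a * exp(b x):
    regress ln(y) on x, then exponentiate the intercept."""
    a_log, b = linfit(x, [math.log(v) for v in y])
    return math.exp(a_log), b

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [3.0 * math.exp(0.5 * xi) for xi in xs]
a, b = exp_trend(xs, ys)  # a ~ 3, b ~ 0.5
```

The logarithmic and power trendlines follow the same pattern, transforming $x$ (or both $x$ and $y$) instead.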
38,035 | What are the formulas for exponential, logarithmic, and polynomial trendlines? | You can use the formula for calculating the least squares coefficients of a regression with multiple independent variables. See here for a detailed explanation.
Basically they all reduce down to a single matrix equation given by the formula:
$$\hat\beta = (X'X)^{-1}X'y$$
where
$\beta=\begin{bmatrix}\beta_0 \\ \beta_1 \\ . \\. \\. \\ \beta_n \end{bmatrix}$, $
X=\begin{bmatrix}1 & x_{11} & x_{21} & . & . & . & x_{n1}\\1 & x_{12} & x_{22} & . & . & . & x_{n2}\\. & . & . & . & & & .\\. & . & . & & . & & .\\. & . & . & & & . & .\\1 & x_{1m} & x_{2m} & . & . & . & x_{nm}\\\end{bmatrix}$, $y=\begin{bmatrix}y_0 \\ y_1 \\ . \\. \\. \\ y_m \end{bmatrix}$
and $m$ is the number of observations you have, $n$ is the degree of the polynomial curve you wish to fit. Note that with a log or exponential curve $n$ will just be $1$.
And the idea is that your $X$ matrix can contain transformations of your original independent variable $x$ rather than entirely new variables. So, for example, for a polynomial curve your $X$ matrix has your original $x$ values as one column, and then that same data raised to different powers in further columns, depending on the degree of the curve you're looking to fit (this you have to decide beforehand). If you want an offset like your $b$, you'll need to add a column consisting of just the number $1$ (i.e. to get an intercept).
So for a polynomial curve your $X$ matrix becomes:
$$X=\begin{bmatrix}1 & x_{1} & x_{1}^2 & . & . & . & x_{1}^n\\1 & x_{2} & x_{2}^2 & . & . & . & x_{2}^n\\. & . & . & . & & & .\\. & . & . & & . & & .\\. & . & . & & & . & .\\1 & x_{m} & x_{m}^2 & . & . & . & x_{m}^n\\\end{bmatrix}$$
For an exponential curve / log curve
$$X=\begin{bmatrix}1 & e^{x_1}\\1 & e^{x_2}\\. & .\\.& .\\. & .\\1 & e^{x_m}\\\end{bmatrix}, X=\begin{bmatrix}1 & \ln{x_1}\\1 & \ln{x_2}\\.& . \\.& .\\.& . \\1 & \ln{x_m}\\\end{bmatrix}$$
Note that these equations are still linear (in that they take the form $y=X\beta$) and thus of the exact same form as the linear curve formula you've already solved; e.g. for the log curve we get $y = \beta_1 \ln{x} + \beta_0$.
Sometimes it is impractical to use the above closed-form solution. This can happen if your $X$ matrix is very large (say, hundreds of thousands of variables, and even more rows). In these cases you can consider the alternative iterative approach to solving the least squares problem: using this same set-up of your $X$ matrix, instead of using the formula to solve for $\beta$, you can use an iterative optimization method such as gradient descent (or others, such as fmincon in Matlab or the Solver in Excel). Here you would be trying to minimize:
$$\sum_{i=1}^m\left(y_i-(\beta_0+\beta_1 x_{1i}+\cdots+\beta_n x_{ni})\right)^2$$
i.e., in matrix form,
$$\lVert X\beta - y\rVert_2^2$$
where $x_{ki}$ is the $i$-th observation of the $k$-th regressor (i.e. the entries of the columns of $X$).
See this post for a more detailed comparison of these two alternatives.
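As a concrete sketch of the closed-form solution above (in Python/NumPy for illustration; the function names are my own, not from the linked post), here is the normal-equations estimator applied to a cubic polynomial design matrix. Using `solve` rather than forming $(X'X)^{-1}$ explicitly is the numerically preferable route.

```python
import numpy as np

def ols(X, y):
    """beta_hat = (X'X)^{-1} X'y, computed by solving the normal equations."""
    return np.linalg.solve(X.T @ X, X.T @ y)

def poly_design(x, n):
    """Design matrix with columns 1, x, x^2, ..., x^n."""
    return np.column_stack([x ** p for p in range(n + 1)])

x = np.linspace(0.0, 3.0, 30)
y = 1.0 - 2.0 * x + 0.5 * x ** 3   # an exact cubic, no noise
beta = ols(poly_design(x, 3), y)   # recovers the generating coefficients
```

With noiseless data the estimator returns the generating coefficients $(1, -2, 0, 0.5)$ up to floating-point error.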
38,036 | What are the formulas for exponential, logarithmic, and polynomial trendlines? | Well, I think that you mean only linear models? Otherwise, as Glen_b has said, the list of models would be uncountable. Just to give you a hint: in statistics we call models linear if they are linear in their parameters. For example, the parameters (in your trend model from the picture) are $m$ and $b$, thus it is a linear model. Now, let us go to a model quadratic in terms of $x$, which is still linear in statistical terms --- that would be: $y = mx + m_1x^2 + b$. Here I have introduced the extra parameter $m_1$ that governs the quadratic term. Thus, in the $R^2$ formula, you would get:
$R^2 =1 - \frac{\sum[y - (mx + m_1x^2 + b)]^2}{\sum(y - \bar{y})^2}$.
Analogously, you could get similar expressions for any model (e.g. log or cubic) in terms of $x$, but be careful about what your parameters are and how many of them there are. Hope this helps.
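To make the formula concrete, here is a small sketch (in Python for illustration; the names are mine) that evaluates $R^2$ for a quadratic model exactly as written above — one minus the residual sum of squares over the total sum of squares:

```python
def r_squared(y, fitted):
    """R^2 = 1 - sum (y - yhat)^2 / sum (y - ybar)^2."""
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Quadratic model y = m*x + m1*x^2 + b with some chosen parameters
m, m1, b = 2.0, 0.5, 1.0
x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = [1.1, 3.4, 7.1, 11.4, 17.2]
fitted = [m * xi + m1 * xi ** 2 + b for xi in x]
```

A perfect fit gives $R^2 = 1$, while fitting only the mean gives $R^2 = 0$, which is a handy sanity check on the implementation.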
38,037 | What are the formulas for exponential, logarithmic, and polynomial trendlines? | Use a linearizing transformation.
$$y=m\ln{x}+b$$
All you do is replace $x$ with $\ln{x}$ in all your expressions for $m$ and $b$, e.g.
$$b = \frac{\sum y- m \sum \ln{x}}{n}$$
The same thing with exponential: replace $x$ with $\exp{(x)}$.
Polynomials become a bit more involved, but the same idea applies. You have to apply ordinary least squares in the way that is shown in this example, i.e. $$y=X\beta$$ where $X$ is the design matrix built from your independent variable. In this case the first column is all ones, the second column is $x_i$, the third is $x_i^2$, the fourth is $x_i^3$ and so on. Then your estimated parameters are $$(X'X)^{-1}X'y.$$
You'll have to use matrix calculations library, don't even try to implement matrix algebra yourself.
To whuber's point, you have to plug your transformed variable into $R^2$ too:
$$ R^2 = 1 - \frac{\sum[y-(m\ln{x}+b)]^2}{\sum(y-\bar{y})^2}$$
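Following the advice above to lean on a matrix library rather than hand-rolled matrix algebra, here is an illustrative sketch (my own code in Python/NumPy, not from the answer) that fits the log model $y = m\ln x + b$ with a least-squares solver and then plugs the transformed variable into $R^2$:

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 3.0 * np.log(x) + 2.0                 # exact log relationship

# Design matrix [1, ln x]; lstsq avoids forming (X'X)^{-1} ourselves
X = np.column_stack([np.ones_like(x), np.log(x)])
(b, m), *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = m * np.log(x) + b
r2 = 1 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)
```

Because the data here are noiseless, the fit recovers $m=3$, $b=2$ and $R^2=1$ exactly (up to floating point).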
38,038 | The difference between the three Augmented Dickey–Fuller tests (none, drift, trend) | The Wikipedia page states the following:
The testing procedure for the ADF test is the same as for the
DickeyβFuller test but it is applied to the model
$$ \Delta y_t =
\alpha + \beta t + \gamma y_{t-1} + \delta_1 \Delta y_{t-1} + \cdots +
\delta_{p-1} \Delta y_{t-p+1} + \varepsilon_t
$$
As you very well note, there are variations of the test, which involve restricting $\alpha$ and/or $\beta$ equal to 0. Imposing the restriction on $\alpha$ corresponds to omitting a constant while restricting $\beta$ corresponds to omitting a time trend.
To understand what you're doing when using the adf.test() function from the tseries package in R, we should first consult the documentation provided by the package authors. To do this, we execute ?adf.test in the R console. Doing this will provide us details about the function; what it does, how we can use it, etc. For present purposes, we just need to be aware that the documentation states:
The general regression equation which incorporates a constant and a
linear trend is used and the t-statistic for a first order
autoregressive coefficient equals one is computed.
(Do we need more information than that?)
Coupled with that fact, if we look at the usage of the function; namely,
adf.test(x, alternative = c("stationary", "explosive"),
k = trunc((length(x)-1)^(1/3)))
one begins to think that the function has limited capabilities with regard to the restricted variations of the ADF test. Reading all of the documentation seems to make it clear that the function only runs one variation of the test; the unrestricted version, which includes both a constant and a trend.
(Do we need more information than that?)
Since you're using R, we don't have to be left wondering if the function somehow imposes the restrictions internally without us knowing! To really be sure what's going on behind the scenes, we can look at the source code of the adf.test() function. Below, I step through the code, which I have shortened, and I hope it's instructive to you.
# Import some toy data
data(sunspots)
# Set arguments that are normally function inputs
x <- sunspots
alternative <- "stationary"
k <- trunc((length(x) - 1)^(1/3))
# Let the function go to work! (short version)
k <- k + 1 # Number of lagged differenced terms
y <- diff(x) # First differences
n <- length(y) # Length of first differenced series
z <- embed(y, k) # Used for creating lagged series
# Things get interesting here as variables are prepared for the regression
yt <- z[, 1] # First differences
xt1 <- x[k:n] # Series in levels - the first k-1 observations are dropped
tt <- k:n # Time-trend
yt1 <- z[, 2:k] # Lagged differenced series - there are k-1 of them
# Next, the key pieces of code.
# Regression 1: if k > 0
# The augmented Dickey-Fuller test (with constant and time-trend)
res <- lm(yt ~ xt1 + 1 + tt + yt1)
# Regression 2: if k = 0
# The standard Dickey-Fuller test (with constant and time-trend)
res <- lm(yt ~ xt1 + 1 + tt)
By my count, the adf.test() function is, in fact, made up of 57 lines of code, which I encourage you to inspect. The rest of the function code is not important in the context of this question. All that needs to be known is that the function does do what it says on the tin. Importantly, there does not seem to be a high level way of using the function to run a restricted variation of the ADF test and retrieve the associated critical values.
What to do? Your first instinct should be to check out the CRAN Task View: Time Series Analysis page. In doing so, you'll learn that the urca package provides an alternative implementation of the ADF test. Indeed, as I mentioned in the comments, the ur.df() function should be able to meet your needs. Inspecting the function usage is quite informative!
ur.df(y, type = c("none", "drift", "trend"), lags = 1,
selectlags = c("Fixed", "AIC", "BIC"))
The urca package can be found here and I recommend consulting the package documentation and the source code if you need to. I suspect that you should be able to use the function and not worry about issues regarding critical values; the authors of the package will have taken care of that so you can concentrate on using it as a high-level function and doing your research.
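To make the three specifications concrete, here is a small illustrative sketch — my own code, written in Python/NumPy rather than R, and with no augmentation lags (i.e. the plain Dickey-Fuller regression) — showing how "none", "drift" and "trend" correspond to dropping or keeping the constant and the time trend in the test regression:

```python
import numpy as np

def df_tstat(x, spec="trend"):
    """t-statistic on gamma in
    Delta y_t = [alpha] + [beta*t] + gamma*y_{t-1} + eps_t,
    where the bracketed terms are included according to `spec`
    ('none', 'drift' or 'trend')."""
    dy = np.diff(x)
    n = dy.size
    cols = [x[:-1]]                       # lagged level, coefficient gamma
    if spec in ("drift", "trend"):
        cols.append(np.ones(n))           # constant (alpha)
    if spec == "trend":
        cols.append(np.arange(1, n + 1))  # linear time trend (beta*t)
    X = np.column_stack(cols)
    beta = np.linalg.solve(X.T @ X, X.T @ dy)
    resid = dy - X @ beta
    s2 = resid @ resid / (n - X.shape[1])
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[0, 0])
    return beta[0] / se   # compare against the spec-specific critical values

rng = np.random.default_rng(1)
walk = np.cumsum(rng.normal(size=500))        # unit root: gamma = 0
ar1 = np.empty(500); ar1[0] = 0.0
for t in range(1, 500):
    ar1[t] = 0.5 * ar1[t - 1] + rng.normal()  # stationary AR(1)
```

Each specification has its own critical values (the three blocks of the Dickey-Fuller tables), so the statistic must be compared against the table matching the chosen deterministic terms.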
In terms of applying the ADF test (knowing which tests to run and in which order), I would suggest the Dolado et al. procedure. The reference is:
Dolado, J. J., Jenkinson, T., and Sosvilla-Rivero, S. (1990).
Cointegration and unit roots, Journal of Economic Surveys, 4, 249-273.
Final note on matching the R code to the mathematical equation. You can basically think of it as follows (strictly speaking, the parameters should be omitted, but...):
yt = $\Delta y_{t}$
xt1 = $\gamma y_{t-1}$
+ 1 = $\alpha$
tt = $\beta t$
yt1 = $\delta_1 \Delta y_{t-1} + \cdots + \delta_{p-1} \Delta y_{t-p+1}$
38,039 | The difference between the three Augmented Dickey–Fuller tests (none, drift, trend) | The key difference among the three versions of the test lies in the specification of the test equation. As a consequence, the critical values are different, too.
You want to find the correct specification of the Dickey-Fuller test regression used for testing for a unit root. That means choosing between {no constant, no trend}, {constant, no trend} and {constant, trend}, and also selecting the number of autoregressive lags. Under the correct specification, the coefficient estimators from the regression will be well-behaved and thus the test result can be trusted. Under an incorrect specification, things go wrong.
The R function adf.test only uses one type of critical value (with drift and trend). So if my data doesn't have drift and trend, might the output of adf.test be incorrect?
If the true specification of the underlying process does not match the specification in the adf.test function, the coefficient estimators will not be well-behaved and so you cannot trust the test result.
38,040 | Plot log-normal distribution in R [closed] | As @ocram stated, the parameters meanlog and sdlog correspond to $\mu$ and $\sqrt{\theta}$, so you do not need to take the log yourself when specifying the mean and standard deviation of the distribution. Here is some revised code with the true density function plotted (without having to sample, as @Glen_b suggested above) and the density estimate.
x = rlnorm(500,1,.6)
grid = seq(0,25,.1)
plot(grid,dlnorm(grid,1,.6),type="l",xlab="x",ylab="f(x)")
lines(density(x),col="red")
legend("topright",c("True Density","Estimate"),lty=1,col=1:2)
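For readers outside R, the density that dlnorm evaluates is easy to write down directly. The sketch below (my own, in plain Python) implements it from the formula $f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\!\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right)$, with $\mu$ playing the role of meanlog and $\sigma$ the role of sdlog:

```python
import math

def dlnorm(x, meanlog=0.0, sdlog=1.0):
    """Log-normal density, same parameterization as R's dlnorm."""
    if x <= 0:
        return 0.0
    z = (math.log(x) - meanlog) / sdlog
    return math.exp(-0.5 * z * z) / (x * sdlog * math.sqrt(2 * math.pi))
```

At $x = e^\mu$ the exponential term is $1$, so the density reduces to $1/(x\sigma\sqrt{2\pi})$ — a convenient spot check.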
38,041 | Plot log-normal distribution in R [closed] | If $X \sim \text{N}(\mu, \theta)$ and $Y = \exp(X)$, then
$$
\text{E}(X) = \mu, \quad \text{Var}(X) = \theta
$$
and
$$
\text{E}(Y) = \exp(\mu + \tfrac{\theta}{2}), \quad \text{Var}(Y) = (\exp(\theta) - 1) \exp(2 \mu + \theta)
$$
In dlnorm, the parameters meanlog and sdlog correspond to $\mu$ and $\sqrt{\theta}$.
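These moment formulas are easy to sanity-check by simulation. The sketch below (my own code, in Python, standard library only) draws from random.lognormvariate, whose two arguments play the same roles as meanlog and sdlog:

```python
import math
import random

mu, theta = 0.0, 0.25       # theta is the variance of X = ln(Y)
sigma = math.sqrt(theta)

random.seed(42)
ys = [random.lognormvariate(mu, sigma) for _ in range(200_000)]

mean_y = sum(ys) / len(ys)
var_y = sum((y - mean_y) ** 2 for y in ys) / len(ys)

expected_mean = math.exp(mu + theta / 2)                         # E(Y)
expected_var = (math.exp(theta) - 1) * math.exp(2 * mu + theta)  # Var(Y)
```

With 200,000 draws, the Monte Carlo mean and variance land within a small tolerance of the closed-form values above.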
38,042 | Importance of normal distribution | The main reason that the normal distribution is so popular is because it works (is at least good enough in many situations). The reason that it works is really because of the Central Limit Theorem. Rather than trying to look beyond the CLT, I think you (and others) should better appreciate the CLT (I have a cross-stitch of the CLT hanging on my wall as I type).
We usually teach and think about the CLT in terms of a sample mean (and that is a powerful use of the CLT), but it extends much further than that. The CLT also means that any variable that we measure that is the result of combining many effects (many relative to the degree of relationship between the different pieces) will be approximately normal.
For example: a person's height is determined by many small effects including genetics (there will be several genes that contribute to height), nutrition (not just good/bad, but what was actually eaten each day that the person was growing), environmental polutions (again each day contributed a small effect), and other things. So heights (within sex/race combinations) are approximately normal.
Annual rainfall for a specific area is the summation of the daily rainfall for the year, and while daily rainfall is probably very far from normal (zero inflated), when you add all those days together you get something much more normal.
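That rainfall intuition is easy to simulate (a Python sketch with made-up rain parameters): daily amounts are strongly right-skewed, but the annual sums come out far closer to symmetric.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_days = 2000, 365

# Zero-inflated daily rainfall: ~70% dry days, otherwise exponential amounts.
wet = rng.random((n_years, n_days)) < 0.3
daily = np.where(wet, rng.exponential(5.0, (n_years, n_days)), 0.0)

annual = daily.sum(axis=1)  # each year's total: a sum of 365 small effects

def skew(x):
    x = np.asarray(x, dtype=float)
    return float(np.mean((x - x.mean()) ** 3) / x.std() ** 3)

# The annual totals are far less skewed (much more normal) than daily values.
print(skew(daily.ravel()), skew(annual))
```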
Binomial distributions are just sums of Bernoullis, and a Poisson distribution can be the sum of smaller Poissons, so it should not be a surprise that either can be approximated by a normal (if enough pieces are added together).
Most exceptions come when common values are close to a natural boundary (rainfall in the desert, test scores where many students get 100% or close to it, etc.) or when there is a single (or small number) of very strong contributors (height including both sexes or with a spread of ages when kids are still growing). Otherwise there are many things that can be approximated using the normal distribution (and things become even more normal when you average them together from a sample).
So why do we need any more justification than the CLT (not to take away from the other great answers)?
dismount soapbox
addition
Since it appears that at least 2 people want to see the cross-stitch (based on comments below) here is a picture:
I also have cross-stitches of Bayes theorem and the mean value theorem of integration, but they are off topic for this question.
Importance of normal distribution
The Wikipedia article on the normal distribution contains many reasons. I'll summarize a few of the more useful ones here - but I really do suggest having a read through that article:
It is entirely characterized by two parameters that are easy to estimate
A sum of two jointly normal random variables is also normal
Uncorrelated, jointly normal random variables are independent
Normality assumptions frequently result in analytic (as opposed to numeric) solutions to many estimation problems
To this Wikipedia-based list I'll add one more:
Estimators are often consistent when normality is assumed, even if the normality assumption is violated; see e.g. quasi-maximum likelihood
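The "sum of two jointly normal random variables is also normal" property is easy to check by simulation (a Python sketch; the covariance matrix is arbitrary). The sum's variance is $\sigma_1^2 + \sigma_2^2 + 2\sigma_{12}$, and the sum itself stays normal.

```python
import numpy as np

rng = np.random.default_rng(3)
cov = [[1.0, 0.6], [0.6, 2.0]]            # arbitrary joint covariance
xy = rng.multivariate_normal([0.0, 0.0], cov, size=50_000)

s = xy.sum(axis=1)                         # sum of the two components
# Var(s) = 1.0 + 2.0 + 2*0.6 = 4.2, the variance of the resulting normal.
print(s.var())
```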
Importance of normal distribution
We all kneel to the central limit theorem.
Here are some slightly less standard reasons why it has become "popular":
Many people never do more than one statistics course or study more than one introductory text. In such courses or texts it is customary to touch on $t$ tests, correlation and regression, for all of which normal distributions should at least be mentioned as context. Conversely, some procedures may be mentioned as not predicated on normal distributions (chi-square or Wilcoxon-Mann-Whitney, etc.), which creates as many problems as it solves. If other distributions are mentioned, the most likely candidates are binomial and Poisson, which fairly clearly apply to different kinds of problems. People who have never studied statistics formally, but nevertheless use it, even for published research, also tend to have a picture of statistics that is similar.
A higher level of understanding entails realising that many named distributions could be relevant or useful, which means learning not just about one or two other distributions but about many more. That is a big jump, requiring more teaching time, and a stronger formal background in mathematics, than is likely for introductory courses. Naturally, there are many exceptions, e.g. physics, engineering and economics students should usually know the right kind of machinery. Unfortunately, many researchers who use statistics, and many non-statisticians who write statistics texts and give courses to people in their own field, work with a foggy kind of myth about statistics, such as that you need normal distributions to do mainstream statistics, except that you can use non-parametric tests instead.
In short, what is popular hinges not just on statistical logic and what works with data, but also on the sociology and psychology of what is taught and remembered, and on its ugly complement: misconceptions of what is central to statistics. At worst, the normal is "popular" because people know about almost nothing else....
Importance of normal distribution
I like to view the normal distribution as the curve that approximates (or is the limit of) the sum of many small random effects. Galton's bean machines such as the ones in the image below demonstrate this nicely, and these simple models make it easier to imagine why and how we see the pattern of the normal curve, or something that looks like it, so often around us.
It also shows why it is not always correct, since the outcome is not always due to many small effects (sometimes a few big ones dominate); also, those bean machines actually produce binomial distributions, and the Gaussian curve (or maybe we should call it the De Moivre curve) is just an approximation to them.
(Yes, I know this is just like the CLT, but it gives that theorem a more practical meaning instead of just being a mathematical theorem, so I'd say this is one of the bigger reasons. Gauss actually gives another reason: it is the distribution of errors for which the least-squares solution is the maximum likelihood solution.)
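A bean machine is also trivial to simulate (a Python sketch with arbitrary pin and ball counts): each ball's final bin is just its number of right bounces, a Binomial(n, 1/2) draw, which the bell curve approximates.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pins, n_balls = 20, 100_000

# Each ball bounces left (0) or right (1) at every pin; its final bin
# is the number of right bounces, i.e. a Binomial(n_pins, 1/2) draw.
bins = rng.integers(0, 2, size=(n_balls, n_pins)).sum(axis=1)

# Mean and spread match the approximating normal: n/2 and sqrt(n)/2.
print(bins.mean(), bins.std())
```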
https://commons.wikimedia.org/wiki/File:Quincunx_(Galton_Box)_-_Galton_1889_diagram.png
Importance of normal distribution
Essentially your question is asking about characterisations of the normal distribution. One characterisation of the distribution is that it arises when one takes a second-order Maclaurin approximation to the cumulant generating function of a distribution.
Normal distribution arises by second-order approximation to CGF: Suppose we have a random variable $X$ with (complex) cumulant generating function given by:
$$H(t) = \ln \phi_X(t) = \ln (\mathbb{E}(e^{itX})).$$
This function $H$ is the logarithm of the characteristic function $\phi_X$. It is a convex function with $H(0)=0$ and it can be approximated via a Taylor polynomial. In particular, if we take a second-order Taylor approximation around the value $t=0$ (i.e., a second-order Maclaurin approximation) we get:
$$H(t) \approx H(0) + \frac{H'(0)}{1!} t + \frac{H''(0)}{2!} t^2.$$
To find this approximation we note that the characteristic function has $\phi_X^{(k)}(0) = i^k \mathbb{E}(X^k)$ for all $k \in \mathbb{N}$, which gives us the following derivatives at zero for the cumulant generating function:
$$\begin{equation} \begin{aligned}
H(t) &= \ln \phi_X(t) & & \implies & & H(0) = 0, \\[12pt]
H'(t) &= \frac{\phi_X'(t)}{\phi_X(t)} & & \implies & & H'(0) = i \mathbb{E}(X), \\[6pt]
H''(t) &= \frac{\phi_X''(t)}{\phi_X(t)} - \frac{\phi_X'(t)^2}{\phi_X(t)^2} & & \implies & & H''(0) = - \mathbb{E}(X^2) + \mathbb{E}(X)^2 = - \mathbb{V}(X). \\[6pt]
\end{aligned} \end{equation}$$
Substituting these into the second-order Maclaurin approximation we obtain:
$$H(t) \approx \mathbb{E}(X) i t - \frac{\mathbb{V}(X) t^2}{2}.$$
This approximation is the cumulant generating function for the normal distribution. Thus, we see that one characterisation of the normal distribution is that it arises whenever one takes a second-order Maclaurin approximation to the cumulant generating function of a random variable.
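The derivative identities above can be spot-checked numerically for a concrete non-normal distribution (a Python sketch, not from the original answer, using the exponential distribution's exact characteristic function; the rate value is arbitrary). For rate $\lambda$: $\text{E}(X) = 1/\lambda$ and $\text{Var}(X) = 1/\lambda^2$.

```python
import numpy as np

lam = 2.0                                  # arbitrary exponential rate
phi = lambda t: 1 / (1 - 1j * t / lam)     # exact characteristic function
H = lambda t: np.log(phi(t))               # cumulant generating function

h = 1e-5
H1 = (H(h) - H(-h)) / (2 * h)              # ≈ H'(0)  = i E(X)   = i / lam
H2 = (H(h) - 2 * H(0) + H(-h)) / h**2      # ≈ H''(0) = -Var(X) = -1 / lam**2
print(H1, H2)
```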
Are Measurements made on the same patient independent?
They are definitely three different data points, but they are also definitely not independent (whether they are taken on the same day or on different days). What you should do about this depends on the goals of your analysis, but it is likely that a multi-level model is a good choice. Averaging the points is also possible, but it reduces variability and eliminates the ability to look at trends over time.
Are Measurements made on the same patient independent?
I mostly concur with @PeterFlom's answer. In my opinion, you should not average your data (you are basically throwing away 2/3 of your information, why would you want to do that?), but you should definitely account for the fact that measurements on the same patient will tend to be closer together than measurements on different patients. In such a situation, I usually recommend mixed linear models, which are a simple instance of the multi-level models @PeterFlom recommends.
Specifically, you would use a generalized linear mixed model. The link function would be logistic, as in "ordinary" logistic regression. However, the functional form would include multiple observations on each participant, modeled by a random effect, just as in "ordinary" linear mixed models, $y \sim F(X\beta + Z\gamma)$. In R, you can fit this by glmer() in the lme4 package, using the binomial family. For prediction, you could use a single measurement.
Whether or not a mixed model predicts better than a non-mixed model in a particular setting is hard to say, of course. What the mixed model does is account for intra-person variability. If you just average the three original data points, you lose all the variability between measurements, so you will be too optimistic about your ability to predict from a single new observation.
If, on the other hand, you simply throw in all observations without taking the grouping into account, you will again be too optimistic, as all standard errors will shrink. Think of what would happen if you started with a single observation per participant, say 100 data points... and then simply copied each observation 100 times. You would end up with 10,000 "observations" and far smaller standard errors than with the original data, although you didn't enter any new information.
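That copying thought experiment takes only a few lines to verify (a Python sketch): duplicating each of 100 observations 100 times shrinks the naive standard error of the mean by about a factor of 10, even though no information was added.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(size=100)           # 100 genuine observations

x_dup = np.repeat(x, 100)          # each observation copied 100 times

se = x.std(ddof=1) / np.sqrt(len(x))
se_dup = x_dup.std(ddof=1) / np.sqrt(len(x_dup))

# The naive SE drops by roughly a factor of 10, with no new information.
print(se / se_dup)
```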
In addition, mixed models allow modeling other grouping factors, like the location, its specific demographics, its staff, diagnostician characteristics, etc. So they are a lot more general than averaging.
Are Measurements made on the same patient independent?
The three exams are different data points, though they are clearly not independent (nor random) observations of all possible exams in your population of interest, at least for any analysis I can imagine.
Others have emphasized that you may do well to include those data points in your analysis (since you already have them), as simple replicates within patient [a nested design] or including "time/visit" as an absolute (e.g. date) or relative (number of visit) variable of interest [some form of repeated-measures design], if interesting. I agree that this is the most interesting (and probable) scenario.
However, it may not be necessary, may not pay for the increased complexity, or may not improve your conclusions if you are only interested in between-subjects variables. Let's say that you only care about differences between males and females, or you want to explain air volume by patient age. Since you know that you cannot properly characterize a patient in one blow (because measurements vary even for the same patient at the same moment), you take several measures and average them. You don't care about that variation, it's just inevitable; you just want to get as close as possible to the "true" (mean) value for that patient (at/in that time). This may be the most reasonable analysis.
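The gain from averaging replicates can be illustrated with simulated spirometry-like data (a Python sketch; all numbers are invented): the mean of three noisy exams sits closer to each patient's true value than a single exam does.

```python
import numpy as np

rng = np.random.default_rng(7)
n_patients = 5000

truth = rng.normal(3.5, 0.8, n_patients)                      # "true" volumes
meas = truth[:, None] + rng.normal(0, 0.4, (n_patients, 3))   # 3 noisy exams

err_single = np.mean((meas[:, 0] - truth) ** 2)
err_mean = np.mean((meas.mean(axis=1) - truth) ** 2)

# Averaging 3 replicates cuts the measurement-error variance to 1/3.
print(err_single, err_mean)
```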
[Check this paper for a good read about simplicity vs. complexity in statistical analyses.]
Are Measurements made on the same patient independent?
In accordance with the other answers (no, these observations are certainly not independent, so what do you do about it)....
But do you want to use this information to predict other variables? Many of the suggestions so far seem to be assuming you want to use spirometry as a dependent variable, and thus modelling the error is more straightforward (using a multilevel model). If you instead want to use the spirometry measures as an independent variable, you would be well served by using a confirmatory factor analysis model with the 3 repeat measures modeled as indicators of a single underlying latent variable. The variance of the underlying latent variable is that shared by all three measures, and thus a better reflection of what you are really after (compared to taking the mean, for example).
Are Measurements made on the same patient independent?
The measurements can be independent or not. If you describe the measured value as $y_t = x_t + \varepsilon_t$, where $x_t$ is the true value and $\varepsilon_t$ the measurement error, then independence means that $\operatorname{cov}(\varepsilon_t, \varepsilon_{t-i}) = 0$ for all times. This may or may not be true: if you have two measurements one immediately after another, it is most likely not true; if two measurements were separated in time but conducted by the same technician, again it may not be true; etc.
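The same-technician case can be made concrete (a Python sketch with invented effect sizes): an error component shared between two measurements induces exactly the kind of correlation that violates $\operatorname{cov}(\varepsilon_t, \varepsilon_{t-i}) = 0$.

```python
import numpy as np

rng = np.random.default_rng(11)
n = 100_000

# A technician bias shared by both measurements correlates their errors.
bias = rng.normal(0, 1.0, n)           # shared technician effect (invented)
e1 = bias + rng.normal(0, 0.5, n)      # error of measurement 1
e2 = bias + rng.normal(0, 0.5, n)      # error of measurement 2

# corr = var(bias) / (var(bias) + var(noise)) = 1 / 1.25 = 0.8
print(np.corrcoef(e1, e2)[0, 1])
```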
On the other hand, it must be possible to set up the measurement in a way that the $\varepsilon_t$ are independent of each other and of $x_t$.
The $y_t$'s are most definitely not independent, through the $x_t$ correlations, but that is not what is meant by independence here.
Where to find mathematical modeling help on low-budget project?
Graduate students. Graduate students, as much as I hate to say this (being one), can be bribed with paper authorships and the like in lieu of actual money. It's important to recognize that said project may get done a little more slowly, as it won't be their first priority if they're also doing something for funding.
Some academic departments also have consulting classes to teach their students future skills needed to be, well, consultants, or practical experience requirements that mean students may be looking for projects. There's also always the possibility of framing your project as a potential masters thesis or the like.
So there are lots of ways to potentially get free/cheap labor from graduate students. The best place to start is probably emailing the secretaries of relevant departments at universities near you with the kind of student you're looking for.
38,053 | Where to find mathematical modeling help on low-budget project? | Not sure if you're a non-profit or not, but Jake Porway has been working on launching a Data Without Borders project, where folks can help out on non-profit projects in need of data analysis skills:
Data Without Borders seeks to match non-profits in need of data
analysis with freelance and pro bono data scientists who can work to
help them with data collection, analysis, visualization, or decision
support.
38,054 | Where to find mathematical modeling help on low-budget project? | You might go to linkedin.com and join some groups that match your needs and ask this question. Alternatively, you might approach statistical software developers to see if they could help.
38,055 | Stacked bar plot | With 60 distinct categories, I feel you may have a hard time making that an effective graphic. You may want to consider a regular bar-chart that is sorted in ascending or descending order. Whether or not these are counts or percentages is up to you. Maybe something like this:
library(ggplot2)
df$names <- reorder(df$names, -df$freq) # reorder factor levels by decreasing frequency
qplot(x = names, y = freq, data = df, geom = "col") + coord_flip() # geom "col" plots the given values (older ggplot2 versions used geom = "bar" here)
EDIT:
To make a stacked bar chart with ggplot, we set x = 1 since we will have only one column. We will use the fill argument to add color:
qplot(x = factor(1), y = freq, data = df, geom = "col", fill = names)
Also of interest: a stacked bar chart is pretty darn close to being a pie chart. You can transform the coordinate system of ggplot charts with + coord_polar(theta = "y") to make a pie chart from the stacked bar chart above.
38,056 | Stacked bar plot | I doubt you will find a suitable range of distinct colours with so many categories. Anyway, here are some ideas:
For a stacked barchart, you need barplot() with beside=FALSE (which is the default) -- this is in base R (@Chase's solution with ggplot2 is good too)
For generating a color ramp, you can use the RColorBrewer package; the example shown by @fRed can be reproduced with brewer.pal and any one of the diverging or sequential palettes. However, the number of colours is limited, so you will need to recycle them (e.g., every 6 items)
Here is an illustration:
library(RColorBrewer)
x <- sample(LETTERS[1:20], 100, replace=TRUE)
tab <- as.matrix(table(x))
my.col <- brewer.pal(6, "BrBG") # or brewer.pal(6, "Blues")
barplot(tab, col=my.col)
There is also the colorspace package, which has a nice accompanying vignette about the design of good color schemes. Check also Ross Ihaka's course on Topics in Computational Data Analysis and Graphics.
Now, a better way to display such data is probably to use a so-called Cleveland dot plot, i.e.
dotchart(tab)
38,057 | Stacked bar plot | For the coloring, either you specify a list of colors or you generate them.
In the latter case, I suggest you execute this code:
n = 32;
main.name = paste("color palettes; n=",n)
ch.col = c("rainbow(n, start=.7, end=.1)", "heat.colors(n)", "terrain.colors(n)", "topo.colors(n)", "cm.colors(n)");
nt <- length(ch.col)
i <- 1:n;
j <- n/nt;
d <- j/6;
dy <- 2*d;
plot(i, i+d, type="n", yaxt="n", xaxt="n", ylab="", xlab="", main=main.name) # yaxt="n"/xaxt="n": suppress axis ticks and labels
for (k in 1:nt) {
rect(i-.5, (k-1)*j+ dy, i+.4, k*j, col = eval(parse(text=ch.col[k])), border = "grey");
text(2.5*j, k * j + dy/2, ch.col[k])
}
taken from the blog http://statisticsr.blogspot.com/2008/07/color-scale-in-r.html
Barplotting should be done with ?barplot
DF=data.frame(names=c("tomato", "potato", "cabbage", "sukuma-wiki", "terere"), freq=c(7,4,5,8,20))
barplot(as.matrix(DF[,2]), col=heat.colors(length(DF[,2])), legend=DF[,1], xlim=c(0,9), width=2)
38,058 | Is the sum of the diagonal elements of a covariance matrix always equal or larger than the sum of its off-diagonal elements? | Consider the general equi-correlation covariance matrix:
\begin{align}
\Sigma = \begin{bmatrix}
1 & \rho & \cdots & \rho \\
\rho & 1 & \cdots & \rho \\
\vdots & \vdots & \ddots & \vdots \\
\rho & \rho & \cdots & 1
\end{bmatrix} \in \mathbb{R}^{n \times n}. \tag{1}
\end{align}
The sum of all the diagonal elements is $S_1 = n$, while the sum of all the off-diagonal elements is $S_2 = \rho \times (n^2 - n)$. If you analyze the limiting behavior, for fixed $\rho \in (0, 1]$, the opposite inequality $S_2 > S_1$ always holds for sufficiently large $n$.
Note that $\Sigma$ in $(1)$ is positive semi-definite (PSD) for $\rho \in (0, 1]$. A classical proof of this goes as follows.
It is straightforward to verify that $\Sigma$ can be rewritten as $\Sigma = \rho ee' + (1 - \rho)I_{(n)}$ with $e$ an $n$-long column vector of all ones. As all the eigenvalues of the rank-$1$ matrix $ee'$ are $\{n, 0, \ldots, 0\}$, all the eigenvalues of $\rho ee' + (1 - \rho)I_{(n)}$ are
\begin{align}
n\rho + (1 - \rho) = 1 + (n - 1)\rho, 1 - \rho, \ldots, 1 - \rho,
\end{align}
which are all nonnegative provided $\rho \in [-(n - 1)^{-1}, 1]$. This shows that $\Sigma$ is PSD for $\rho \in (0, 1]$, hence a valid covariance matrix.
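The two facts above (the diagonal and off-diagonal sums, and the eigenvalues) can also be checked numerically; the following is a quick verification sketch of my own, not part of the original answer:

```python
import numpy as np

def equicorr(n, rho):
    """Equi-correlation matrix: 1 on the diagonal, rho everywhere else."""
    return rho * np.ones((n, n)) + (1 - rho) * np.eye(n)

n, rho = 6, 0.5
S = equicorr(n, rho)

S1 = np.trace(S)       # sum of diagonal elements = n
S2 = S.sum() - S1      # sum of off-diagonal elements = rho * (n^2 - n)
assert np.isclose(S1, n)
assert np.isclose(S2, rho * (n**2 - n))

# one eigenvalue equal to 1 + (n-1)*rho, the remaining n-1 equal to 1 - rho
eig = np.sort(np.linalg.eigvalsh(S))
assert np.isclose(eig[-1], 1 + (n - 1) * rho)
assert np.allclose(eig[:-1], 1 - rho)

print(S1, S2)  # 6.0 15.0, so S2 > S1 already at n = 6 for rho = 0.5
```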
38,059 | Is the sum of the diagonal elements of a covariance matrix always equal or larger than the sum of its off-diagonal elements? | No. Highly-correlated variables will violate this rule:
x <- seq(0, 1, len = 100)
X <- data.frame(x = x, x2 = x^2, x3 = x^3)
X_cor <- cor(X)
sum(X_cor[col(X_cor) != row(X_cor)]) # 5.73822
sum(diag(X_cor)) # 3
38,060 | Is the sum of the diagonal elements of a covariance matrix always equal or larger than the sum of its off-diagonal elements? | You have been given two good answers. I thought it might be instructive to come at this from a different angle, and suggest how one might realise for themselves that the statement is false, by finding a counterexample.
It can often be useful to run simulations (in say R or Python) to test our understanding of things or, in this case, to look for counterexamples.
The Python code below took very little time to write (minutes), and gave me a counterexample almost immediately.
import numpy as np

for i in range(1000):
    data = np.random.randint(-100, 100, (3, 3))
    cov = np.cov(data)  # rows of `data` are treated as variables
    sum_diag = np.diag(cov).sum()
    sum_all_elements = cov.sum()
    sum_off_diag = sum_all_elements - sum_diag
    if sum_off_diag > sum_diag:
        print('Data:', data, "\nCovariance matrix:", cov,
              "\nSum diag:", sum_diag, "\nSum off diag:", sum_off_diag)
        break
Having a counterexample (or several) means you can focus your attention in the right place; with a few counterexamples you may then have observed that highly correlated variables seem to violate this, as pointed out in Michael M's answer.
38,061 | Correcting p-value in multiple regression [duplicate] | I suppose it depends on your intent.
In my day to day, I will regress an outcome on several variables but am really only interested in the effect of one. So I don't need to adjust anything -- those other variables are mainly there to reduce variance or so that I may stratify.
If you're conducting exploratory analyses and looking to see if anything is significant then yea it might be a good idea to correct the p value. However, my intuition says the degree to which you correct the p value would depend on the population correlation between covariates. Typical correction factors assume the tests are independent, but if the covariates are correlated I don't think this would be the case.
You can see this a little empirically. The following R code will simulate 1000 regressions and determine if any of the p values are less than 0.05
library(tidyverse)
library(broom)
library(rethinking)
sig <- rlkjcorr(1, 10, 1)
# sig <- diag(10)
replicate(1000, {
X <- MASS::mvrnorm(100, m = rep(0, 10), sig)
y <- rnorm(100)
lm(y~X) %>%
tidy %>%
pull(p.value)->p
any(p<0.05)
}) %>%
mean
I've included 2 lines for sig, which is intended to be the population covariance matrix for the covariates X. The probability that ANY of the p values would be less than 0.05 when variables are all independent (i.e. when sigma is the identity) is about 41%, and this decreases when we pick a random correlation matrix using rlkjcorr.
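The figure for the independent case can also be checked analytically (a quick side calculation of my own, not part of the original answer): with $k$ independent tests at level $\alpha$, the chance that at least one p-value falls below $\alpha$ is $1-(1-\alpha)^k$.

```python
# Chance that at least one of k independent tests at level alpha
# rejects purely by chance: 1 - (1 - alpha)^k.
alpha, k = 0.05, 10  # 10 slope coefficients, as in the simulation above
p_any = 1 - (1 - alpha) ** k
print(round(p_any, 3))  # 0.401, i.e. roughly the ~41% seen in the simulation
```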
All in all, this means that typical p-value correction methods (à la Bonferroni) might be too conservative and result in a smaller type 1 error rate than might be desired.
38,062 | Correcting p-value in multiple regression [duplicate] | I would argue that this depends on why you include multiple regressors in your model. Broadly speaking, two ends of a spectrum come to mind:
A) You are interested in the effect of one regressor and "only" include the others to hopefully avoid omitted variable bias (say, the effect of education on earnings, requiring you to control for things like ability and experience). Then, you would not really care about the effects of these regressors and their significances. Thus, despite having many regressors, you effectively still just conduct a single hypothesis test.
B) You are on a "fishing expedition" where you throw in a bunch of predictors and see which, if any, are related to the dependent variable. In my field (econometrics), "growth regressions" are a classical example, i.e., regressions aimed at finding variables which predict why some countries grow fast and others do not. Predictors include all sorts of things, such as initial GDP, schooling levels, religion, geography,... You would then want to take multiplicity into account so as to avoid spuriously finding "relevant" variables just because you tried so many. [Something I have in fact done in a joint paper with my colleague Thomas Deckers.]
38,063 | Correcting p-value in multiple regression [duplicate] | A reason that people might not be applying corrections is
Because significance cut-off values are arbitrary anyway.
In a given field there might typically be a couple of parameters being tested, and if that number does not vary greatly, then researchers in that field will settle on some value like the typical 0.01 or 0.05.
In a field where researchers test more hypotheses at once in a single study (or, more often, across multiple studies), they may use 0.01 more often instead of 0.05 and are thus indirectly applying a correction that way. There are even fields that use significance levels of 0.0000006 (the 5-sigma rule), and have no formal way of controlling for multiple comparisons (if you have multiple research groups doing the same experiment multiple times, how do you correct for that?).
Because p-values are just a way to express error and people reading the results can imagine how this adds up for multiple tests done at once.
As an expression of error, p-values relate to the standard error and are a way to express that error in terms of a probability. Do we correct standard errors as well when we do multiple regression?
This is why researchers publish the actual p-values and not just whether something was below the (arbitrary) threshold or not, and any readers can do the maths themselves. (Often you have these star symbols added in tables with a legend like *** p<0.001 ** p<0.01 * p<0.05. That is just an aid for the readers, but the actual meaning of the cutoff values is a rule of thumb and not a strict rule)
If some research tested 20 variables and got one result with a p-value below 0.05, then readers will know that this is not a very reliable result.
38,064 | Correcting p-value in multiple regression [duplicate] | Yea, usually you need to, because of the problem of multiple comparisons (https://en.wikipedia.org/wiki/Multiple_comparisons_problem). Depending on what you are doing (academic or industry), the most conservative method, and considered by many the best, is the Bonferroni correction (https://en.wikipedia.org/wiki/Bonferroni_correction). If you still have something after a Bonferroni correction, chances are it's robust. Here is an example of the analysis in R (https://rpubs.com/JLLJ/SPC12B).
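The mechanics of a Bonferroni correction are simple enough to sketch directly (my own illustration with made-up p-values, not taken from the linked R example): compare each raw p-value to α/m, or equivalently multiply each p-value by the number of tests m.

```python
# Bonferroni correction: with m tests at family-wise level alpha,
# compare each raw p-value to alpha / m (equivalently, multiply p by m).
pvals = [0.001, 0.012, 0.03, 0.2]   # hypothetical raw p-values
alpha = 0.05
m = len(pvals)

adjusted = [min(p * m, 1.0) for p in pvals]
significant = [p < alpha / m for p in pvals]

print(adjusted)     # [0.004, 0.048, 0.12, 0.8]
print(significant)  # [True, True, False, False]
```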
38,065 | Difference between E(Y) and E(Y|X) in regression | Disclaimer: I only read your question correctly after I wrote this. If $X$ is non-random then yes, $E(Y|X) = E(Y)$.
Your mistake comes down to an abuse of notation. In the usual notation, $X$ is a random variable which, for simplicity, takes values in the reals, and $x$ is one such value $X$ could take. Let's sensibly define
$$ Y := \beta_0 + \beta_1X + \mu$$
with $E(\mu|X = x) = 0$ for all $x \in \mathbb{R}$
Now, just as $X$ and $x$ are not the same thing, $E(Y|X)$ and $E(Y|X = x)$ are also not the same. The first conditions on the random variable $X$, making it itself a random variable, while the second conditions on the specific event $X=x$, meaning it is not random anymore, just dependent on the chosen value of $x$.
To finish out:
$E(E(Y|X)) = E(\beta_0 + \beta_1X) = \beta_0 + \beta_1E(X) = E(Y)$
$E(E(Y|X =x)) = E(\beta_0 + \beta_1x) = \beta_0 + \beta_1x = E(Y|X=x)$ because that already isn't random anymore and so the second $E$ doesn't do anything.
Speaking of notation abuse: $\mu$ usually refers to a mean, while error terms are usually written as something like $\epsilon$ or $e$. | Difference between E(Y) and E(Y|X) in regression | Disclaimer: I only read your question correctly after I wrote this. If $X$ is non-random then yeah, $E(Y|X) = E(Y)$.
Your mistake comes down to an abuse of notation. Within usual notation $X$ is a rand | Difference between E(Y) and E(Y|X) in regression
Disclaimer: I only read your question correctly after I wrote this. If $X$ is non-random then yeah, $E(Y|X) = E(Y)$.
Your mistake comes down to an abuse of notation. In the usual notation, $X$ is a random variable which, for simplicity, takes values in the reals; $x$ is one such value that $X$ could take. Let's sensibly define
$$ Y := \beta_0 + \beta_1X + \mu$$
with $E(\mu|X = x) = 0$ for all $x \in \mathbb{R}$
Now, just as $X$ and $x$ are not the same thing, $E(Y|X)$ and $E(Y|X = x)$ are also not the same. The first conditions on the random variable $X$, making it itself a random variable, while the second conditions on the specific event $X=x$, meaning it is not random anymore, just dependent on the chosen value of $x$.
To finish out:
$E(E(Y|X)) = E(\beta_0 + \beta_1X) = \beta_0 + \beta_1E(X) = E(Y)$
$E(E(Y|X =x)) = E(\beta_0 + \beta_1x) = \beta_0 + \beta_1x = E(Y|X=x)$ because that already isn't random anymore and so the second $E$ doesn't do anything.
Speaking of notation abuse: $\mu$ usually refers to a mean, while error terms are usually written as something like $\epsilon$ or $e$. | Difference between E(Y) and E(Y|X) in regression
Disclaimer: I only read your question correctly after I wrote this. If $X$ is non-random then yeah, $E(Y|X) = E(Y)$.
Your mistake comes down to an abuse of notation. Within usual notation $X$ is a rand |
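The tower-property identity $E(E(Y|X)) = E(Y)$ used in the answer above can be checked by simulation. A quick sketch with assumed coefficients $\beta_0 = 1$, $\beta_1 = 2$ (my choice for illustration, not values from the question):

```python
import random

random.seed(0)
b0, b1 = 1.0, 2.0          # assumed coefficients for illustration
n = 200_000

# Y = b0 + b1*X + error, with E(error | X) = 0
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [b0 + b1 * x + random.gauss(0, 1) for x in xs]

mean_y = sum(ys) / n                # Monte Carlo estimate of E(Y)
tower  = b0 + b1 * sum(xs) / n      # estimate of E(E(Y|X)) = b0 + b1*E(X)
```

Both quantities converge to $\beta_0 + \beta_1 E(X)$, which is 1.0 here since $E(X)=0$, while $E(Y|X=x) = \beta_0 + \beta_1 x$ stays a fixed number for each chosen $x$.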
38,066 | Difference between E(Y) and E(Y|X) in regression | If $X$ is not random (i.e. just some constant), then $E[Y|X] = E[Y]$, so it is correct. If $X$ were random, then $E[Y] = \beta_0 + \beta_1 E[X] + E[\mu]$, and so $E[Y|X]$ would not be equal to $E[Y]$ in general. | Difference between E(Y) and E(Y|X) in regression | If $X$ is not random (i.e. just some constant), then $E[Y|X] = E[Y]$, so it is correct. If $X$ were random, then $E[Y] = \beta_0 + \beta_1 E[X] + E[\mu]$, and so $E[Y|X]$ would not be equal to $E[Y]$ in
If $X$ is not random (i.e. just some constant), then $E[Y|X] = E[Y]$, so it is correct. If $X$ were random, then $E[Y] = \beta_0 + \beta_1 E[X] + E[\mu]$, and so $E[Y|X]$ would not be equal to $E[Y]$ in general. | Difference between E(Y) and E(Y|X) in regression
If $X$ is not random (i.e. just some constant), then $E[Y|X] = E[Y]$, so it is correct. If $X$ were random, then $E[Y] = \beta_0 + \beta_1 E[X] + E[\mu]$, and so $E[Y|X]$ would not be equal to $E[Y]$ in |
38,067 | Difference between E(Y) and E(Y|X) in regression | $\bullet$ You must bear in mind $\mathbb E[\boldsymbol \varepsilon |\mathbf X]=\mathbf 0\implies \mathbb E[\boldsymbol \varepsilon]=\mathbf 0$ but not $\mathbb E[\boldsymbol \varepsilon]=\mathbf 0\implies \mathbb E[\boldsymbol\varepsilon |\mathbf X]=\mathbf 0.$
$\bullet$ For non-stochastic regressors, you can straightforwardly use unconditional conditions rather than conditional ones. However, when they are random, be careful with conditional and unconditional aspects. | Difference between E(Y) and E(Y|X) in regression | $\bullet$ You must bear in mind $\mathbb E[\boldsymbol \varepsilon |\mathbf X]=\mathbf 0\implies \mathbb E[\boldsymbol \varepsilon]=\mathbf 0$ but not $\mathbb E[\boldsymbol \varepsilon]=\mathbf 0\imp
$\bullet$ You must bear in mind $\mathbb E[\boldsymbol \varepsilon |\mathbf X]=\mathbf 0\implies \mathbb E[\boldsymbol \varepsilon]=\mathbf 0$ but not $\mathbb E[\boldsymbol \varepsilon]=\mathbf 0\implies \mathbb E[\boldsymbol\varepsilon |\mathbf X]=\mathbf 0.$
$\bullet$ For non-stochastic regressors, you can straightforwardly use unconditional conditions rather than conditional ones. However, when they are random, be careful with conditional and unconditional aspects. | Difference between E(Y) and E(Y|X) in regression
$\bullet$ You must bear in mind $\mathbb E[\boldsymbol \varepsilon |\mathbf X]=\mathbf 0\implies \mathbb E[\boldsymbol \varepsilon]=\mathbf 0$ but not $\mathbb E[\boldsymbol \varepsilon]=\mathbf 0\imp |
38,068 | In GD-optimisation, if the gradient of the error function is w.r.t to the weights, isn't the target value dropped since it's a lone constant? | No. A proper norm will not allow it to be.
Even the simplest absolute value function as a loss will depend on $t$: $|m(w)-t|' = \pm m'(w)$, where the sign depends on $t$.
TL;DR;
Generally, your loss function will be $L(w|t,X)$, so the first derivative is $\partial L(w|t,X)/\partial w$, and there's no reason for $t$ to disappear from the expression unless you construct $L$ for this purpose only, for instance by making $L$ strictly linear in $w$. However, $L$ can't be just any function in the kind of problem you imply, i.e. one where you have a target to hit.
Clearly, the loss can't be negative, because the best you could do in this kind of problem is to hit the target, at which point there's no loss, i.e. $L(w^*)=0$. This means that no matter what loss function you choose, it has to be nonlinear around the optimal $w^*$. The example of the absolute value norm above shows you that even a loss function that is linear in $w$ everywhere except at one point will still depend on $t$. | In GD-optimisation, if the gradient of the error function is w.r.t to the weights, isn't the target | No. A proper norm will not allow it to be.
Even the simplest absolute value function as a loss will depend on $t$: $|m(w)-t|' = \pm m'(w)$, where the sign depends on $t$.
TL;DR;
Generally, your loss func | In GD-optimisation, if the gradient of the error function is w.r.t to the weights, isn't the target value dropped since it's a lone constant?
No. A proper norm will not allow it to be.
Even the simplest absolute value function as a loss will depend on $t$: $|m(w)-t|' = \pm m'(w)$, where the sign depends on $t$.
TL;DR;
Generally, your loss function will be $L(w|t,X)$, so the first derivative is $\partial L(w|t,X)/\partial w$, and there's no reason for $t$ to disappear from the expression unless you construct $L$ for this purpose only, for instance by making $L$ strictly linear in $w$. However, $L$ can't be just any function in the kind of problem you imply, i.e. one where you have a target to hit.
Clearly, the loss can't be negative, because the best you could do in this kind of problem is to hit the target, at which point there's no loss, i.e. $L(w^*)=0$. This means that no matter what loss function you choose, it has to be nonlinear around the optimal $w^*$. The example of the absolute value norm above shows you that even a loss function that is linear in $w$ everywhere except at one point will still depend on $t$. | In GD-optimisation, if the gradient of the error function is w.r.t to the weights, isn't the target
No. A proper norm will not allow it to be.
Even the simplest absolute value function as a loss will depend on $t$: $|m(w)-t|' = \pm m'(w)$, where the sign depends on $t$.
TL;DR;
Generally, your loss func |
38,069 | In GD-optimisation, if the gradient of the error function is w.r.t to the weights, isn't the target value dropped since it's a lone constant? | If we are considering the absolute difference as a norm, that is:
$loss(w) = |m_x(w) - t|$
then $\nabla loss(w)$ is far from simply being equivalent to $\nabla m_x(w)$.
By definition of the derivative for an absolute value (and using the chain rule), we actually get:
$\nabla loss(w) = \frac{m_x(w) - t}{|m_x(w) - t|} \cdot m_x'(w)$
This is similar to Aksakal's answer, but I wanted to show exactly why we get $\pm m_x'(w)$. | In GD-optimisation, if the gradient of the error function is w.r.t to the target | If we are considering the absolute difference as a norm, that is:
$loss(w) = |m_x(w) - t|$
then $\nabla loss(w)$ is far from simply being equivalent to $\nabla m_x(w)$.
By definition of the derivative | In GD-optimisation, if the gradient of the error function is w.r.t to the weights, isn't the target value dropped since it's a lone constant?
If we are considering the absolute difference as a norm, that is:
$loss(w) = |m_x(w) - t|$
then $\nabla loss(w)$ is far from simply being equivalent to $\nabla m_x(w)$.
By definition of the derivative for an absolute value (and using the chain rule), we actually get:
$\nabla loss(w) = \frac{m_x(w) - t}{|m_x(w) - t|} \cdot m_x'(w)$
This is similar to Aksakal's answer, but I wanted to show exactly why we get $\pm m_x'(w)$. | In GD-optimisation, if the gradient of the error function is w.r.t to the target
If we are considering the absolute difference as a norm, that is:
$loss(w) = |m_x(w) - t|$
then $\nabla loss(w)$ is far from simply being equivalent to $\nabla m_x(w)$.
By definition of the derivative |
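The sign-of-residual formula derived in the answers above can be sanity-checked with a finite difference. A small sketch assuming a toy model $m_x(w) = wx$ (my choice, not from the question):

```python
def grad_abs_loss(w, x, t):
    """Analytic gradient of |m_x(w) - t| for the toy model m_x(w) = w*x."""
    m, dm = w * x, x
    # sign(m - t) * m'(w); valid away from the kink at m = t
    return (1.0 if m > t else -1.0) * dm

def numeric_grad(w, x, t, h=1e-6):
    """Central finite-difference approximation of the same gradient."""
    f = lambda w_: abs(w_ * x - t)
    return (f(w + h) - f(w - h)) / (2 * h)
```

With $w=2$, $x=3$, $t=1$ the model output $m=6$ exceeds the target, so the gradient is $+x=3$; with $w=0$ it is $-x=-3$. The target $t$ never drops out: it decides the sign.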
38,070 | How to test whether a correlation is equal to 1? | I would argue that there is not any testing to do. If the sample correlation is not 1, then you reject $H_0: \rho=1$ with certainty.
Having a correlation of 1 means that the points cannot deviate from a diagonal line the way that they can when $\vert \rho \vert < 1$.
EDIT
set.seed(2019)
x <- rexp(1000)
y <- 3*x
plot(x,y)
V <- rep(NA,10000)
for (i in 1:length(V)){
print(i)
idx <- sample(seq(1,length(x),1),replace=T)
V[i] <- cor(x[idx],y[idx])
}
summary(V)
With the points of the scatterplot locked to the diagonal line $y=3x$, every single sample correlation is 1. You can try this out with other distributions and sample sizes.
Where this gets interesting---and I'm not completely sure of the math at the population level---is when I set a Gaussian copula to have a parameter of 1.
library(copula)
set.seed(2019)
gc <-ellipCopula("normal", param = 1, dim = 2)#, dispstr = "un")
norm_exp <- mvdc(gc,c("norm","exp"),list(list(mean=0,sd=1),list(rate=1)))
V <- rep(NA,10000)
for (i in 1:length(V)){
print(i)
D_ne <- rMvdc(1000, norm_exp)
x <- D_ne[,1]
y <- D_ne[,2]
V[i] <- cor(x, y) # use the freshly drawn sample; the 'idx' left over from the previous block is not needed here
}
plot(x,y)
summary(V)
I still don't think this relationship gives a population Pearson correlation of 1 (the relationship is perfectly monotonic but not linear), but this result surprised me. I expected another plot of a straight line.
To defend my assertion that the population Pearson correlation is not 1, I refer to theorem 4.5.7 on pg. 172 of the second edition of Casella & Berger's Statistical Inference: "$\vert \rho_{XY}\vert=1$ if and only if there exist numbers $a\ne0$ and $b$ such that $P(Y = aX+b)=1$." Since the relationship between my $X$ (the normal variable) and $Y$ (exponential) is nonlinear, there can be no such $a$ and $b$.
Casella, George, and Roger L. Berger. Statistical Inference. 2nd ed., Cengage Learning & Wadsworth, 2002. | How to test whether a correlation is equal to 1? | I would argue that there is not any testing to do. If the sample correlation is not 1, then you reject $H_0: \rho=1$ with certainty.
Having a correlation of 1 means that the points cannot deviate from | How to test whether a correlation is equal to 1?
I would argue that there is not any testing to do. If the sample correlation is not 1, then you reject $H_0: \rho=1$ with certainty.
Having a correlation of 1 means that the points cannot deviate from a diagonal line the way that they can when $\vert \rho \vert < 1$.
EDIT
set.seed(2019)
x <- rexp(1000)
y <- 3*x
plot(x,y)
V <- rep(NA,10000)
for (i in 1:length(V)){
print(i)
idx <- sample(seq(1,length(x),1),replace=T)
V[i] <- cor(x[idx],y[idx])
}
summary(V)
With the points of the scatterplot locked to the diagonal line $y=3x$, every single sample correlation is 1. You can try this out with other distributions and sample sizes.
Where this gets interesting---and I'm not completely sure of the math at the population level---is when I set a Gaussian copula to have a parameter of 1.
library(copula)
set.seed(2019)
gc <-ellipCopula("normal", param = 1, dim = 2)#, dispstr = "un")
norm_exp <- mvdc(gc,c("norm","exp"),list(list(mean=0,sd=1),list(rate=1)))
V <- rep(NA,10000)
for (i in 1:length(V)){
print(i)
D_ne <- rMvdc(1000, norm_exp)
x <- D_ne[,1]
y <- D_ne[,2]
V[i] <- cor(x, y) # use the freshly drawn sample; the 'idx' left over from the previous block is not needed here
}
plot(x,y)
summary(V)
I still don't think this relationship gives a population Pearson correlation of 1 (the relationship is perfectly monotonic but not linear), but this result surprised me. I expected another plot of a straight line.
To defend my assertion that the population Pearson correlation is not 1, I refer to theorem 4.5.7 on pg. 172 of the second edition of Casella & Berger's Statistical Inference: "$\vert \rho_{XY}\vert=1$ if and only if there exist numbers $a\ne0$ and $b$ such that $P(Y = aX+b)=1$." Since the relationship between my $X$ (the normal variable) and $Y$ (exponential) is nonlinear, there can be no such $a$ and $b$.
Casella, George, and Roger L. Berger. Statistical Inference. 2nd ed., Cengage Learning & Wadsworth, 2002. | How to test whether a correlation is equal to 1?
I would argue that there is not any testing to do. If the sample correlation is not 1, then you reject $H_0: \rho=1$ with certainty.
Having a correlation of 1 means that the points cannot deviate from |
38,071 | How to test whether a correlation is equal to 1? | Using the Fisher Z-transform is one way of doing this (usually used for confidence intervals); bootstrapping would be another.
Here's a brief article for Fisher Z transform for Pearson Product Moment Correlation Coefficient https://www.statisticshowto.datasciencecentral.com/fisher-z/ | How to test whether a correlation is equal to 1? | Using the Fisher Z-transform is one way of doing this (usually used for confidence intervals), bootstrapping would be another.
Here's a brief article for Fisher Z transform for Pearson Product Moment | How to test whether a correlation is equal to 1?
Using the Fisher Z-transform is one way of doing this (usually used for confidence intervals); bootstrapping would be another.
Here's a brief article for Fisher Z transform for Pearson Product Moment Correlation Coefficient https://www.statisticshowto.datasciencecentral.com/fisher-z/ | How to test whether a correlation is equal to 1?
Using the Fisher Z-transform is one way of doing this (usually used for confidence intervals); bootstrapping would be another.
Here's a brief article for Fisher Z transform for Pearson Product Moment |
38,072 | Calculating confidence in part of a sample (explain like I'm in the humanities please :) ) | ...explain like I'm in the humanities please :)
The paradox of "confidence" relating to inference from sampling is an expression of the hermeneutics of anxiety, traceable to the observations of Kierkegaard. "Statistical Science" puts forward a symbolic synthesis to internalise and "objectify" the antithesis of passion and paradox inherent in inference of the unknown. By examining the hermeneutical dimensions of Kierkegaardian anxiety we can place "Statistical Science" within the superego of the subject, heavily affected by the genealogy of the control relations of the capitalist production system.
The statistical inferential problem itself is a manifestation of the subject anxiety induced by the precarious production relations of late-stage capitalism. Obsession with a socially constructed "model parameter" is observably a manifestation of the Lacanian objet petit a generated by the anxiety of the subject navigating the "objectivity" of the "scientific" enterprise. The symbolic language of the statistical machinery allows the subject to confine this anxiety and express uncertainty entirely within the known Symbolic Order. As Lacan has observed, the Symbolic Order forms a part of the Big Other and the striving for "confidence"; inferring the aforementioned model parameter surely represents striving for return to the womb of the Mother.
In The Subversion of the Subject and the Dialectic of Desire in the Freudian Unconscious, Lacan recognises that desire is "...a defense against going beyond a limit in jouissance" (p. 699). Such a defence mechanism is amplified in the capitalist production system, itself in perpetual crisis and so developing an expanding motif of symbols and language with which to internalise crisis. Ε½iΕΎek presents this as capitalism "borrowing from the future by way of escaping the future". Thus, one is unsurprised that the Symbolic Order of statistical "confidence" should be conceived as a mode of pictorialism for those unwilling to recognise the hermeneutics of anxiety within the capitalist model.
And this brings us to the notion of "calculating" the "confidence" relating to inference from the finite sample. This exercise reflects the language of symbolism developed as internalisation of the hermeneutics of anxiety. To proceed according to this motif, one sets in motion the Symbolic Order at the expense of the Real (and plays out the dialectic of desire). As Nietzsche counsels in The Gay Science, "[o]ne should have more respect for the bashfulness with which nature has hidden behind riddles and iridescent uncertainties".
; ) | Calculating confidence in part of a sample (explain like I'm in the humanities please :) ) | ...explain like I'm in the humanities please :)
The paradox of "confidence" relating to inference from sampling is an expression of the hermeneutics of anxiety, tracable to the observations of Kierke | Calculating confidence in part of a sample (explain like I'm in the humanities please :) )
...explain like I'm in the humanities please :)
The paradox of "confidence" relating to inference from sampling is an expression of the hermeneutics of anxiety, traceable to the observations of Kierkegaard. "Statistical Science" puts forward a symbolic synthesis to internalise and "objectify" the antithesis of passion and paradox inherent in inference of the unknown. By examining the hermeneutical dimensions of Kierkegaardian anxiety we can place "Statistical Science" within the superego of the subject, heavily affected by the genealogy of the control relations of the capitalist production system.
The statistical inferential problem itself is a manifestation of the subject anxiety induced by the precarious production relations of late-stage capitalism. Obsession with a socially constructed "model parameter" is observably a manifestation of the Lacanian objet petit a generated by the anxiety of the subject navigating the "objectivity" of the "scientific" enterprise. The symbolic language of the statistical machinery allows the subject to confine this anxiety and express uncertainty entirely within the known Symbolic Order. As Lacan has observed, the Symbolic Order forms a part of the Big Other and the striving for "confidence"; inferring the aforementioned model parameter surely represents striving for return to the womb of the Mother.
In The Subversion of the Subject and the Dialectic of Desire in the Freudian Unconscious, Lacan recognises that desire is "...a defense against going beyond a limit in jouissance" (p. 699). Such a defence mechanism is amplified in the capitalist production system, itself in perpetual crisis and so developing an expanding motif of symbols and language with which to internalise crisis. Ε½iΕΎek presents this as capitalism "borrowing from the future by way of escaping the future". Thus, one is unsurprised that the Symbolic Order of statistical "confidence" should be conceived as a mode of pictorialism for those unwilling to recognise the hermeneutics of anxiety within the capitalist model.
And this brings us to the notion of "calculating" the "confidence" relating to inference from the finite sample. This exercise reflects the language of symbolism developed as internalisation of the hermeneutics of anxiety. To proceed according to this motif, one sets in motion the Symbolic Order at the expense of the Real (and plays out the dialectic of desire). As Nietzsche counsels in The Gay Science, "[o]ne should have more respect for the bashfulness with which nature has hidden behind riddles and iridescent uncertainties".
; ) | Calculating confidence in part of a sample (explain like I'm in the humanities please :) )
...explain like I'm in the humanities please :)
The paradox of "confidence" relating to inference from sampling is an expression of the hermeneutics of anxiety, traceable to the observations of Kierke
38,073 | Calculating confidence in part of a sample (explain like I'm in the humanities please :) ) | I promised a serious answer (to supplement my gag answer) so here goes. I think at present you are mixing several issues, including considerations relating to descriptive statistics for a sample and some (not clearly specified) population inferences. The main thing I would recommend at this stage is to take some time to consider and describe your sampling method, and in particular, determine whether it can reasonably be regarded as a "non-informative" type of random sampling. If you are using genuine random sampling then you will have available the entire suite of standard inference methods, including methods for constructing confidence intervals for population means, quantiles, etc.
With regard to your desire to describe aspects of your sample, this would typically be done using standard visualisations with accompanying descriptive statistics. The usual rule here is to remember that "a picture tells a thousand words", so you should primarily be thinking about how to illustrate as much useful information about your sample using appropriate plots. When describing the distribution of continuous quantities across multiple discrete groups, a violin plot is a useful tool. When dealing with heavily skewed variables like income/wealth, you might also consider showing these on a logarithmic scale.
At the moment your problem is not that you are "overthinking" things --- you just don't seem to have a clear description (or even consideration?) of your sampling method or a clear description of what population quantities are of interest in your inference problem, and what descriptive aspects of the sample you wish to illustrate. Once you have a clear formulation of those things you will be in a better position to choose appropriate statistical methods for these tasks. | Calculating confidence in part of a sample (explain like I'm in the humanities please :) ) | I promised a serious answer (to supplement my gag answer) so here goes. I think at present you are mixing several issues, including considerations relating to descriptive statistics for a sample and | Calculating confidence in part of a sample (explain like I'm in the humanities please :) )
I promised a serious answer (to supplement my gag answer) so here goes. I think at present you are mixing several issues, including considerations relating to descriptive statistics for a sample and some (not clearly specified) population inferences. The main thing I would recommend at this stage is to take some time to consider and describe your sampling method, and in particular, determine whether it can reasonably be regarded as a "non-informative" type of random sampling. If you are using genuine random sampling then you will have available the entire suite of standard inference methods, including methods for constructing confidence intervals for population means, quantiles, etc.
With regard to your desire to describe aspects of your sample, this would typically be done using standard visualisations with accompanying descriptive statistics. The usual rule here is to remember that "a picture tells a thousand words", so you should primarily be thinking about how to illustrate as much useful information about your sample using appropriate plots. When describing the distribution of continuous quantities across multiple discrete groups, a violin plot is a useful tool. When dealing with heavily skewed variables like income/wealth, you might also consider showing these on a logarithmic scale.
At the moment your problem is not that you are "overthinking" things --- you just don't seem to have a clear description (or even consideration?) of your sampling method or a clear description of what population quantities are of interest in your inference problem, and what descriptive aspects of the sample you wish to illustrate. Once you have a clear formulation of those things you will be in a better position to choose appropriate statistical methods for these tasks. | Calculating confidence in part of a sample (explain like I'm in the humanities please :) )
I promised a serious answer (to supplement my gag answer) so here goes. I think at present you are mixing several issues, including considerations relating to descriptive statistics for a sample and |
38,074 | Calculating confidence in part of a sample (explain like I'm in the humanities please :) ) | Reading some of your follow-up comments, it sounds like you want to provide descriptive statistics with confidence intervals.
The uncertainty around the total population size generally isn't a problem. Typically, you'd just assume that the total population is infinite. This is a conservative approach. Then you follow the standard approach for computing confidence intervals.
A more precise approach takes the size of the population into account. The idea here is that if you have a population of 100 and a sample of 99, you actually almost know the mean of the population. There's a formula to shrink your variance estimate by this ratio. It's called the finite population correction.
In your case, there's some uncertainty about the total population, but if your estimate of the total population is unbiased, then your estimate of this ratio is unbiased and you're okay. You can see this with a little math. See my answer here to a similar question.
So you have two options, 1) just assume your sample of 1,000 CEO's is from an infinite population of CEO's or 2) apply a finite population correction using your estimate for the population size.
Unless you've sampled a large ratio of the total population, 1) and 2) will be about the same. So if you just want to keep it simple, you should go with 1). In which case, yes, you're probably over thinking things. :) | Calculating confidence in part of a sample (explain like I'm in the humanities please :) ) | Reading some of your follow-up comments, it sounds like you want to provide descriptive statistics with confidence intervals.
The uncertainty around the total population size generally isn't a problem | Calculating confidence in part of a sample (explain like I'm in the humanities please :) )
Reading some of your follow-up comments, it sounds like you want to provide descriptive statistics with confidence intervals.
The uncertainty around the total population size generally isn't a problem. Typically, you'd just assume that the total population is infinite. This is a conservative approach. Then you follow the standard approach for computing confidence intervals.
A more precise approach takes the size of the population into account. The idea here is that if you have a population of 100 and a sample of 99, you actually almost know the mean of the population. There's a formula to shrink your variance estimate by this ratio. It's called the finite population correction.
In your case, there's some uncertainty about the total population, but if your estimate of the total population is unbiased, then your estimate of this ratio is unbiased and you're okay. You can see this with a little math. See my answer here to a similar question.
So you have two options, 1) just assume your sample of 1,000 CEO's is from an infinite population of CEO's or 2) apply a finite population correction using your estimate for the population size.
Unless you've sampled a large ratio of the total population, 1) and 2) will be about the same. So if you just want to keep it simple, you should go with 1). In which case, yes, you're probably over thinking things. :) | Calculating confidence in part of a sample (explain like I'm in the humanities please :) )
Reading some of your follow-up comments, it sounds like you want to provide descriptive statistics with confidence intervals.
The uncertainty around the total population size generally isn't a problem |
38,075 | Is Bessel's correction required when calculating mean? | NOPE
When you calculate the variance as $s^2 = \sum_{i=1}^n\Bigg[\dfrac{(x_i-\bar{x})^2}{n-1}\Bigg]$, there is another quantity in there that you have already estimated from the same data: $\bar{x}$.
When you calculate the usual mean, $\bar{x}=\dfrac{\sum_{i=1}^n x_i}{n}$, there is no such other term to calculate, so you do not drop that so-called "degree of freedom". | Is Bessel's correction required when calculating mean? | NOPE
When you calculate the variance as $s^2 = \sum_{i=1}^n\Bigg[\dfrac{(x_i-\bar{x})^2}{n-1}\Bigg]$, there is another term in there that you are calculating, the $\bar{x}$.
When you calculate the usu | Is Bessel's correction required when calculating mean?
NOPE
When you calculate the variance as $s^2 = \sum_{i=1}^n\Bigg[\dfrac{(x_i-\bar{x})^2}{n-1}\Bigg]$, there is another quantity in there that you have already estimated from the same data: $\bar{x}$.
When you calculate the usual mean, $\bar{x}=\dfrac{\sum_{i=1}^n x_i}{n}$, there is no such other term to calculate, so you do not drop that so-called "degree of freedom". | Is Bessel's correction required when calculating mean?
NOPE
When you calculate the variance as $s^2 = \sum_{i=1}^n\Bigg[\dfrac{(x_i-\bar{x})^2}{n-1}\Bigg]$, there is another term in there that you are calculating, the $\bar{x}$.
When you calculate the usu |
38,076 | Is Bessel's correction required when calculating mean? | No, Bessel's correction is not required for the sample mean
The reason Bessel's correction is required for the sample variance is that estimation of the variance also requires us to use the sample mean to estimate the true mean (around which this variance is formed). This is evident when we compare the differences in the formulae for the sample variance versus the true variance (see the little arrows in the following formulae for the relevant difference here):
$$s^2 = \frac{1}{n-1} \sum_{i=1}^n (X_i - \underset{\uparrow}{\bar{X}_n})^2
\quad \quad \quad
\mathbb{V}(X) = \mathbb{E}[(X-\underset{\uparrow}{\mathbb{E}(X)})^2].$$
Observe that in the sample variance estimator, the true mean around which the variance is formed is estimated with the sample mean $\bar{X}_n$. The sample mean will tend to be closer to the middle of the data values than the true mean, so the squared deviations of the data values from the sample mean will tend to be smaller than the squared deviations from the true mean ---i.e., we have:
$$\sum_{i=1}^n (X_i - \bar{X}_n)^2 \leqslant \sum_{i=1}^n (X_i - \mathbb{E}(X))^2.$$
(In most actual data sets, this inequality is strict, but it is possible that the sample mean is equal to the true mean, in which case they are equal.) This means that taking a straight average will tend to underestimate the true variance. Bessel's correction accounts for this, and gives a sample variance that is an unbiased estimator of the true variance. | Is Bessel's correction required when calculating mean? | No, Bessel's correction is not required for the sample mean
38,077 | Is Bessel's correction required when calculating mean? | I'll approach this from the Bessel's correction for variance.
In its third proof, Wikipedia shows that the expected discrepancy between the true variance $\sigma^2$ and the biased estimate $s_n^2$ is given by:
$$
E\left[\sigma^2 - s_n^2\right]
= E\left[\frac{1}{n}\sum_i^n(x_i-\mu)^2-\frac{1}{n}\sum_i^n(x_i-\bar x)^2\right]\\
= E\left[\frac{1}{n}\sum_i^n\left[(x_i-\mu)^2-(x_i-\bar x)^2\right]\right]\\
= E\left[\frac{1}{n}\sum_i^n\left(\color{red}{x_i^2}+\mu^2-2x_i\mu-\color{red}{x_i^2}-\bar x^2 + 2 x_i \bar x\right)\right]\\
= E\left[ \mu^2-\bar x^2 + 2\frac{(\bar x - \mu)}{n}\sum_i^n x_i \right]\\
= E\left[ \mu^2-\bar x^2 + 2(\bar x - \mu)\bar x \right]\\
= E\left[ \mu^2-\bar x^2 + 2\bar x^2 - 2\mu\bar x \right]\\
= E\left[ \mu^2- 2\mu\bar x +\bar x^2 \right]\\
= E\left[ (\mu- \bar x)^2 \right]\\
= \operatorname{Var}(\bar x)
=\frac{\sigma^2}{n}
$$
We then isolate $E\left[s_n^2\right]$ to derive Bessel's correction, which should undo that bias.
$$E\left[s_n^2\right] = E\left[\sigma^2\right] - \frac{\sigma^2}{n} = \sigma^2 - \frac{\sigma^2}{n} = \sigma^2\color{green}{\left(\frac{n-1}{n}\right)}$$
Let's do the same to the mean now:
$$
E\left[\mu - \bar x\right]
= \mu - E\left[\bar x\right] = \mu - \frac{1}{n}\sum_i^n E\left[x_i\right] = \mu - \frac{1}{n}\sum_i^n \mu = \mu - \mu = 0\\
$$
Because $\bar x$ is an unbiased estimator of $\mu$, there is no expected discrepancy, and thus no correction is needed.
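Both expectations can be checked numerically (a simulation sketch I've added, with arbitrary parameter values; it assumes NumPy): the sample mean shows no bias, while the uncorrected variance is off by the factor $(n-1)/n$.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 10.0, 3.0
n, reps = 5, 200_000

x = rng.normal(mu, sigma, size=(reps, n))
mean_of_xbar = x.mean(axis=1).mean()          # E[x_bar] should equal mu: no correction
mean_of_sn2  = x.var(axis=1, ddof=0).mean()   # E[s_n^2] = sigma^2 * (n-1)/n

print(mean_of_xbar, mean_of_sn2)  # ~ 10.0 and ~ 7.2
```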
38,078 | Is Bessel's correction required when calculating mean? | No, don't divide by $n - 1$ when estimating the mean.
Suppose I want to estimate how tall the average person is. I randomly select 4 people, and all of them are 6 feet tall. Which is a better estimate of the height of the average person:
$$\frac{6 \text{ feet} + 6 \text{ feet} + 6 \text{ feet} + 6 \text{ feet}}{4} = 6 \text{ feet},$$
or
$$\frac{6 \text{ feet} + 6 \text{ feet} + 6 \text{ feet} + 6 \text{ feet}}{(4 - 1)} = 8 \text{ feet}?$$
Or, if you prefer height in centimeters, suppose all of the people are 180 cm tall. Which is a better estimate of the height of the average person:
$$\frac{180 \text{ cm} + 180 \text{ cm} + 180 \text{ cm} + 180 \text{ cm}}{4} = 180 \text{ cm},$$
or
$$\frac{180 \text{ cm} + 180 \text{ cm} + 180 \text{ cm} + 180 \text{ cm}}{(4 - 1)} = 240 \text{ cm}?$$
38,079 | Dependent and not identically distributed random variables | Next to the "formal" example by Xi'an, a "real-world" example might be height and weight. Because the two are measured on different scales, they are distributed differently, but they are certainly dependent, as taller people tend to be heavier.
Next to the "formal" example by Xi'an, a "real-world" example might be height and weight. Already because the two are measured on different scales will they be distributed differently, but they sure are dependent, as taller people tend to be heavier. | Dependent and not identically distributed random variables
Next to the "formal" example by Xi'an, a "real-world" example might be height and weight. Already because the two are measured on different scales will they be distributed differently, but they sure a |
38,080 | Dependent and not identically distributed random variables | If you randomly draw a card from a deck of playing cards, do not put it back, and draw again. Then, the probability distributions for which card will be drawn in each of the two draws are dependent and not identical.
Otherwise, if the card from the first draw is put back and the deck well shuffled before the second draw, then the distributions of the two draws are independent and identical.
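The dependence in the without-replacement case can be made exact by enumeration (a small sketch I've added, not from the original answer):

```python
from fractions import Fraction

# Without replacement from a standard 52-card deck with 4 aces,
# the second draw's distribution shifts with what the first card was.
p_second_ace_given_first_ace   = Fraction(3, 51)  # one ace already gone
p_second_ace_given_first_other = Fraction(4, 51)  # all four aces remain

print(p_second_ace_given_first_ace, p_second_ace_given_first_other)  # 1/17 vs 4/51
```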
38,081 | Dependent and not identically distributed random variables | Autocorrelated processes
A variable in a series that 'remembers' its previous values to some degree is not i.i.d.! Any autoregressive value depends on previous values of the variable, and the distribution changes depending on location within the series.
For example, the time series variable $y$, where $t$ indicates period of time, $y_t = \beta_0 + \beta_1 y_{t-1} + \varepsilon_t,$ and $\varepsilon \sim \mathcal{N}(0,\sigma)$ is not i.i.d. for non-zero values of $\beta_1$ (especially for $|\beta_1|\ge 1$), because the variance of $y$ is a function of $t$ (the more time passes, the more variable $y$ is). In a similar way, the expected value of $y$ at some point in the future is also a function of $t$.
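A quick simulation (my own sketch with hypothetical parameter values, assuming NumPy) shows the variance of $y_t$ growing with $t$ in the random-walk case $\beta_1 = 1$, where $\operatorname{Var}(y_t) = t\sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
beta0, sigma = 0.0, 1.0
T, paths = 100, 20_000

# With beta1 = 1 the recursion y_t = beta0 + y_{t-1} + eps_t (y_0 = 0)
# reduces to a cumulative sum of the shocks.
eps = rng.normal(0.0, sigma, size=(paths, T))
y = np.cumsum(beta0 + eps, axis=1)

var_by_t = y.var(axis=0)            # cross-sectional variance at each t
print(var_by_t[9], var_by_t[99])    # roughly 10 and 100: variance grows with t
```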
Real World Examples
Ok, so is that just some statistical abstraction? Or are there actual real-world examples of autocorrelated processes? In fact, they abound! Here are some:
Annual marriage rates by state, province or country
Annual mortality rates by state, province or country
Daily closing value of the NASDAQ Composite, Dow Jones Industrial Average, or S&P 500 Index (all market indexes) in the US
What these (and other) autoregressive series have in common is that their value at one point in time 'remembers' (i.e. is a function of) their previous value or values.
38,082 | Dependent and not identically distributed random variables | If $\varepsilon_1,\varepsilon_2$ are iid $\mathcal N(0,1)$,
$$X_1=\mu_1+\sigma_1\varepsilon_1\qquad X_2=\mu_2+\varrho \varepsilon_1 + \sigma_2 \varepsilon_2$$
is a pair of dependent RVs that are not identically distributed for most values of the parameters.
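For concreteness, both properties can be checked by simulation (a sketch I've added with arbitrary parameter values, assuming NumPy): the covariance is $\sigma_1\varrho$ and the variances differ whenever $\varrho^2+\sigma_2^2 \ne \sigma_1^2$.

```python
import numpy as np

rng = np.random.default_rng(3)
mu1, mu2, s1, s2, rho = 0.0, 1.0, 1.0, 2.0, 0.8   # hypothetical parameter values
e1 = rng.normal(size=500_000)
e2 = rng.normal(size=500_000)

x1 = mu1 + s1 * e1
x2 = mu2 + rho * e1 + s2 * e2

print(np.cov(x1, x2)[0, 1])   # ~ s1 * rho = 0.8: dependent
print(x1.var(), x2.var())     # ~ 1.0 vs rho**2 + s2**2 = 4.64: not identical
```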
38,083 | Dependent and not identically distributed random variables | Some other "real-world" examples:
Let $(M, F)$ be a pair of measurements on an opposite sex married couple, sampled randomly:
If the measurement is height, the two will have different means.
If the measurement is IQ, they will have the same mean but different variances.
(But maybe for this example, independence is in doubt ...) Paired data in general can be used to make many similar examples, and one could perhaps save the independence assumption by conditioning on some common latent variables.
38,084 | PCA finds a variable to be the most important twice | Your interpretation of PCA components is not correct.
PCA does not tell you which variables account for the most variation in the data, so a statement like
Calcium is the most explanatory of the variance, as well as also being the third most explanatory variable for the variance.
cannot be drawn from a PC analysis.
What it does say is that the direction determined by the vector
$$\begin{array}{cccc}&PC_1\\Calcium&0.6729\\Iron&0.5331\\Uranium&0.1123\end{array}$$
accounts for the most variation in the data. This direction is a combination of the directions determined by the individual variables. This mixing of directions is fundamental to PCA, and it cannot be undone or ignored.
The further principal components are interpreted iteratively: each accounts for the most variation in the data in a direction orthogonal to all previous PC directions.
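The point that PC1 is a mixed direction, not a single variable, can be seen in a small sketch (my own synthetic stand-in for the questioner's Calcium/Iron/Uranium data, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(4)
# Three correlated synthetic columns standing in for Calcium, Iron, Uranium.
mix = np.array([[2.0, 1.0, 0.2],
                [0.0, 1.0, 0.1],
                [0.0, 0.0, 0.5]])
X = rng.normal(size=(500, 3)) @ mix
Xc = X - X.mean(axis=0)

evals, evecs = np.linalg.eigh(np.cov(Xc.T))
pc1 = evecs[:, np.argmax(evals)]   # unit vector in the direction of greatest variance

print(pc1)  # its entries mix the variables; no single variable "owns" PC1
```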
38,085 | PCA finds a variable to be the most important twice | You aren't interpreting PCA correctly. PCA finds a whole new basis for your data. It's analogous to a change of basis: https://www.math.hmc.edu/calculus/tutorials/changebasis/ but we choose a particular basis
The new basis is not arbitrary: the vectors are selected based on how much variation they account for. That is to say, PC1 "points in the direction of greatest variability"
Just because the primary component (vector projection) of PC1 and PC3 are in the direction of calcium, we can not say that calcium is the most "important" (whatever that may mean!).
Geeking out about linear algebra:
By the laws of linear algebra, all principal components are orthogonal to each other, and the amount of explained variance for any given eigenvalue $E_p$ is $E_p/\sum_i E_i$, where $\sum_i E_i$ is the sum of all eigenvalues.
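That ratio can be computed directly from the eigenvalues (a sketch I've added with hypothetical values, assuming NumPy):

```python
import numpy as np

# Hypothetical eigenvalues of a covariance matrix, sorted largest first.
evals = np.array([4.0, 1.5, 0.5])

explained = evals / evals.sum()   # E_p / sum(E_i) for each principal component
print(explained)                   # fractions of total variance, summing to 1
```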
Lastly, here's a good discussion of PCA: Making sense of principal component analysis, eigenvectors & eigenvalues
38,086 | PCA finds a variable to be the most important twice | Correlation is not the same as linear combination with largest variance, which is what PCA finds.
Also, the eigenvectors have no particular direction. You can multiply them by $-1$ and those vectors will also be eigenvectors with the same eigenvalue (variance), and then you would get a positive $+0.677\cdots$ for the third component.
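The sign indeterminacy is easy to verify (a small sketch I've added, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])        # any symmetric (covariance-like) matrix works here
evals, evecs = np.linalg.eigh(A)
v, lam = evecs[:, 1], evals[1]    # leading eigenpair

# Both v and -v satisfy the eigenvector equation A v = lambda v.
print(np.allclose(A @ v, lam * v), np.allclose(A @ (-v), lam * (-v)))
```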
If you want correlation, maybe you could check out Canonical Correlation Analysis (CCA) instead.
38,087 | How is the $\chi^2_1$-distribution not a Gaussian? | As Silverfish said, the problem in your reasoning is that to find the PDF of a squared random variable, or any other transformed random variable, you can't just perform that transformation on the PDF.
If we want to know the actual PDF of a squared random variable, we must derive it properly rather than transforming the original PDF. One way to do this is to use the CDF method below,
$P(\chi ^2_1 \leq x) = P(Z^2 \leq x) = P(-\sqrt{x} \leq Z \leq \sqrt{x}) = P(Z \leq \sqrt{x}) - P(Z \leq -\sqrt{x})$
Since the derivative of the CDF is the PDF, we take the derivative of both sides with respect to $x$ and get,
$f_{\chi ^2_1}(x) = \frac{1}{2\sqrt{x}} f_Z(\sqrt{x}) + \frac{1}{2\sqrt{x}}f_Z(-\sqrt{x}) = \frac{1}{2\sqrt{x}} \frac{1}{\sqrt{2\pi}}e^{-x/2} + \frac{1}{2\sqrt{x}} \frac{1}{\sqrt{2\pi}}e^{-x/2} = \frac{1}{\sqrt{x}}\frac{1}{\sqrt{2\pi}}e^{-x/2}$
which is the PDF we expect for a Chi-square distribution with one degree of freedom.
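The derived density can be checked against squared draws of a standard normal (a sketch I've added, assuming NumPy): the empirical probability $P(Z^2 \le 1)$ should match the integral of the derived PDF over $(0, 1]$, both equal to $P(-1 \le Z \le 1) \approx 0.683$.

```python
import numpy as np

rng = np.random.default_rng(5)
z2 = rng.normal(size=1_000_000) ** 2   # squared standard normal draws

empirical = (z2 <= 1.0).mean()

x = np.linspace(1e-6, 1.0, 100_001)
pdf = np.exp(-x / 2) / np.sqrt(2 * np.pi * x)           # the derived chi^2_1 PDF
analytic = ((pdf[1:] + pdf[:-1]) / 2 * np.diff(x)).sum()  # trapezoid rule

print(empirical, analytic)  # both ~ 0.683
```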
38,088 | How is the $\chi^2_1$-distribution not a Gaussian? | A normally distributed random variable can take values ranging from $-\infty$ to $\infty$. Squaring any real value leads to a non-negative value. So how could $Z^2$ possibly be normally distributed?
Squaring the probability density function does not make any sense, since this would give you "probabilities squared" (more precisely: a squared density) rather than a squared random variable. Applying any function $g$ to a random variable $X$ means applying it to $X$'s values, rather than to its probability density function or cumulative distribution function.
38,089 | How is the $\chi^2_1$-distribution not a Gaussian? | Actually, the $\chi^2_n$ distribution "looks like" a normal for large $n$. The asymptotic relation for $x\sim\chi^2_n$ is $(x-n)/\sqrt{2n}\to\mathcal{N}(0,1)$ as $n\to\infty$.
How come? As you noted, a variable from this distribution can be thought of as a sum of squared normals: $Z_1^2+Z_2^2+\dots+Z_n^2$. However, the squared normal random variables are themselves random variables, albeit not normal anymore. We also know from the CLT that a (standardized) sum of i.i.d. random variables converges to a normal variable.
Also, you proposed to square the normal PDF to get the PDF of the square of a normal variable. That doesn't work. For instance, if you take the integral of your new proposed PDF, you get $\frac{1}{2\sqrt \pi}$ instead of 1. So, the squared PDF of the normal distribution is not even a PDF; it doesn't normalize to 1.
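The $\frac{1}{2\sqrt\pi}$ figure is easy to confirm numerically (a sketch I've added, assuming NumPy):

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 200_001)
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)   # standard normal PDF
squared = pdf**2

# Trapezoid-rule integral of the squared PDF over a wide grid.
integral = ((squared[1:] + squared[:-1]) / 2 * np.diff(x)).sum()
print(integral, 1 / (2 * np.sqrt(np.pi)))  # ~ 0.2821 on both sides, not 1
```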
38,090 | Negative BIC in k-means | Generally, the aim is to minimize BIC, so if you are in negative territory, the negative number with the largest modulus (deepest down in negative territory) indicates the preferred model. Hence, in your plot the best case would appear to be "2".
However, the definition of BIC used in the mclust package happens to be the negative of the standard BIC, as the answer by @simone indicates. Therefore, in this package you are looking for the solution with the maximum BIC. In your example, this would be around 25 and above, but below 50.
38,091 | Negative BIC in k-means | This may prove useful for someone else.
I was puzzled by the mclust package because I tried Gaussian mixture models to check whether my data followed a uni- or multi-modal Gaussian distribution.
I found that, according to the examples provided in the help, the model that best fitted my data was one with two components (it would suggest the data follows a bimodal Gaussian distribution). However, to my surprise, I found that the model with two components had the highest BIC value (the range of values was on the negative side).
This is because the BIC value calculated in this package is:
2 * loglik - nparams * log(n)
instead of the classical:
-2 * loglik + nparams * log(n)
This is explained here.
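The two sign conventions can be written side by side; the thread itself concerns the R package mclust, but here is a language-neutral Python sketch of the formulas:

```python
import math

def bic_classical(loglik: float, nparams: int, n: int) -> float:
    """Classical definition: -2*loglik + nparams*log(n); smaller is better."""
    return -2.0 * loglik + nparams * math.log(n)

def bic_mclust_style(loglik: float, nparams: int, n: int) -> float:
    """mclust's convention: 2*loglik - nparams*log(n); larger is better."""
    return 2.0 * loglik - nparams * math.log(n)

# Same fitted model, same log-likelihood: the two conventions are exact negatives.
print(bic_classical(-100.0, 5, 200), bic_mclust_style(-100.0, 5, 200))
```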
38,092 | How to use boxplots to find the point where values are more likely to come from different conditions? | @NickCox has presented a good way to visualize your data. I take it you want to find a rule for deciding when to classify a value as condition1 vs condition2.
In an earlier version of your question, you wondered if you should call any value greater than the median of condition1 as a member of condition2. This is not a good rule to use. Note that by definition, $50\%$ of a distribution is above the median. Therefore, you will necessarily misclassify $50\%$ of true condition1 members. Based on your data, I gather you will also misclassify $18\%$ of your true condition2 members.
A way to think through the value of a rule like yours is to form a confusion matrix. In R, you can use ?confusionMatrix in the caret package. Here is an example using your data and your suggested rule:
library(caret)
# Cond.1 and Cond.2 are the two numeric vectors from the question
dat = stack(list(cond1=Cond.1, cond2=Cond.2))
pred = ifelse(dat$values>median(Cond.1), "cond2", "cond1")
# wrap in factor(): newer caret versions require factor inputs
confusionMatrix(factor(pred, levels=levels(dat$ind)), dat$ind)
# Confusion Matrix and Statistics
#
# Reference
# Prediction cond1 cond2
# cond1 20 7
# cond2 19 32
#
# Accuracy : 0.6667
# ...
#
# Sensitivity : 0.5128
# Specificity : 0.8205
# Pos Pred Value : 0.7407
# Neg Pred Value : 0.6275
# Prevalence : 0.5000
# Detection Rate : 0.2564
# Detection Prevalence : 0.3462
# Balanced Accuracy : 0.6667
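As a sanity check, every statistic in that output can be recomputed by hand from the four cell counts; a Python sketch (counts copied from the matrix above, with cond1 as the positive class):

```python
# counts from the caret output above (rows = prediction, cols = reference)
tp, fp = 20, 7    # predicted cond1: truly cond1 / truly cond2
fn, tn = 19, 32   # predicted cond2: truly cond1 / truly cond2
n = tp + fp + fn + tn

accuracy    = (tp + tn) / n                    # 0.6667
sensitivity = tp / (tp + fn)                   # 0.5128
specificity = tn / (tn + fp)                   # 0.8205
ppv         = tp / (tp + fp)                   # 0.7407
balanced    = (sensitivity + specificity) / 2  # 0.6667
print(accuracy, sensitivity, specificity, ppv, balanced)
```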
I bet we can do better.
A natural approach is to use a CART (decision tree) model, which (when there is only one variable) simply finds the optimal split. In R, you can do that with ?ctree from the party package.
library(party)
cart.model = ctree(ind~values, dat)
windows()  # opens a new plot window; Windows-only, use dev.new() on other platforms
plot(cart.model)
You can see that the model will call a value "condition1" if it is $\le5.7$, and "condition2" otherwise (note that the median of condition1 is $3.9$). Here is the confusion matrix:
confusionMatrix(predict(cart.model), dat$ind)
# Confusion Matrix and Statistics
#
# Reference
# Prediction cond1 cond2
# cond1 39 15
# cond2 0 24
#
# Accuracy : 0.8077
# ...
#
# Sensitivity : 1.0000
# Specificity : 0.6154
# Pos Pred Value : 0.7222
# Neg Pred Value : 1.0000
# Prevalence : 0.5000
# Detection Rate : 0.5000
# Detection Prevalence : 0.6923
# Balanced Accuracy : 0.8077
This rule yields an accuracy of $0.8077$, instead of $0.6667$. From the plot and the confusion matrix, you can see that true condition1 members are never misclassified as condition2. This falls out of optimizing the accuracy of the rule and the assumption that both types of misclassification are equally bad; you can tweak the model fitting process if that isn't true.
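With a single predictor, the split a tree finds amounts to an exhaustive search over candidate thresholds for the one maximizing accuracy. A minimal Python sketch of that idea, run on made-up, cleanly separable toy data (not the data from the question):

```python
import numpy as np

def best_split(values, labels):
    """Exhaustive threshold search -- what a one-variable decision stump does."""
    best_t, best_acc = None, -1.0
    for t in np.unique(values):
        pred = (values > t).astype(int)   # call everything above t "class 1"
        acc = np.mean(pred == labels)
        acc = max(acc, 1 - acc)           # allow either labelling of the two sides
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t, best_acc

values = np.array([1, 2, 3, 10, 11, 12])
labels = np.array([0, 0, 0, 1, 1, 1])
best_t, best_acc = best_split(values, labels)
print(best_t, best_acc)   # 3, 1.0
```

Real CART implementations optimize an impurity measure and add significance testing (as ctree does), but the one-variable picture is the same: pick the cut point that best separates the classes.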
On the other hand, I would be remiss if I didn't point out that a classifier necessarily throws away a lot of information and is typically suboptimal (unless you really need classifications). You may want to model the data so that you can get the probability a value will be a member of condition2. Logistic regression is the natural choice here. Note that because your condition2 is much more spread out than condition1, I added a squared term to allow for a curvilinear fit:
lr.model = glm(ind~values+I(values^2), dat, family="binomial")
lr.preds = predict(lr.model, type="response")
ord = order(dat$values)
dat = dat[ord,]
lr.preds = lr.preds[ord]
windows()  # opens a new plot window; Windows-only, use dev.new() on other platforms
with(dat, plot(values, ifelse(ind=="cond2",1,0),
ylab="predicted probability of condition2"))
lines(dat$values, lr.preds)
This is clearly giving you more, and better, information. It is not recommended that you throw away the extra information in your predicted probabilities and dichotomize them into classifications, but for the sake of comparison with the rules above, I can show you the confusion matrix that comes from doing so with your logistic regression model:
lr.class = ifelse(lr.preds<.5, "cond1", "cond2")
# wrap in factor(): newer caret versions require factor inputs
confusionMatrix(factor(lr.class, levels=levels(dat$ind)), dat$ind)
# Confusion Matrix and Statistics
#
# Reference
# Prediction cond1 cond2
# cond1 36 8
# cond2 3 31
#
# Accuracy : 0.859
# ...
#
# Sensitivity : 0.9231
# Specificity : 0.7949
# Pos Pred Value : 0.8182
# Neg Pred Value : 0.9118
# Prevalence : 0.5000
# Detection Rate : 0.4615
# Detection Prevalence : 0.5641
# Balanced Accuracy : 0.8590
The accuracy is now $0.859$, instead of $0.8077$.
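The role of the squared term can also be illustrated outside R. A Python toy (a hand-rolled gradient-ascent logistic fit on synthetic data that only mimics the tight-versus-spread pattern of the two conditions; none of this is the question's data):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = np.concatenate([rng.normal(0, 0.5, n),   # "condition1": tightly clustered
                    rng.normal(0, 3.0, n)])  # "condition2": much more spread out
y = np.concatenate([np.zeros(n), np.ones(n)])

def fit_logistic(feats, y, steps=3000, lr=0.5):
    """Plain gradient ascent on the logistic log-likelihood, standardized features."""
    Z = (feats - feats.mean(0)) / feats.std(0)
    X = np.column_stack([np.ones(len(y)), Z])
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ b))
        b += lr * X.T @ (y - p) / len(y)
    return np.mean(((1 / (1 + np.exp(-X @ b))) > 0.5) == y)  # training accuracy

acc_lin  = fit_logistic(x[:, None], y)                  # value only
acc_quad = fit_logistic(np.column_stack([x, x**2]), y)  # value + squared term
print(acc_lin, acc_quad)
```

A model linear in the value cannot separate these classes (both are centered at the same place), while the squared term lets the fitted probability rise on both tails, which is exactly why a curvilinear term helps when one condition is more spread out.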
38,093 | How to use boxplots to find the point where values are more likely to come from different conditions? | Here is one of many possibilities. Back in 1979, Emanuel Parzen suggested hybridising the quantile plot and the box plot. Some references are given below. Clearly, the box of the box plot shows median and quartiles, which are just key quantiles. Showing all of the data, namely all the quantiles or order statistics, is entirely possible, at least with a small number of groups (as in this thread) and a small or moderate number of observations (as in this thread too). In fact the design extends quite well to larger sample sizes. Outliers, granularity, ties, grouping and gaps (whichever way you want to think about such features) are always evident as well as general level, spread and shape. The graph is not subject to artefacts or side-effects of arbitrary rules of thumb such as what is or is not within 1.5 IQR of the nearer quartile. Conversely, it may offer too much detail for some tastes, but faced with a less than ideal graph one just moves on.
It is reasonable to point out that quantile plots are just cumulative distribution plots with axes reversed, although they are more often shown as point patterns than as connected lines.
Cox (2012) reported one Stata implementation and his stripplot (Stata users can download from SSC) offers another. Implementation should be trivial in any major statistical or mathematical software.
I think this kind of display offers much more detail than a conventional box plot, which here does not fully exploit the space available. A conventional box plot can be helpful for 10-100 groups or variables, where some severe reduction of the data may be needed, but it throws out possibly interesting fine structure for the common few-group or few-variable case.
Another key virtue of this graph is that it echoes the elementary but fundamental fact that just as half the values are inside the box, so also half the values are outside the box (and often the most interesting or more important half). I've seen even experienced statistical people misled by the stark contrast between fat box and thin whiskers. The classic illustration of this is any U-shaped distribution or any distribution with two big clumps of approximately equal size. The box will then be long and fat and the whiskers short and thin. People often miss the fact that such whiskers are hiding the highest densities. Tukey (1977) gave an example of this with Rayleigh's data.
In this case and in many others logarithmic scale is used. In principle, the quantile-box plot is easily compatible with any monotonic transformation, as the transform of the quantiles is identical to the quantiles of the transformed values. (There is some small print qualifying that, arising because median and quartiles may be produced by averaging of adjacent order statistics, which doesn't usually bite.)
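That compatibility is easy to check numerically; a Python sketch using the log transform (with 101 points the median and quartiles fall exactly on order statistics, so the identity is exact here; between order statistics the interpolation caveat in the small print applies):

```python
import numpy as np

# 101 sorted positive values: quartile/median indices 0.25*100, 0.5*100,
# 0.75*100 are whole numbers, so no averaging of order statistics occurs
x = np.sort(np.random.default_rng(0).lognormal(size=101))
for q in (0.25, 0.5, 0.75):
    # quantile of the transformed data equals the transform of the quantile
    assert np.isclose(np.quantile(np.log(x), q), np.log(np.quantile(x, q)))
print("monotone-transform identity holds at the quartiles and median")
```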
I don't offer herewith any kind of graphical substitute for a significance test. This is an exploratory device.
Cox, N.J. 2012. Axis practice, or what goes where on a graph. Stata Journal 12(3): 549-561. .pdf accessible here
Parzen, E. 1979a. Nonparametric statistical data modeling. Journal of the American Statistical Association 74: 105-121.
Parzen, E. 1979b. A density-quantile function perspective on robust estimation. In Launer, R.L. and G.N. Wilkinson (Eds) Robustness in Statistics. New York: Academic Press, 237-258.
Parzen, E. 1982. Data modeling using quantile and density-quantile functions. In Tiago de Oliveira, J. and Epstein, B. (Eds) Some Recent Advances in Statistics. London: Academic Press, 23-52.
Tukey, J.W. 1977. Exploratory Data Analysis.
Reading, MA: Addison-Wesley.
38,094 | Area Under Curve ROC penalizes somehow models with too many explanatory variables? | You mention in the comments that you are computing the AUC using a 75-25 train-test split, and you are puzzled why AUC is maximized when training your model on only 8 of your 30 regressors. From this you have gotten the impression that AUC is somehow penalizing complexity in your model.
In reality there is something penalizing complexity in your model, but it is not the AUC metric. It is the train-test split. Train-test splitting is what makes it possible to use pretty much any metric, even AUC, for model selection, even if they have no inherent penalty on model complexity.
As you probably know, we do not measure performance on the same data that we train our models on, because the training data error rate is generally an overly optimistic measure of performance in practice (see Section 7.4 of the ESL book). But this is not the most important reason to use train-test splits. The most important reason is to avoid overfitting with excessively complex models.
Given two models A and B such that B "contains A" (the parameter set of B contains that of A) the training error is mathematically guaranteed to favor model B, if you are fitting by optimizing some fit criterion and measuring error by that same criterion. That's because B can fit the data in all the ways that A can, plus additional ways that may produce lower error than A's best fit. This is why you were expecting to see lower error as you added more predictors to your model.
However, by splitting your data into two reasonably independent sets for training and testing, you guard yourself against this pitfall. When you fit the training data aggressively, with many predictors and parameters, it doesn't necessarily improve the test data fit. In fact, no matter what the model or fit criterion, we can generally expect that a model which has overfit the training data will not do well on an independent set of test data which it has never seen. As model complexity increases into overfitting territory, test set performance will generally worsen as the model picks up on increasingly spurious training data patterns, taking its predictions farther and farther away from the actual trends in the system it is trying to predict. See for example slide 4 of this presentation, and sections 7.10 and 7.12 of ESL.
If you still need convincing, a simple thought experiment may help. Imagine you have a dataset of 100 points with a simple linear trend plus gaussian noise, and you want to fit a polynomial model to this data. Now let's say you split the data into training and test sets of size 50 each and you fit a polynomial of degree 50 to the training data. This polynomial will interpolate the data and give zero training set error, but it will exhibit wild oscillatory behavior carrying it far, far away from the simple linear trendline. This will cause extremely large errors on the test set, much larger than you would get using a simple linear model. So the linear model will be favored by CV error. This will also happen if you compare the linear model against a more stable model like smoothing splines, although the effect will be less dramatic.
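That thought experiment is cheap to run. A Python sketch (numpy only; the sample size and polynomial degree are scaled down for speed and numerical sanity, but the effect is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 30)
y = 2 * x + rng.normal(0, 1, 30)               # simple linear trend + Gaussian noise

tr, te = slice(0, None, 2), slice(1, None, 2)  # alternate points: 15 train / 15 test
lin  = np.polyfit(x[tr], y[tr], 1)
wild = np.polyfit(x[tr], y[tr], 14)            # degree 14 interpolates all 15 train points

mse = lambda c: np.mean((np.polyval(c, x[te]) - y[te]) ** 2)
print(mse(lin), mse(wild))                     # the interpolating fit is far worse
```

The interpolating polynomial achieves zero training error yet oscillates wildly between the training points, so its test error dwarfs that of the simple linear fit, which is why the test data (or CV error) favors the linear model.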
In conclusion, by using train-test splitting techniques such as CV, and measuring performance on the test data, we get an implicit penalization of model complexity, no matter what metric we use, just because the model has to predict on data it hasn't seen. This is why train-test splitting is universally used in the modern approach to evaluating performance in regression and classification.
38,095 | Area Under Curve ROC penalizes somehow models with too many explanatory variables? | There is a good reason why the regression coefficients in logistic regression are estimated by maximizing the likelihood or penalized likelihood. This leads to certain optimality properties. The concordance probability ($c$-index; AUROC) is a useful supplemental measure for describing the final model's predictive discrimination, but it is not sensitive enough for the use you envisioned nor would it lead to an optimal model. This is quite aside from the overfitting issue, which affects both the $c$-index and the (unpenalized) likelihood.
The $c$-index can reach its maximum with a misleadingly small number of predictors, even though it does not penalize for model complexity, because the concordance probability does not reward extreme predictions that are "correct". $c$ uses only the rank order of predictions and not the absolute predicted values. $c$ is not sensitive enough to be used to compare two models.
Seeking a model that does not use the entire list of predictors is often not well motivated. Model selection brings instability and extreme difficulty with collinearities. If you want optimum prediction, using all candidate features and incorporating penalization will work best in most situations you are likely to encounter. The data seldom have sufficient information to allow one to make correct choices about which variables are "important" and which are worthless.
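The rank-only nature of $c$ is easy to demonstrate: any strictly increasing transform of the predictions leaves it unchanged, because only the ordering of positive-negative pairs matters. A small Python sketch with made-up predictions:

```python
def concordance(pred, y):
    """Probability that a random positive case outranks a random negative case."""
    pos = [p for p, t in zip(pred, y) if t == 1]
    neg = [p for p, t in zip(pred, y) if t == 0]
    pairs = [(1.0 if a > b else 0.5 if a == b else 0.0)
             for a in pos for b in neg]
    return sum(pairs) / len(pairs)

y = [0, 0, 1, 0, 1, 1, 0, 1]                     # toy outcomes
p = [0.1, 0.4, 0.35, 0.8, 0.92, 0.6, 0.2, 0.7]   # toy predicted probabilities

# cubing is strictly increasing on (0, 1), so the ranks -- and c -- are unchanged,
# even though the absolute predictions are now very different
assert concordance([v ** 3 for v in p], y) == concordance(p, y)
print(concordance(p, y))   # 0.75
```

This is exactly why $c$ cannot reward a model that gets the extreme probabilities right: it sees only the ordering.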
38,096 | Area Under Curve ROC penalizes somehow models with too many explanatory variables? | This should help clarify a few things, in as few words as possible:
AUC = measure of model's actual predictive performance
BIC = estimate of model's predictive performance
Performance Measures, like AUC, are something you would use to evaluate a model's predictions on data it has never seen before.
Information Criteria, like BIC, on the other hand, attempt to guess at how well a model would make predictions by using how well the model fit the training data AND the number of parameters used to make that fit as a penalty (using the number of parameters makes for better guesses).
Simply put, BIC (and other information criteria), approximate what performance measures, like AUC, give you directly. To be more precise:
Information criteria attempt to approximate out-of-sample deviance using only training data, and make better approximations when accounting for the number of parameters used.
Direct performance measures, like deviance or AUC, are used to assess how well a model makes predictions on validation/test data. The number of parameters is irrelevant to them because they're illustrating performance in the most straightforward way possible.
I thought the link between information criteria and performance measures was hard to understand at first, but it's actually quite simple. If you were to use deviance instead of AUC as a performance measure then BIC would basically tell you what deviance you could expect if you actually made predictions with your model, and then measured their deviance.
This begs the question, why use information criteria at all? Well, you shouldn't if you're just trying to build the most accurate model possible. Stick to AUC because models that have unnecessary predictors are likely to make worse predictions (so AUC doesn't penalize them per se, they just happen to have less predictive power).
AUC = measure of model's actual predictive performance
BIC = estimate of model's predictive performance
Performance Measures, li | Area Under Curve ROC penalizes somehow models with too many explanatory variables?
This should help clarify a few things, in as few words as possible:
AUC = measure of model's actual predictive performance
BIC = estimate of model's predictive performance
Performance Measures, like AUC, are something you would use to evaluate a model's predictions on data it has never seen before.
Information Criteria, like BIC, on the other hand, attempt to guess at how well a model would make predictions by using how well the model fit the training data AND the number of parameters used to make that fit as a penalty (using the number of parameters makes for better guesses).
Simply put, BIC (and other information criteria), approximate what performance measures, like AUC, give you directly. To be more precise:
Information criteria attempt to approximate out-of-sample deviance using only training data, and make better approximations when accounting for the number of parameters used.
Direct performance measures, like deviance or AUC, are used to asses how well a model makes predictions on validation/test data. The number of parameters is irrelevant to them because they're illustrating performance in the most straightforward way possible.
I thought the link between information criteria and performance measures was hard to understand at first, but it's actually quite simple. If you were to use deviance instead of AUC as a performance measure then BIC would basically tell you what deviance you could expect if you actually made predictions with your model, and then measured their deviance.
This raises the question: why use information criteria at all? Well, you shouldn't if you're just trying to build the most accurate model possible. Stick to AUC, because models that have unnecessary predictors are likely to make worse predictions (so AUC doesn't penalize them per se, they just happen to have less predictive power).
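The contrast above can be made concrete. Below is a minimal Python sketch of how information criteria are computed from the training fit plus a parameter-count penalty; the log-likelihood values and sample size are toy numbers of my own choosing, not from the answer.

```python
import math

# Information criteria estimate predictive performance from the training
# fit alone, penalizing the number of parameters (lower is better).

def aic(log_lik, n_params):
    """Akaike information criterion: 2k - 2 ln L."""
    return 2.0 * n_params - 2.0 * log_lik

def bic(log_lik, n_params, n_obs):
    """Bayesian information criterion: k ln n - 2 ln L."""
    return n_params * math.log(n_obs) - 2.0 * log_lik

# Two hypothetical fits on n = 100 observations: the larger model fits
# the training data slightly better, but pays for its 5 extra parameters.
small_model = bic(log_lik=-60.0, n_params=3, n_obs=100)
big_model = bic(log_lik=-59.0, n_params=8, n_obs=100)
print(small_model < big_model)  # True: the penalty outweighs the small gain
```

A direct performance measure like AUC, by contrast, never sees `n_params` at all; it is computed from predictions on held-out data.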
38,097 | Area Under Curve ROC penalizes somehow models with too many explanatory variables? | In logistic regression (I do it univariate for easier typing) you try to explain a binary outcome $y_i \in \{0,1\}$ by assuming that it is the outcome of a Bernoulli random variable with a success probability $p_i$ that depends on your explanatory variable $x_i$, i.e. $p_i=P(y_i=1 \mid x_i)=f(x_i)$, where $f$ is the logistic function: $f(x)=\frac{1}{1+e^{-(\beta_0+\beta_1 x)}}$. The parameters $\beta_i$ are estimated by maximum likelihood. This works as follows: for the $i$-th observation you observe the outcome $y_i$ and the success probability is $p_i=f(x_i)$; the probability to observe $y_i$ for a Bernoulli with success probability $p_i$ is $p_i^{y_i}(1-p_i)^{(1-y_i)}$. So, for all the observations in the sample, assuming independence between observations, the probability of observing $y_i, i=1,2, \dots n$ is $\prod_{i=1}^np_i^{y_i}(1-p_i)^{(1-y_i)}$. Using the above definition of $p_i=f(x_i)$ this becomes $\prod_{i=1}^nf(x_i)^{y_i}(1-f(x_i))^{(1-y_i)}$. As the $y_i$ and $x_i$ are observed values, we can see this as a function of the unknown parameters $\beta_i$, i.e. $\mathcal{L}(\beta_0, \beta_1)=\prod_{i=1}^n\left(\frac{1}{1+e^{-(\beta_0+\beta_1 x_i)}}\right)^{y_i}\left(1-\frac{1}{1+e^{-(\beta_0+\beta_1 x_i)}}\right)^{(1-y_i)}$. Maximum likelihood finds the values for $\beta_i$ that maximise $\mathcal{L}(\beta_0, \beta_1)$. Let us denote this maximum $(\hat{\beta}_0, \hat{\beta}_1)$; then the value of the likelihood in this maximum is $\mathcal{L}(\hat{\beta}_0, \hat{\beta}_1)$.
In a similar way, if you would have used two explanatory variables $x_1$ and $x_2$, then the likelihood function would have had three parameters $\mathcal{L}'(\beta_0, \beta_1, \beta_2)$ and the maximum would be $(\hat{\beta}'_0, \hat{\beta}'_1, \hat{\beta}'_2)$ and the value of the likelihood would be
$\mathcal{L}'(\hat{\beta}'_0, \hat{\beta}'_1, \hat{\beta}'_2)$. Obviously it would hold that $\mathcal{L}'(\hat{\beta}'_0, \hat{\beta}'_1, \hat{\beta}'_2) > \mathcal{L}(\hat{\beta}_0, \hat{\beta}_1)$; whether the increase in likelihood is significant has to be 'tested' with e.g. a likelihood ratio test. So likelihood ratio tests allow you to 'penalize' models with too many regressors.
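The likelihood above is easy to evaluate numerically. Here is a sketch of the corresponding Bernoulli log-likelihood (the quantity maximum likelihood maximises); the data and candidate coefficients are invented for illustration.

```python
import math

# Log of the likelihood L(beta0, beta1) defined above:
# sum_i [ y_i * log p_i + (1 - y_i) * log(1 - p_i) ],  with p_i = f(x_i).

def log_likelihood(beta0, beta1, xs, ys):
    ll = 0.0
    for x, y in zip(xs, ys):
        p = 1.0 / (1.0 + math.exp(-(beta0 + beta1 * x)))
        ll += y * math.log(p) + (1 - y) * math.log(1 - p)
    return ll

xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [0, 0, 1, 1, 1]
# A slope pointing the right way fits better than a flat (intercept-only)
# model; a likelihood ratio test compares exactly these two quantities.
print(log_likelihood(0.0, 2.0, xs, ys) > log_likelihood(0.0, 0.0, xs, ys))  # True
```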
This is not so for AUC! In fact, AUC does not even tell you whether your 'success probabilities' are well predicted! If you take all possible couples $(i,j)$ where $y_i=1$ and $y_j=0$, then AUC will be equal to the fraction of all these couples that have $p_i > p_j$. So AUC has to do with (1) how good your model is at distinguishing between '0' and '1' (it tells you about couples with one 'zero' and one 'one'); it does not say anything about how good your model is at predicting the probabilities; and (2) it is only based on the 'ranking' ($p_i > p_j$) of the probabilities. If adding one explanatory variable does not change anything in the ranking of the probabilities of the subjects, then AUC will not change by adding that explanatory variable.
So the first question you have to ask is what you want to predict: do you want to distinguish between zeroes and ones, or do you want to have 'well predicted probabilities'? Only after you have answered this question can you look for the most parsimonious technique.
If you want to distinguish between zeroes and ones, then ROC/AUC may be an option; if you want well predicted probabilities, you should take a look at Goodness-of-fit test in Logistic regression; which 'fit' do we want to test?
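The pairwise definition of AUC used above, and the fact that it only sees the ranking of the probabilities, can be checked directly. This is an illustrative sketch with made-up labels and probabilities.

```python
# AUC computed exactly as described: among all (i, j) couples with
# y_i = 1 and y_j = 0, the fraction with p_i > p_j (ties count 1/2).

def auc_pairwise(y_true, probs):
    pos = [p for y, p in zip(y_true, probs) if y == 1]
    neg = [p for y, p in zip(y_true, probs) if y == 0]
    concordant = sum(
        1.0 if p > q else 0.5 if p == q else 0.0
        for p in pos for q in neg
    )
    return concordant / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]
p = [0.9, 0.8, 0.4, 0.5, 0.3, 0.1]
# Squashing the probabilities towards 0.5 ruins their calibration but
# preserves their ranking, so the AUC is unchanged:
squashed = [0.45 + q / 10 for q in p]
print(auc_pairwise(y, p) == auc_pairwise(y, squashed))  # True (both 8/9)
```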
38,098 | Area Under Curve ROC penalizes somehow models with too many explanatory variables? | As Marc said, AUC is only a measure of performance, just like misclassification rate. It does not require any information about the model.
Conversely, BIC and AIC need to know the number of parameters of your model to be evaluated.
There is no good reason, if all of your predictors are relevant, to expect the misclassification rate or the AUC to improve when removing variables.
However, it is quite common that combining a learning algorithm, an importance measure of the variables, and variable selection (based on the importance the algorithm grants them) will perform better than fitting the model using all the variables.
You have an implementation of this method for Random Forests in the R RFauc package.
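To underline the point that such performance measures use nothing but the predictions, here is a minimal sketch (toy labels) of the misclassification rate; no parameter count or model family appears anywhere in the formula.

```python
# Misclassification rate is computed from true labels and predicted
# labels alone; the model that produced the predictions is irrelevant.

def misclassification_rate(y_true, y_pred):
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

print(misclassification_rate([1, 0, 1, 1], [1, 1, 1, 0]))  # 0.5
```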
38,099 | Growing number of Gaussians in a mixture | If your goal is to find the maximum-likelihood mixture of size $n+1$, then you can use the existing solution as an initialization, once you have enlarged it to have one more Gaussian. To enlarge it, there are two approaches in the literature. The first approach is to add a new Gaussian in the best possible place, holding the existing ones fixed. This approach is described in Efficient Greedy Learning of Gaussian Mixture Models by Verbeek et al (2003). They give a fast heuristic for finding good parameters for the new Gaussian, and present experiments (but not theory) showing that it works well. In that paper, you can also find references to earlier work doing similar things.
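The shape of the warm start for this first (greedy) approach can be sketched as follows: keep the $n$ fitted components, rescale their weights, and append one new low-weight Gaussian. Where to place it and what weight to give it are exactly the heuristics of Verbeek et al. (2003); the location and weight below are placeholders of my own choosing, and components are `(weight, mean, std)` tuples for a 1-D mixture.

```python
# Build an (n+1)-component initialization from an n-component fit by
# shrinking the existing weights and appending a new component. This is
# only a starting point for a subsequent EM run, not a fitted model.

def add_component(mixture, new_mean, new_std, new_weight=0.1):
    scaled = [(w * (1.0 - new_weight), mu, s) for w, mu, s in mixture]
    return scaled + [(new_weight, new_mean, new_std)]

mix = [(0.6, 0.0, 1.0), (0.4, 4.0, 0.8)]
warm = add_component(mix, new_mean=2.0, new_std=1.0)
print(warm)  # three components whose weights still sum to one
```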
The second approach is to split an existing Gaussian in two. This approach is described in SMEM Algorithm for Mixture Models by Ueda et al (2000), and Learning of Latent Class Models by Splitting and Merging Components by Karciauskas et al (2004). These papers were focused on splitting Gaussians as a way to escape local optima of the EM algorithm, but splitting can also be used for your situation.
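A minimal 1-D sketch of this split move: replace component $k$ by two children that share its weight and sit one standard deviation either side of its mean. The $\pm\sigma$ offset is a common heuristic, not the exact criterion of the papers cited; components are `(weight, mean, std)` tuples, and the result is only an initialization for a subsequent EM run.

```python
# Split component k of a 1-D Gaussian mixture into two children,
# halving its weight and offsetting the means by one standard deviation.

def split_component(mixture, k):
    w, mu, sigma = mixture[k]
    children = [(w / 2.0, mu - sigma, sigma), (w / 2.0, mu + sigma, sigma)]
    return mixture[:k] + children + mixture[k + 1:]

mix = [(0.7, 0.0, 2.0), (0.3, 6.0, 0.5)]
grown = split_component(mix, 0)
print(grown)  # the first component is now two; total weight is unchanged
```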
38,100 | Growing number of Gaussians in a mixture | Generally it depends on the algorithm and software you use. In probably any software you can start your algorithm with some chosen starting values.
However, consider that you have already fitted some model that is somehow "optimal" for the data. If you ask your algorithm for one more Gaussian, then you already start in some local optimum, so it could get messy. It could be hard for the algorithm to find a "better" solution than the one that is already "the best of the possible choices".
If you think of it in a Bayesian way, you can consider it as if you were using some highly informative prior on your data, and in these kinds of cases it is sometimes hard for your data to overcome the information coming from the prior.
Unfortunately I do not recall any literature that deals specifically with this case. In his book, Sivia (2006, Data Analysis: A Bayesian Tutorial. Oxford University Press) argues that in the Bayesian case, using the posterior from a previous analysis as a prior in a subsequent one is a bad idea in most cases.