| idx (int64, 1–56k) | question (string, 15–155 chars) | answer (string, 2–29.2k chars, ⌀ = may be null) |
|---|---|---|
14,001
|
Ideas for "lab notebook" software?
|
I found Xmind useful. You can attach anything, and the tree structure is really useful for organizing. I especially like the feature where you can drill down into a node (topic). There are more similar software products that exploit the same concept.
|
14,002
|
Ideas for "lab notebook" software?
|
Personally I have found the Livescribe 'smartpen' a godsend.
It merges the trusty 'old-world charm' of a traditional pen-and-paper notebook with the ability to record sound (which it synchronises with your pen strokes), ready for later revision. NB: there is a downside, namely that you have to buy special paper that works with the pen... swings and roundabouts, really.
The audio/pen strokes can be uploaded to the web and then attached to many of the other programs already highlighted above.
Students I teach (biomechanics) absolutely love this and find later study of difficult concepts much easier than before (pre-Livescribe).
|
14,003
|
Ideas for "lab notebook" software?
|
You might want to check out the latest Zotero beta, which is now standalone and doesn't require Firefox.
|
14,004
|
Ideas for "lab notebook" software?
|
Check out this article, Beautifying Data in the Real World, from Nature Precedings for some ideas.
|
14,005
|
Ideas for "lab notebook" software?
|
How about a Boogie Board? You can write your notes on a slate and record them; it's the same idea as the LiveScribe pen, but you can save them as PDF files... It isn't out yet, but will be in less than a month.
|
14,006
|
How to plot an ellipse from eigenvalues and eigenvectors in R?
|
You could extract the eigenvectors and -values via eigen(A). However, it's simpler to use the Cholesky decomposition. Note that when plotting confidence ellipses for data, the ellipse-axes are usually scaled to have length = square-root of the corresponding eigenvalues, and this is what the Cholesky decomposition gives.
ctr <- c(0, 0) # data centroid -> colMeans(dataMatrix)
A <- matrix(c(2.2, 0.4, 0.4, 2.8), nrow=2) # covariance matrix -> cov(dataMatrix)
RR <- chol(A) # Cholesky decomposition
angles <- seq(0, 2*pi, length.out=200) # angles for ellipse
ell <- 1 * cbind(cos(angles), sin(angles)) %*% RR # ellipse scaled with factor 1
ellCtr <- sweep(ell, 2, ctr, "+") # center ellipse to the data centroid
plot(ellCtr, type="l", lwd=2, asp=1) # plot ellipse
points(ctr[1], ctr[2], pch=4, lwd=2) # plot data centroid
library(car) # verify with car's ellipse() function
ellipse(c(0, 0), shape=A, radius=0.98, col="red", lty=2)
Edit: in order to plot the eigenvectors as well, you have to use the more complicated approach. This is equivalent to suncoolsu's answer; it just uses matrix notation to shorten the code.
eigVal <- eigen(A)$values
eigVec <- eigen(A)$vectors
eigScl <- eigVec %*% diag(sqrt(eigVal)) # scale eigenvectors to length = square-root
xMat <- rbind(ctr[1] + eigScl[1, ], ctr[1] - eigScl[1, ])
yMat <- rbind(ctr[2] + eigScl[2, ], ctr[2] - eigScl[2, ])
ellBase <- cbind(sqrt(eigVal[1])*cos(angles), sqrt(eigVal[2])*sin(angles)) # normal ellipse
ellRot <- eigVec %*% t(ellBase) # rotated ellipse
plot((ellRot+ctr)[1, ], (ellRot+ctr)[2, ], asp=1, type="l", lwd=2)
matlines(xMat, yMat, lty=1, lwd=2, col="green")
points(ctr[1], ctr[2], pch=4, col="red", lwd=3)
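As a cross-check (a NumPy sketch, not part of the original answer): points generated by either the Cholesky construction or the eigendecomposition construction both satisfy the ellipse equation $x^\top A^{-1} x = 1$:

```python
import numpy as np

A = np.array([[2.2, 0.4], [0.4, 2.8]])  # covariance matrix, as in the R code
angles = np.linspace(0, 2 * np.pi, 200)
circle = np.column_stack([np.cos(angles), np.sin(angles)])  # unit circle

# Cholesky construction. R's chol() returns the upper factor U with A = U'U;
# NumPy returns the lower factor L with A = L L', so we multiply by L.T.
L = np.linalg.cholesky(A)
ell_chol = circle @ L.T

# Eigendecomposition construction: scale axes by sqrt(eigenvalues), then rotate.
eigval, eigvec = np.linalg.eigh(A)
ell_base = np.column_stack([np.sqrt(eigval[0]) * np.cos(angles),
                            np.sqrt(eigval[1]) * np.sin(angles)])
ell_eig = ell_base @ eigvec.T

# Both point sets lie on the ellipse x' A^{-1} x = 1 (Mahalanobis distance 1).
Ainv = np.linalg.inv(A)
d_chol = np.einsum("ij,jk,ik->i", ell_chol, Ainv, ell_chol)
d_eig = np.einsum("ij,jk,ik->i", ell_eig, Ainv, ell_eig)
print(np.allclose(d_chol, 1.0), np.allclose(d_eig, 1.0))
```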
|
14,007
|
How to plot an ellipse from eigenvalues and eigenvectors in R?
|
I think this is the R code that you want. I borrowed the R code from this thread on the R mailing list. The idea, basically, is: the major and minor half-diameters are the square roots of the two eigenvalues, and you rotate the ellipse by the angle between the first eigenvector and the x-axis
mat <- matrix(c(2.2, 0.4, 0.4, 2.8), 2, 2)
eigens <- eigen(mat)
evs <- sqrt(eigens$values)
evecs <- eigens$vectors
a <- evs[1]
b <- evs[2]
x0 <- 0
y0 <- 0
alpha <- atan(evecs[ , 1][2] / evecs[ , 1][1])
theta <- seq(0, 2 * pi, length=(1000))
x <- x0 + a * cos(theta) * cos(alpha) - b * sin(theta) * sin(alpha)
y <- y0 + a * cos(theta) * sin(alpha) + b * sin(theta) * cos(alpha)
png("graph.png")
plot(x, y, type = "l", main = expression("x = a cos " * theta * " + " * x[0] * " and y = b sin " * theta * " + " * y[0]), asp = 1)
arrows(0, 0, a * evecs[1, 1], a * evecs[2, 1])  # first eigenvector, scaled by a
arrows(0, 0, b * evecs[1, 2], b * evecs[2, 2])  # second eigenvector, scaled by b
dev.off()
|
14,008
|
What is the community's take on the Fourth Quadrant?
|
I was at a meeting of the ASA (American Statistical Association) a couple years ago where Taleb talked about his "fourth quadrant" and it seemed his remarks were well received. Taleb was much more careful in his language when addressing an auditorium of statisticians than he has been in his popular writing.
Some statisticians are offended by the provocative hyperbole in Taleb's books, but when he states his ideas professionally there's not too much to object to. It's hard to argue that one can confidently estimate the probability of rare events with little or no data, or that one should make high-stakes decisions on such estimates if they can at all be avoided.
(Here's a blog post I wrote about Taleb's ASA talk shortly after the event.)
|
14,009
|
Why use the logit link in beta regression?
|
Justification of the link function: A link function $g(\mu): (0,1) \rightarrow \mathbb{R}$ assures that all fitted values $\hat \mu = g^{-1}(x^\top \hat \beta)$ are always in $(0, 1)$. This may not matter that much in some applications, e.g., because the predictions are only evaluated in-sample or are not too close to 0 or 1. But it may matter in other applications, and you typically do not know in advance whether it matters or not. Typical problems I have seen include: evaluating predictions at new $x$ values that are (slightly) outside the range of the original learning sample, or finding suitable starting values. For the latter, consider:
library("betareg")
data("GasolineYield", package = "betareg")
betareg(yield ~ batch + temp, data = GasolineYield, link = make.link("identity"))
## Error in optim(par = start, fn = loglikfun, gr = if (temporary_control$use_gradient) gradfun else NULL, :
## initial value in 'vmmin' is not finite
But, of course, one can simply try both options and see whether problems with the identity link occur and/or whether it improves the fit of the model.
Interpretation of the parameters: I agree that interpreting parameters in models with link functions is more difficult than in models with an identity link and practitioners often get it wrong. However, I have also often seen misinterpretations of the parameters in linear probability models (binary regressions with identity link, typically by least squares). The assumption that marginal effects are constant cannot hold if predictions get close enough to 0 or 1 and one would need to be really careful. E.g., for an observation with $\hat \mu = 0.01$ an increase in $x$ cannot lead to a decrease of $\hat \mu$ of, say, $0.02$. But this is often treated very sloppily in those scenarios. Hence, I would argue that for a limited response model the parameters from any link function need to be interpreted carefully and might need some practice. My usual advice is therefore (as shown in the other discussion you linked in your question) to look at the effects for regressor configurations of interest. These are easier to interpret and often (but not always) rather similar (from a practical perspective) for different link functions.
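A minimal numeric illustration of the first point (a Python sketch, not part of the original answer): the inverse logit maps any linear predictor into $(0, 1)$, whereas an identity link simply returns the linear predictor, which can leave that interval:

```python
import math

def inv_logit(eta):
    """Inverse logit (logistic) function: maps all of R into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-eta))

# Linear predictors, including values far outside any plausible data range
for eta in (-10.0, -1.0, 0.0, 1.0, 10.0):
    mu = inv_logit(eta)
    # An identity link would return eta itself, e.g. -10 or 10, outside (0, 1)
    print(eta, mu, 0.0 < mu < 1.0)
```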
|
14,010
|
Why use the logit link in beta regression?
|
It is incorrect that logistic regression can only be used to model binary outcome data. The logistic regression model is appropriate for any data where: 1) the expected value of the outcome follows a logistic curve as a function of the predictors; 2) the variance of the outcome is the expected outcome times one minus the expected outcome (or some proportion thereof); 3) (a consequence of 2) the data range between 0 and 1. These properties certainly hold for Bernoulli data. But one should undertake some exploratory statistics and plots before immediately discrediting the logistic model as a viable (and easy to implement/explain) means to answer a scientific question.
A logistic regression model is a special case of the generalized linear model (GLM), that means that consistent parameter estimates and inference are given by the model. Logistic models are used to model proportions, ordinal variables, rates, exam scores, ranks, and all manner of non-binary outcomes in several places in the literature.
Sorry that this response doesn't address the later part of your question, but the reasoning above brings up a misconception that's worth addressing.
Many R users have suggested that the "warning" that comes from fitting a continuous response with logistic models should be suppressed. A "middle of the road" way is to change family=binomial to family=quasibinomial. An example of simulating these data, fitting a model, and obtaining correct inference is shown here:
set.seed(123)
## logistic non-binary response
x <- rep(c(-2, 0, 2), each=50)
n <- length(x)
b0 <- 0
b1 <- 0.3
yhat <- plogis(b0 + b1*x)
do.one <- function(){
  e <- rnorm(n, 0, yhat*(1-yhat))
  y <- yhat + e
  yfixed <- pmin(y, 1)       # clamp simulated responses into [0, 1]
  yfixed <- pmax(yfixed, 0)
  est <- glm(yfixed ~ x, family=quasibinomial())
  ci <- confint.default(est, level = 0.9)
  cov0 <- b0 > ci[1,1] & b0 < ci[1,2]
  cov1 <- b1 > ci[2,1] & b1 < ci[2,2]
  c(cov0, cov1)
}
reg <- replicate(10000, do.one())
rowMeans(reg)
This gives accurate 90% coverage for the CIs.
|
14,011
|
Singularity issues in Gaussian mixture model
|
If we want to fit a Gaussian to a single data point using maximum likelihood, we will get a very spiky Gaussian that "collapses" to that point. The variance is zero when there's only one point, which in the multivariate Gaussian case leads to a singular covariance matrix, so it's called the singularity problem.
When the variance goes to zero, the likelihood of the Gaussian component (formula 9.15) goes to infinity and the model becomes overfitted. This doesn't occur when we fit only one Gaussian to a number of points, since the variance cannot be zero. But it can happen when we have a mixture of Gaussians, as illustrated on the same page of PRML.
Update:
The book suggests two methods for addressing the singularity problem, which are
1) resetting the mean and variance when singularity occurs
2) using MAP instead of MLE by adding a prior.
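A quick numeric sketch of the collapse (Python, not part of the original answer): the density of a univariate Gaussian centred exactly on a data point, and hence the mixture likelihood, diverges as its variance shrinks toward zero:

```python
import math

def gauss_pdf(x, mu, var):
    """Univariate Gaussian density N(x | mu, var)."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

# A component sitting exactly on one data point: as var -> 0, the density at
# that point grows without bound (1 / sqrt(2*pi*var)), so the log-likelihood
# of the mixture can be driven to infinity.
for var in (1.0, 1e-2, 1e-4, 1e-6):
    print(var, gauss_pdf(0.0, 0.0, var))
```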
|
14,012
|
Singularity issues in Gaussian mixture model
|
This answer gives some insight into what leads to a singular covariance matrix when fitting a GMM to a dataset, why this happens, and what we can do to prevent it.
It is best to start by recapitulating the steps of fitting a Gaussian Mixture Model to a dataset.
0. Decide how many sources/clusters (c) you want to fit to your data
1. Initialize the parameters mean $\mu_c$, covariance $\Sigma_c$, and fraction_per_class $\pi_c$ per cluster c
$\underline{E-Step}$
Calculate for each datapoint $x_i$ the probability $r_{ic}$ that datapoint $x_i$ belongs to cluster c with:
$$r_{ic} = \frac{\pi_c N(\boldsymbol{x_i} \ | \ \boldsymbol{\mu_c},\boldsymbol{\Sigma_c})}{\Sigma_{k=1}^K \pi_k N(\boldsymbol{x_i} \ | \ \boldsymbol{\mu_k},\boldsymbol{\Sigma_k})}$$
where $N(\boldsymbol{x} \ | \ \boldsymbol{\mu},\boldsymbol{\Sigma})$ describes the multivariate Gaussian with:
$$N(\boldsymbol{x_i} \ | \ \boldsymbol{\mu_c},\boldsymbol{\Sigma_c}) \ = \ \frac{1}{(2\pi)^{\frac{n}{2}}|\boldsymbol{\Sigma_c}|^{\frac{1}{2}}}\exp\left(-\frac{1}{2}(\boldsymbol{x_i}-\boldsymbol{\mu_c})^T\boldsymbol{\Sigma_c}^{-1}(\boldsymbol{x_i}-\boldsymbol{\mu_c})\right)$$
$r_{ic}$ gives us for each datapoint $x_i$ the measure of: $\frac{Probability \ that \ x_i \ belongs \ to \ class \ c}{Probability \ of \ x_i \ over \ all \ classes}$ hence if $x_i$ is very close to one gaussian c, it will get a high $r_{ic}$ value for this gaussian and relatively low values otherwise.
$\underline{M-Step}$
For each cluster c:
Calculate the total weight $m_c$ (loosely speaking the fraction of points allocated to cluster c) and update $\pi_c$, $\mu_c$, and $\Sigma_c$ using $r_{ic}$ with:
$$m_c \ = \ \Sigma_i r_{ic}$$
$$\pi_c \ = \ \frac{m_c}{m}$$
$$\boldsymbol{\mu_c} \ = \ \frac{1}{m_c}\Sigma_i r_{ic} \boldsymbol{x_i} $$
$$\boldsymbol{\Sigma_c} \ = \ \frac{1}{m_c}\Sigma_i r_{ic}(\boldsymbol{x_i}-\boldsymbol{\mu_c})(\boldsymbol{x_i}-\boldsymbol{\mu_c})^T$$
Mind that you have to use the updated means in this last formula.
Iteratively repeat the E and M step until the log-likelihood function of our model converges where the log likelihood is computed with:
$$\ln \ p(\boldsymbol{X} \ | \ \boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Sigma}) \ = \ \Sigma_{i=1}^N \ \ln\left(\Sigma_{k=1}^K \pi_k N(\boldsymbol{x_i} \ | \ \boldsymbol{\mu_k},\boldsymbol{\Sigma_k})\right)$$
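The E and M steps above can be sketched in a few lines (a NumPy sketch under simplifying assumptions, not part of the original answer: toy two-cluster data, K = 2, hand-picked initial means, a fixed number of iterations instead of a convergence check):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy data: two well-separated clusters in 2D; fit K = 2 Gaussians
X = np.vstack([rng.normal(-3.0, 1.0, (100, 2)),
               rng.normal(3.0, 1.0, (100, 2))])
N, D = X.shape
K = 2
pi = np.full(K, 1.0 / K)                         # mixing weights pi_c
mu = np.array([[-1.0, 0.0], [1.0, 0.0]])         # initial means mu_c
Sigma = np.stack([np.eye(D) for _ in range(K)])  # initial covariances Sigma_c

def gauss(X, mu, Sigma):
    """Multivariate normal density N(x | mu, Sigma), evaluated per row of X."""
    diff = X - mu
    inv = np.linalg.inv(Sigma)
    norm = np.sqrt((2.0 * np.pi) ** D * np.linalg.det(Sigma))
    return np.exp(-0.5 * np.einsum("ij,jk,ik->i", diff, inv, diff)) / norm

for _ in range(20):
    # E-step: responsibilities r_ic = pi_c N(x_i|mu_c,Sigma_c) / sum_k pi_k N(...)
    dens = np.column_stack([pi[c] * gauss(X, mu[c], Sigma[c]) for c in range(K)])
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: total weights m_c, then update pi_c, mu_c, Sigma_c
    m = r.sum(axis=0)
    pi = m / N
    mu = (r.T @ X) / m[:, None]
    for c in range(K):
        diff = X - mu[c]
        Sigma[c] = (r[:, c, None] * diff).T @ diff / m[c]

print(np.sort(mu[:, 0]))  # one mean near -3, the other near 3
```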
So now that we have derived the single steps of the calculation, we have to consider what it means for a matrix to be singular.
A matrix is singular if it is not invertible. A matrix $A$ is invertible if there is a matrix $X$ such that $AX = XA = I$. If no such matrix exists, $A$ is said to be singular. That is, a matrix like:
$$\begin{bmatrix}
0 & 0 \\
0 & 0
\end{bmatrix}$$
is not invertible, and consequently singular. It is also plausible that if the above matrix is $A$, there can be no matrix $X$ whose product with it gives the identity matrix $I$ (simply take this zero matrix and multiply it with any other 2x2 matrix and you will see that you always get the zero matrix). But why is this a problem for us? Consider the formula for the multivariate normal above. There you find $\boldsymbol{\Sigma_c}^{-1}$, which is the inverse of the covariance matrix. Since a singular matrix is not invertible, this will throw an error during the computation.
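A quick numeric sketch of this failure (Python/NumPy, not part of the original answer):

```python
import numpy as np

singular = np.zeros((2, 2))      # the all-zeros covariance matrix from above
print(np.linalg.det(singular))   # determinant 0 -> not invertible
try:
    np.linalg.inv(singular)      # needed for Sigma_c^{-1} in the density
except np.linalg.LinAlgError as err:
    print("inversion fails:", err)
```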
So now that we know what a singular, non-invertible matrix looks like and why it matters during the GMM calculations, how could we run into this issue? First of all, we get the $\boldsymbol{0}$ covariance matrix above if the multivariate Gaussian falls into one point during the iteration between the E and M steps. This can happen if we have, for instance, a dataset to which we want to fit 3 Gaussians but which actually consists of only two classes (clusters), such that, loosely speaking, two of these three Gaussians catch their own cluster while the last Gaussian only manages to catch one single point on which it sits. We will see what this looks like below. But step by step:
Assume you have a two-dimensional dataset which consists of two clusters, but you don't know that and want to fit three Gaussian models to it, that is c = 3. You initialize your parameters in the E step and plot the Gaussians on top of your data, which looks something like this (maybe you can see the two relatively scattered clusters on the bottom left and top right):
[figure: initial Gaussians plotted over the two-cluster data]
Having initialized the parameters, you iteratively do the E and M steps. During this procedure the three Gaussians wander around, searching for their optimal place. If you observe the model parameters, that is $\mu_c$ and $\pi_c$, you will observe that they converge; that is, after some number of iterations they no longer change, and therewith the corresponding Gaussian has found its place in space. In the case where you have a singular matrix, you encounter something like:
[figure: converged fit where the third Gaussian has collapsed onto a single datapoint, circled in red]
where I have circled the third Gaussian model in red. So you see that this Gaussian sits on one single datapoint while the two others claim the rest. Here I have to note that, to be able to draw the figure like that, I had already used covariance regularization, which is a method to prevent singular matrices and is described below.
OK, but now we still do not know why and how we encounter a singular matrix. Therefore we have to look at the calculations of the $r_{ic}$ and the $cov$ during the E and M steps.
If you look at the $r_{ic}$ formula again:
$$r_{ic} = \frac{\pi_c N(\boldsymbol{x_i} \ | \ \boldsymbol{\mu_c},\boldsymbol{\Sigma_c})}{\Sigma_{k=1}^K \pi_k N(\boldsymbol{x_i \ | \ \boldsymbol{\mu_k},\boldsymbol{\Sigma_k}})}$$
you see that there the $r_{ic}$'s would have large values if they are very likely under cluster c and low values otherwise.
To make this more apparent consider the case where we have two relatively spread gaussians and one very tight gaussian and we compute the $r_{ic}$ for each datapoint $x_i$ as illustrated in the figure:
[![enter image description here][3]][3]
So go through the datapoints from left to right and imagine you would write down the probability for each $x_i$ that it belongs to the red, blue and yellow Gaussian. What you can see is that for most of the $x_i$, the probability of belonging to the yellow Gaussian is very small. In the case above, where the third Gaussian sits on one single datapoint, $r_{ic}$ is larger than zero only for this one datapoint while it is zero for every other $x_i$. (It collapses onto this datapoint: this happens if all other points are more likely part of Gaussian one or two, and hence this is the only point which remains for Gaussian three. The reason why this happens can be found in the interaction between the dataset itself and the initialization of the Gaussians; that is, if we had chosen other initial values for the Gaussians, we might have seen another picture and the third Gaussian maybe would not collapse.) This effect intensifies as the Gaussian becomes more and more spiked. The $r_{ic}$ table then looks something like:
[![enter image description here][4]][4]
As you can see, the $r_{ic}$ of the third column, that is for the third Gaussian, are zero except for this one row. If we look up which datapoint is represented here, we get the datapoint [23.38566343 8.07067598]. Ok, but why do we get a singular matrix in this case? Well, and this is our last step, we have to once more consider the calculation of the covariance matrix, which is:
$$\boldsymbol{\Sigma_c} \ = \ \frac{1}{m_c}\Sigma_i r_{ic}(\boldsymbol{x_i}-\boldsymbol{\mu_c})(\boldsymbol{x_i}-\boldsymbol{\mu_c})^T$$
we have seen that all $r_{ic}$ are zero except for the one $x_i$ with [23.38566343 8.07067598]. Now the formula wants us to calculate $(\boldsymbol{x_i}-\boldsymbol{\mu_c})$. If we look at $\boldsymbol{\mu_c}$ for this third Gaussian, we get [23.38566343 8.07067598]. Oh, but wait, that's exactly the same as $x_i$, and that's what Bishop wrote: "Suppose that one of the components of the mixture model, let us say the $j$th component, has its mean $\boldsymbol{\mu_j}$ exactly equal to one of the data points so that $\boldsymbol{\mu_j} = \boldsymbol{x_n}$ for some value of *n*" (Bishop, 2006, p. 434). So what will happen? Well, this term will be zero, and since this datapoint was the only one where $r_{ic}>0$ (and hence the only chance for the covariance matrix not to become zero), the covariance matrix now becomes zero and looks like:
\begin{bmatrix}
0 & 0 \\
0 & 0
\end{bmatrix}
Consequently, as said above, this is a singular matrix and will lead to an error during the calculation of the multivariate Gaussian.
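This failure mode is easy to reproduce directly. A minimal sketch (toy values) showing that the zero matrix is rejected as a covariance, and that a small diagonal term repairs it:

```python
import numpy as np
from scipy.stats import multivariate_normal

cov = np.zeros((2, 2))          # the collapsed component's covariance matrix
print(np.linalg.det(cov))       # 0.0 -> the matrix is singular

try:
    multivariate_normal(mean=[0.0, 0.0], cov=cov)
except Exception as err:        # scipy rejects a singular covariance by default
    print("rejected:", err)

# adding a tiny value to the diagonal restores invertibility
cov_reg = cov + 1e-6 * np.identity(2)
mn = multivariate_normal(mean=[0.0, 0.0], cov=cov_reg)
print(mn.pdf([0.0, 0.0]))       # evaluates without error
```

The `1e-6` diagonal term is the same kind of covariance regularization discussed in this answer.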
So how can we prevent such a situation? Well, we have seen that the covariance matrix is singular if it is the $\boldsymbol{0}$ matrix. Hence, to prevent singularity we simply have to prevent the covariance matrix from becoming a $\boldsymbol{0}$ matrix. This is done by adding a very small value (in [sklearn's GaussianMixture](http://scikit-learn.org/stable/modules/generated/sklearn.mixture.GaussianMixture.html#sklearn.mixture.GaussianMixture) this value is set to 1e-6) to the diagonal of the covariance matrix. There are also other ways to prevent singularity, such as noticing when a Gaussian collapses and setting its mean and/or covariance matrix to new, arbitrarily high value(s). This covariance regularization is also implemented in the code below, with which you get the described results. Maybe you have to run the code several times to get a singular covariance matrix since, as said, this does not happen every time but also depends on the initial setup of the Gaussians.
    import matplotlib.pyplot as plt
    from matplotlib import style
    style.use('fivethirtyeight')
    from sklearn.datasets import make_blobs  # samples_generator was removed in newer scikit-learn versions
    import numpy as np
    from scipy.stats import multivariate_normal

    # 0. Create dataset
    X,Y = make_blobs(cluster_std=2.5,random_state=20,n_samples=500,centers=3)
    # Stretch dataset to get ellipsoid data
    X = np.dot(X,np.random.RandomState(0).randn(2,2))

    class EMM:

        def __init__(self,X,number_of_sources,iterations):
            self.iterations = iterations
            self.number_of_sources = number_of_sources
            self.X = X
            self.mu = None
            self.pi = None
            self.cov = None
            self.XY = None

        # Define a function which runs for the given number of iterations:
        def run(self):
            self.reg_cov = 1e-6*np.identity(len(self.X[0]))
            x,y = np.meshgrid(np.sort(self.X[:,0]),np.sort(self.X[:,1]))
            self.XY = np.array([x.flatten(),y.flatten()]).T

            # 1. Set the initial mu, covariance and pi values
            # This is a nxm matrix since we assume n sources (n Gaussians) where each has m dimensions
            self.mu = np.random.randint(min(self.X[:,0]),max(self.X[:,0]),size=(self.number_of_sources,len(self.X[0])))
            # We need a nxmxm covariance matrix for each source since we have m features
            # --> We create symmetric covariance matrices with 5 on the diagonal
            self.cov = np.zeros((self.number_of_sources,len(self.X[0]),len(self.X[0])))
            for dim in range(len(self.cov)):
                np.fill_diagonal(self.cov[dim],5)
            self.pi = np.ones(self.number_of_sources)/self.number_of_sources # Are "fractions"
            # In this list we store the log likelihoods per iteration and plot them
            # in the end to check if we have converged
            log_likelihoods = []

            # Plot the initial state
            fig = plt.figure(figsize=(10,10))
            ax0 = fig.add_subplot(111)
            ax0.scatter(self.X[:,0],self.X[:,1])
            for m,c in zip(self.mu,self.cov):
                c += self.reg_cov
                multi_normal = multivariate_normal(mean=m,cov=c)
                ax0.contour(np.sort(self.X[:,0]),np.sort(self.X[:,1]),
                            multi_normal.pdf(self.XY).reshape(len(self.X),len(self.X)),
                            colors='black',alpha=0.3)
                ax0.scatter(m[0],m[1],c='grey',zorder=10,s=100)

            mu = []
            cov = []
            R = []

            for i in range(self.iterations):
                mu.append(self.mu)
                cov.append(self.cov)

                # E Step
                r_ic = np.zeros((len(self.X),len(self.cov)))
                for m,co,p,r in zip(self.mu,self.cov,self.pi,range(len(r_ic[0]))):
                    co += self.reg_cov
                    mn = multivariate_normal(mean=m,cov=co)
                    r_ic[:,r] = p*mn.pdf(self.X)/np.sum(
                        [pi_c*multivariate_normal(mean=mu_c,cov=cov_c).pdf(self.X)
                         for pi_c,mu_c,cov_c in zip(self.pi,self.mu,self.cov+self.reg_cov)],axis=0)
                R.append(r_ic)

                # M Step
                # Calculate the new mean vectors and new covariance matrices based on the
                # probable membership of the single x_i to classes c --> r_ic
                self.mu = []
                self.cov = []
                self.pi = []
                for c in range(len(r_ic[0])):
                    m_c = np.sum(r_ic[:,c],axis=0)
                    mu_c = (1/m_c)*np.sum(self.X*r_ic[:,c].reshape(len(self.X),1),axis=0)
                    self.mu.append(mu_c)
                    # Calculate the covariance matrix per source based on the new mean
                    self.cov.append(((1/m_c)*np.dot((np.array(r_ic[:,c]).reshape(len(self.X),1)*(self.X-mu_c)).T,(self.X-mu_c)))+self.reg_cov)
                    # Calculate pi_new which is the "fraction of points", respectively the
                    # fraction of the probability, assigned to each source
                    self.pi.append(m_c/np.sum(r_ic))

                # Log likelihood
                log_likelihoods.append(np.log(np.sum(
                    [k*multivariate_normal(self.mu[i],self.cov[j]).pdf(self.X)
                     for k,i,j in zip(self.pi,range(len(self.mu)),range(len(self.cov)))])))

            fig2 = plt.figure(figsize=(10,10))
            ax1 = fig2.add_subplot(111)
            ax1.plot(range(0,self.iterations,1),log_likelihoods)
            #plt.show()

            print(mu[-1])
            print(cov[-1])
            for r in np.array(R[-1]):
                print(r)
            print(self.X)

        def predict(self):
            # Plot the points onto the fitted Gaussians
            fig3 = plt.figure(figsize=(10,10))
            ax2 = fig3.add_subplot(111)
            ax2.scatter(self.X[:,0],self.X[:,1])
            for m,c in zip(self.mu,self.cov):
                multi_normal = multivariate_normal(mean=m,cov=c)
                ax2.contour(np.sort(self.X[:,0]),np.sort(self.X[:,1]),
                            multi_normal.pdf(self.XY).reshape(len(self.X),len(self.X)),
                            colors='black',alpha=0.3)

    emm = EMM(X,3,100)  # lowercase instance name so the class EMM is not shadowed
    emm.run()
    emm.predict()
|
14,013
|
Singularity issues in Gaussian mixture model
|
Recall that this problem did not arise in the case of a single
Gaussian distribution. To understand the difference, note that if a
single Gaussian collapses onto a data point it will contribute
multiplicative factors to the likelihood function arising from the
other data points and these factors will go to zero exponentially
fast, giving an overall likelihood that goes to zero rather than
infinity.
I'm also kinda confused by this part, and here's my interpretation. Take the 1D case for simplicity.
When a single Gaussian "collapses" on a data point $x_i$, i.e., $\mu=x_i$, the overall likelihood becomes:
$$p(\mathbf{x}) = p(x_i) p(\mathbf{x}\setminus{i}) = (\frac{1}{\sqrt{2\pi}\sigma}) (\prod_{n \neq i}^N \frac{1}{\sqrt{2\pi}\sigma} e^{-\frac{(x_n-\mu)^2}{2\sigma^2}} )$$
You see that as $\sigma \to 0$, the term on the left $p(x_i) \to \infty$, which is like the pathological case in GMM, but the term on the right, which is the likelihood of the other data points $p(\mathbf{x}\setminus{i})$, still contains terms like $e^{-\frac{(x_n-\mu)^2}{2\sigma^2}}$ which $\to 0$ exponentially fast as $\sigma \to 0$, so the overall effect on the likelihood is for it to go to zero.
The main point here is that when fitting a single Gaussian, all the data points have to share one set of parameters $\mu, \sigma$, unlike in the mixture case where one component can "focus" on one data point without penalty to the overall data likelihood.
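This trade-off is easy to check numerically. A small sketch with made-up 1-D datapoints, fixing $\mu$ at the first point and shrinking $\sigma$: the total log-likelihood of the whole sample keeps decreasing even though the density at the collapsed point blows up.

```python
import numpy as np
from scipy.stats import norm

x = np.array([0.0, 1.0, 2.0, 3.0])   # toy datapoints; the Gaussian collapses onto x[0]
for sigma in [1.0, 0.1, 0.01]:
    loglik = norm.logpdf(x, loc=x[0], scale=sigma).sum()
    print(sigma, loglik)             # total log-likelihood decreases as sigma -> 0
```

The exploding factor at $x_0$ is overwhelmed by the exponentially vanishing factors of the other points, exactly as Bishop describes.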
|
14,014
|
Singularity issues in Gaussian mixture model
|
Imho, all the answers miss a fundamental fact. If one looks at the parameter space for a Gaussian mixture model, this space is singular along the subspace where there are fewer than the full number of components in the mixture. That means that derivatives are automatically zero and typically the whole subspace will show up as an mle. More philosophically, the subspace of less-than-full-rank covariances is the boundary of the parameter space, and one should always be suspicious when the mle occurs on the boundary: it usually indicates that there is a bigger parameter space lurking around in which one can find the 'real' mle. There is a book called "Algebraic Statistics" by Drton, Sturmfels, and Sullivant. This issue is discussed in that book in some detail. If you are really curious, you should look at that.
|
14,015
|
Singularity issues in Gaussian mixture model
|
For a single Gaussian, the mean may possibly equal one of the data points ($x_n$ for example), and then the likelihood function contains the following term:
\begin{equation}
{\cal N}(x_n|\mu_j,\sigma_j 1\!\!1)\Big|_{\mu_j=x_n} = \frac{1}{(2\pi)^{1/2}\sigma_j} \exp \left( -\frac{1}{2\sigma_j^2}|x_n-\mu_j|^2 \right)= \frac{1}{(2\pi)^{1/2}\sigma_j}
\end{equation}
since the argument of the exponential vanishes. This term is clearly divergent in the limit $\sigma_j\rightarrow 0$.
However, for a data point $x_m$ different from the mean $\mu_j$, we will have
\begin{equation}
{\cal N}(x_m|\mu_j,\sigma_j 1\!\!1)= \frac{1}{(2\pi)^{1/2}\sigma_j} \exp \left( -\frac{1}{2\sigma_j^2}|x_m-\mu_j|^2 \right)
\end{equation}
and now the argument of the exponential diverges (and is negative) in the limit $\sigma_j\rightarrow 0$, so the exponential goes to zero much faster than the prefactor diverges. As a result the product of these two terms in the likelihood function will vanish.
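A quick numerical check of this cancellation, with illustrative values $x_n = 1$, $x_m = 2$ and the mean fixed at $x_n$:

```python
import numpy as np

x_n, x_m = 1.0, 2.0                  # mu_j = x_n: the component collapsed onto x_n
for sigma in [1.0, 0.1, 0.05]:
    f_n = 1.0 / (np.sqrt(2 * np.pi) * sigma)                    # N(x_n | x_n, sigma^2): diverges
    f_m = f_n * np.exp(-(x_m - x_n) ** 2 / (2 * sigma ** 2))    # N(x_m | x_n, sigma^2): vanishes
    print(sigma, f_n * f_m)          # the product shrinks toward zero
```

The exponential decay of the second factor always wins against the $1/\sigma_j$ growth of the first.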
|
14,016
|
Log probability vs product of probabilities
|
I fear you have misunderstood what the article intends. This is no great surprise, since it's somewhat unclearly written. There are two different things going on.
The first is simply to work on the log scale.
That is, instead of "$p_{AB} = p_A\cdot p_B$" (when you have independence), one can instead write "$\log(p_{AB}) = \log(p_A)+ \log(p_B)$". If you need the actual probability, you can exponentiate at the end to get back $p_{AB}$: $\qquad p_{AB}=e^{\log(p_A)+ \log(p_B)}\,,$ but if needed at all, the exponentiation would normally be left to the last possible step. So far so good.
The second part is replacing $\log p$ with $-\log p$. This is so that we work with positive values.
Personally, I don't really see much value in this, especially since it reverses the direction of any ordering ($\log$ is monotonic increasing, so if $p_1<p_2$, then $\log(p_1)< \log(p_2)$; this order is reversed with $-\log p$).
This reversal seems to concern you, but it's a direct consequence of the negation - it should happen with negative log probabilities. Think of negative log probability as a scale of "rarity" - the larger the number, the rarer the event is (the article refers to it as 'surprise value', or surprisal, which is another way to think about it). If you don't like that reversal, work with $\log p$ instead.
To convert negative-log-probabilities back to probabilities, you must negate before exponentiating. If we say $s_i = -\log(p_i)$ ($s$ for 'surprise value'), then $p_{AB}=e^{-[s_A+ s_B]}\,.$ As you see, that reverses direction a second time, giving us back what we need.
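A small numeric sketch of both points (illustrative numbers): the direct product of many small probabilities underflows in double precision, while the summed surprise values stay finite, and you negate before exponentiating to map back.

```python
import numpy as np

p = np.full(1000, 1e-20)     # 1000 independent events, each with probability 1e-20
print(np.prod(p))            # 0.0 -- the direct product underflows to zero

s = -np.log(p)               # surprise values: larger = rarer
total_surprise = s.sum()
print(total_surprise)        # about 46051.7, perfectly representable

# negate before exponentiating to go back to a probability (here it underflows
# again, which is the honest floating-point answer for such a tiny number)
print(np.exp(-total_surprise))
```

Working on the (negative) log scale is precisely what lets long chains of independent probabilities be combined without losing everything to underflow.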
|
14,017
|
Why at all consider sampling without replacement in a practical application?
|
Expanding on the answer of @Scortchi . . .
Suppose the population had 5 members and you have budget to sample 5 individuals. You are interested in the population mean of a variable X, a characteristic of individuals in this population. You could do it your way, and randomly sample with replacement. The variance of the sample mean will be V(X)/5.
On the other hand, suppose you sample the five individuals without replacement. Then, the variance of the sample mean is 0. You've sampled the whole population, each individual exactly once, so there is no distinction between "sample mean" and "population mean." They are the same thing.
In the real world, you should jump for joy each time you have to do the finite population correction because (drumroll . . .) it makes the variance of your estimator go down without you having to collect more data. Almost nothing does this. It's like magic: good magic.
Saying the exact same thing in math (pay attention to the <, and assume sample size is greater than 1):
\begin{equation}
\textrm{finite population correction} = \frac{N-n}{N-1} < \frac{N-1}{N-1} = 1
\end{equation}
Correction < 1 means that applying the correction makes the variance go DOWN, 'cause you apply the correction by multiplying it against the variance. Variance DOWN == good.
Moving in the opposite direction, entirely away from math, think about what you are asking. If you want to learn about the population and you can sample 5 people from it, does it seem likely that you will learn more by taking the chance of sampling the same guy 5 times or does it seem more likely that you will learn more by ensuring that you sample 5 different guys?
The real world case is almost the opposite of what you are saying. Almost never do you sample with replacement --- it's only when you are doing special things like bootstrapping. In that case, you are actually trying to screw up the estimator and give it a "too big" variance.
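The 5-out-of-5 example can be checked directly; a minimal Python sketch of the correction factor (illustrative numbers only):

```python
def fpc(N, n):
    """Finite population correction (N - n)/(N - 1): the factor that
    multiplies the with-replacement variance V(X)/n of the sample mean."""
    return (N - n) / (N - 1)

# Population of 5, as in the example above
for n in range(1, 6):
    print(n, fpc(5, n))
# n = 1 gives 1.0 (no correction); n = 5 gives 0.0, because sampling the
# whole population leaves the sample mean with zero variance
```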
|
14,018
|
Why at all consider sampling without replacement in a practical application?
|
The precision of estimates is usually higher for sampling without replacement than for sampling with replacement.
For example, in an extreme case it is possible to select a single element $n$ times when sampling with replacement. That could lead to a very imprecise estimate of the population parameter of interest. Such a situation is not possible under sampling without replacement. So the variance is usually lower for estimates made from sampling without replacement.
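The variance claim above is easy to check by simulation; a hedged NumPy sketch with a made-up population of 10 values:

```python
import numpy as np

rng = np.random.default_rng(0)
population = np.arange(10, dtype=float)  # hypothetical population, N = 10
n = 5                                    # sample size

means_with = [rng.choice(population, n, replace=True).mean()
              for _ in range(20000)]
means_without = [rng.choice(population, n, replace=False).mean()
                 for _ in range(20000)]

# Without replacement, sample means cluster more tightly around the
# population mean: the variance shrinks by (N - n)/(N - 1) = 5/9
print(np.var(means_with), np.var(means_without))
```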
|
14,019
|
Why at all consider sampling without replacement in a practical application?
|
I don't think the answers here are totally adequate, and they seem to argue for the limiting case in which your amount of data is very low.
With a sufficiently large sample, this isn't a worry at all, especially with many bootstrap resamples (~1000). If I have sampled from the true distribution a dataset of size 10,000, and I resample with replacement 1,000 times, then the variance I gain (as opposed to the variance I would obtain by doing no replacement) is totally negligible.
I would say that the more accurate answer is this: resampling without replacement is essential when estimating the confidence of a second-order statistic. For example, if I'm using a bootstrap to estimate the uncertainty that I have in a dispersion measurement. Drawing with replacement for such a quantity can artificially bias the recovered dispersions low.
For a concrete example with real data, if you're up to it, see this paper: https://arxiv.org/abs/1612.02827 (it briefly discusses this question on page 10).
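That downward bias in dispersion is easy to see in a toy sketch (my setup, not the paper's): the expected variance of a with-replacement resample is only (n - 1)/n of the sample variance.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=20)   # hypothetical sample, n = 20
s2 = x.var(ddof=1)        # unbiased sample variance

# Variance of each with-replacement bootstrap resample, averaged
boot = [rng.choice(x, x.size, replace=True).var(ddof=1)
        for _ in range(5000)]
ratio = np.mean(boot) / s2
print(ratio)  # close to (n - 1)/n = 0.95: resampled dispersion is biased low
```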
|
14,020
|
Why at all consider sampling without replacement in a practical application?
|
But from a practical POV I don't see why one would consider sampling without replacement given the advantages of with replacement.
In practice, sampling without replacement saves you the need to, well, make replacements. This has two benefits:
You can just take a larger sample and consider it as multiple individual samples.
Replacements in the real world can be costly, time-consuming or even nigh-impossible.
Suppose you're a powerful alien sampling people from the Earth's human population by abducting them. Do you realize how hard it is to perform a replacement? You would have to erase their memory, cook up excuses for their absence, etc. Much better to just abduct a bunch of them and base your statistical analysis on that.
|
14,021
|
Why at all consider sampling without replacement in a practical application?
|
I have a result which treats sampling without replacement practically the same as sampling with replacement and removes all the difficulties. Note that with-replacement calculations are much easier. So, if a probability involves $p$ and $q$, the probabilities of success and failure in the with-replacement case, the corresponding probability in the without-replacement case is obtained simply by replacing $p^a q^b$ with (N-a-b)C(R-a) for any $a$ and $b$, where $N$ and $R$ are the total number of balls and the number of white balls. Remember that $p$ is treated as $R/N$.
K.Balasubramanian
|
14,022
|
Can lmer() use splines as random effects?
|
If what you show works for a lmer formula for a random effects term then you should be able to use functions from the splines package that comes with R to set up the relevant basis functions.
require("lme4")
require("splines")
glmer(counts ~ dependent_variable + (bs(t) | ID), family = "poisson")
(In current versions of lme4, generalised models like this Poisson one are fitted with glmer() rather than lmer() with a family argument.)
Depending on what you want to do, you should also look at the gamm4 package and the mgcv package. The former is essentially formalising the bs() bit in the glmer() call above and allows smoothness selection to be performed as part of the analysis. The latter with function gam() allows for some degree of flexibility in fitting models like this (if I understand what you are trying to do). It looks like you want separate trends within ID? A more fixed effects approach would be something like:
gam(counts ~ dependent_variable + ID + s(t, by = ID) , family="poisson")
Random effects can be included in gam() models using the s(foo, bs = "re") type terms where foo would be ID in your example. Whether it makes sense to combine the by term idea with a random effect is something to think about and not something I am qualified to comment on.
|
14,023
|
How can I calculate the conditional probability of several events?
|
Another approach would be:
P(A| B, C, D) = P(A, B, C, D)/P(B, C, D)
= P(B| A, C, D).P(A, C, D)/P(B, C, D)
= P(B| A, C, D).P(C| A, D).P(A, D)/{P(C| B, D).P(B, D)}
= P(B| A, C, D).P(C| A, D).P(D| A).P(A)/{P(C| B, D).P(D| B).P(B)}
Note the similarity to:
P(A| B) = P(A, B)/P(B)
= P(B| A).P(A)/P(B)
And there are many equivalent forms.
Taking U = (B, C, D) gives:
P(A| B, C, D) = P(A, U)/P(U)
= P(U| A).P(A)/P(U)
= P(B, C, D| A).P(A)/P(B, C, D)
I'm sure they're equivalent, but do you want the joint probability of B, C & D given A?
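These identities can be verified numerically; a Python sketch with a made-up joint distribution over four binary events (my illustration, not from the answer):

```python
import random
from itertools import product

random.seed(0)
# Hypothetical joint distribution over four binary events A, B, C, D
joint = {k: random.random() for k in product([0, 1], repeat=4)}
total = sum(joint.values())
joint = {k: v / total for k, v in joint.items()}

def p(**fixed):
    """Probability that each named event takes its given 0/1 value."""
    idx = {"A": 0, "B": 1, "C": 2, "D": 3}
    return sum(v for k, v in joint.items()
               if all(k[idx[name]] == val for name, val in fixed.items()))

# P(A | B, C, D) computed directly, and via U = (B, C, D):
# P(A | U) = P(U | A) P(A) / P(U)
direct = p(A=1, B=1, C=1, D=1) / p(B=1, C=1, D=1)
bayes = (p(A=1, B=1, C=1, D=1) / p(A=1)) * p(A=1) / p(B=1, C=1, D=1)
```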
|
14,024
|
How can I calculate the conditional probability of several events?
|
Take the intersection of B, C, and D and call it U. Then compute P(A|U).
|
14,025
|
How can I calculate the conditional probability of several events?
|
Check this Wikipedia page under the subsection named "Extensions"; they show how to derive conditional probability involving more than two events.
|
14,026
|
When can you use data-based criteria to specify a regression model?
|
Variable selection techniques, in general (whether stepwise, backward, forward, all subsets, AIC, etc.), capitalize on chance or random patterns in the sample data that do not exist in the population. The technical term for this is over-fitting and it is especially problematic with small datasets, though it is not exclusive to them. By using a procedure that selects variables based on best fit, all of the random variation that looks like fit in this particular sample contributes to estimates and standard errors. This is a problem for both prediction and interpretation of the model.
Specifically, r-squared is too high and parameter estimates are biased (they are too far from 0), standard errors for parameters are too small (and thus p-values and intervals around parameters are too small/narrow).
The best line of defense against these problems is to build models thoughtfully and include the predictors that make sense based on theory, logic, and previous knowledge. If a variable selection procedure is necessary, you should select a method that penalizes the parameter estimates (shrinkage methods) by adjusting the parameters and standard errors to account for over-fitting. Some common shrinkage methods are Ridge Regression, Least Angle Regression, or the lasso. In addition, cross-validation using a training dataset and a test dataset or model-averaging can be useful to test or reduce the effects of over-fitting.
Harrell is a great source for a detailed discussion of these problems. Harrell (2001). "Regression Modeling Strategies."
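The over-fitting that best-fit variable selection produces is easy to demonstrate on pure noise; a hedged NumPy sketch (all numbers hypothetical):

```python
import numpy as np

rng = np.random.default_rng(2)
n, p, k = 50, 200, 10
X = rng.normal(size=(n, p))  # pure noise "predictors"
y = rng.normal(size=n)       # outcome unrelated to any of them

# "Select" the k predictors that happen to correlate best with y in-sample
corr = np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(p)])
best = np.argsort(corr)[-k:]

# Fit OLS on the selected predictors and compute in-sample r-squared
Xs = np.column_stack([np.ones(n), X[:, best]])
beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
resid = y - Xs @ beta
r2 = 1 - resid.var() / y.var()
print(round(r2, 2))  # well above 0 even though no predictor is real
```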
|
14,027
|
When can you use data-based criteria to specify a regression model?
|
In the social science context where I come from, the issue is whether you are interested in (a) prediction or (b) testing a focused research question.
If the purpose is prediction then data driven approaches are appropriate.
If the purpose is to examine a focused research question then it is important to consider which regression model specifically tests your question.
For example, if your task was to select a set of selection tests to predict job performance, the aim can in some sense be seen as one of maximising prediction of job performance.
Thus, data driven approaches would be useful.
In contrast if you wanted to understand the relative role of personality variables and ability variables in influencing performance, then a specific model comparison approach might be more appropriate.
Typically when exploring focused research questions the aim is to elucidate something about the underlying causal processes that are operating, as opposed to developing a model with optimal prediction.
When I'm in the process of developing models about process based on cross-sectional data I'd be wary about:
(a) including predictors that could theoretically be thought of as consequences of the outcome variable. E.g., a person's belief that they are a good performer is a good predictor of job performance, but it is likely that this is at least partially caused by the fact that they have observed their own performance.
(b) including a large number of predictors that are all reflective of the same underlying phenomena. E.g., including 20 items all measuring satisfaction with life in different ways.
Thus, focused research questions rely a lot more on domain specific knowledge.
This probably goes some way to explaining why data driven approaches are less often used in the social sciences.
|
14,028
|
When can you use data-based criteria to specify a regression model?
|
I don't think it is possible to do Bonferroni or similar corrections to adjust for variable selection in regression because all the tests and steps involved in model selection are not independent.
One approach is to formulate the model using one set of data, and do inference on a different set of data. This is done in forecasting all the time where we have a training set and a test set. It is not very common in other fields, probably because data are so precious that we want to use every single observation for model selection and for inference. However, as you note in your question, the downside is that the inference is actually misleading.
There are many situations where a theory-based approach is impossible as there is no well-developed theory. In fact, I think this is much more common than the cases where theory suggests a model.
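The training/test split idea above can be sketched minimally (hypothetical data and a deliberately crude selection rule, NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 20
X = rng.normal(size=(n, p))
y = X[:, 0] + rng.normal(size=n)  # only the first predictor matters

# Select variables on the first half of the data...
half = n // 2
corr = np.abs([np.corrcoef(X[:half, j], y[:half])[0, 1] for j in range(p)])
chosen = np.argsort(corr)[-3:]    # crude rule: keep the 3 strongest

# ...then estimate (and do inference) on the untouched second half,
# so the selection step cannot contaminate the inference
Xs = np.column_stack([np.ones(n - half), X[half:, chosen]])
beta, *_ = np.linalg.lstsq(Xs, y[half:], rcond=None)
```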
|
14,029
|
When can you use data-based criteria to specify a regression model?
|
Richard Berk has a recent article where he demonstrates through simulation the problems of such data snooping and statistical inference. As Rob suggested it is more problematic than simply correcting for multiple hypothesis tests.
Statistical Inference After Model Selection
by: Richard Berk, Lawrence Brown, Linda Zhao
Journal of Quantitative Criminology, Vol. 26, No. 2. (1 June 2010), pp. 217-236.
PDF version here
|
14,030
|
When can you use data-based criteria to specify a regression model?
|
If I understand your question right, then the answer to your problem is to correct the p-values according to the number of hypotheses.
For example the Holm-Bonferroni correction, where you sort the hypotheses (= your different models) by their p-values and, stepping through from the smallest, reject the $k$-th smallest one as long as its p-value is below (desired significance level)$/(m - k + 1)$, where $m$ is the number of hypotheses; you stop at the first comparison that fails.
More about the topic can be found on Wikipedia.
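The step-down procedure can be sketched in plain Python (the function name is mine):

```python
def holm_bonferroni(p_values, alpha=0.05):
    """Step-down Holm-Bonferroni: returns True where a hypothesis is rejected."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    reject = [False] * m
    for k, i in enumerate(order):
        # Compare the k-th smallest p-value (k = 0, 1, ...) to alpha / (m - k).
        if p_values[i] <= alpha / (m - k):
            reject[i] = True
        else:
            break   # once one comparison fails, all larger p-values fail too
    return reject
```

With p-values [0.001, 0.04, 0.03] and alpha = 0.05, only the first hypothesis is rejected: 0.03 is the second smallest and must beat 0.05/2 = 0.025.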
|
14,031
|
Is a spline interpolation considered to be a nonparametric model?
|
This is a good question. Frequently, one will see smoothing regressions (e.g., splines, but also smoothing GAMs, running lines, LOWESS, etc.) described as nonparametric regression models.
These models are nonparametric in the sense that using them does not involve reported quantities like $\widehat{\beta}$, $\widehat{\theta}$, etc. (in contrast to linear regression, GLM, etc.). Smoothing models are extremely flexible ways to represent properties of $y$ conditional on one or more $x$ variables, and do not make a priori commitments to, for example, linearity, simple integer polynomial, or similar functional forms relating $y$ to $x$.
On the other hand, these models are parametric, in the mathematical sense that they indeed involve parameters: number of splines, functional form of splines, arrangement of splines, weighting function for data fed to splines, etc. In application, however, these parameters are generally not of substantive interest: they are not the exciting bit of evidence reported by researchers… the smoothed curves (along with CIs and measures of model fit based on deviation of observed values from the curves) are the evidentiary bits. One motivation for this agnosticism about the actual parameters underlying a smoothing model is that different smoothing algorithms tend to give pretty similar results (see Buja, A., Hastie, T., & Tibshirani, R. (1989). Linear Smoothers and Additive Models. The Annals of Statistics, 17(2), 453–510 for a good comparison of several).
If I understand you, your "mixed" approaches are what are called "semi-parametric models". Cox regression is one highly-specialized example of such: the baseline hazard function relies on a nonparametric estimator, while the explanatory variables are estimated in a parametric fashion. GAMs—generalized additive models—permit us to decide which $x$ variables' effects on $y$ we will model using smoothers, which we will model using parametric specifications, and which we will model using both all in a single regression.
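As a toy version of the "running line"-style linear smoothers mentioned above (a centered moving average in NumPy; the window width is exactly the kind of tuning parameter that, as argued, is rarely of substantive interest):

```python
import numpy as np

def running_mean(y, window=9):
    """Centered moving average: a simple linear smoother."""
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="same")

rng = np.random.default_rng(1)
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)
smooth = running_mean(y)   # no beta-hats to report, just the smoothed curve
```

Note that nothing resembling a $\widehat{\beta}$ is produced: the evidentiary output is the smoothed curve itself.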
|
14,032
|
Is a spline interpolation considered to be a nonparametric model?
|
Strictly speaking, every model is parametric in the sense of having parameters. When we speak of a "nonparametric model", we really mean a model with the number of parameters being manageable.
The technical definition of "nonparametric" just says "infinite or unspecified", but in practice it means "infinite, or so large that thinking in terms of the parameters becomes unwieldy and/or not useful". You give the example of a KDE, but a KDE is calculated from the sampled values, and the number of samples is finite, so the set of samples is technically a finite set of parameters.
If each spline has a finite number of parameters, and there is a finite number of splines, then it follows that the total number of parameters is finite, but practically speaking the number may be so large that it's not treated as parametric.
On the other hand, if the number of splines is small enough, and the models within the splines are simple enough, that may still be treated as being parametric. Other factors are whether there's a large collection of models with the same type of parameters (that is, the parameters have different values, but the parameters from one model are analogous to those of another), and how intuitive the meaning of the parameters is.
For instance, if you model the volume of $H_2O$ as a function of temperature, you'll probably want separate splines for ice, water, and steam. If you model each as being linear with respect to temperature, you have one coefficient of expansion for each phase (and probably different intercepts as well), which is a small enough number of parameters to be considered "parametric". You'll then also have solid, liquid, and gas coefficients of expansion for other substances.
In this case, the small number of parameters for a particular substance, the large number of substances that have those types of parameters, and the straightforward meaning of the parameters (how much does the substance expand when you heat it) all contribute to it likely being considered a parametric model.
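The phase-wise fit can be sketched as follows (the numbers are made up for illustration, not real coefficients of expansion; in practice you would repeat the fit for the water and steam segments):

```python
import numpy as np

def fit_phase(temp, vol):
    """Fit vol ~ intercept + slope * temp within one phase (2 parameters)."""
    slope, intercept = np.polyfit(temp, vol, deg=1)
    return intercept, slope

# Hypothetical measurements for the ice segment only.
t_ice = np.array([-30.0, -20.0, -10.0])
v_ice = 1.00 + 0.001 * t_ice   # made-up coefficient of expansion

intercept, slope = fit_phase(t_ice, v_ice)
```

Two interpretable parameters per phase is exactly the "small, meaningful parameter set" that keeps such a spline model on the parametric side.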
|
14,033
|
What is a surrogate loss function?
|
In the context of learning, say you have a classification problem with data set $\{(X_1, Y_1), \dots, (X_n, Y_n)\}$, where $X_n$ are your features and $Y_n$ are your true labels.
Given a hypothesis function $h(x)$, the loss function $l: (h(X_n), Y_n) \rightarrow \mathbb{R}$ takes the hypothesis function's prediction (i.e. $h(X_n)$) as well as the true label for that particular input and returns a penalty. Now, a general goal is to find a hypothesis such that it minimizes the empirical risk (that is, it minimizes the chances of being wrong):
$$R_l(h) = E_{\text{empirical}}[l(h(X), Y)] = \dfrac{1}{m}\sum_i^m{l(h(X_i), Y_i)}$$
In the case of binary classification, a common loss function that is used is the $0$-$1$ loss function:
$$
l(h(X), Y) = \begin{cases}
0 & Y = h(X) \\
1 & \text{otherwise}
\end{cases}
$$
In general, the loss function that we care about cannot be optimized efficiently. For example, the $0$-$1$ loss function is discontinuous. So, we consider another loss function that will make our life easier, which we call the surrogate loss function.
An example of a surrogate loss function is the hinge loss used in SVMs, $\psi(h(x), y) = \max(1 - y\,h(x), 0)$ for labels $y \in \{-1, +1\}$, which is convex and easy to optimize using conventional methods. This function acts as a proxy for the actual loss we wanted to minimize in the first place. Obviously, it has its disadvantages, but in some cases a surrogate loss function actually results in being able to learn more. By this, I mean that once your classifier achieves optimal risk (i.e. highest accuracy), you can still see the loss decreasing, which means that it is trying to push the different classes even further apart to improve its robustness.
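The contrast between the two losses can be made concrete (NumPy, labels in $\{-1, +1\}$, and the margin form of the hinge loss):

```python
import numpy as np

def zero_one_loss(scores, labels):
    """Average 0-1 loss: 1 when sign(h(x)) disagrees with the +/-1 label."""
    return float(np.mean(np.sign(scores) != labels))

def hinge_loss(scores, labels):
    """Average hinge loss max(0, 1 - y * h(x)): a convex surrogate."""
    return float(np.mean(np.maximum(0.0, 1.0 - labels * scores)))

labels = np.array([1, 1, -1, -1])
scores = np.array([2.0, 0.5, -2.0, 0.3])   # last point is misclassified

# The hinge loss penalizes the misclassified point (contribution 1.3) *and*
# the correctly classified point sitting inside the margin (contribution 0.5),
# which is what keeps pushing the classes apart after accuracy is maximal.
```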
|
14,034
|
What is a surrogate loss function?
|
On a very general note, this function is used to penalize misclassifications.
In the end, your aim is to classify the data into the correct classes and to evaluate your results. To train the model you choose a loss function, most frequently mean squared error. But with MSE the loss may not reflect the true accuracy of the classifier. So we would like a loss function (like the 0-1 loss function) which gives an error of 1 if the class is wrong and 0 if the prediction is right. A convex surrogate for it is used in SVMs and is called the hinge loss.
But in broader terms, if you look at the formula
$$\sum_i \max\left(0,\; 1 - y^{(i)}\left(w^\top x^{(i)} + b\right)\right)$$
it essentially does the same thing. You may want to read more about how L1 and L2 regularization come into the picture, but intuitively that is what I understood.
|
14,035
|
Doing correct statistics in a working environment?
|
In a nutshell, you're right and he's wrong. The tragedy of data analysis is that a lot of people do it, but only a minority of people do it well, partly due to a weak education in data analysis and partly due to apathy. Turn a critical eye to most any published research article that doesn't have a statistician or a machine-learning expert on the author list and you'll quickly spot such elementary mistakes as interpreting $p$-values as the probability that the null hypothesis is true.
I think the only thing to do, when confronted with this kind of situation, is to carefully explain what's wrong about the wrongheaded practice, with an example or two.
|
14,036
|
Doing correct statistics in a working environment?
|
Kodiologist is right - you're right, he's wrong. However, sadly, this is an even more commonplace problem than what you're encountering. You're actually in an industry that's doing relatively well.
For example, I currently work in a field where specifications on products need to be set. This is nearly always done by monitoring the products/processes in some way and recording means and standard deviations - then using good old $\text{mean} + 3\sigma$.
Now, apart from the fact that this confidence interval is not telling them what they actually need (they need a tolerance interval for that), this is done blindly on parameters that are hovering near some maximum or minimum value (but where the interval won't actually exceed those values). Because Excel will calculate what they need (yes, I said Excel), they set their specs according to that, despite the fact that the parameter is not going to be anywhere near normally distributed. These people have been taught basic statistics, but not q-q plots or the like. One of the biggest problems is that stats will give you a number, even when used inappropriately - so most people don't know when they have done so.
In other words, the specifications on the vast majority of products, in the vast majority of industries, are nonsense.
One of the worst examples I have of people blindly following statistics, without understanding, is Cpk use in the automotive industry. One company spent about a year arguing over a product with their supplier, because they thought the supplier could control their product to a level that was simply not possible. They were setting only a maximum spec (no minimum) on a parameter and used Cpk to justify their claim - until it was pointed out that their calculations (when used to set a theoretical minimum level - they didn't want that so had not checked) implied a massive negative value. This, on a parameter that could never go less than 0. Cpk assumes normal, the process didn't give anywhere near normal data. It took a long time to get that to sink in. All that wasted time and money because people didn't understand what they were calculating - and it could have been a lot worse had it not been noticed. This might be a contributing factor to why there are regular recalls in the automotive industry!
I, myself, come from a science background, and, frankly, the statistics teaching in science and engineering is shockingly insufficient. I'd never heard of most of what I need to use now - it's all been self taught and there are (compared to a proper statistician) massive gaps in my knowledge even now. For that reason, I don't begrudge people misusing statistics (I probably still do it regularly), it's poor education.
So, going back to your original question, it's really not easy. I would agree with Kodiologist's recommendation to try to gently explain these things so the right statistics are used. But, I would add an extra caveat to that and also advise you to pick your battles wisely, for the sake of your career.
It's unfortunate, but it's a fact that you won't be able to get everyone to do the best statistics every time. Choose to correct them when it really matters to the final overall conclusion (which sometimes means doing things two different ways to check). There are times (e.g. your model 1,2 example) where using the "wrong" way might lead to the same conclusions. Avoid correcting too many people too frequently.
I know that's intellectually frustrating and the world should work differently - sadly it doesn't. To a degree you'll have to learn to judge your battles based on your colleagues' individual personalities. Your (career) goal is to be the expert they go to when they really need help, not the picky person always trying to correct them. And, in fact, if you become that person, that's probably where you'll have the most success getting people to listen and do things the right way. Good luck.
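The Cpk anecdote above is easy to reproduce: apply $\text{mean} - 3\sigma$ blindly to a skewed, strictly positive quantity and the implied lower limit goes negative (simulated lognormal data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)   # strictly positive

lower = float(x.mean() - 3 * x.std())
# The naive normal-theory lower limit is well below zero, even though
# every single observation is positive: the normality assumption,
# not the data, produced the impossible specification.
```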
|
14,037
|
Doing correct statistics in a working environment?
|
What is described appears like a somewhat bad experience. Nevertheless it should not be something that causes one to immediately question their own educational background nor the statistical judgement of their supervisor/manager.
Yes, very, very likely you are correct to suggest using CV instead of $R^2$ for model selection, for example. But you need to find out why this (potentially dodgy) methodology came to be, see how it is hurting the company down the line, and then offer solutions for that pain. Nobody wants to use a wrong methodology consciously unless there are reasons to do so.
Saying that something is wrong (which might very well be) and not showing how the mistake affects your actual work, rather than the asymptotic behaviour somewhere in the future, does not mean much. People will be reluctant to accept it; why spend energy to change when everything is (somewhat) working?
Your manager is not necessarily wrong from a business perspective. He is responsible for the statistical as well as the business decisions of your department; those decisions do not always coincide, and quite likely do not coincide on short-term deliverables (time constraints are a very important factor in industry data analytics).
My advice is to stick to your (statistical) guns but be open to what people do, be patient with people who might be detached from new statistical practices, offer advice/opinions when asked, grow a thicker skin and learn from your environment. If you are doing the right stuff, this will slowly show; people will want your opinion because they will recognise you can offer solutions where their current workflow does not. Finally, yeah, sure, if after a reasonable amount of time (a couple of months at least) you feel that you are devalued and disrespected, just move on.
It goes without saying that now that you are in the industry you cannot sit back and think you do not need to hone your statistics education. Predictive modelling, regression strategies and clustering algorithms just keep evolving. For example, using Gaussian process regression in an industrial setting was close to science fiction 10 years ago; now it can be seen almost as an off-the-shelf thing to try.
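To illustrate how off-the-shelf this has become: the core of GP regression - the posterior mean under a squared-exponential kernel - is a few lines of NumPy (a bare sketch with fixed hyperparameters; libraries such as scikit-learn add the tuning and uncertainty machinery):

```python
import numpy as np

def rbf(a, b, length_scale=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length_scale ** 2)

def gp_mean(x_train, y_train, x_test, noise=1e-4, length_scale=1.0):
    """Posterior mean of a zero-mean GP with an RBF kernel."""
    K = rbf(x_train, x_train, length_scale) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train, length_scale) @ np.linalg.solve(K, y_train)

x_train = np.linspace(0.0, 3.0, 15)
y_train = np.sin(x_train)
pred = gp_mean(x_train, y_train, x_train)   # near-interpolation of smooth data
```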
|
Doing correct statistics in a working environment?
|
What is described appears like a somewhat bad experience. Nevertheless it should not be something that causes one to immediately question their own educational background nor the statistical judgement
|
Doing correct statistics in a working environment?
What is described appears like a somewhat bad experience. Nevertheless it should not be something that causes one to immediately question their own educational background nor the statistical judgement of their supervisor/manager.
Yes, very, very likely you are correct to suggest using CV instead of $R^2$ for model selection for example. But you need to find why this (potentially dodgy) methodology came to be, see how is this hurting the company down the line and then offer solutions for that pain. Nobody wants to use a wrong methodology consciously unless they are reasons to do so.
Saying that something is wrong (which might very well be) and not showing how the mistake affects your actual work, rather than the asymptotic behaviour somewhere in the future, does not mean much. People will be reluctant to accept it; why spend energy to change when everything is (somewhat) working?
Your manager is not necessarily wrong from a business perspective. He is responsible for the statistical as well as the business decisions of your department; those decision do not necessarily coincide always and quite likely do not coincide on short-term deliverables (time constraints are a very important factor in industry data analytics).
My advise is to stick to your (statistical) guns but be open to what people do, be patient with people that might be detached from new statistical practices and offer advice/opinions when asked, grow a thicker skin and learn from your environment. If you are doing the right stuff, this will slowly show, people will want your opinion because they will recognise you can offer solutions where their current work-flow does not. Finally, yeah sure, if after a reasonable amount of time (a couple of months at least) you feel that you are devalued and disrespected just move on.
It goes without saying that now that you are in industry you cannot sit back and think you do not need to hone your Statistics education. Predictive modelling, regression strategies and clustering algorithms just keep evolving. For example, using Gaussian Process Regression in an industrial setting was close to science fiction 10 years ago; now it can be seen almost as an off-the-shelf thing to try.
|
14,038
|
Linear regression and non-invertibility
|
What you really want to solve is $$X^T X\beta = X^TY.$$
This equation has a single solution if $X^TX$ is invertible (non-singular). If it's not, you have more solutions. You then need to analyze why, i.e. there will be something about $X$ which makes $X^TX$ singular. In mathematical terms, the columns of $X$ are linearly dependent. In econometric terms, there are multi-collinearities.
I don't know for certain about your particular problem, but I doubt that the method you employ to minimize the cost function has any bearing on the invertibility of $X^T X$. The problem is in the specification.
There exist methods for analyzing which columns are linearly dependent/multi-collinear, and in which way. They may be helpful for finding specification errors. In econometrics these are typically resolved by exhibiting suitable estimable functions, or by changing the specification of your original problem. You should search the literature for some of these concepts to find something which suits your problem and prior knowledge.
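As an illustration (a numpy sketch with a deliberately collinear, made-up design; not part of the original answer), here is how a multi-collinearity shows up as rank deficiency of $X^TX$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Design matrix with a built-in multi-collinearity:
# the last column is the sum of the two regressors.
n = 50
x1 = rng.normal(size=n)
x2 = rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2, x1 + x2])
y = 1.0 + 2.0 * x1 - x2 + rng.normal(scale=0.1, size=n)

# X has 4 columns but rank 3, so X^T X is singular.
rank_XtX = np.linalg.matrix_rank(X.T @ X)

# lstsq still returns *a* least-squares solution; the reported
# rank flags the linear dependence among the columns.
beta, _, rank_X, _ = np.linalg.lstsq(X, y, rcond=None)
print(rank_XtX, rank_X)
```

Comparing the reported rank with the number of columns is a quick way to detect that the specification, not the optimizer, is the problem.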
|
14,039
|
Linear regression and non-invertibility
|
We can still obtain a solution to the least squares equations
$$X^TX\beta=X^TY$$ even if $X^TX$ is singular (so that $(X^TX)^{-1}$ does not exist). However, the solution will not be unique. We can get around the problem of $X^TX$ being singular by using generalized inverses to solve the problem.
A matrix $A^g$ is a generalized inverse of the matrix $A$ if and only if it satisfies $AA^gA=A$.
So, using the definition of a generalized inverse, we can write a solution to the least squares equation as
$$\beta=(X^TX)^gX^TY$$
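A quick numerical check (an illustrative numpy sketch with made-up data, not from the original answer) that the Moore-Penrose pseudoinverse is one such generalized inverse and yields a solution of the normal equations even when $X^TX$ is singular:

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient design: the last column duplicates the first.
X = rng.normal(size=(30, 3))
X = np.column_stack([X, X[:, 0]])
y = rng.normal(size=30)

A = X.T @ X                    # 4x4 but only rank 3, hence singular
Ag = np.linalg.pinv(A)         # the Moore-Penrose inverse is a generalized inverse

# The defining property A A^g A = A holds.
assert np.allclose(A @ Ag @ A, A)

# beta = (X^T X)^g X^T y solves the normal equations (non-uniquely).
beta = Ag @ X.T @ y
assert np.allclose(A @ beta, X.T @ y)
```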
|
14,040
|
Linear regression and non-invertibility
|
Use the Moore-Penrose inverse! It's usually the "best" generalized inverse: the resulting solution minimizes the sum of squared residuals (which is what you want if you assume Gaussian noise in $Y$), and among all least-squares solutions it is the unique one of minimum norm. It is what your gradient descent should converge to (when started from zero), but because the loss is quadratic it can also be solved directly (e.g. using the SVD).
Also look into Weighted Least Squares and Generalized Least Squares for immediate generalizations to handling data with more complicated variances/correlations.
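A small numpy sketch of the minimum-norm property on a rank-deficient design (illustrative synthetic data, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(2)

# Rank-deficient design: column 3 = column 1 + column 2.
Z = rng.normal(size=(40, 2))
X = np.column_stack([Z, Z.sum(axis=1)])
y = rng.normal(size=40)

beta_pinv = np.linalg.pinv(X) @ y
beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose(beta_pinv, beta_lstsq)   # lstsq also returns the min-norm solution

# Any null-space direction gives another solution with identical residuals
# but a larger norm: (1, 1, -1) is in the null space of this X.
other = beta_pinv + np.array([1.0, 1.0, -1.0])
assert np.allclose(X @ other, X @ beta_pinv)
assert np.linalg.norm(other) > np.linalg.norm(beta_pinv)
```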
|
14,041
|
Interpreting LASSO variable trace plots
|
In both plots, each colored line represents the value taken by a different coefficient in your model. Lambda is the weight given to the regularization term (the L1 norm), so as lambda approaches zero, the loss function of your model approaches the OLS loss function. Here's one way you could specify the LASSO loss function to make this concrete:
$$\beta_{\text{lasso}} = \operatorname{argmin}_{\beta} \left[ \text{RSS}(\beta) + \lambda \, \|\beta\|_1 \right]$$
Therefore, when lambda is very small, the LASSO solution should be very close to the OLS solution, and all of your coefficients are in the model. As lambda grows, the regularization term has greater effect and you will see fewer variables in your model (because more and more coefficients will be zero valued).
As I mentioned above, the L1 norm is the regularization term for LASSO. Perhaps a better way to look at it is that the x-axis is the maximum permissible value the L1 norm can take. So when you have a small L1 norm, you have a lot of regularization. Therefore, an L1 norm of zero gives an empty model, and as you increase the L1 norm, variables will "enter" the model as their coefficients take non-zero values.
The plot on the left and the plot on the right are basically showing you the same thing, just on different scales.
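To make the coefficient paths concrete, here is a small numpy illustration (made-up data, not from the original answer): for an orthonormal design the LASSO solution of the equivalent objective $\tfrac12 \text{RSS}(\beta) + \lambda\|\beta\|_1$ is just soft-thresholding of the OLS coefficients, so you can watch coefficients hit exactly zero as lambda grows:

```python
import numpy as np

rng = np.random.default_rng(3)

# Orthonormal design (X^T X = I), where the lasso solution has a closed form:
# soft-thresholding of the OLS coefficients.
Q, _ = np.linalg.qr(rng.normal(size=(100, 4)))
X = Q
beta_true = np.array([3.0, 1.5, 0.5, 0.0])
y = X @ beta_true + rng.normal(scale=0.1, size=100)

beta_ols = X.T @ y  # the OLS solution when X^T X = I

def lasso_orthonormal(b_ols, lam):
    # Coordinate-wise soft-thresholding.
    return np.sign(b_ols) * np.maximum(np.abs(b_ols) - lam, 0.0)

for lam in [0.0, 1.0, 2.2]:
    b = lasso_orthonormal(beta_ols, lam)
    print(lam, np.count_nonzero(b))
# As lambda grows, more coefficients are exactly zero:
# the variables "leave" the model one by one.
```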
|
14,042
|
Normalization prior to cross-validation
|
To answer your main question, it would be optimal and more appropriate to scale within the CV. But it will probably not matter much and might not be important in practice at all if your classifier rescales the data, which most do (at least in R).
However, selecting features before cross-validating is a BIG NO and will lead to overfitting, since you will select them based on how they perform on the whole data set. The log-transformation is ok to perform outside, since the transformation does not depend on the actual data (rather on the type of data), is something you would do the same way if you had only 90% of the data instead of 100%, and is not tweaked according to the data.
To also answer your comment, obviously whether it will result in overfitting will depend on your manner of feature selection. If you choose them by chance (why would you do that?) or because of a priori theoretical considerations (other literature) it won't matter. But if it depends on your data set it will. The Elements of Statistical Learning has a good explanation. You can freely and legally download a .pdf here http://www-stat.stanford.edu/~tibs/ElemStatLearn/
The point concerning you is in section 7.10.2 on page 245 of the fifth printing. It is titled "The Wrong and Right Ways to do Cross-validation".
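A minimal sketch of the "right way" (illustrative numpy code with made-up data, not from the original answer): the scaling parameters are re-estimated inside every fold, on the training part only, and then applied unchanged to the held-out part:

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.normal(loc=5.0, scale=2.0, size=(100, 3))   # made-up features
y = (X[:, 0] > 5.0).astype(int)                     # made-up labels

k = 10
idx = rng.permutation(len(X))
folds = np.array_split(idx, k)

for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # Estimate mean/sd on the training part of the fold ONLY ...
    mu = X[train_idx].mean(axis=0)
    sd = X[train_idx].std(axis=0)
    # ... and apply the SAME transformation to both parts.
    X_train = (X[train_idx] - mu) / sd
    X_test = (X[test_idx] - mu) / sd
    # fit the classifier on (X_train, y[train_idx]) and
    # evaluate it on (X_test, y[test_idx]) here
```

The held-out fold never contributes to the scaling parameters, so the performance estimate stays honest.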
|
14,043
|
Normalization prior to cross-validation
|
Cross-validation is best viewed as a method to estimate the performance of a statistical procedure, rather than a statistical model. Thus in order to get an unbiased performance estimate, you need to repeat every element of that procedure separately in each fold of the cross-validation, which would include normalisation. So I would say normalise in each fold.
The only time this would not be necessary is if the statistical procedure was completely insensitive to the scaling and mean value of the data.
|
14,044
|
Normalization prior to cross-validation
|
I think that if the normalization only involves two parameters and you have a good-sized sample, that will not be a problem. I would be more concerned about the transformation and the variable selection process. 10-fold cross-validation seems to be all the rage today. Doesn't anybody use bootstrap 632 or 632+ for classifier error rate estimation, as suggested first by Efron (1983) in JASA and followed up later in a paper by Efron and Tibshirani with the 632+?
|
14,045
|
Normalization prior to cross-validation
|
I personally like the .632 method, which is basically bootstrapping with replacement. If you do that and remove duplicates you will end up with about 632 unique entries out of an input set of 1000. Kind of neat.
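The 632 figure is just $n(1 - 1/e)$: drawing $n$ samples with replacement leaves each original point out with probability $(1-1/n)^n \approx e^{-1}$. A quick simulation (illustrative, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000

# Average number of unique entries over repeated bootstrap draws.
uniques = [np.unique(rng.integers(0, n, size=n)).size for _ in range(200)]
frac = np.mean(uniques) / n
print(frac)   # close to 1 - 1/e ≈ 0.632
```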
|
14,046
|
Applying the "kernel trick" to linear methods?
|
The kernel trick can only be applied to linear models where the examples appear in the problem formulation only through dot products (Support Vector Machines, PCA, etc.).
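To see what "appearing as dot products" buys you, here is a tiny numpy check (illustrative, not from the original answer): the polynomial kernel $k(x,z) = (x^\top z)^2$ equals an ordinary dot product in an explicit quadratic feature space, without that space ever being constructed:

```python
import numpy as np

rng = np.random.default_rng(6)

def phi(x):
    # Explicit degree-2 feature map for 2-d input:
    # (x.z)^2 = phi(x).phi(z)
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x, z = rng.normal(size=2), rng.normal(size=2)
kernel = (x @ z) ** 2          # evaluated without ever forming phi
explicit = phi(x) @ phi(z)     # evaluated in the expanded feature space
assert np.allclose(kernel, explicit)
```

Any linear method that touches the data only through such dot products can swap in the kernel and work in the expanded space implicitly.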
|
14,047
|
Applying the "kernel trick" to linear methods?
|
Two further references from B. Schölkopf:
Schölkopf, B. and Smola, A.J. (2002). Learning with kernels. The MIT Press.
Schölkopf, B., Tsuda, K., and Vert, J.-P. (2004). Kernel methods in computational biology. The MIT Press.
and a website dedicated to kernel machines.
|
14,048
|
Applying the "kernel trick" to linear methods?
|
@ebony1 gives the key point (+1), I was a co-author of a paper discussing how to kernelize generalised linear models, e.g. logistic regression and Poisson regression, it is pretty straightforward.
G. C. Cawley, G. J. Janacek and N. L. C. Talbot, Generalised kernel machines, in Proceedings of the IEEE/INNS International Joint Conference on Neural Networks (IJCNN-2007), pages 1732-1737, Orlando, Florida, USA, August 12-17, 2007.
I also wrote a (research quality) MATLAB toolbox (sadly no instructions), which you can find here.
Being able to model the target distribution is pretty useful in uncertainty quantification etc. so it is a useful (if fairly incremental) addition to kernel learning methods.
|
14,049
|
How is MANOVA related to LDA?
|
In a nutshell
Both one-way MANOVA and LDA start with decomposing the total scatter matrix $\mathbf T$ into the within-class scatter matrix $\mathbf W$ and between-class scatter matrix $\mathbf B$, such that $\mathbf T = \mathbf W + \mathbf B$. Note that this is fully analogous to how one-way ANOVA decomposes total sum-of-squares $T$ into within-class and between-class sums-of-squares: $T=B+W$. In ANOVA a ratio $B/W$ is then computed and used to find the p-value: the bigger this ratio, the smaller the p-value. MANOVA and LDA compose an analogous multivariate quantity $\mathbf W^{-1} \mathbf B$.
From here on they are different. The sole purpose of MANOVA is to test if the means of all groups are the same; this null hypothesis would mean that $\mathbf B$ should be similar in size to $\mathbf W$. So MANOVA performs an eigendecomposition of $\mathbf W^{-1} \mathbf B$ and finds its eigenvalues $\lambda_i$. The idea is now to test if they are big enough to reject the null. There are four common ways to form a scalar statistic out of the whole set of eigenvalues $\lambda_i$ (Wilks' lambda, Pillai's trace, the Hotelling-Lawley trace, and Roy's largest root). One way is to take the sum of all eigenvalues (the Hotelling-Lawley trace). Another way is to take the maximal eigenvalue (Roy's largest root). In each case, if the chosen statistic is big enough, the null hypothesis is rejected.
In contrast, LDA performs eigendecomposition of $\mathbf W^{-1} \mathbf B$ and looks at the eigenvectors (not eigenvalues). These eigenvectors define directions in the variable space and are called discriminant axes. Projection of the data onto the first discriminant axis has highest class separation (measured as $B/W$); onto the second one -- second highest; etc. When LDA is used for dimensionality reduction, the data can be projected e.g. on the first two axes, and the remaining ones are discarded.
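The shared machinery can be sketched in a few lines of numpy (illustrative synthetic data, not from the original answer): build $\mathbf W$ and $\mathbf B$, check $\mathbf T = \mathbf W + \mathbf B$, and eigendecompose $\mathbf W^{-1}\mathbf B$; MANOVA works with the eigenvalues, LDA with the eigenvectors:

```python
import numpy as np

rng = np.random.default_rng(7)

# Three groups of 50 points in two dimensions (synthetic data).
means = np.array([[0.0, 0.0], [2.0, 1.0], [1.0, 3.0]])
X = np.vstack([m + rng.normal(size=(50, 2)) for m in means])
g = np.repeat([0, 1, 2], 50)

grand = X.mean(axis=0)
T = (X - grand).T @ (X - grand)          # total scatter

W = np.zeros((2, 2))
B = np.zeros((2, 2))
for c in range(3):
    Xc = X[g == c]
    mc = Xc.mean(axis=0)
    W += (Xc - mc).T @ (Xc - mc)                      # within-class scatter
    B += len(Xc) * np.outer(mc - grand, mc - grand)   # between-class scatter

assert np.allclose(T, W + B)             # the basic decomposition T = W + B

# MANOVA uses the eigenvalues of W^{-1} B, LDA uses the eigenvectors.
evals, evecs = np.linalg.eig(np.linalg.inv(W) @ B)
evals, evecs = evals.real, evecs.real
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]
print(evals)   # the first (larger) eigenvalue drives both the test and the first axis
```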
See also an excellent answer by @ttnphns in another thread which covers almost the same ground.
Example
Let us consider a one-way case with $M=2$ dependent variables and $k=3$ groups of observations (i.e. one factor with three levels). I will take the well-known Fisher's Iris dataset and consider only sepal length and sepal width (to make it two-dimensional). Here is the scatter plot:
We can start with computing ANOVAs with both sepal length/width separately. Imagine data points projected vertically or horizontally on the x and y axes, and 1-way ANOVA performed to test if three groups have same means. We get $F_{2,147}=119$ and $p=10^{-31}$ for sepal length, and $F_{2,147}=49$ and $p=10^{-17}$ for sepal width. Okay, so my example is pretty bad as three groups are significantly different with ridiculous p-values on both measures, but I will stick to it anyway.
Now we can perform LDA to find an axis that maximally separates three clusters. As described above, we compute full scatter matrix $\mathbf{T}$, within-class scatter matrix $\mathbf{W}$ and between-class scatter matrix $\mathbf{B}=\mathbf{T}-\mathbf{W}$ and find eigenvectors of $\mathbf{W}^{-1}\mathbf{B}$. I can plot both eigenvectors on the same scatterplot:
Dashed lines are discriminant axes. I plotted them with arbitrary lengths, but the longer axis shows the eigenvector with larger eigenvalue (4.1) and the shorter one --- the one with smaller eigenvalue (0.02). Note that they are not orthogonal, but the mathematics of LDA guarantees that the projections on these axes have zero correlation.
If we now project our data on the first (longer) discriminant axis and then run the ANOVA, we get $F=305$ and $p=10^{-53}$, which is lower than before, and is the lowest possible value among all linear projections (that was the whole point of LDA). The projection on the second axis gives only $p=10^{-5}$.
If we run MANOVA on the same data, we compute the same matrix $\mathbf{W}^{-1}\mathbf{B}$ and look at its eigenvalues in order to compute the p-value. In this case the larger eigenvalue is equal to 4.1, which is equal to $B/W$ for ANOVA along the first discriminant (indeed, $F=B/W \cdot (N-k)/(k-1) = 4.1\cdot 147/2 = 305$, where $N=150$ is the total number of data points and $k=3$ is the number of groups).
There are several commonly used statistical tests that calculate p-value from the eigenspectrum (in this case $\lambda_1=4.1$ and $\lambda_2=0.02$) and give slightly different results. MATLAB gives me the Wilks' test, which reports $p=10^{-55}$. Note that this value is lower than what we had before with any ANOVA, and the intuition here is that MANOVA's p-value "combines" two p-values obtained with ANOVAs on two discriminant axes.
Is it possible to get the opposite situation: a higher p-value with MANOVA? Yes, it is. For this we need a situation where only one discriminant axis gives significant $F$, and the second one does not discriminate at all. I modified the above dataset by adding seven points with coordinates $(8,4)$ to the "green" class (the big green dot represents these seven identical points):
The second discriminant axis is gone: its eigenvalue is almost zero. ANOVAs on two discriminant axes give $p=10^{-55}$ and $p=0.26$. But now MANOVA reports only $p=10^{-54}$, which is a bit higher than ANOVA. The intuition behind it is (I believe) that MANOVA's increase of p-value accounts for the fact that we fitted the discriminant axis to get the minimum possible value and corrects for possible false positives. More formally one would say that MANOVA consumes more degrees of freedom. Imagine that there are 100 variables, and only along $\sim 5$ directions one gets $p\approx0.05$ significance; this is essentially multiple testing and those five cases are false positives, so MANOVA will take it into account and report an overall non-significant $p$.
MANOVA vs LDA as machine learning vs. statistics
This seems to me now one of the exemplary cases of how differently the machine learning community and the statistics community approach the same thing. Every textbook on machine learning covers LDA, shows nice pictures etc. but would never even mention MANOVA (e.g. Bishop, Hastie and Murphy). Probably because people there are more interested in LDA classification accuracy (which roughly corresponds to the effect size), and have no interest in statistical significance of group difference. On the other hand, textbooks on multivariate analysis would discuss MANOVA ad nauseam, provide lots of tabulated data (arrrgh) but rarely mention LDA and even more rarely show any plots (e.g. Anderson, or Harris; however, Rencher & Christensen do, and Huberty & Olejnik is even called "MANOVA and Discriminant Analysis").
Factorial MANOVA
Factorial MANOVA is much more confusing, but is interesting to consider because it differs from LDA in a sense that "factorial LDA" does not really exist, and factorial MANOVA does not directly correspond to any "usual LDA".
Consider balanced two-way MANOVA with two factors (or independent variables, IVs). One factor (factor A) has three levels, and another factor (factor B) has two levels, making $3\cdot 2=6$ "cells" in the experimental design (using ANOVA terminology). For simplicity I will only consider two dependent variables (DVs):
On this figure all six "cells" (I will also call them "groups" or "classes") are well-separated, which of course rarely happens in practice. Note that it is obvious that there are significant main effects of both factors here, and also significant interaction effect (because the upper-right group is shifted to the right; if I moved it to its "grid" position, then there would be no interaction effect).
How do MANOVA computations work in this case?
First, MANOVA computes pooled within-class scatter matrix $\mathbf W$. But the between-class scatter matrix depends on what effect we are testing. Consider between-class scatter matrix $\mathbf B_A$ for factor A. To compute it, we find the global mean (represented in the figure by a star) and the means conditional on the levels of factor A (represented in the figure by three crosses). We then compute the scatter of these conditional means (weighted by the number of data points in each level of A) relative to the global mean, arriving to $\mathbf B_A$. Now we can consider a usual $\mathbf W^{-1} \mathbf B_A$ matrix, compute its eigendecomposition, and run MANOVA significance tests based on the eigenvalues.
For the factor B, there will be another between-class scatter matrix $\mathbf B_B$, and analogously (a bit more complicated, but straightforward) there will be yet another between-class scatter matrix $\mathbf B_{AB}$ for the interaction effect, so that in the end the total scatter matrix is decomposed into a neat $$\mathbf T = \mathbf B_A + \mathbf B_B + \mathbf B_{AB} + \mathbf W.$$ [Note that this decomposition works only for a balanced dataset with the same number of data points in each cluster. For unbalanced dataset, $\mathbf B$ cannot be uniquely decomposed into a sum of three factor contributions because the factors are not orthogonal anymore; this is similar to the discussion of Type I/II/III SS in ANOVA.]
Now, our main question here is how MANOVA corresponds to LDA. There is no such thing as "factorial LDA". Consider factor A. If we wanted to run LDA to classify levels of factor A (forgetting about factor B altogether), we would have the same between-class $\mathbf B_A$ matrix, but a different within-class scatter matrix $\mathbf W_A=\mathbf T - \mathbf B_A$ (think of merging together two little ellipsoids in each level of factor A on my figure above). The same is true for other factors. So there is no "simple LDA" that directly corresponds to the three tests that MANOVA runs in this case.
However, of course nothing prevents us from looking at the eigenvectors of $\mathbf W^{-1} \mathbf B_A$, and from calling them "discriminant axes" for factor A in MANOVA.
|
14,050
|
How to show that an estimator is consistent?
|
EDIT: Fixed minor mistakes.
Here's one way to do it:
An estimator of $\theta$ (let's call it $T_n$) is consistent if it converges in probability to $\theta$. Using your notation
$\mathrm{plim}_{n\rightarrow\infty}T_n = \theta $.
Convergence in probability, mathematically, means
$\lim\limits_{n\rightarrow\infty} P(|T_n - \theta|\geq \epsilon)= 0$ for all $\epsilon>0$.
The easiest way to show convergence in probability/consistency is to invoke Chebyshev's Inequality, which states:
$P((T_n - \theta)^2\geq \epsilon^2)\leq \frac{E(T_n - \theta)^2}{\epsilon^2}$.
Thus,
$P(|T_n - \theta|\geq \epsilon)=P((T_n - \theta)^2\geq \epsilon^2)\leq \frac{E(T_n - \theta)^2}{\epsilon^2}$.
And so you need to show that $E(T_n - \theta)^2$ goes to 0 as $n\rightarrow\infty$.
EDIT 2: The above requires that the estimator is at least asymptotically unbiased. As G. Jay Kerns points out, consider the estimator $T_n = \bar{X}_n+3$ (for estimating the mean $\mu$). $T_n$ is biased both for finite $n$ and asymptotically, and $\mathrm{Var}(T_n)=\mathrm{Var}(\bar{X}_n)\rightarrow 0$ as $n\rightarrow \infty$. However, $T_n$ is not a consistent estimator of $\mu$.
EDIT 3: See cardinal's points in the comments below.
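A hedged Monte Carlo sketch in Python illustrating both points: for the sample mean, the estimated $E(T_n-\theta)^2$ shrinks roughly like $1/n$, while for the counterexample $\bar X_n + 3$ it stays near $9$ (all numbers here are simulation choices, not from the original):

```python
import random

random.seed(1)
mu = 0.0  # true mean; observations are N(mu, 1)

def mse(estimator, n, reps=1000):
    """Monte Carlo estimate of E (T_n - mu)^2."""
    total = 0.0
    for _ in range(reps):
        xs = [random.gauss(mu, 1.0) for _ in range(n)]
        total += (estimator(xs) - mu) ** 2
    return total / reps

def xbar(xs):
    return sum(xs) / len(xs)

def shifted(xs):          # the counterexample: xbar + 3
    return xbar(xs) + 3.0

for n in (10, 100, 1000):
    print(n, mse(xbar, n), mse(shifted, n))
```

The first column of MSEs goes to zero with $n$ (consistency via the Chebyshev argument), while the second converges to the squared bias $9$, so $\bar X_n + 3$ is not consistent even though its variance vanishes.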
|
14,051
|
Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
|
I was also going to point to Deborah Mayo's work as linked in a comment. She is a Popper-influenced philosopher who has written a lot about statistical testing.
I'll try to address the questions.
(1a) Popper didn't think of statistical testing as formalising his approach at all. Mayo states that this is because Popper was not expert enough in statistics, but also he probably wouldn't have allowed for an error probability of 5% or 1% as "falsification" (Mayo may also have mentioned this somewhere, but I don't remember).
(1b) There are different approaches for picking the null and alternative hypotheses. In some applications, the null hypothesis is a precise scientific theory of interest, and we check whether the data falsify it. This would be in line with Popper (at least if he allowed for nonzero error probabilities). In some other approaches (in many areas this is found much more often), the null hypothesis formalises the idea that "nothing meaningful is going on", and the alternative is of actual scientific interest. This would not be in line with Popper. (Also, the alternative is not normally specified precisely enough to imply conditions for falsification, even statistical falsification.)
(2) According to the standard logic of statistical tests, the null hypothesis can be statistically (i.e. with error probability) falsified, but not the alternative. There is a possibility to argue that an alternative is statistically falsified, but this basically amounts to running tests the other way round. For example, if you have a $H_0:\ \mu=0$ and an alternative $\mu\neq 0$, you cannot falsify the alternative (as it allows for $\mu$ arbitrarily close to 0, which cannot be distinguished by data from $\mu=0$), but you could state that a meaningful deviation from $\mu=0$ would actually be $|\mu|\ge 2$, and in this case you may reject $|\mu|\ge 2$ in case $\bar x$ is very close to zero. This makes sense if the power of the original test for $|\mu|\ge 2$ is large enough that in that case "$\bar x$ close to zero" would be very unlikely. (This is related to Mayo's concept of "severity"; in such a case we can say that $|\mu|<2$ "with severity".) We could also then say that we have "statistically falsified" $|\mu|\ge 2$.
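Point (2) can be made concrete with a small hedged calculation (Python; all numbers hypothetical: $n=25$, known $\sigma=1$, so the standard error is $0.2$, and an observed $\bar x = 0.05$). Under $|\mu|\ge 2$, an $\bar x$ this close to zero is essentially impossible:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

n, sigma = 25, 1.0          # hypothetical sample size and known sd
se = sigma / sqrt(n)        # standard error of the mean, 0.2
xbar = 0.05                 # hypothetical observed mean, close to 0

# P(|Xbar| <= |xbar|) if the true mean were mu = 2 (worst case of |mu| >= 2)
p = phi((abs(xbar) - 2.0) / se) - phi((-abs(xbar) - 2.0) / se)
print(p)  # astronomically small -> |mu| >= 2 is "statistically falsified"
```

Since this probability is negligible for every $\mu$ with $|\mu|\ge 2$, observing such an $\bar x$ lets us say $|\mu|<2$ holds "with severity".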
(3) This is indeed a philosophical question, and I have seen arguments in either direction.
|
14,052
|
Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
|
Too long for comments, so here are my thoughts.
Null Hypothesis Statistical Testing (NHST) is only Popperian in the sense that no amount of corroboration proves a hypothesis correct, so often the best you can do is to find out what you can reasonably reject and continue on with hypotheses that have survived the tests thrown at them so far.
Firstly, we should avoid talking of falsifying the null hypothesis, and should stick to "reject" or "do not reject". Being able to reject the null hypothesis does not mean that we have shown it to be false, just that the observations are unlikely under that hypothesis. The observations may be even more unlikely under the alternative hypothesis! The classic example is a detector of an extremely rare event that occasionally raises a false alarm.
In this case the null hypothesis is almost certainly true: even though we have rejected it, the detector was almost certainly giving a random false alarm. This is because the alternative hypothesis was even more unlikely to be true than the null hypothesis, because the prior probability of H1 was vastly smaller than that of H0, but the NHST does not take that into account. This is an example where rejecting the null hypothesis is not a failed falsification of the alternative hypothesis.
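A hedged back-of-the-envelope Bayes calculation (Python; the numbers are hypothetical, not from the original example) shows how a tiny prior on H1 dominates a small false-alarm rate:

```python
# Hypothetical numbers: the event (H1) has prior probability 1e-6,
# the detector's false-alarm rate is 1/36 (~0.028, "significant" at 5%),
# and it always alarms when the event really happens.
p_h1 = 1e-6                 # prior P(H1)
p_alarm_h0 = 1.0 / 36.0     # P(alarm | H0), the type-I error rate
p_alarm_h1 = 1.0            # P(alarm | H1)

# Bayes' rule: P(H1 | alarm) = P(alarm | H1) P(H1) / P(alarm)
p_alarm = p_alarm_h1 * p_h1 + p_alarm_h0 * (1.0 - p_h1)
p_h1_given_alarm = p_alarm_h1 * p_h1 / p_alarm
print(p_h1_given_alarm)  # ~3.6e-05: H0 remains almost certainly true
```

So even after an alarm that would "reject H0 at the 5% level", the posterior probability of H1 is still tiny.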
Conversely, if an NHST has low statistical power, then a failure to reject the null does not falsify the alternative hypothesis.
As @Dave suggests, sometimes we know for sure a priori that the null hypothesis is false, for example a coin with two faces is unlikely to be exactly unbiased, i.e. p(head) = p(tail) = 0.5, but we may need a very large number of coin flips to detect the bias that is bound to be present, even in a coin that is to all intents and purposes "unbiased". Testing for normality involves a similar issue in most cases, AFAICS. Rejecting a hypothesis that you know to be false from the outset is not very Popperian, but that doesn't mean that such NHSTs cannot perform a useful purpose.
The Quine-Duhem Thesis suggests that in practice it is not that easy to falsify a hypothesis either.
|
14,053
|
Does the rejection of the null hypothesis have anything to do with Popper's theory of falsification?
|
Second-hand from Richard McElreath, but I think no. Popper's famous falsification theory was about falsifying experimental hypotheses, not null hypotheses.
|
14,054
|
Can you give a simple intuitive explanation of IRLS method to find the MLE of a GLM?
|
Some years ago I wrote a paper about this for my students (in Spanish), so I can try to rewrite those explanations here. I will look at IRLS (iteratively reweighted least squares) through a series of examples of increasing complexity. For the first example we need the concept of a location-scale family. Let $f_0$ be a density function centered at zero in some sense. We can construct a family of densities by defining
$$
f(x)= f(x;\mu,\sigma)= \frac{1}{\sigma} f_0\left(\frac{x-\mu}{\sigma}\right)
$$
where $\sigma > 0$ is a scale parameter and $\mu$ is a location parameter. In the measurement error model, where the error term is usually modeled as a normal distribution, we can use a location-scale family constructed as above in place of that normal distribution. When $f_0$ is the standard normal distribution, the construction above gives the $\text{N}(\mu, \sigma)$ family.
Now we will use IRLS on some simple examples. First we will find the ML (maximum likelihood) estimators in the model
$$
Y_1,Y_2,\ldots,Y_n \hspace{1em} \text{i.i.d}
$$
with the density
$$
f(y)= \frac{1}{\pi} \frac{1}{1+(y-\mu)^2},\hspace{1em} y\in{\mathbb R},
$$
the Cauchy distribution with location parameter $\mu$ (so this is a location family). But first some notation. The weighted least squares estimator of $\mu$ is given by
$$
\mu^{\ast} = \frac{\sum_{i=1}^n w_i y_i}
{\sum_{i=1}^n w_i}.
$$
where the $w_i$ are weights. We will see that the ML estimator of $\mu$ can be expressed in the same form, with $w_i$ some function of the residuals
$$
\epsilon_i = y_i-\hat{\mu}.
$$
The likelihood function is given by
$$
L(y;\mu)= \left(\frac{1}{\pi}\right)^n \prod_{i=1}^n \frac{1}{1+(y_i-\mu)^2}
$$
and the loglikelihood function is given by
$$
l(y)= -n \log(\pi) - \sum_{i=1}^n \log\left(1+(y_i-\mu)^2\right).
$$
Its derivative with respect to $\mu$ is
$$
\begin{eqnarray}
\frac{\partial l(y)}{\partial \mu}&=&
0-\sum \frac{\partial}{\partial \mu} \log\left(1+(y_i-\mu)^2\right) \nonumber \\
&=& -\sum \frac{2(y_i-\mu)}{1+(y_i-\mu)^2}\cdot (-1) \nonumber \\
&=& \sum \frac{2 \epsilon_i}{1+\epsilon_i^2} \nonumber
\end{eqnarray}
$$
where $\epsilon_i=y_i-\mu$. Writing $f_0(\epsilon)= \frac{1}{\pi} \frac{1}{1+\epsilon^2}$ and $f_0'(\epsilon)=\frac{1}{\pi} \frac{-1\cdot 2 \epsilon}{(1+\epsilon^2)^2}$, we get
$$
\frac{f_0'(\epsilon)}{f_0(\epsilon)} =
\frac{\frac{-1 \cdot2\epsilon}{(1+\epsilon^2)^2}}
{\frac{1}{1+\epsilon^2}} = -\frac{2\epsilon}{1+\epsilon^2}.
$$
We find
$$
\begin{eqnarray}
\frac {\partial l(y)} {\partial \mu}
& =& -\sum \frac {f_0'(\epsilon_i)} {f_0(\epsilon_i)} \nonumber \\
&=& -\sum \frac {f_0'(\epsilon_i)} {f_0(\epsilon_i)} \cdot
\left(-\frac{1}{\epsilon_i}\right)
\cdot (-\epsilon_i) \nonumber \\
&=& \sum w_i \epsilon_i \nonumber
\end{eqnarray}
$$
where we used the definition
$$
w_i= \frac{f_0'(\epsilon_i)}
{f_0(\epsilon_i)} \cdot \left(-\frac{1}{\epsilon_i}\right)
= \frac{-2 \epsilon_i}
{1+\epsilon_i^2} \cdot \left(-\frac{1}{\epsilon_i}\right)
= \frac{2}{1+\epsilon_i^2}.
$$
Remembering that
$\epsilon_i=y_i-\mu$ we obtain the equation
$$
\sum w_i y_i = \mu \sum w_i,
$$
which is the estimating equation of IRLS. Note that
The weights $w_i$ are always positive.
If the residual is large, we give less weight to the corresponding observation.
To calculate the ML estimator in practice, we need a start value $\hat{\mu}^{(0)}$; we could use the median, for example. Using this value we calculate residuals
$$
\epsilon_i^{(0)} = y_i - \hat{\mu}^{(0)}
$$
and weights
$$
w_i^{(0)} = \frac{2}{1+\left(\epsilon_i^{(0)}\right)^2 }.
$$
The new value of $\hat{\mu}$ is given by
$$
\hat{\mu}^{(1)} = \frac{\sum w_i^{(0)} y_i}
{\sum w_i^{(0)} }.
$$
Continuing in this way we define
$$
\epsilon_i^{(j)} = y_i- \hat{\mu}^{(j)}
$$ and
$$
w_i^{(j)} = \frac{2}{1+\left(\epsilon_i^{(j)}\right)^2 }.
$$
The estimate at pass $j+1$ of the algorithm becomes
$$
\hat{\mu}^{(j+1)} = \frac{\sum w_i^{(j)} y_i}
{\sum w_i^{(j)} }.
$$
Continuing until the sequence
$$
\hat{\mu}^{(0)}, \hat{\mu}^{(1)}, \ldots, \hat{\mu}^{(j)}, \ldots
$$
converges.
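The iteration can be sketched in a few lines (Python here, while the numerical example later in this answer uses R; the data and start value are just for illustration):

```python
# IRLS for the Cauchy location model: weights w_i = 2 / (1 + eps_i^2).
y = [-5.0, -1.0, 0.0, 1.0, 5.0]   # hypothetical data

def irls_cauchy_location(y, mu0, iters=200):
    mu = mu0
    for _ in range(iters):
        w = [2.0 / (1.0 + (yi - mu) ** 2) for yi in y]          # update weights
        mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)      # weighted mean
    return mu

mu_hat = irls_cauchy_location(y, mu0=0.5)

# at convergence the estimating equation sum w_i (y_i - mu) = 0 holds
w = [2.0 / (1.0 + (yi - mu_hat) ** 2) for yi in y]
score = sum(wi * (yi - mu_hat) for wi, yi in zip(w, y))
print(mu_hat, score)
```

For these symmetric data the iteration settles at $\hat\mu=0$, and the residual of the estimating equation is numerically zero.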
Now we study this process with a more general location-scale family, $f(y)= \frac{1}{\sigma} f_0(\frac{y-\mu}{\sigma})$, in less detail.
Let $Y_1,Y_2,\ldots,Y_n$ be independent with the density above. Define also $ \epsilon_i=\frac{y_i-\mu}{\sigma}$. The loglikelihood function is
$$
l(y)= -\frac{n}{2}\log(\sigma^2) + \sum \log(f_0\left(\frac{y_i-\mu}{\sigma}\right)).
$$
Writing $\nu=\sigma^2$, note that
$$
\frac{\partial \epsilon_i}{\partial \mu} =
-\frac{1}{\sigma}
$$
and
$$
\frac{\partial \epsilon_i}{\partial \nu} =
(y_i-\mu)\left(\frac{1}{\sqrt{\nu}}\right)' =
(y_i-\mu)\cdot \frac{-1}{2 \sigma^3}.
$$
Calculating the loglikelihood derivative
$$
\frac{\partial l(y)}{\partial \mu} =
\sum \frac{f_0'(\epsilon_i)}{f_0(\epsilon_i)}\cdot
\frac{\partial \epsilon_i}{\partial \mu} =
\sum\frac{f_0'(\epsilon_i)}{f_0(\epsilon_i)}\cdot\left(-\frac{1}{\sigma}\right)=
-\frac{1}{\sigma}\sum\frac{f_0'(\epsilon_i)}{f_0(\epsilon_i)}\cdot
\left(-\frac{1}{\epsilon_i}\right)(-\epsilon_i) =
\frac{1}{\sigma}\sum w_i \epsilon_i
$$
and setting this equal to zero gives the same estimating equation as in the first example. Then, searching for an estimator for $\sigma^2$:
$$
\begin{eqnarray}
\frac{\partial l(y)}{\partial \nu} &=& -\frac{n}{2}\frac{1}{\nu} +
\sum\frac{f_0'(\epsilon_i)}{f_0(\epsilon_i)}\cdot
\frac{\partial \epsilon_i}{\partial\nu} \nonumber \\
&=& -\frac{n}{2}\frac{1}{\nu}+\sum\frac{f_0'(\epsilon_i)}{f_0(\epsilon_i)}
\cdot \left(-\frac{(y_i-\mu)}{2\sigma^3}\right) \nonumber \\
&=& -\frac{n}{2}\frac{1}{\nu} - \frac{1}{2}\frac{1}{\sigma^2}
\sum\frac{f_0'(\epsilon_i)}{f_0(\epsilon_i)}\cdot \epsilon_i\nonumber \\
&=& -\frac{n}{2}\frac{1}{\nu}-\frac{1}{2}\frac{1}{\nu}
\sum\frac{f_0'(\epsilon_i)}{f_0(\epsilon_i)}\cdot
\left(-\frac{1}{\epsilon_i}\right)
(-\epsilon_i)\cdot\epsilon_i\nonumber \\
&=& -\frac{n}{2}\frac{1}{\nu}+\frac{1}{2}\frac{1}{\nu}\sum w_i \epsilon_i^2
\stackrel{!}{=} 0. \nonumber
\end{eqnarray}
$$
leading to the estimator
$$
\hat{\sigma^2} = \frac{1}{n}\sum w_i (y_i-\hat{\mu})^2.
$$
The iterative algorithm above can be used in this case as well.
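A hedged sketch of the joint location-scale iteration, using the Cauchy weights $w_i = 2/(1+\epsilon_i^2)$ with $\epsilon_i=(y_i-\mu)/\sigma$ (Python; data and start values invented for illustration):

```python
# Joint IRLS for a Cauchy location-scale model:
#   mu     <- sum(w_i y_i) / sum(w_i)
#   sigma2 <- (1/n) sum(w_i (y_i - mu)^2),  with w_i = 2 / (1 + eps_i^2)
y = [-5.0, -1.0, 0.0, 1.0, 5.0]   # hypothetical data

mu, sigma2 = 0.0, 1.0             # start values (e.g. the median and a guess)
for _ in range(300):
    w = [2.0 / (1.0 + (yi - mu) ** 2 / sigma2) for yi in y]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    sigma2 = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / len(y)

# check that both fixed-point (estimating) equations approximately hold
w = [2.0 / (1.0 + (yi - mu) ** 2 / sigma2) for yi in y]
r1 = sum(wi * (yi - mu) for wi, yi in zip(w, y))
r2 = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / len(y) - sigma2
print(mu, sigma2, r1, r2)
```

At convergence both residuals vanish, so the weighted-mean equation for $\mu$ and the weighted-variance equation for $\sigma^2$ are solved simultaneously.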
In the following we give a numerical example using R, for the double exponential (Laplace) model with known scale and with data y <- c(-5,-1,0,1,5). For these data the ML estimate is 0 (the median).
The initial value will be mu <- 0.5. One pass of the algorithm is
iterest <- function(y, mu) {
  w <- 1/abs(y - mu)   # double exponential weights: w_i = 1/|eps_i|
  weighted.mean(y, w)
}
With this function you can experiment with doing the iterations "by hand".
Then the iterative algorithm can be done by
mu_0 <- 0.5
repeat {
  mu <- iterest(y, mu_0)
  if (abs(mu_0 - mu) < 0.000001) break
  mu_0 <- mu
}
Exercise: If the model is a $t_k$ distribution with scale parameter $\sigma$, show that the iterations are given by the weight
$$
w_i = \frac{k + 1}{k + \epsilon_i^2}.
$$
Exercise: If the density is logistic, show the weights are given by
$$
w(\epsilon) = \frac{1-e^\epsilon}{1+e^\epsilon} \cdot \left(-\frac{1}{\epsilon}\right).
$$
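Both exercise formulas can be sanity-checked numerically by comparing $w(\epsilon) = \frac{f_0'(\epsilon)}{f_0(\epsilon)}\cdot\left(-\frac{1}{\epsilon}\right)$, computed via finite differences of $\log f_0$, against the stated closed forms (a hedged Python sketch):

```python
from math import exp, log

def dlogf(logf, e, h=1e-6):
    """Central finite difference of d log f_0 / d eps at e."""
    return (logf(e + h) - logf(e - h)) / (2.0 * h)

def weight(logf, e):
    # w(eps) = (f0'/f0) * (-1/eps) = -(d log f0 / d eps) / eps
    return -dlogf(logf, e) / e

k = 4.0  # hypothetical degrees of freedom for the t_k example

def logf_t(e):         # t_k density, up to a constant (constants drop out)
    return -(k + 1.0) / 2.0 * log(1.0 + e * e / k)

def logf_logistic(e):  # logistic density f_0(e) = e^{-e} / (1 + e^{-e})^2
    return -e - 2.0 * log(1.0 + exp(-e))

for e in (-2.0, -0.5, 0.7, 3.0):
    assert abs(weight(logf_t, e) - (k + 1.0) / (k + e * e)) < 1e-5
    assert abs(weight(logf_logistic, e)
               - (1.0 - exp(e)) / (1.0 + exp(e)) * (-1.0 / e)) < 1e-5
print("both weight formulas check out")
```

This does not replace the algebraic derivation asked for in the exercises, but it confirms that the stated weights are consistent with the general definition of $w_i$ above.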
For the moment I will leave it here; I will continue this post.
|
14,055
|
what makes neural networks a nonlinear classification model?
|
I think you forgot the activation function in the nodes of the neural network; it is non-linear and makes the whole model non-linear.
Your formula is not quite correct:
$$
h_1 \neq w_1x_1+w_2x_2
$$
but
$$
h_1 = \text{sigmoid}(w_1x_1+w_2x_2)
$$
where the sigmoid function is $\text{sigmoid}(x)=\frac 1 {1+e^{-x}}$.
Let's use a numerical example to explain the impact of the sigmoid function. Suppose you have $w_1x_1+w_2x_2=4$; then $\text{sigmoid}(4)\approx 0.98$. On the other hand, suppose you have $w_1x_1+w_2x_2=4000$; then $\text{sigmoid}(4000)\approx 1$, which is almost the same as $\text{sigmoid}(4)$. That saturation is non-linear behaviour.
In addition, I think slide 14 in this tutorial shows exactly where you went wrong: for $H_1$, please note the output is not $-7.65$ but $\text{sigmoid}(-7.65)$.
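A quick numerical sketch of that saturation effect (Python; the helper name `sigmoid` is mine):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# x = 4 already puts the unit deep into saturation...
assert abs(sigmoid(4) - 0.982) < 1e-3
# ...and a 1000x larger input barely changes the output
# (exp(-4000) underflows to 0.0, so the result is exactly 1.0)
assert sigmoid(4000) == 1.0
assert sigmoid(4000) - sigmoid(4) < 0.02
```

A linear unit would produce outputs 1000 times apart for these two inputs; the sigmoid maps them almost to the same value.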
|
14,056
|
what makes neural networks a nonlinear classification model?
|
You're correct that multiple linear layers can be equivalent to a single linear layer. As the other answers have said, a nonlinear activation function allows nonlinear classification. Saying that a classifier is nonlinear means that it has a nonlinear decision boundary. The decision boundary is a surface that separates the classes; the classifier will predict one class for all points on one side of the decision boundary, and another class for all points on the other side.
Let's consider a common situation: performing binary classification with a network containing multiple layers of nonlinear hidden units and an output unit with a sigmoidal activation function. $y$ gives the output, $h$ is a vector of activations for the last hidden layer, $w$ is a vector of their weights onto the output unit, and $b$ is the output unit's bias. The output is:
$$y = \sigma(hw + b)$$
where $\sigma$ is the logistic sigmoid function. Output is interpreted as the probability that the class is $1$. The predicted class $c$ is:
$$c = \left \{ \begin{array}{cl}
0 & y \le 0.5 \\
1 & y > 0.5 \\
\end{array} \right . $$
Let's consider the classification rule with respect to the hidden unit activations. We can see that the hidden unit activations are projected onto a line $hw + b$. The rule for assigning a class is a function of $y$, which is monotonically related to the projection along the line. The classification rule is therefore equivalent to determining whether the projection along the line is less than or greater than some threshold (in this case, the threshold is given by the negative of the bias). This means that the decision boundary is a hyperplane that's orthogonal to the line, and intersects the line at a point corresponding to that threshold.
I said earlier that the decision boundary is nonlinear, but a hyperplane is the very definition of a linear boundary. But, we've been considering the boundary as a function of the hidden units just before the output. The hidden unit activations are a nonlinear function of the original inputs, due to the previous hidden layers and their nonlinear activation functions. One way to think about the network is that it maps the data nonlinearly into some feature space. The coordinates in this space are given by the activations of the last hidden units. The network then performs linear classification in this space (logistic regression, in this case). We can also think about the decision boundary as a function of the original inputs. This function will be nonlinear, as a consequence of the nonlinear mapping from inputs to hidden unit activations.
This blog post shows some nice figures and animations of this process.
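To make the first point concrete, here is a tiny numerical sketch (Python with NumPy; the toy weight matrices are made up for illustration) showing that two linear layers collapse to a single linear layer, while a tanh between them breaks the collapse:

```python
import numpy as np

W1 = np.array([[1.0, 2.0],
               [0.0, 1.0]])   # first linear layer (toy weights)
W2 = np.array([[1.0, -1.0]])  # second linear layer (toy weights)
x = np.array([1.0, 1.0])

# without an activation, two layers are exactly one linear map W2 @ W1
two_linear = W2 @ (W1 @ x)
collapsed = (W2 @ W1) @ x
assert np.allclose(two_linear, collapsed)

# with a tanh between the layers, the collapse no longer holds
nonlinear = W2 @ np.tanh(W1 @ x)
assert not np.allclose(nonlinear, collapsed)
```
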
|
14,057
|
what makes neural networks a nonlinear classification model?
|
The nonlinearity comes from the sigmoid activation function, 1/(1+e^-x), where x is the linear combination of predictors and weights that you referenced in your question.
By the way, the bounds of this activation are zero and one: for large negative x the denominator gets so large that the fraction approaches zero, and for large positive x the term e^-x becomes so small that the fraction approaches 1/1.
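A minimal numerical check of those bounds (Python sketch; the helper name `sigmoid` is mine):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# large negative x: e^{-x} blows up the denominator, output approaches 0
assert sigmoid(-30) < 1e-12
# large positive x: e^{-x} vanishes, output approaches 1/1 = 1
assert sigmoid(30) > 1 - 1e-12
assert sigmoid(0) == 0.5
```
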
|
14,058
|
Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [duplicate]
|
The covariance matrix is symmetric. If a matrix $A$ is symmetric, and has two eigenvectors $u$ and $v$, consider $Au = \lambda u$ and $Av = \mu v$.
Then by symmetry (and writing $'$ for transpose):
$$u'Av = u'A'v = (Au)'v = \lambda u'v$$
More directly:
$$u'Av = u'(\mu v) = \mu u'v$$
Since these are equal we obtain $(\lambda - \mu)u'v =0$. So either $u'v = 0$ and the two vectors are orthogonal, or $\lambda - \mu = 0$ and the two eigenvalues are equal. In the latter case, the eigenspace for that repeated eigenvalue can contain eigenvectors which are not orthogonal. So your instinct to question why the eigenvectors have to be orthogonal was a good one; if there are repeated eigenvalues they may not be! What if your sample covariance is the identity matrix? This has repeated eigenvalue $1$ and any two non-zero vectors are eigenvectors, orthogonal or not. (Thinking out such special cases is often a good way to spot counter-examples.)
If a symmetric matrix has a repeated eigenvalue, we can choose to pick out orthogonal eigenvectors from its eigenspace. That's what we want to do in PCA, because finding orthogonal components is the whole point of the exercise. Of course it's unlikely that your sample covariance matrix will have repeated eigenvalues - if so, it would only have taken a small perturbation of your data to make them unequal - but we should take care to define our algorithm so it really does pick out orthogonal eigenvectors. (Note that which it picks out, and in what order, is arbitrary. Think back to the identity matrix and all its possible orthogonal sets of eigenvectors! This is a generalisation of the arbitrary choice between $v$ and $-v$ for a unique eigenvalue. Output from two different implementations of PCA may look quite different.)
To see why it's guaranteed that we can set up our implementation of eig(cov(X)) in this manner - in other words, why there are always "just enough" orthogonal vectors we can pick out from that eigenspace - you need to understand why geometric and algebraic multiplicities are equal for symmetric matrices. If the eigenvalue appears twice we can pick out two orthogonal eigenvectors; thrice and we can pick out three, and so on. Several approaches are raised in this mathematics stack exchange thread but the usual method is via the Schur decomposition. The result you are after is probably proved in your linear algebra textbook as the "spectral theorem" (though that phrase can also refer to several more general results) or perhaps under a more specific name like "symmetric eigenvalue decomposition". Symmetric matrices have several nice properties that it's worth knowing, e.g. their eigenvalues are real, so we can find real eigenvectors, with obvious implications for PCA.
Finally, how can we write an implementation that achieves this? I will consider two implementations of PCA in R. We can see the code for princomp: look at methods(princomp) then getAnywhere(princomp.default) and we observe edc <- eigen(cv, symmetric = TRUE). So eigen will use LAPACK routines for symmetric matrices. Checking the LAPACK Users' Guide (3rd edition) for "symmetric eigenproblems" we see it firstly decomposes $A = QTQ'$ where $T$ is symmetric tridiagonal and $Q$ is orthogonal, then decomposes $T = S \Lambda S'$ where $\Lambda$ is diagonal and $S$ orthogonal. Then writing $Z = QS$ we have diagonalized $A = Z \Lambda Z'$. Here $\Lambda$ is the vector of eigenvalues (of $T$ and also of $A$ - they work out the same) and since $Z$ is the product of two orthogonal matrices it is also orthogonal. The computed eigenvectors are the columns of $Z$ so we can see LAPACK guarantees they will be orthonormal (if you want to know quite how the orthogonal vectors of $T$ are picked, using a Relatively Robust Representations procedure, have a look at the documentation for DSYEVR). So that's one approach, but for numerical reasons it'd be better to do a singular value decomposition. If you look under the bonnet of another PCA function in R, you'll see this is how prcomp works. R uses a different bunch of LAPACK routines to solve this problem.
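Outside of R, the same guarantee can be observed with NumPy's symmetric-eigenproblem routine `eigh` (a small sketch; the random data is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
C = np.cov(X, rowvar=False)      # symmetric sample covariance matrix

# eigh is the routine for symmetric matrices, so the eigenvectors
# it returns are guaranteed orthonormal
vals, V = np.linalg.eigh(C)
assert np.allclose(V.T @ V, np.eye(5), atol=1e-10)

# repeated eigenvalues: every nonzero vector is an eigenvector of the
# identity matrix, but eigh still picks out an orthonormal set
vals_i, V_i = np.linalg.eigh(np.eye(5))
assert np.allclose(V_i.T @ V_i, np.eye(5))
```
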
|
14,059
|
Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [duplicate]
|
I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible--i.e. the way that reveals as much of the underlying data structure as possible.
For an example, let's consider a data set with 3 variables: height, weight and age of people. If you have this data, you could create 3 separate 2-dimensional scatter plots, Height vs. Weight, Weight vs. Age, and Age vs. Height--and if you graphed them, they would all look like 'squares' with base=X-axis and height=Y-axis.
But if you suspect there is an interesting relationship among all 3 variables, you would probably want to create a 3-dimensional plot of the data, with X, Y and Z axes--which would create a 3-D 'cube' plot. This plot might reveal an interesting relationship that you would like to communicate to people. But of course, you can't print a 3-dimensional plot, so you have to project the data onto a 2-dimensional piece of paper, which means you have to choose 2 dimensions to prioritize, at the expense of the third (which would be orthogonal to the piece of paper you are printing on).
You could try to include the 3rd dimension visually with the use of color-coding or various sized bubbles for the points. Or, you could rotate the plot (in your mind, or with software) until you find a new projection that expresses as much of the 3-D information in 2-D space as possible--think visually of just rotating the 'cube' around until the underlying data relationship you want to show is as clear as possible for printing on paper. As you rotate the 3-D 'cube' through 2-D space, you create new synthetic axes, and these new synthetic axes are orthogonal to each other--and correspond to the length and width of the paper you are printing on. If you travel along these new synthetic axes, you are moving through multiple dimensions of the original data (height, weight and age) at the same time, but you can travel along the new synthetic X-axis (the width of the paper) without moving along the synthetic Y-axis.
We can think of this visually, because our brains understand 3-dimensional spaces, but things quickly become problematic if you are talking about higher dimensional data sets. We can't imagine 9-dimensional 'hyper-cubes' (at least I can't), but we often have to deal with data sets that contain many variables. We can use software (or grueling math) to 'rotate' the 9-dimensional data through space, until we find the new projection that represents as much of the higher-dimensional data structure as possible--for printing on a 2-D page.
This is exactly what PCA does. Again, for simplicity, consider the earlier 3-D data set example plotted in a 'cube' space--we would see something like a cloud of points. PCA simply rotates that cloud until it finds the 'longest' straight line possible through that cloud--the direction of this line becomes PC1. Then, with PC1 fixed, the data point cloud is rotated again, around PC1, until the next 'longest' orthogonal axis is found, which is PC2. Then you can print a new 2-D plot (PC1 vs. PC2) that captures as much of the 3-D data structure as possible. And of course you can keep going and find PC3, PC4 and so on, if it helps understand the data.
Then, the PCA results will tell you how much of the data variance is explained by the new synthetic principal components, and if the PCA axes capture more of the data variance than you would expect to occur by random chance, we can infer that there is a meaningful relationship among the original measured variables.
All the discussion about eigenvectors and matrix algebra is a little bit beside the point in my opinion (and also, I'm not that mathematically inclined)--orthogonal axes are just an inherent part of this type of matrix algebra. So, citing the mathematical foundations of orthogonal axes doesn't really explain why we use this approach for PCA. We use this matrix algebra in statistical analysis because it helps us reveal important characteristics of data structures that we are interested in.
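For the curious, the "rotate until you find the longest line" procedure can be sketched numerically (Python with NumPy; the toy height/weight/age data is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
# toy "height, weight, age" data with a built-in height/weight correlation
height = rng.normal(170, 10, size=200)
weight = 0.5 * height + rng.normal(0, 5, size=200)
age = rng.normal(40, 12, size=200)
X = np.column_stack([height, weight, age])

Xc = X - X.mean(axis=0)              # center the cloud of points
cov = np.cov(Xc, rowvar=False)
vals, vecs = np.linalg.eigh(cov)     # eigh returns ascending eigenvalues
order = np.argsort(vals)[::-1]       # reorder so PC1 comes first
vals, vecs = vals[order], vecs[:, order]

explained = vals / vals.sum()        # fraction of variance per PC
assert explained[0] >= explained[1] >= explained[2]

# projecting onto the first two PCs gives the "best" 2-D picture of the 3-D cloud
scores = Xc @ vecs[:, :2]
assert scores.shape == (200, 2)
```
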
|
Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [du
|
I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible-
|
Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [duplicate]
I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible--i.e. the way that reveals as much of the underlying data structure as possible.
For an example, let's consider a data set with 3 variables: height, weight and age of people. If you have this data, you could create 3 separate 2-dimensional scatter plots, Height vs. Weight, Weight vs. Age, and Age vs. Height--and if you graphed them, they would all look like 'squares' with base=X-axis and height=Y-axis.
But if you suspect there is an interesting relationship among all 3 variables, you would probably want to create a 3-dimensional plot of the data, with X, Y and Z axes--which would create a 3-D 'cube' plot. This plot might reveal an interesting relationship that you would like to communicate to people. But of course, you can't print a 3-dimensional plot, so you have to project the data onto a 2-dimensional piece of paper, which means you have to choose 2 dimensions to prioritize, at the expense of the third (which would be orthogonal to the piece of paper you are printing on).
You could try to include the 3rd dimension visually with the use of color-coding or various sized bubbles for the points. Or, you could rotate the plot (in your mind, or with software) until you find a new projection that expresses as much of the 3-D information in 2-D space as possible--think visually of just rotating the 'cube' around until the underlying data relationship you want to show is a clear as possible for printing on paper. As you rotate the 3-D 'cube' through 2-D space, you create new synthetic axes, and these new synthetic axes are orthogonal to each other--and correspond to the length and width of the paper you are printing on. If you travel along these new synthetic axes, you are moving through multiple dimensions of the original data (height, weight and age) at the same time, but you can travel along the new synthetic X-axis (the width of the paper) without moving along the synthetic Y-axis.
We can think of this visually, because our brains understand 3-dimensional spaces, but things quickly become problematic if you are talking about higher dimensional data sets. We can't imagine 9-dimensional 'hyper-cubes' (at least I can't), but we often have to deal with data sets that contain many variables. We can use software (or grueling math) to 'rotate' the 9-dimensional data through space, until we find the new projection that represents as much of the higher-dimensional data structure as possible--for printing on a 2-D page.
This is exactly what PCA does. Again, for simplicity, consider the earlier 3-D data set example plotted in a 'cube' space--we would see something like a cloud of points. PCA simply rotates that cloud until it finds the 'longest' straight line possible through that cloud--the direction of this line becomes PC1. Then, with PC1 fixed, the data point cloud is rotated again, along PC1, until the next 'longest' orthogonal axis is found, which is PC2. Then you can print a new 2-D plot (PC1 vs. PC2) that captures as much of the 3-D data structure as possible. And of course you can keep going and find PC3, PC4 and so on, if it helps understand the data.
Then, the PCA results will tell you how much of the data variance is explained by the new synthetic principal components, and if the PCA axes capture more of the data variance than you would expect to occur by random chance, we can infer that there is a meaningful relationship among the original measured variables.
All the discussion about eigenvectors and matrix algebra is a little bit beside the point in my opinion (and also, I'm not that mathematically inclined)--orthogonal axes are just an inherent part of this type of matrix algebra. So, citing the mathematical foundations of orthogonal axes doesn't really explain why we use this approach for PCA. We use this matrix algebra in statistical analysis because it helps us reveal important characteristics of data structures that we are interested in.
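The rotate-until-longest-axis idea can be sketched numerically. The following is a minimal pure-Python illustration (the height/weight/age data and all parameters are made up): it builds a correlated 3-D point cloud, forms the covariance matrix, and uses power iteration to find PC1, the 'longest' straight line through the cloud, along with the share of total variance it captures.

```python
import random

random.seed(0)
n = 500
data = []
for _ in range(n):
    # Illustrative synthetic relationships among age, height, weight.
    age = random.uniform(20, 60)
    height = 150 + 0.2 * age + random.gauss(0, 5)
    weight = -50 + 0.8 * height + random.gauss(0, 8)
    data.append((height, weight, age))

# Center each variable.
means = [sum(row[j] for row in data) / n for j in range(3)]
X = [[row[j] - means[j] for j in range(3)] for row in data]

# 3x3 sample covariance matrix.
cov = [[sum(X[i][a] * X[i][b] for i in range(n)) / (n - 1)
        for b in range(3)] for a in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

def norm(v):
    return sum(x * x for x in v) ** 0.5

# Power iteration converges to PC1, the direction of maximal variance.
v = [1.0, 1.0, 1.0]
for _ in range(200):
    w = matvec(cov, v)
    s = norm(w)
    v = [x / s for x in w]
pc1 = v
var_pc1 = s                                  # leading eigenvalue: variance along PC1

total_var = cov[0][0] + cov[1][1] + cov[2][2]
share = var_pc1 / total_var                  # fraction of 3-D variance PC1 captures
```

Repeating the iteration on the residual (after projecting PC1 out) would give PC2, orthogonal to PC1 by construction.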
|
Why are principal components in PCA (eigenvectors of the covariance matrix) mutually orthogonal? [du
I think it might help to pull back from the mathematics and think about the goal of PCA. In my mind, PCA is used to represent large-dimensional data sets (many variables) in the clearest way possible-
|
14,060
|
Is it wrong to refer to results as being "highly significant"?
|
I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you would still have judged the results as significant. Or, equivalently, if some of your readers have a much smaller $\alpha$ in mind, then they can still judge your results as significant.
Note that the significance level $\alpha$ is in the eye of the beholder, whereas the $p$-value is (with some caveats) a property of the data.
Observing $p=10^{-10}$ is just not the same as observing $p=0.04$, even though both might be called "significant" by standard conventions of your field ($\alpha=0.05$). Tiny $p$-value means stronger evidence against the null (for those who like Fisher's framework of hypothesis testing); it means that the confidence interval around the effect size will exclude the null value with a larger margin (for those who prefer CIs to $p$-values); it means that the posterior probability of the null will be smaller (for Bayesians with some prior); this is all equivalent and simply means that the findings are more convincing. See Are smaller p-values more convincing? for more discussion.
The term "highly significant" is not precise and does not need to be. It is a subjective expert judgment, similar to observing a surprisingly large effect size and calling it "huge" (or perhaps simply "very large"). There is nothing wrong with using qualitative, subjective descriptions of your data, even in the scientific writing; provided of course, that the objective quantitative analysis is presented as well.
See also some excellent comments above, +1 to @whuber, @Glen_b, and @COOLSerdash.
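To make the confidence-interval point concrete, here is a small sketch (hypothetical effect sizes and standard errors, normal approximation throughout): the tinier $p$-value goes together with an interval that excludes the null value by a wider margin.

```python
import math

def z_two_sided_p(est, se):
    """Two-sided p-value for H0: effect = 0, normal approximation."""
    return math.erfc(abs(est / se) / math.sqrt(2))

def ci95(est, se):
    """95% confidence interval under the same approximation."""
    half = 1.96 * se
    return est - half, est + half

# Two hypothetical studies with the same standard error:
p_a = z_two_sided_p(0.10, 0.05)   # z = 2: "significant" at alpha = 0.05
p_b = z_two_sided_p(0.32, 0.05)   # z = 6.4: vastly smaller p-value
lo_a, hi_a = ci95(0.10, 0.05)
lo_b, hi_b = ci95(0.32, 0.05)
# lo_b sits much farther from 0 than lo_a:
# the CI excludes the null with a larger margin when p is tiny.
```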
|
Is it wrong to refer to results as being "highly significant"?
|
I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you
|
Is it wrong to refer to results as being "highly significant"?
I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you would still have judged the results as significant. Or, equivalently, if some of your readers have a much smaller $\alpha$ in mind, then they can still judge your results as significant.
Note that the significance level $\alpha$ is in the eye of the beholder, whereas the $p$-value is (with some caveats) a property of the data.
Observing $p=10^{-10}$ is just not the same as observing $p=0.04$, even though both might be called "significant" by standard conventions of your field ($\alpha=0.05$). Tiny $p$-value means stronger evidence against the null (for those who like Fisher's framework of hypothesis testing); it means that the confidence interval around the effect size will exclude the null value with a larger margin (for those who prefer CIs to $p$-values); it means that the posterior probability of the null will be smaller (for Bayesians with some prior); this is all equivalent and simply means that the findings are more convincing. See Are smaller p-values more convincing? for more discussion.
The term "highly significant" is not precise and does not need to be. It is a subjective expert judgment, similar to observing a surprisingly large effect size and calling it "huge" (or perhaps simply "very large"). There is nothing wrong with using qualitative, subjective descriptions of your data, even in the scientific writing; provided of course, that the objective quantitative analysis is presented as well.
See also some excellent comments above, +1 to @whuber, @Glen_b, and @COOLSerdash.
|
Is it wrong to refer to results as being "highly significant"?
I think there is not much wrong in saying that the results are "highly significant" (even though yes, it is a bit sloppy).
It means that if you had set a much smaller significance level $\alpha$, you
|
14,061
|
Is it wrong to refer to results as being "highly significant"?
|
This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only one part of it. With enough data, usually some parameters will show up as "significant" (look up Bonferroni correction). Multiple testing is a specific problem in genetics, where large studies looking for significance are common and p-values $< 10^{-8}$ are often required (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2621212/).
Also, one issue with many analyses is that they were opportunistic and not pre-planned (i.e. "If you torture the data enough, nature will always confess." - Ronald Coase).
Generally, if an analysis is pre-planned (with a repeated-analysis correction for statistical power), it can be considered significant. Often, repeated testing by multiple individuals or groups is the best way to confirm that something works (or not). And repetition of results is most often the right test for significance.
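The multiple-testing point is easy to see by simulation (illustrative numbers only): with 10,000 independent tests of true nulls, roughly 5% come out "significant" at $\alpha = 0.05$ by chance alone, while a Bonferroni-corrected threshold removes almost all of these false positives.

```python
import random

random.seed(1)
m = 10_000              # independent tests, every null hypothesis is true
alpha = 0.05

# Under the null, p-values are uniform on [0, 1].
pvals = [random.random() for _ in range(m)]

naive_hits = sum(p <= alpha for p in pvals)       # expect ~ alpha * m = 500 false positives
bonf_hits = sum(p <= alpha / m for p in pvals)    # Bonferroni: expect ~ 0.05, i.e. usually none
```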
|
Is it wrong to refer to results as being "highly significant"?
|
This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only
|
Is it wrong to refer to results as being "highly significant"?
This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only one part of it. With enough data, usually some parameters will show up as "significant" (look up Bonferroni correction). Multiple testing is a specific problem in genetics, where large studies looking for significance are common and p-values $< 10^{-8}$ are often required (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2621212/).
Also, one issue with many analyses is that they were opportunistic and not pre-planned (i.e. "If you torture the data enough, nature will always confess." - Ronald Coase).
Generally, if an analysis is pre-planned (with a repeated-analysis correction for statistical power), it can be considered significant. Often, repeated testing by multiple individuals or groups is the best way to confirm that something works (or not). And repetition of results is most often the right test for significance.
|
Is it wrong to refer to results as being "highly significant"?
This is a common question.
A similar question may be "Why is p<=0.05 considered significant?" (http://www.jerrydallal.com/LHSP/p05.htm)
@Michael-Mayer gave one part of the answer: significance is only
|
14,062
|
Is it wrong to refer to results as being "highly significant"?
|
A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such a question is quite a waste of resources. Why ask a binary question if it is possible to get an answer to a quantitative question like 'how large is the true treatment effect?' that implicitly answers the yes/no question as well? So instead of answering an uninformative yes/no question with high certainty, we often recommend the use of confidence intervals, which contain much more information.
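A minimal sketch of this point (simulated data with a made-up true effect of 2.0): the confidence interval answers "how large?", and whether it covers zero answers the yes/no question as a by-product.

```python
import math
import random

random.seed(0)
# Hypothetical paired differences with a true treatment effect of 2.0:
n = 200
diffs = [2.0 + random.gauss(0, 5) for _ in range(n)]

mean = sum(diffs) / n
sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
se = sd / math.sqrt(n)
ci = (mean - 1.96 * se, mean + 1.96 * se)

# Quantitative answer: the effect is about `mean`, plausibly within `ci`.
# The binary answer falls out for free:
significant_at_5pct = not (ci[0] <= 0.0 <= ci[1])
```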
|
Is it wrong to refer to results as being "highly significant"?
|
A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such question is quite a wa
|
Is it wrong to refer to results as being "highly significant"?
A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such a question is quite a waste of resources. Why ask a binary question if it is possible to get an answer to a quantitative question like 'how large is the true treatment effect?' that implicitly answers the yes/no question as well? So instead of answering an uninformative yes/no question with high certainty, we often recommend the use of confidence intervals, which contain much more information.
|
Is it wrong to refer to results as being "highly significant"?
A test is a tool for a black-white decision, i.e. it tries to answer a yes/no question like 'is there a true treatment effect?'. Often, especially if the data set is large, such question is quite a wa
|
14,063
|
What's wrong with this illustration of posterior distribution?
|
It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n)$$ and
$$\mu \sim N(\mu_0, \tau^2),$$
then the posterior variance of $\mu \mid X$ is $$\dfrac{1}{n/\sigma^2 + 1/\tau^2} < \min( \sigma^2/n, \tau^2).$$
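A quick numeric check of this inequality, with arbitrary illustrative values $\sigma^2/n = 4$ and $\tau^2 = 9$:

```python
# Posterior variance of mu | X for a normal likelihood and a normal prior:
sigma2_over_n = 4.0   # sampling variance of the mean, sigma^2 / n
tau2 = 9.0            # prior variance

post_var = 1.0 / (1.0 / sigma2_over_n + 1.0 / tau2)
# post_var = 36/13, about 2.77: narrower than both the likelihood and the prior.
```

Precisions (inverse variances) add, so the posterior precision always exceeds each input precision, which is why the posterior curve should be the narrowest of the three.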
|
What's wrong with this illustration of posterior distribution?
|
It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n
|
What's wrong with this illustration of posterior distribution?
It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n)$$ and
$$\mu \sim N(\mu_0, \tau^2),$$
then the posterior variance of $\mu \mid X$ is $$\dfrac{1}{n/\sigma^2 + 1/\tau^2} < \min( \sigma^2/n, \tau^2).$$
|
What's wrong with this illustration of posterior distribution?
It looks like the prior and likelihood are normal, in which case the posterior should actually be narrower than either the likelihood or the prior. Notice that if
$$X \mid \mu \sim N(\mu, \sigma^2/n
|
14,064
|
Why is the probability zero for any given value of a normal distribution?
|
Perhaps the following thought experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the wheel is partitioned into several discrete sectors, perhaps 20 or so. If all sectors have the same area, you would have a probability of $1/20$ of hitting one specific sector (e.g. the main prize). The sum of all probabilities is 1, because $20\cdot 1/20 = 1$. More generally: if there are $m$ sectors evenly distributed on the wheel, every sector has a probability of $1/m$ of being hit (uniform probabilities). But what happens if we decide to partition the wheel into a million sectors? Now the probability of hitting one specific sector (the main prize) is extremely small: $1/10^{6}$. Further, note that the pointer can theoretically stop at an infinite number of positions on the wheel. If we wanted to make a separate prize for each possible stopping point, we would have to partition the wheel into an infinite number of "sectors" of equal area (but each of those would have an area of 0). But what probability should we assign to each of these "sectors"? It must be zero, because if the probability for each "sector" were positive and equal, the sum of infinitely many equal positive numbers would diverge, which creates a contradiction (the total probability must be 1). That's why we can only assign a probability to an interval, to a real area on the wheel.
More technical: In a continuous distribution (e.g. continuous uniform, normal, and others), the probability is calculated by integration, as an area under the probability density function $f(x)$ (with $a\leq b$):
$$
P(a\leq X \leq b) = \int_{a}^{b} f(x) dx
$$
But the area of an interval of length 0 is 0.
See this document for the analogy of the wheel of fortune.
The Poisson distribution on the other hand is a discrete probability distribution. A random Poisson variable can only take discrete values (i.e. the number of children for one family cannot be 1.25). The probability that a family has exactly 1 child is certainly not zero but is positive. The sum of all probabilities for all values must be 1. Other famous discrete distributions are: Binomial, negative binomial, geometric, hypergeometric and many others.
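The shrinking-interval idea can be checked numerically. A minimal sketch (standard normal density, midpoint-rule integration): as the interval around the single value $a = 1$ shrinks, so does the probability, roughly in proportion to the interval's length.

```python
import math

def normal_pdf(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def prob_interval(a, b, steps=10_000):
    """P(a <= X <= b) by midpoint-rule integration of the density."""
    h = (b - a) / steps
    return sum(normal_pdf(a + (i + 0.5) * h) for i in range(steps)) * h

# Probability of landing within eps of the single value a = 1:
probs = [prob_interval(1 - eps, 1 + eps) for eps in (0.1, 0.01, 0.001)]
# Each probability is roughly 2 * eps * f(1); it vanishes as eps -> 0,
# which is the sense in which P(X = 1) is exactly zero.
```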
|
Why is the probability zero for any given value of a normal distribution?
|
Perhaps the following thought-experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the whee
|
Why is the probability zero for any given value of a normal distribution?
Perhaps the following thought-experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the wheel is partitioned in several discrete sectors, perhaps 20 or so. If all sectors have the same area, you would have a probability of $1/20$ to hit one specific sector (e.g. the main price). The sum of all probabilities is 1, because $20\cdot 1/20 = 1$. More general: If there are $m$ sectors evenly distributed on the wheel, every sectors has a probability of $1/m$ of being hit (uniform probabilities). But what happens if we decided to partition the wheel into a million sectors. Now the probability of hitting one specific sectors (the main prize), is extremely small: $1/10^{6}$. Further, note that the pointer can theoretically stop at an infinite number of positions of the wheel. If we wanted to make a separate prize for each possible stopping point, we would have to partition the wheel in an infinite number of "sectors" of equal area (but each of those would have an area of 0). But what probability should we assign to each of these "sectors"? It must be zero because if the probabilities for each "sectors" would be positive and equal, the sum of infinitely many equal positive numbers diverges, which creates a contradiction (the total probability must be 1). That's why we only can assign a probability to an interval, to a real area on the wheel.
More technical: In a continuous distribution (e.g. continuous uniform, normal, and others), the probability is calculated by integration, as an area under the probability density function $f(x)$ (with $a\leq b$):
$$
P(a\leq X \leq b) = \int_{a}^{b} f(x) dx
$$
But the area of an interval of length 0 is 0.
See this document for the analogy of the wheel of fortune.
The Poisson distribution on the other hand is a discrete probability distribution. A random Poisson variable can only take discrete values (i.e. the number of children for one family cannot be 1.25). The probability that a family has exactly 1 child is certainly not zero but is positive. The sum of all probabilities for all values must be 1. Other famous discrete distributions are: Binomial, negative binomial, geometric, hypergeometric and many others.
|
Why is the probability zero for any given value of a normal distribution?
Perhaps the following thought-experiment helps you to understand better why the probability $Pr(X=a)$ is zero in a continuous distribution: Imagine that you have a wheel of fortune. Normally, the whee
|
14,065
|
Why is the probability zero for any given value of a normal distribution?
|
"Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous random variable equals some value is always zero."
reference page:
http://support.minitab.com/en-us/minitab-express/1/help-and-how-to/basic-statistics/probability-distributions/supporting-topics/basics/continuous-and-discrete-probability-distributions/
|
Why is the probability zero for any given value of a normal distribution?
|
"Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous rand
|
Why is the probability zero for any given value of a normal distribution?
"Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous random variable equals some value is always zero."
reference page:
http://support.minitab.com/en-us/minitab-express/1/help-and-how-to/basic-statistics/probability-distributions/supporting-topics/basics/continuous-and-discrete-probability-distributions/
|
Why is the probability zero for any given value of a normal distribution?
"Probabilities of continuous random variables (X) are defined as the area under the curve of its PDF. Thus, only ranges of values can have a nonzero probability. The probability that a continuous rand
|
14,066
|
Framing the negative binomial distribution for DNA sequencing
|
IMHO, I really think that the negative binomial distribution is used for convenience.
So in RNA-Seq there is a common assumption that if you take an infinite number of measurements of the same gene in an infinite number of replicates, then the true distribution would be lognormal. This distribution is then sampled via a Poisson process (with a count), so the true distribution of reads per gene across replicates would be a Poisson-lognormal distribution.
But in packages that we use, such as edgeR and DESeq, this distribution is modeled as a negative binomial distribution. This is not because the guys that wrote them didn't know about the Poisson-lognormal distribution.
It is because the Poisson-lognormal distribution is a terrible thing to work with: it requires numerical integration to do the fits, etc., so when you actually try to use it the performance is sometimes really bad.
A negative binomial distribution has a closed form so it is a lot easier to work with and the gamma distribution (the underlying distribution) looks a lot like a lognormal distribution in that it sometimes looks kind of normal and sometimes has a tail.
But in this example (if you believe the assumption) it can't possibly be theoretically correct because the theoretically correct distribution is the Poisson lognormal and the two distributions are reasonable approximations of one another but are not equivalent.
But I still think the "incorrect" negative binomial distribution is often the better choice, because empirically it gives better results: the integration is slow and the fits can perform badly, especially with long-tailed distributions.
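The overdispersion that both models are meant to capture is easy to exhibit by simulation. A rough sketch (made-up lognormal parameters; Poisson sampling via Knuth's algorithm): Poisson-lognormal counts have variance well above their mean, which a plain Poisson cannot fit but a negative binomial can approximate.

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Knuth's Poisson sampler; fine for the small rates used here."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Poisson-lognormal: a lognormal rate per replicate, then a Poisson count.
n = 20_000
counts = [poisson(random.lognormvariate(2.0, 0.5)) for _ in range(n)]

mean = sum(counts) / n
var = sum((c - mean) ** 2 for c in counts) / (n - 1)
# var >> mean: overdispersed, so a plain Poisson (var = mean) cannot fit it,
# while a negative binomial (var = mean + mean^2 / size) can approximate it.
```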
|
Framing the negative binomial distribution for DNA sequencing
|
IMOH, I really think that the negative binomial distribution is used for convenience.
So in RNA Seq there is a common assumption that if you take an infinite number of measurements of the same gene in
|
Framing the negative binomial distribution for DNA sequencing
IMHO, I really think that the negative binomial distribution is used for convenience.
So in RNA-Seq there is a common assumption that if you take an infinite number of measurements of the same gene in an infinite number of replicates, then the true distribution would be lognormal. This distribution is then sampled via a Poisson process (with a count), so the true distribution of reads per gene across replicates would be a Poisson-lognormal distribution.
But in packages that we use, such as edgeR and DESeq, this distribution is modeled as a negative binomial distribution. This is not because the guys that wrote them didn't know about the Poisson-lognormal distribution.
It is because the Poisson-lognormal distribution is a terrible thing to work with: it requires numerical integration to do the fits, etc., so when you actually try to use it the performance is sometimes really bad.
A negative binomial distribution has a closed form so it is a lot easier to work with and the gamma distribution (the underlying distribution) looks a lot like a lognormal distribution in that it sometimes looks kind of normal and sometimes has a tail.
But in this example (if you believe the assumption) it can't possibly be theoretically correct because the theoretically correct distribution is the Poisson lognormal and the two distributions are reasonable approximations of one another but are not equivalent.
But I still think the "incorrect" negative binomial distribution is often the better choice, because empirically it gives better results: the integration is slow and the fits can perform badly, especially with long-tailed distributions.
|
Framing the negative binomial distribution for DNA sequencing
IMOH, I really think that the negative binomial distribution is used for convenience.
So in RNA Seq there is a common assumption that if you take an infinite number of measurements of the same gene in
|
14,067
|
Framing the negative binomial distribution for DNA sequencing
|
I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta particles at the rates $\alpha$ and $\beta$, respectively.
What is the distribution of the number of alpha particles before the $r$th beta particle?
Consider the alpha particles as successes, and the beta particles as failures. When a particle is detected, the probability that it is an alpha particle is $\frac{\alpha}{\alpha+\beta}$. So, this is the negative binomial distribution $\text{NB}(r,\frac{\alpha}{\alpha+\beta})$.
Consider the time $t_r$ of the $r$th beta particle. This follows a gamma distribution $\Gamma(r,1/\beta).$ If you condition on $t_r = \lambda/\alpha$, then the number of alpha particles before time $t_r$ follows a Poisson distribution $\text{Pois}(\lambda).$ So, the distribution of the number of alpha particles before the $r$th beta particle is a Gamma-mixed Poisson distribution.
That explains why these distributions are equal.
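A simulation sketch of the two constructions (arbitrary rates $\alpha = 3$, $\beta = 2$ and $r = 5$, chosen for illustration): drawing "alphas before the $r$th beta" directly, and drawing a gamma-mixed Poisson, gives matching distributions.

```python
import math
import random

random.seed(0)
alpha_rate, beta_rate, r = 3.0, 2.0, 5
p = alpha_rate / (alpha_rate + beta_rate)   # P(next detected particle is an alpha)

def poisson(lam):
    """Knuth's Poisson sampler."""
    L, k, q = math.exp(-lam), 0, 1.0
    while True:
        q *= random.random()
        if q <= L:
            return k
        k += 1

n = 20_000
nb_draws, mix_draws = [], []
for _ in range(n):
    # Direct negative binomial: count alphas until the r-th beta.
    alphas = betas = 0
    while betas < r:
        if random.random() < p:
            alphas += 1
        else:
            betas += 1
    nb_draws.append(alphas)
    # Gamma-mixed Poisson: t_r ~ Gamma(r, 1/beta), then Pois(alpha * t_r).
    mix_draws.append(poisson(alpha_rate * random.gammavariate(r, 1.0 / beta_rate)))

m_nb = sum(nb_draws) / n
m_mix = sum(mix_draws) / n
# Both means should be near r * p / (1 - p) = 7.5.
```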
|
Framing the negative binomial distribution for DNA sequencing
|
I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta par
|
Framing the negative binomial distribution for DNA sequencing
I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta particles at the rates $\alpha$ and $\beta$, respectively.
What is the distribution of the number of alpha particles before the $r$th beta particle?
Consider the alpha particles as successes, and the beta particles as failures. When a particle is detected, the probability that it is an alpha particle is $\frac{\alpha}{\alpha+\beta}$. So, this is the negative binomial distribution $\text{NB}(r,\frac{\alpha}{\alpha+\beta})$.
Consider the time $t_r$ of the $r$th beta particle. This follows a gamma distribution $\Gamma(r,1/\beta).$ If you condition on $t_r = \lambda/\alpha$, then the number of alpha particles before time $t_r$ follows a Poisson distribution $\text{Pois}(\lambda).$ So, the distribution of the number of alpha particles before the $r$th beta particle is a Gamma-mixed Poisson distribution.
That explains why these distributions are equal.
|
Framing the negative binomial distribution for DNA sequencing
I looked through a few web pages and couldn't find an explanation, but I came up with one for integer values of $r$. Suppose we have two radioactive sources independently generating alpha and beta par
|
14,068
|
Framing the negative binomial distribution for DNA sequencing
|
Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to the mean
Some explain it as a weighted mixture of Poisson distributions (with a gamma mixing distribution on the Poisson parameter)
Mathematically one obtains Negative binomial by integrating the Poisson distribution over Gamma-distributed weights, see Gamma-Poisson mixture. This mathematical fact remains regardless of whether we accept it as the justification for using the distribution or not.
Poisson distribution is a rather natural choice when talking about counting reads arising from DNA sequencing (one could use binomial, but given that one sequences only a small fraction of reads/DNA obtained from the sample, the difference is negligible, and we can use whatever seems more convenient.) We are also sure that the parameter of this Poisson distribution varies, although the reason for this variation depends on the exact nature of the experiment - e.g., it can be variation due to
replicating the same experiment several times
the reads originating from different cells with somewhat different properties
comparing the numbers of reads corresponding to different genes
genes having different chemical structure and therefore being amplified differently by PCR, or some reads being more likely to make their way to the sequencing machine
the student/postdoc preparing the libraries not being very careful/consistent
etc.
In other words, we are sure that the variation exists (and we do observe it experimentally), but we don't know exactly where it comes from, and we cannot directly know what probability distribution describes it. We couldn't model it using the normal distribution, since the Poisson parameter should be positive, so we use the Gamma distribution, because it is "almost like normal", but with non-negative support... but we could have also used log-normal or something else. As long as we are not looking for the fine biological effects that could turn out to be artifacts of the particular distribution we use, anything that is computationally convenient is good.
Note that, besides the flexibility provided by an extra parameter, negative binomial has a thicker tail than the Poisson distribution, making it less sensitive to outliers. This provides an additional motivation for using this distribution: it allows more robust inference.
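The thicker tail is easy to quantify. A small sketch (mean 10 and a hypothetical dispersion parameter size = 2, in the mean/dispersion parameterization): compare the tail mass beyond 30 counts for a Poisson and a negative binomial with the same mean.

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def nb_pmf(k, mean, size):
    """Negative binomial in the mean/dispersion form: var = mean + mean^2 / size."""
    p = size / (size + mean)
    return (math.gamma(k + size) / (math.gamma(size) * math.factorial(k))
            * p ** size * (1.0 - p) ** k)

mean = 10.0
tail_pois = sum(poisson_pmf(k, mean) for k in range(30, 150))
tail_nb = sum(nb_pmf(k, mean, size=2.0) for k in range(30, 150))
# tail_nb is orders of magnitude larger: extreme counts are far less surprising
# under the negative binomial, so outliers pull the fit around much less.
```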
|
Framing the negative binomial distribution for DNA sequencing
|
Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to th
|
Framing the negative binomial distribution for DNA sequencing
Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to the mean
Some explain it as a weighted mixture of Poisson distributions (with a gamma mixing distribution on the Poisson parameter)
Mathematically one obtains Negative binomial by integrating the Poisson distribution over Gamma-distributed weights, see Gamma-Poisson mixture. This mathematical fact remains regardless of whether we accept it as the justification for using the distribution or not.
Poisson distribution is a rather natural choice when talking about counting reads arising from DNA sequencing (one could use binomial, but given that one sequences only a small fraction of reads/DNA obtained from the sample, the difference is negligible, and we can use whatever seems more convenient.) We are also sure that the parameter of this Poisson distribution varies, although the reason for this variation depends on the exact nature of the experiment - e.g., it can be variation due to
replicating the same experiment several times
the reads originating from different cells with somewhat different properties
comparing the numbers of reads corresponding to different genes
genes having different chemical structure and therefore being amplified differently by PCR, or some reads being more likely to make their way to the sequencing machine
the student/postdoc preparing the libraries not being very careful/consistent
etc.
In other words, we are sure that the variation exists (and we do observe it experimentally), but we don't know exactly where it comes from, and we cannot directly know what probability distribution describes it. We couldn't model it using the normal distribution, since the Poisson parameter should be positive, so we use the Gamma distribution, because it is "almost like normal", but with non-negative support... but we could have also used log-normal or something else. As long as we are not looking for the fine biological effects that could turn out to be artifacts of the particular distribution we use, anything that is computationally convenient is good.
Note that, besides the flexibility provided by an extra parameter, negative binomial has a thicker tail than the Poisson distribution, making it less sensitive to outliers. This provides an additional motivation for using this distribution: it allows more robust inference.
|
Framing the negative binomial distribution for DNA sequencing
Some explain it as something that works like the Poisson distribution but has an additional parameter, allowing more freedom to model the true distribution, with a variance not necessarily equal to th
|
14,069
|
Framing the negative binomial distribution for DNA sequencing
|
I can only offer intuition, but the gamma distribution itself describes (continuous) waiting times (how long does it take for a rare event to occur). So the fact that a gamma-distributed mixture of discrete poisson distributions would result in a discrete waiting time (trials until N failures) does not seem too surprising.
I hope someone has a more formal answer.
Edit: I always justified the negative binomial distribution for sequencing as follows: The actual sequencing step is simply sampling reads from a large library of molecules (Poisson). However, that library is made from the original sample by PCR. That means that the original molecules are amplified exponentially. And the gamma distribution describes the sum of k independent exponentially distributed random variables, i.e. how many molecules are in the library after amplifying k sample molecules for the same number of PCR cycles.
Hence the negative binomial models PCR followed by sequencing.
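The gamma-as-sum-of-exponentials fact behind this argument can be checked directly (arbitrary illustrative values: k = 5 amplified molecules, rate 2.0):

```python
import random

random.seed(7)
k, rate, n = 5, 2.0, 20_000

# Sum of k independent exponential contributions...
sums = [sum(random.expovariate(rate) for _ in range(k)) for _ in range(n)]
# ...versus a single Gamma(shape = k, scale = 1/rate) draw:
gammas = [random.gammavariate(k, 1.0 / rate) for _ in range(n)]

mean_sums = sum(sums) / n
mean_gammas = sum(gammas) / n
# Both means should be near k / rate = 2.5, and the full distributions agree.
```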
|
14,070
|
Framing the negative binomial distribution for DNA sequencing
|
I'll try to give a simplistic mechanistic interpretation that I found useful when thinking about this.
Assume we have a perfect uniform coverage of the genome before library prep, and we observe $\mu$ reads covering a site on average.
Say that sequencing is a process that picks an original DNA fragment, puts it through a stochastic pipeline (PCR, subsampling, etc.), and yields a base from the fragment with probability $p$, and a failure otherwise. If sequencing proceeds until $\mu\frac{1-p}{p}$ failures have occurred, the number of successes can be modeled with a negative binomial
distribution, $NB(\mu\frac{1-p}{p}, p)$.
Calculating the moments of this distribution, we get an expected number of successes of $\mu\frac{1-p}{p}\frac{p}{1-p} = \mu$, as required. For the variance of the number of successes, we get $\sigma^2 = \mu(1-p)^{-1}$:
the rate at which the library prep fails for a fragment increases the variance in the observed coverage.
While the above is a slightly artificial description of the sequencing process, and one could make a proper generative model of the PCR steps etc,
I think it gives some insight into the origin of the overdispersion parameter $(1-p)^{-1}$ directly from the negative binomial distribution. I do prefer
the Poisson model with rate integrated out as an explanation in general.
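This mechanistic reading can be checked directly by simulation (a Python sketch with arbitrary illustrative values of $\mu$ and $p$): count successes until $\mu\frac{1-p}{p}$ failures and compare the sample moments with $\mu$ and $\mu(1-p)^{-1}$.

```python
import random
import statistics

random.seed(2)
mu, p = 30.0, 0.6            # illustrative coverage target and success rate
r = mu * (1 - p) / p         # stop after this many failures (20 here)

def nb_draw():
    # successes observed before the r-th failure
    failures = successes = 0
    while failures < r:
        if random.random() < p:
            successes += 1
        else:
            failures += 1
    return successes

draws = [nb_draw() for _ in range(20_000)]
print(statistics.fmean(draws))      # close to mu = 30
print(statistics.pvariance(draws))  # close to mu/(1-p) = 75
```

The variance comes out roughly $(1-p)^{-1}$ times the mean, matching the overdispersion factor in the text.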
|
14,071
|
How to do estimation, when only summary statistics are available?
|
In this case, you can consider an ABC approximation of the likelihood (and consequently of the MLE) under the following assumption/restriction:
Assumption. The original sample size $n$ is known.
This is not a wild assumption given that the quality, in terms of convergence, of frequentist estimators depends on the sample size, therefore one cannot obtain arbitrarily good estimators without knowing the original sample size.
The idea is to generate a sample from the posterior distribution of $\theta$ and, in order to produce an approximation of the MLE, you can use an importance sampling technique as in [1] or to consider a uniform prior on $\theta$ with support on a suitable set as in [2].
I am going to describe the method in [2]. First of all, let me describe the ABC sampler.
ABC Sampler
Let $f(\cdot\vert\theta)$ be the model that generates the sample where $\theta \in \Theta$ is a parameter (to be estimated), $T$ be a statistic (a function of the sample) and $T_0$ be the observed statistic, in the ABC jargon this is called a summary statistic, $\rho$ be a metric, $\pi(\theta)$ a prior distribution on $\theta$ and $\epsilon>0$ a tolerance. Then, the ABC-rejection sampler can be implemented as follows.
Sample $\theta^*$ from $\pi(\cdot)$.
Generate a sample $\bf{x}$ of size $n$ from the model $f(\cdot\vert\theta^*)$.
Compute $T^*=T({\bf x})$.
If $\rho(T^*,T_0)<\epsilon$, accept $\theta^*$ as a simulation from the posterior of $\theta$.
This algorithm generates an approximate sample from the posterior distribution of $\theta$ given $T({\bf x})=T_0$. Therefore, the best scenario is when the statistic $T$ is sufficient but other statistics can be used. For a more detailed description of this see this paper.
Now, in a general framework, if one uses a uniform prior that contains the MLE in its support, then the Maximum a posteriori (MAP) coincides with Maximum Likelihood Estimator (MLE). Therefore, if you consider an appropriate uniform prior in the ABC Sampler, then you can generate an approximate sample of a posterior distribution whose MAP coincides with the MLE. The remaining step consists of estimating this mode. This problem has been discussed in CV, for instance in "Computationally efficient estimation of multivariate mode".
A toy example
Let $(x_1,...,x_n)$ be a sample from a $N(\mu,1)$ and suppose that the only information available from this sample is $\bar{x}=\dfrac{1}{n}\sum_{j=1}^n x_j$. Let $\rho$ be the Euclidean metric in ${\mathbb R}$ and $\epsilon=0.001$. The following R code shows how to obtain an approximate MLE using the methods described above using a simulated sample with $n=100$ and $\mu=0$, a sample of the posterior distribution of size $1000$, a uniform prior for $\mu$ on $(-0.3,0.3)$, and a kernel density estimator for the estimation of the mode of the posterior sample (MAP=MLE).
# Simulated data
set.seed(1)
x <- rnorm(100)
# Observed statistic
T0 <- mean(x)
# ABC sampler using a uniform prior
N <- 1000
eps <- 0.001
ABCsamp <- rep(0, N)
i <- 1
while (i < N + 1) {
  u <- runif(1, -0.3, 0.3)
  t.samp <- rnorm(100, u, 1)
  Ts <- mean(t.samp)
  if (abs(Ts - T0) < eps) {
    ABCsamp[i] <- u
    i <- i + 1
    print(i)  # progress
  }
}
# Approximation of the MLE (mode of the posterior sample)
kd <- density(ABCsamp)
kd$x[which(kd$y == max(kd$y))]
As you can see, using a small tolerance we get a very good approximation of the MLE (which in this trivial example can be calculated from the statistic given that it is sufficient). It is important to notice that the choice of the summary statistic is crucial. Quantiles are typically a good choice for the summary statistic, but not all the choices produce a good approximation. It may be the case that the summary statistic is not very informative and then the quality of the approximation might be poor, which is well-known in the ABC community.
Update: A similar approach was recently published in Fan et al. (2012). See this entry for a discussion on the paper.
|
14,072
|
How to do estimation, when only summary statistics are available?
|
It all depends on whether or not the joint distribution of those $T_i$'s is known. If it is, e.g.,
$$
(T_1,\ldots,T_k)\sim g(t_1,\ldots,t_k|\theta,n)
$$
then you can conduct maximum likelihood estimation based on this joint distribution. Note that, unless $(T_1,\ldots,T_k)$ is sufficient, this will almost always be a different maximum likelihood than when using the raw data $(X_1,\ldots,X_n)$. It will necessarily be less efficient, with a larger asymptotic variance.
If the above joint distribution with density $g$ is not available, the solution proposed by Procrastinator is quite appropriate.
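As a hypothetical illustration (not from the answer): suppose $X_i \sim \text{Poisson}(\theta)$ but only $T$, the number of zeros among the $n$ observations, is reported. Then $T \sim \text{Bin}(n, e^{-\theta})$, the MLE based on $T$ is $\hat\theta = -\log(T/n)$, and a small Monte Carlo sketch in Python shows it is less efficient (larger variance) than the raw-data MLE $\bar x$:

```python
import math
import random
import statistics

random.seed(3)
theta, n = 1.0, 200

def poisson(lam):
    # Knuth's method (the standard library has no Poisson sampler)
    L, k, p = math.exp(-lam), 0, 1.0
    while p > L:
        k += 1
        p *= random.random()
    return k - 1

est_raw, est_T = [], []
for _ in range(2_000):
    x = [poisson(theta) for _ in range(n)]
    est_raw.append(statistics.fmean(x))   # raw-data MLE: sample mean
    T = sum(v == 0 for v in x)            # summary statistic: number of zeros
    est_T.append(-math.log(T / n))        # MLE from T ~ Bin(n, exp(-theta))

print(statistics.pvariance(est_raw))  # roughly theta/n = 0.005
print(statistics.pvariance(est_T))    # larger: T is not sufficient
```

Both estimators are (essentially) unbiased here, but the one built from the non-sufficient statistic has the larger variance, as the answer predicts.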
|
14,073
|
How to do estimation, when only summary statistics are available?
|
The (frequentist) maximum likelihood estimator can be obtained as follows:
For $F$ in the exponential family, and if your statistics are sufficient, the likelihood to be maximised can always be written in the form:
$$
l(\theta| T) = \exp\left( -\psi(\theta) + \langle T,\phi(\theta) \rangle \right),
$$
where $\langle \cdot, \cdot\rangle$ is the scalar product, $T$ is the vector of sufficient statistics, and $\psi(\cdot)$ and $\phi(\cdot)$ are continuous and twice differentiable.
The way you actually maximize the likelihood depends mostly on whether the likelihood can be written analytically in a tractable way. If it can, you will be able to consider general optimisation algorithms (Newton-Raphson, simplex, ...). If you do not have a tractable likelihood, you may find it easier to compute a conditional expectation, as in the EM algorithm, which will also yield maximum likelihood estimates under rather mild hypotheses.
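For instance (a Python sketch with made-up summary values): take Poisson sampling, where the sufficient statistic is $T=\sum_i x_i$ and the log-likelihood is $\ell(\theta)=T\log\theta-n\theta$ up to a constant. Newton-Raphson on the score recovers the closed-form MLE $T/n$, which lets us check the iteration:

```python
# Newton-Raphson for the Poisson MLE from the sufficient statistic T = sum(x).
# n and T are hypothetical summary values; the closed form T/n is known here,
# so we can verify that the iteration converges to it.
n, T = 50, 160

score = lambda th: T / th - n     # dl/dtheta
hess = lambda th: -T / th ** 2    # d2l/dtheta2

theta = 1.0                       # starting value
for _ in range(30):
    theta -= score(theta) / hess(theta)

print(theta)   # converges to T/n = 3.2
```

The same pattern (score and Hessian written in terms of $T$ and $\phi$, $\psi$) carries over to any exponential-family likelihood without a closed-form maximiser.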
Best
|
14,074
|
What is the definition of a symmetric distribution?
|
Briefly: $X$ is symmetric when $X$ and $2a-X$ have the same distribution for some real number $a$. But arriving at this in a fully justified manner requires some digression and generalizations, because it raises many implicit questions: why this definition of "symmetric"? Can there be other kinds of symmetries? What is the relationship between a distribution and its symmetries, and conversely, what is the relationship between a "symmetry" and those distributions that might have that symmetry?
The symmetries in question are reflections of the real line. All are of the form
$$x \to 2a-x$$
for some constant $a$.
So, suppose $X$ has this symmetry for at least one $a$. Then the symmetry implies
$$\Pr[X \ge a] = \Pr[2a-X \ge a] = \Pr[X \le a]$$
showing that $a$ is a median of $X$. Similarly, if $X$ has an expectation, then it immediately follows that $a = E[X]$. Thus we usually can pin down $a$ easily. Even if not, $a$ (and therefore the symmetry itself) is still uniquely determined (if it exists at all).
To see this, let $b$ be any center of symmetry. Then applying both symmetries we see that $X$ is invariant under the translation $x \to x + 2(b-a)$. If $b-a \ne 0$, the distribution of $X$ must have a period of $2(b-a)$, which is impossible because the total probability of a periodic distribution is either $0$ or infinite. Thus $b-a=0$, showing that $a$ is unique.
More generally, when $G$ is a group acting faithfully on the real line (and by extension on all its Borel subsets), we could say that a distribution $X$ is "symmetric" (with respect to $G$) when
$$\Pr[X \in E] = \Pr[X \in E^g]$$
for all measurable sets $E$ and elements $g \in G$, where $E^g$ denotes the image of $E$ under the action of $g$.
As an example, let $G$ still be a group of order $2$, but now let its action be to take the reciprocal of a real number (and let it fix $0$). The standard lognormal distribution is symmetric with respect to this group. This example can be understood as an instance of a reflection symmetry where a nonlinear re-expression of the coordinates has taken place. This suggests focusing on transformations that respect the "structure" of the real line. The structure essential to probability must be related to Borel sets and Lebesgue measure, both of which can be defined in terms of (Euclidean) distance between two points.
A distance-preserving map is, by definition, an isometry. It is well known (and easy, albeit a little involved, to demonstrate) that all isometries of the real line are generated by reflections. Whence, when it is understood that "symmetric" means symmetric with respect to some group of isometries, the group must be generated by at most one reflection, and we have seen that the reflection is uniquely determined by any distribution that is symmetric with respect to it. In this sense, the preceding analysis is exhaustive and justifies the usual terminology of "symmetric" distributions.
Incidentally, a host of multivariate examples of distributions invariant under groups of isometries is afforded by considering "spherical" distributions. These are invariant under all rotations (relative to some fixed center). These generalize the one-dimensional case: the "rotations" of the real line are just the reflections.
Finally, it is worth pointing out that a standard construction--averaging over the group--gives a way to produce loads of symmetric distributions. In the case of the real line, let $G$ be generated by the reflection about a point $a$, so that it consists of the identity element $e$ and this reflection, $g$. Let $X$ be any distribution. Define the distribution $Y$ by setting
$${\Pr}_Y[E] = \frac{1}{|G|}\sum_{g \in G} {\Pr}_X[E^g] = ({\Pr}_X[E] + {\Pr}_X[E^g])/2$$
for all Borel sets $E$. This is manifestly symmetric and it's easy to check that it remains a distribution (all probabilities remain nonnegative and the total probability is $1$).
Illustrating the group averaging process, the PDF of a symmetrized Gamma distribution (centered at $a=2$) is shown in gold. The original Gamma is in blue and its reflection is in red.
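The averaging construction is easy to verify numerically. A Python sketch (the Gamma shape and scale below are illustrative choices, not taken from the figure): reflect a Gamma density about $a=2$, average the two, and check that the result is symmetric and still integrates to $1$.

```python
import math

def gamma_pdf(x, k=3.0, scale=0.5):
    # ordinary Gamma(shape k, scale) density; zero for x <= 0
    if x <= 0:
        return 0.0
    return x ** (k - 1) * math.exp(-x / scale) / (math.gamma(k) * scale ** k)

a = 2.0  # center of symmetry

def sym_pdf(x):
    # group average over {identity, reflection about a}
    return 0.5 * (gamma_pdf(x) + gamma_pdf(2 * a - x))

# symmetry check: equal density at mirror-image points
for t in (0.1, 0.7, 1.5, 3.0):
    assert math.isclose(sym_pdf(a + t), sym_pdf(a - t))

# it is still a probability density (crude Riemann sum of the total mass)
h = 0.001
total = h * sum(sym_pdf(-6 + i * h) for i in range(int(16 / h)))
print(total)   # close to 1.0
```

Probabilities stay nonnegative by construction, and the total mass is preserved because each component integrates to $1/2 + 1/2$.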
|
14,075
|
What is the definition of a symmetric distribution?
|
The answer will depend on what you mean by symmetry. In physics the notion of symmetry is fundamental and has become very general. Symmetry is any operation that leaves the system unchanged. In the case of a probability distribution this could be translated to any operation $X \to X'$ that returns the same probability $P(X) = P(X')$.
In the simple case of the first example you are referring to the reflection symmetry about the maximum. If the distribution were sinusoidal then you could have the condition $X \to X + \lambda$, where $\lambda$ is the wavelength or period. Then $P(X) = P(X + \lambda)$ and would still fit a more general definition of symmetry.
|
14,076
|
Real meaning of confidence ellipse
|
Actually, neither explanation is correct.
A confidence ellipse has to do with unobserved population parameters, like the true population mean of your bivariate distribution. A 95% confidence ellipse for this mean is really an algorithm with the following property: if you were to replicate your sampling from the underlying distribution many times and each time calculate a confidence ellipse, then 95% of the ellipses so constructed would contain the underlying mean. (Note that each sample would of course yield a different ellipse.)
Thus, a confidence ellipse will usually not contain 95% of the observations. In fact, as the number of observations increases, the mean will usually be better and better estimated, leading to smaller and smaller confidence ellipses, which in turn contain a smaller and smaller proportion of the actual data. (Unfortunately, some people calculate the smallest ellipse that contains 95% of their data, reminiscent of a quantile, which by itself is quite OK... but then go on to call this "quantile ellipse" a "confidence ellipse", which, as you see, leads to confusion.)
The variance of the underlying population relates to the confidence ellipse. High variance will mean that the data are all over the place, so the mean is not well estimated, so the confidence ellipse will be larger than if the variance were smaller.
Of course, we can calculate confidence ellipses also for any other population parameter we may wish to estimate. Or we could look at other confidence regions than ellipses, especially if we don't know the estimated parameter to be (asymptotically) normally distributed.
The one-dimensional analogue of the confidence ellipse is the confidence-interval, and browsing through previous questions in this tag is helpful. Our current top-voted question in this tag is particularly nice: Why does a 95% CI not imply a 95% chance of containing the mean? Most of the discussion there holds just as well for higher dimensional analogues of the one-dimensional confidence interval.
|
14,077
|
Real meaning of confidence ellipse
|
It depends on the area this concept is applied to. What was said above is true for statistics, but when we apply statistics to other subjects things are a bit different. In biomechanics, for example, we use the term confidence ellipse (though there is debate over whether it should be called a prediction ellipse) as a technique for measuring centre-of-pressure displacement when a subject stands on a force platform. The ellipse drawn around the two axes (major and minor) is then supposed to contain 95% of the data points that represent the centre-of-pressure displacement over the course of a trial.
|
Real meaning of confidence ellipse
|
It depends on the area this concept applies to. What was said above is true for statistics but when we apply stats to other subjects things are a bit different. In biomechanics, for example, we use th
|
Real meaning of confidence ellipse
It depends on the area this concept is applied to. What was said above is true for statistics, but when we apply statistics to other subjects things are a bit different. In biomechanics, for example, we use the term confidence ellipse (though there is debate over whether it should be called a prediction ellipse) as a technique for measuring centre-of-pressure displacement when a subject stands on a force platform. The ellipse drawn around the two axes (major and minor) is then supposed to contain 95% of the data points that represent the centre-of-pressure displacement over the course of a trial.
|
Real meaning of confidence ellipse
It depends on the area this concept applies to. What was said above is true for statistics but when we apply stats to other subjects things are a bit different. In biomechanics, for example, we use th
|
14,078
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
|
In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i = \mathbf 0$$
the last relation taking into account the linearity structure of the regression equation.
In comparison, the OLS estimator satisfies
$$\sum \epsilon_i\mathbf x_i = \mathbf 0$$
In order to obtain identical algebraic expressions for the slope coefficients we need to have a density for the error term such that
$$\frac {f'(\epsilon_i)}{f(\epsilon_i)} = \pm \;c\epsilon_i \implies f'(\epsilon_i)= \pm \;c\epsilon_if(\epsilon_i)$$
These are differential equations of the form $y' = \pm\; xy$ that have solutions
$$\int \frac 1 {y}dy = \pm \int x dx\implies \ln y = \pm\;\frac 12 x^2$$
$$ \implies y = f(\epsilon) = \exp\left \{\pm\;\frac 12 c\epsilon^2\right\}$$
Any function that has this kernel and integrates to unity over an appropriate domain, will make the MLE and OLS for the slope coefficients identical. Namely we are looking for
$$g(x)= A\exp\left \{\pm\;\frac 12 cx^2\right\} : \int_a^b g(x)dx =1$$
Is there such a $g$ that is not the normal density (or the half-normal or the derivative of the error function)?
Certainly. But one more thing one has to consider is the following: if one uses the plus sign in the exponent, and a symmetric support around zero for example, one will get a density that has a unique minimum in the middle, and two local maxima at the boundaries of the support.
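As a quick numerical check (a sketch, not part of the original answer; data and starting values are arbitrary), R's `lm` and a direct maximization of the normal log-likelihood recover the same coefficients:

```r
## Sketch: with normal errors, maximizing the likelihood reproduces the OLS fit
set.seed(1)
x <- rnorm(100)
y <- 1 + 2 * x + rnorm(100)

ols <- coef(lm(y ~ x))   # OLS estimates

## Negative log-likelihood under standard normal errors
negll <- function(b) -sum(dnorm(y - b[1] - b[2] * x, log = TRUE))
mle <- optim(c(0, 0), negll)$par

round(ols, 3)
round(mle, 3)   # agree with the OLS estimates up to numerical tolerance
```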
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
|
In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i =
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i = \mathbf 0$$
the last relation taking into account the linearity structure of the regression equation.
In comparison, the OLS estimator satisfies
$$\sum \epsilon_i\mathbf x_i = \mathbf 0$$
In order to obtain identical algebraic expressions for the slope coefficients we need to have a density for the error term such that
$$\frac {f'(\epsilon_i)}{f(\epsilon_i)} = \pm \;c\epsilon_i \implies f'(\epsilon_i)= \pm \;c\epsilon_if(\epsilon_i)$$
These are differential equations of the form $y' = \pm\; xy$ that have solutions
$$\int \frac 1 {y}dy = \pm \int x dx\implies \ln y = \pm\;\frac 12 x^2$$
$$ \implies y = f(\epsilon) = \exp\left \{\pm\;\frac 12 c\epsilon^2\right\}$$
Any function that has this kernel and integrates to unity over an appropriate domain, will make the MLE and OLS for the slope coefficients identical. Namely we are looking for
$$g(x)= A\exp\left \{\pm\;\frac 12 cx^2\right\} : \int_a^b g(x)dx =1$$
Is there such a $g$ that is not the normal density (or the half-normal or the derivative of the error function)?
Certainly. But one more thing one has to consider is the following: if one uses the plus sign in the exponent, and a symmetric support around zero for example, one will get a density that has a unique minimum in the middle, and two local maxima at the boundaries of the support.
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
In maximum likelihood estimation, we calculate
$$\hat \beta_{ML}: \sum \frac {\partial \ln f(\epsilon_i)}{\partial \beta} = \mathbf 0 \implies \sum \frac {f'(\epsilon_i)}{f(\epsilon_i)}\mathbf x_i =
|
14,079
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
|
If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\max\sum_{i=1}^n \log\{f(y_i|x_i,\beta_0,\beta_1)\}=\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
is acceptable. This means, for instance, that densities of the form
$$f(y|x,\beta_0,\beta_1)=f_0(y|x)\exp\{-\omega(y_i-\beta_0-\beta_1x_i)^2\}$$
are acceptable since the factor $f_0(y|x)$ does not depend on the parameter $(\beta_0,\beta_1)$. There is therefore an infinity of such distributions.
Another setting where both estimators coincide is when the data comes from a spherically symmetric distribution, namely when the (vector) data $\mathbf{y}$ has conditional density$$h(||\mathbf{y}-\mathbf{X}\beta||)$$ with $h(\cdot)$ a decreasing function. (In this case the OLS is still available although the assumption of the independence of the $\epsilon_i$'s only holds in the Normal case.)
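The spherically symmetric case can be illustrated numerically (a sketch, not from the original answer; the kernel and constants are illustrative). Since $h$ is decreasing, maximizing $h(\|\mathbf{y}-\mathbf{X}\beta\|)$ is the same as minimizing $\|\mathbf{y}-\mathbf{X}\beta\|$, i.e. OLS; here $h(r)\propto(1+r^2)^{-k}$, a multivariate-t-like kernel:

```r
## Sketch: for a decreasing h, the MLE under density h(||y - Xb||) coincides with OLS
set.seed(1)
x <- rnorm(50)
y <- 1 - x + rnorm(50)

ols <- coef(lm(y ~ x))

## Negative log of h(||residual||) with h(r) = (1 + r^2)^(-25); the constant 25
## is arbitrary and does not affect the location of the minimum
negll <- function(b) 25 * log(1 + sum((y - b[1] - b[2] * x)^2))
mle <- optim(c(0, 0), negll)$par

round(ols, 3)
round(mle, 3)   # same fit: a monotone transform of the sum of squares
```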
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
|
If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n \l
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\max\sum_{i=1}^n \log\{f(y_i|x_i,\beta_0,\beta_1)\}=\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
is acceptable. This means, for instance, that densities of the form
$$f(y|x,\beta_0,\beta_1)=f_0(y|x)\exp\{-\omega(y_i-\beta_0-\beta_1x_i)^2\}$$
are acceptable since the factor $f_0(y|x)$ does not depend on the parameter $(\beta_0,\beta_1)$. There is therefore an infinity of such distributions.
Another setting where both estimators coincide is when the data comes from a spherically symmetric distribution, namely when the (vector) data $\mathbf{y}$ has conditional density$$h(||\mathbf{y}-\mathbf{X}\beta||)$$ with $h(\cdot)$ a decreasing function. (In this case the OLS is still available although the assumption of the independence of the $\epsilon_i$'s only holds in the Normal case.)
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
If we define the OLS as the solution to
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n (y_i-\beta_0-\beta_1x_i)^2$$
any density $f(y|x,\beta_0,\beta_1)$ such that
$$\arg_{\beta_0,\beta_1}\min\sum_{i=1}^n \l
|
14,080
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
|
I didn't know about this question until @Xi'an just updated it with an answer. There is a more generic solution. Exponential-family distributions with some parameters fixed give rise to Bregman divergences, and for such divergences the mean is the minimizer. The OLS minimizer is also the mean, so the two estimators should coincide for all such distributions whenever the linear functional is linked to the mean parameter.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.75.6958&rep=rep1&type=pdf
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
|
I didn't know about this question until @Xi'an just updated with an answer. There is a more generic solution. Exponential family distributions with some parameters fixed yield to Bregman divergences.
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
I didn't know about this question until @Xi'an just updated it with an answer. There is a more generic solution. Exponential-family distributions with some parameters fixed give rise to Bregman divergences, and for such divergences the mean is the minimizer. The OLS minimizer is also the mean, so the two estimators should coincide for all such distributions whenever the linear functional is linked to the mean parameter.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.75.6958&rep=rep1&type=pdf
|
Linear regression: any non-normal distribution giving identity of OLS and MLE?
I didn't know about this question until @Xi'an just updated with an answer. There is a more generic solution. Exponential family distributions with some parameters fixed yield to Bregman divergences.
|
14,081
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups?
|
Some would argue that even if the second margin is not fixed by design, it carries little information about the lady's ability to discriminate (i.e. it's approximately ancillary) & should be conditioned on. The exact unconditional test (first proposed by Barnard) is more complicated because you have to calculate the maximal p-value over all possible values of a nuisance parameter, viz the common Bernoulli probability under the null hypothesis. More recently, maximizing the p-value over a confidence interval for the nuisance parameter has been proposed: see Berger (1996), "More Powerful Tests from Confidence Interval p Values", The American Statistician, 50, 4; exact tests having the correct size can be constructed using this idea.
Fisher's Exact Test also arises as a randomization test, in Edgington's sense: a random assignment of the experimental treatments allows the distribution of the test statistic over permutations of these assignments to be used to test the null hypothesis. In this approach the lady's determinations are considered as fixed (& the marginal totals of milk-first and tea-first cups are of course preserved by permutation).
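For reference (an illustrative sketch, not part of the original answer), the conditional test for the classic 4-vs-4 design can be computed directly in R; under the null, a perfect classification has probability $1/\binom{8}{4} = 1/70$:

```r
## Sketch: Fisher's exact test for the lady-tasting design (both margins fixed at 4)
tab <- matrix(c(4, 0,
                0, 4), nrow = 2, byrow = TRUE)
fisher.test(tab, alternative = "greater")$p.value   # 1/choose(8, 4) = 1/70 ~ 0.0143
```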
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of
|
Some would argue that even if the second margin is not fixed by design, it carries little information about the lady's ability to discriminate (i.e. it's approximately ancillary) & should be condition
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups?
Some would argue that even if the second margin is not fixed by design, it carries little information about the lady's ability to discriminate (i.e. it's approximately ancillary) & should be conditioned on. The exact unconditional test (first proposed by Barnard) is more complicated because you have to calculate the maximal p-value over all possible values of a nuisance parameter, viz the common Bernoulli probability under the null hypothesis. More recently, maximizing the p-value over a confidence interval for the nuisance parameter has been proposed: see Berger (1996), "More Powerful Tests from Confidence Interval p Values", The American Statistician, 50, 4; exact tests having the correct size can be constructed using this idea.
Fisher's Exact Test also arises as a randomization test, in Edgington's sense: a random assignment of the experimental treatments allows the distribution of the test statistic over permutations of these assignments to be used to test the null hypothesis. In this approach the lady's determinations are considered as fixed (& the marginal totals of milk-first and tea-first cups are of course preserved by permutation).
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of
Some would argue that even if the second margin is not fixed by design, it carries little information about the lady's ability to discriminate (i.e. it's approximately ancillary) & should be condition
|
14,082
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups?
|
Today, I read the first chapters of "The Design of Experiments" by RA Fisher, and one of the paragraphs made me realize the fundamental flaw in my question.
That is, even if the lady can really tell the difference between milk-first and tea-first cups, I can never prove she has that ability "by any finite amount of
experimentation". For this reason, as an experimenter, I should start with the assumption that she doesn't have the ability (the null hypothesis) and try to disprove it. The original experimental design (Fisher's exact test) is a sufficient, efficient, and justifiable procedure for doing so.
Here is the excerpt from "The Design of Experiments" by RA Fisher:
It might be argued that if an experiment can disprove the hypothesis
that the subject possesses no sensory discrimination between two
different sorts of object, it must therefore be able to prove the
opposite hypothesis, that she can make some such discrimination. But
this last hypothesis, however reasonable or true it may be, is
ineligible as a null hypothesis to be tested by experiment, because it
is inexact. If it were asserted that the subject would never be wrong
in her judgments we should again have an exact hypothesis, and it is
easy to see that this hypothesis could be disproved by a single
failure, but could never be proved by any finite amount of
experimentation.
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of
|
Today, I read the first chapters of "The Design of Experiments" by RA Fisher, and one of the paragraph made me realize the fundamental flaw in my question.
That is, even if the lady can really tell th
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups?
Today, I read the first chapters of "The Design of Experiments" by RA Fisher, and one of the paragraphs made me realize the fundamental flaw in my question.
That is, even if the lady can really tell the difference between milk-first and tea-first cups, I can never prove she has that ability "by any finite amount of
experimentation". For this reason, as an experimenter, I should start with the assumption that she doesn't have the ability (the null hypothesis) and try to disprove it. The original experimental design (Fisher's exact test) is a sufficient, efficient, and justifiable procedure for doing so.
Here is the excerpt from "The Design of Experiments" by RA Fisher:
It might be argued that if an experiment can disprove the hypothesis
that the subject possesses no sensory discrimination between two
different sorts of object, it must therefore be able to prove the
opposite hypothesis, that she can make some such discrimination. But
this last hypothesis, however reasonable or true it may be, is
ineligible as a null hypothesis to be tested by experiment, because it
is inexact. If it were asserted that the subject would never be wrong
in her judgments we should again have an exact hypothesis, and it is
easy to see that this hypothesis could be disproved by a single
failure, but could never be proved by any finite amount of
experimentation.
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of
Today, I read the first chapters of "The Design of Experiments" by RA Fisher, and one of the paragraph made me realize the fundamental flaw in my question.
That is, even if the lady can really tell th
|
14,083
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups?
|
Barnard's test is used when the nuisance parameter is unknown under the null hypothesis.
However in the lady tasting test you could argue that the nuisance parameter can be set at 0.5 under the null hypothesis (the uninformed lady has 50% probability to correctly guess a cup).
Then the number of correct guesses, under the null hypothesis, becomes a binomial distribution: guessing 8 cups with 50% probability for each cup.
On other occasions you may not have this trivial 50% probability under the null hypothesis, and without fixed margins you may not know what that probability should be. In that case you need Barnard's test.
Even if you did Barnard's test on the lady tasting tea experiment, the nuisance parameter would come out at 0.5 anyway (if the outcome is all correct guesses), since 0.5 is the nuisance-parameter value with the highest p-value, and the test would reduce to the trivial binomial test (actually the combination of two binomial tests, one for the four milk-first cups and one for the four tea-first cups).
> library(Barnard)
> barnard.test(4,0,0,4)
Barnard's Unconditional Test
Treatment I Treatment II
Outcome I 4 0
Outcome II 0 4
Null hypothesis: Treatments have no effect on the outcomes
Score statistic = -2.82843
Nuisance parameter = 0.5 (One sided), 0.5 (Two sided)
P-value = 0.00390625 (One sided), 0.0078125 (Two sided)
> dbinom(8,8,0.5)
[1] 0.00390625
> dbinom(4,4,0.5)^2
[1] 0.00390625
Below is how it would go for a more complicated outcome (if not all guesses are correct, e.g. 2 versus 4); then the counting of what is and what is not extreme becomes a bit more difficult.
(Note as well that Barnard's test uses, in the case of a 4-2 result, a nuisance parameter p = 0.686, which you could argue is not correct; the p-value for a 50% probability of answering 'tea first' would be 0.08203125. This becomes even smaller when you consider a different region than the one based on Wald's statistic, although defining the region is not so easy.)
out <- rep(0,1000)
for (k in 1:1000) {
p <- k/1000
ps <- matrix(rep(0,25),5) # probability for outcome i,j
ts <- matrix(rep(0,25),5) # distance of outcome i,j (using wald statistic)
for (i in 0:4) {
for (j in 0:4) {
ps[i+1,j+1] <- dbinom(i,4,p)*dbinom(j,4,p)
pt <- (i+j)/8
p1 <- i/4
p2 <- j/4
ts[i+1,j+1] <- (p2-p1)/sqrt(pt*(1-pt)*(0.25+0.25))
}
}
cases <- ts < ts[2+1,4+1]
cases[1,1] = TRUE
cases[5,5] = TRUE
ps
out[k] <- 1-sum(ps[cases])
}
> max(out)
[1] 0.08926748
> barnard.test(4,2,0,2)
Barnard's Unconditional Test
Treatment I Treatment II
Outcome I 4 2
Outcome II 0 2
Null hypothesis: Treatments have no effect on the outcomes
Score statistic = -1.63299
Nuisance parameter = 0.686 (One sided), 0.314 (Two sided)
P-value = 0.0892675 (One sided), 0.178535 (Two sided)
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of
|
Barnard's test is used when the nuisance parameter is unknown under the null hypothesis.
However in the lady tasting test you could argue that the nuisance parameter can be set at 0.5 under the null
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of milk-first cups?
Barnard's test is used when the nuisance parameter is unknown under the null hypothesis.
However in the lady tasting test you could argue that the nuisance parameter can be set at 0.5 under the null hypothesis (the uninformed lady has 50% probability to correctly guess a cup).
Then the number of correct guesses, under the null hypothesis, becomes a binomial distribution: guessing 8 cups with 50% probability for each cup.
On other occasions you may not have this trivial 50% probability under the null hypothesis, and without fixed margins you may not know what that probability should be. In that case you need Barnard's test.
Even if you did Barnard's test on the lady tasting tea experiment, the nuisance parameter would come out at 0.5 anyway (if the outcome is all correct guesses), since 0.5 is the nuisance-parameter value with the highest p-value, and the test would reduce to the trivial binomial test (actually the combination of two binomial tests, one for the four milk-first cups and one for the four tea-first cups).
> library(Barnard)
> barnard.test(4,0,0,4)
Barnard's Unconditional Test
Treatment I Treatment II
Outcome I 4 0
Outcome II 0 4
Null hypothesis: Treatments have no effect on the outcomes
Score statistic = -2.82843
Nuisance parameter = 0.5 (One sided), 0.5 (Two sided)
P-value = 0.00390625 (One sided), 0.0078125 (Two sided)
> dbinom(8,8,0.5)
[1] 0.00390625
> dbinom(4,4,0.5)^2
[1] 0.00390625
Below is how it would go for a more complicated outcome (if not all guesses are correct, e.g. 2 versus 4); then the counting of what is and what is not extreme becomes a bit more difficult.
(Note as well that Barnard's test uses, in the case of a 4-2 result, a nuisance parameter p = 0.686, which you could argue is not correct; the p-value for a 50% probability of answering 'tea first' would be 0.08203125. This becomes even smaller when you consider a different region than the one based on Wald's statistic, although defining the region is not so easy.)
out <- rep(0,1000)
for (k in 1:1000) {
p <- k/1000
ps <- matrix(rep(0,25),5) # probability for outcome i,j
ts <- matrix(rep(0,25),5) # distance of outcome i,j (using wald statistic)
for (i in 0:4) {
for (j in 0:4) {
ps[i+1,j+1] <- dbinom(i,4,p)*dbinom(j,4,p)
pt <- (i+j)/8
p1 <- i/4
p2 <- j/4
ts[i+1,j+1] <- (p2-p1)/sqrt(pt*(1-pt)*(0.25+0.25))
}
}
cases <- ts < ts[2+1,4+1]
cases[1,1] = TRUE
cases[5,5] = TRUE
ps
out[k] <- 1-sum(ps[cases])
}
> max(out)
[1] 0.08926748
> barnard.test(4,2,0,2)
Barnard's Unconditional Test
Treatment I Treatment II
Outcome I 4 2
Outcome II 0 2
Null hypothesis: Treatments have no effect on the outcomes
Score statistic = -1.63299
Nuisance parameter = 0.686 (One sided), 0.314 (Two sided)
P-value = 0.0892675 (One sided), 0.178535 (Two sided)
|
On Fisher's exact test: What test would have been appropriate if the lady hadn't known the number of
Barnard's test is used when the nuisance parameter is unknown under the null hypothesis.
However in the lady tasting test you could argue that the nuisance parameter can be set at 0.5 under the null
|
14,084
|
Where and why does deep learning shine?
|
The main purported benefits:
(1) No need to hand-engineer features for non-linear learning problems (saves time and scales into the future, since hand engineering is seen by some as a short-term band-aid).
(2) The learnt features are sometimes better than the best hand-engineered features, and can be so complex (computer vision - e.g. face-like features) that it would take way too much human time to engineer.
(3) Can use unlabeled data to pre-train the network. Suppose we have 1000000 unlabeled images and 1000 labeled images. We can now drastically improve a supervised learning algorithm by pre-training on the 1000000 unlabeled images with deep learning. In addition, in some domains we have abundant unlabeled data while labeled data is hard to find. An algorithm that can use this unlabeled data to improve classification is valuable.
(4) Empirically, smashed many benchmarks that were only seeing incremental improvements until the introduction of deep learning methods.
(5) Same algorithm works in multiple areas with raw (perhaps with minor pre-processing) inputs.
(6) Keeps improving as more data is fed to the network (assuming stationary distributions etc).
|
Where and why does deep learning shine?
|
The main purported benefits:
(1) Don't need to hand engineer features for non-linear learning problems (save time and scalable to the future, since hand engineering is seen by some as a short-term ban
|
Where and why does deep learning shine?
The main purported benefits:
(1) No need to hand-engineer features for non-linear learning problems (saves time and scales into the future, since hand engineering is seen by some as a short-term band-aid).
(2) The learnt features are sometimes better than the best hand-engineered features, and can be so complex (computer vision - e.g. face-like features) that it would take way too much human time to engineer.
(3) Can use unlabeled data to pre-train the network. Suppose we have 1000000 unlabeled images and 1000 labeled images. We can now drastically improve a supervised learning algorithm by pre-training on the 1000000 unlabeled images with deep learning. In addition, in some domains we have abundant unlabeled data while labeled data is hard to find. An algorithm that can use this unlabeled data to improve classification is valuable.
(4) Empirically, smashed many benchmarks that were only seeing incremental improvements until the introduction of deep learning methods.
(5) Same algorithm works in multiple areas with raw (perhaps with minor pre-processing) inputs.
(6) Keeps improving as more data is fed to the network (assuming stationary distributions etc).
|
Where and why does deep learning shine?
The main purported benefits:
(1) Don't need to hand engineer features for non-linear learning problems (save time and scalable to the future, since hand engineering is seen by some as a short-term ban
|
14,085
|
Where and why does deep learning shine?
|
Another important point in addition to the above (I don't have sufficient rep to merely add it as a comment) is that it is a generative model (Deep Belief Nets at least) and thus you can sample from the learned distributions - this can have some major benefits in certain applications where you want to generate synthetic data corresponding to the learned classes/clusters.
|
Where and why does deep learning shine?
|
Another important point in addition to the above (I don't have sufficient rep to merely add it as a comment) is that it is a generative model (Deep Belief Nets at least) and thus you can sample from t
|
Where and why does deep learning shine?
Another important point in addition to the above (I don't have sufficient rep to merely add it as a comment) is that it is a generative model (Deep Belief Nets at least) and thus you can sample from the learned distributions - this can have some major benefits in certain applications where you want to generate synthetic data corresponding to the learned classes/clusters.
|
Where and why does deep learning shine?
Another important point in addition to the above (I don't have sufficient rep to merely add it as a comment) is that it is a generative model (Deep Belief Nets at least) and thus you can sample from t
|
14,086
|
How to simulate from a Gaussian copula?
|
There is a very simple method to simulate from the Gaussian copula which is based on the definitions of the multivariate normal distribution and the Gauss copula.
I'll start by providing the required definition and properties of the multivariate normal distribution, followed by the Gaussian copula, and then I'll provide the algorithm to simulate from the Gauss copula.
Multivariate normal distribution
A random vector $X = (X_1, \ldots, X_d)'$ has a multivariate normal distribution if
$$
X \stackrel{\mathrm{d}}{=} \mu + AZ,
$$
where $Z$ is a $k$-dimensional vector of independent standard normal random variables, $\mu$ is a $d$-dimensional vector of constants, and $A$ is a $d\times k$ matrix of constants.
The notation $\stackrel{\mathrm{d}}{=}$ denotes equality in distribution.
So, each component of $X$ is essentially a weighted sum of independent standard normal random variables.
From the properties of mean vectors and covariance matrices, we have
${\rm E}(X) = \mu$ and ${\rm cov}(X) = \Sigma$, with $\Sigma = AA'$, leading to the natural notation $X \sim {\rm N}_d(\mu, \Sigma)$.
Gauss copula
The Gauss copula is defined implicitly from the multivariate normal distribution; that is, the Gauss copula is the copula associated with a multivariate normal distribution. Specifically, from Sklar's theorem the Gauss copula is
$$
C_P(u_1, \ldots, u_d) = \boldsymbol{\Phi}_P(\Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_d)),
$$
where $\Phi$ denotes the standard normal distribution function, and $\boldsymbol{\Phi}_P$ denotes the multivariate standard normal distribution function with correlation matrix P. So, the Gauss copula is simply a standard multivariate normal distribution where the probability integral transform is applied to each margin.
Simulation algorithm
In view of the above, a natural approach to simulate from the Gauss copula is to simulate from the multivariate standard normal distribution with an appropriate correlation matrix $P$, and to convert each margin using the probability integral transform with the standard normal distribution function.
Simulating from a multivariate normal distribution with covariance matrix $\Sigma$ essentially comes down to forming a weighted sum of independent standard normal random variables, where the "weight" matrix $A$ can be obtained from the Cholesky decomposition of the covariance matrix $\Sigma$.
Therefore, an algorithm to simulate $n$ samples from the Gauss copula with correlation matrix $P$ is:
Perform a Cholesky decomposition of $P$, and set $A$ as the resulting lower triangular matrix.
Repeat the following steps $n$ times.
Generate a vector $Z = (Z_1, \ldots, Z_d)'$ of independent standard normal variates.
Set $X = AZ$
Return $U = (\Phi(X_1), \ldots, \Phi(X_d))'$.
The following code is an example implementation of this algorithm using R:
## Initialization and parameters
set.seed(123)
P <- matrix(c(1, 0.1, 0.8, # Correlation matrix
0.1, 1, 0.4,
0.8, 0.4, 1), nrow = 3)
d <- nrow(P) # Dimension
n <- 200 # Number of samples
## Simulation (non-vectorized version)
A <- t(chol(P))
U <- matrix(nrow = n, ncol = d)
for (i in 1:n){
Z <- rnorm(d)
X <- A%*%Z
U[i, ] <- pnorm(X)
}
## Simulation (compact vectorized version)
U <- pnorm(matrix(rnorm(n*d), ncol = d) %*% chol(P))
## Visualization
pairs(U, pch = 16,
labels = sapply(1:d, function(i){as.expression(substitute(U[k], list(k = i)))}))
The following chart shows the data resulting from the above R code.
|
How to simulate from a Gaussian copula?
|
There is a very simple method to simulate from the Gaussian copula which is based on the definitions of the multivariate normal distribution and the Gauss copula.
I'll start by providing the required
|
How to simulate from a Gaussian copula?
There is a very simple method to simulate from the Gaussian copula which is based on the definitions of the multivariate normal distribution and the Gauss copula.
I'll start by providing the required definition and properties of the multivariate normal distribution, followed by the Gaussian copula, and then I'll provide the algorithm to simulate from the Gauss copula.
Multivariate normal distribution
A random vector $X = (X_1, \ldots, X_d)'$ has a multivariate normal distribution if
$$
X \stackrel{\mathrm{d}}{=} \mu + AZ,
$$
where $Z$ is a $k$-dimensional vector of independent standard normal random variables, $\mu$ is a $d$-dimensional vector of constants, and $A$ is a $d\times k$ matrix of constants.
The notation $\stackrel{\mathrm{d}}{=}$ denotes equality in distribution.
So, each component of $X$ is essentially a weighted sum of independent standard normal random variables.
From the properties of mean vectors and covariance matrices, we have
${\rm E}(X) = \mu$ and ${\rm cov}(X) = \Sigma$, with $\Sigma = AA'$, leading to the natural notation $X \sim {\rm N}_d(\mu, \Sigma)$.
Gauss copula
The Gauss copula is defined implicitly from the multivariate normal distribution; that is, the Gauss copula is the copula associated with a multivariate normal distribution. Specifically, by Sklar's theorem the Gauss copula is
$$
C_P(u_1, \ldots, u_d) = \boldsymbol{\Phi}_P(\Phi^{-1}(u_1), \ldots, \Phi^{-1}(u_d)),
$$
where $\Phi$ denotes the standard normal distribution function, and $\boldsymbol{\Phi}_P$ denotes the multivariate standard normal distribution function with correlation matrix $P$. So, the Gauss copula is simply a standard multivariate normal distribution where the probability integral transform is applied to each margin.
Simulation algorithm
In view of the above, a natural approach to simulate from the Gauss copula is to simulate from the multivariate standard normal distribution with an appropriate correlation matrix $P$, and to convert each margin using the probability integral transform with the standard normal distribution function.
Simulating from a multivariate normal distribution with covariance matrix $\Sigma$ essentially comes down to taking a weighted sum of independent standard normal random variables, where the "weight" matrix $A$ can be obtained from the Cholesky decomposition of the covariance matrix $\Sigma$.
Therefore, an algorithm to simulate $n$ samples from the Gauss copula with correlation matrix $P$ is:
Perform a Cholesky decomposition of $P$, and set $A$ as the resulting lower triangular matrix.
Repeat the following steps $n$ times.
Generate a vector $Z = (Z_1, \ldots, Z_d)'$ of independent standard normal variates.
Set $X = AZ$
Return $U = (\Phi(X_1), \ldots, \Phi(X_d))'$.
The following code is an example implementation of this algorithm in R:
## Initialization and parameters
set.seed(123)
P <- matrix(c(1, 0.1, 0.8, # Correlation matrix
0.1, 1, 0.4,
0.8, 0.4, 1), nrow = 3)
d <- nrow(P) # Dimension
n <- 200 # Number of samples
## Simulation (non-vectorized version)
A <- t(chol(P))
U <- matrix(nrow = n, ncol = d)
for (i in 1:n){
Z <- rnorm(d)
X <- A%*%Z
U[i, ] <- pnorm(X)
}
## Simulation (compact vectorized version)
U <- pnorm(matrix(rnorm(n*d), ncol = d) %*% chol(P))
## Visualization
pairs(U, pch = 16,
labels = sapply(1:d, function(i){as.expression(substitute(U[k], list(k = i)))}))
The following chart shows the data resulting from the above R code.
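For reference, here is a rough NumPy/SciPy sketch of the same algorithm, a Python translation of the R code above rather than code from the original answer (the variable names are illustrative):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(123)
P = np.array([[1.0, 0.1, 0.8],
              [0.1, 1.0, 0.4],
              [0.8, 0.4, 1.0]])   # correlation matrix
d, n = P.shape[0], 200            # dimension and number of samples

A = np.linalg.cholesky(P)         # lower-triangular Cholesky factor, A A' = P
Z = rng.standard_normal((n, d))   # independent standard normal variates
X = Z @ A.T                       # each row is A z, i.e. N(0, P)
U = norm.cdf(X)                   # probability integral transform per margin
```

As in the R version, `U` can then be inspected with a pairs plot to see the dependence induced by `P`.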
|
14,087
|
MLE convergence errors with statespace SARIMAX
|
First, mle_retvals should be an attribute of SARIMAXResults if it is constructed using a fit call, so you should be able to check it. What do you get when you try print(res.mle_retvals)?
Second, do the estimated parameters seem "in the ballpark", or are they nonsense? Or are they NaN?
Without knowing more: you might try increasing the maximum number of iterations, e.g.
mod = sm.tsa.SARIMAX(endog, order=(p,d,q))
res = mod.fit(maxiter=200)
You also might try a different optimization routine (e.g. nm for Nelder-Mead):
res = mod.fit(maxiter=200, method='nm')
You could try computing better starting parameters somehow. The default routine may not be good for hourly data and may not be good with lots of missing data, e.g.:
params = [...]
res = mod.fit(start_params=params)
Finally, if nothing else works, to see if it is the missing data per se that is the problem, you could try imputing the data (e.g. with Pandas interpolate or something) and see if the model converges then (the estimated parameters wouldn't apply to the original dataset, but at least you could tell if that was the problem).
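As a minimal sketch of that imputation idea only (toy data standing in for the original series; the imputed result could then be passed back into SARIMAX):

```python
import numpy as np
import pandas as pd

# toy series with missing values standing in for the real hourly data
s = pd.Series([1.0, np.nan, np.nan, 4.0, 5.0, np.nan, 7.0, 8.0])

# linear interpolation across the gaps (pandas default method)
imputed = s.interpolate()
```

Again, parameters estimated on the imputed series would not apply to the original dataset; this is only a way to test whether the missingness itself is causing the convergence failure.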
Update
I guess the likelihood function may be quite flat, and/or have many local optima. That could also explain why powell (which is a derivative-free method) helps.
I think it is likely that the basic SARIMAX model is too "blunt" for such high frequency data. For example, there are possibilities of very long seasonal patterns (e.g. weekly) that are computationally burdensome under the basic model, and there can be calendar effects that must be taken into account. I think these features would make the autocorrelations difficult to interpret, and could make it appear that large numbers of lags are required to fit the model well.
Unfortunately, the best way to proceed can probably only be determined by looking at and thinking about the data. You may want to start with simple, low-order SARIMAX models, and take a look at the residuals and associated diagnostics (e.g. res.plot_diagnostics() for a start).
You could also try using a filter (e.g. seasonal_decompose or bk_filter) to remove cyclic effects at various frequencies prior to fitting the SARIMAX model. You could also try using the Census Bureau's X13 tool.
|
14,088
|
How few training examples is too few when training a neural network?
|
It really depends on your dataset, and network architecture. One rule of thumb I have read (2) was a few thousand samples per class for the neural network to start to perform very well.
In practice, people try and see. It's not rare to find studies showing decent results with a training set smaller than 1000 samples.
A good way to roughly assess to what extent it could be beneficial to have more training samples is to plot the performance of the neural network against the size of the training set, e.g. from (1):
(1) Dernoncourt, Franck, Ji Young Lee, Ozlem Uzuner, and Peter Szolovits. "De-identification of Patient Notes with Recurrent Neural Networks" arXiv preprint arXiv:1606.03475 (2016).
(2) Cireşan, Dan C., Ueli Meier, and Jürgen Schmidhuber. "Transfer learning for Latin and Chinese characters with deep neural networks." In The 2012 International Joint Conference on Neural Networks (IJCNN), pp. 1-6. IEEE, 2012. https://scholar.google.com/scholar?cluster=7452424507909578812&hl=en&as_sdt=0,22 ; http://people.idsia.ch/~ciresan/data/ijcnn2012_v9.pdf:
For classification tasks with a few thousand samples per class, the benefit of (unsupervised or supervised) pretraining is not easy to demonstrate.
|
14,089
|
Difference in Means vs. Mean Difference
|
(I'm assuming you mean "sample" and not "population" in your first paragraph.)
The equivalence is easy to show mathematically. Start with two samples of equal size, $\{x_1,\dots,x_n\}$ and $\{y_1,\dots,y_n\}$. Then define $$\begin{align}
\bar x &= \frac{1}{n} \sum_{i=1}^n x_i \\
\bar y &= \frac{1}{n} \sum_{i=1}^n y_i \\
\bar d &= \frac{1}{n} \sum_{i=1}^n (x_i - y_i)
\end{align}$$
Then you have: $$\begin{align}
\bar x - \bar y &= \left( \frac{1}{n} \sum_{i=1}^n x_i \right) - \left( \frac{1}{n} \sum_{i=1}^n y_i \right) \\
&= \frac{1}{n} \left( \sum_{i=1}^n x_i - \sum_{i=1}^n y_i \right) \\
&= \frac{1}{n} \left( \left( x_1 + \dots + x_n \right) - \left( y_1 + \dots + y_n \right) \right) \\
&= \frac{1}{n} \left( x_1 + \dots + x_n - y_1 - \dots - y_n \right) \\
&= \frac{1}{n} \left( x_1 - y_1 + \dots + x_n - y_n \right) \\
&= \frac{1}{n} \left( \left( x_1 - y_1 \right) + \dots + \left( x_n - y_n \right) \right) \\
&= \frac{1}{n} \sum_{i = 1}^n (x_i - y_i) \\
&= \bar d.
\end{align}$$
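The identity is also easy to check numerically; a small Python sketch with toy data:

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.normal(10.0, 2.0, 50)   # sample 1
y = rng.normal(8.0, 2.0, 50)    # sample 2, same size

diff_of_means = x.mean() - y.mean()   # x-bar minus y-bar
mean_of_diffs = (x - y).mean()        # d-bar

# identical up to floating-point rounding
assert np.isclose(diff_of_means, mean_of_diffs)
```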
|
14,090
|
Difference in Means vs. Mean Difference
|
The distribution of the mean difference should be tighter than the distribution of the difference of means. See this with an easy example:
mean in sample 1: 1 10 100 1000
mean in sample 2: 2 11 102 1000
The paired differences are 1 1 2 0, which (unlike the samples themselves) have a small standard deviation.
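The same point with simulated paired data (the numbers are illustrative): when pairs share a large common component, the paired differences are far less variable than either sample.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(0.0, 100.0, 1000)          # large variation shared within pairs
x = base + rng.normal(0.0, 1.0, 1000)
y = base + 1.0 + rng.normal(0.0, 1.0, 1000)

sd_x = x.std()           # spread of one sample: roughly 100
sd_diff = (x - y).std()  # spread of paired differences: roughly sqrt(2)
```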
|
14,091
|
What are the theoretical guarantees of bagging
|
The main use-case for bagging is reducing variance of low-biased models by bunching them together. This was studied empirically in the landmark paper "An Empirical Comparison of Voting Classification
Algorithms: Bagging, Boosting, and Variants" by Bauer and Kohavi. It usually works as advertised.
However, contrary to popular belief, bagging is not guaranteed to reduce the variance. A more recent and (in my opinion) better explanation is that bagging reduces the influence of leverage points. Leverage points are those that disproportionately affect the resulting model, such as outliers in least-squares regression. It is rare but possible for leverage points to positively influence resulting models, in which case bagging reduces performance. Have a look at "Bagging equalizes influence" by Grandvalet.
So, to finally answer your question: the effect of bagging largely depends on leverage points. Few theoretical guarantees exist, except that bagging linearly increases computation time in terms of bag size! That said, it is still a widely used and very powerful technique. When learning with label noise, for instance, bagging can produce more robust classifiers.
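The variance-reduction story can be sketched in a few lines of Python; the one-split "stump" learner and toy data below are illustrative assumptions, not code from the papers cited:

```python
import numpy as np

rng = np.random.default_rng(0)

def stump_predict(x_tr, y_tr, x_te):
    """A one-split regression 'stump': a deliberately high-variance learner."""
    t = np.median(x_tr)
    left = y_tr[x_tr <= t].mean()
    right = y_tr[x_tr > t].mean()
    return np.where(x_te <= t, left, right)

def bagged_predict(x_tr, y_tr, x_te, n_bags=50):
    """Average the stump over bootstrap resamples of the training set."""
    preds = []
    for _ in range(n_bags):
        idx = rng.integers(0, len(x_tr), len(x_tr))  # sample with replacement
        preds.append(stump_predict(x_tr[idx], y_tr[idx], x_te))
    return np.mean(preds, axis=0)

x_tr = rng.uniform(-1, 1, 100)
y_tr = np.sin(3 * x_tr) + rng.normal(0, 0.2, 100)
x_te = np.linspace(-1, 1, 50)

single = stump_predict(x_tr, y_tr, x_te)
bagged = bagged_predict(x_tr, y_tr, x_te)
```

Across repeated training sets, the bagged predictions typically fluctuate much less than the single stump's, which is exactly the (usual, not guaranteed) variance reduction discussed above.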
Rao and Tibshirani have given a Bayesian interpretation in "The out-of-bootstrap method for model averaging and selection":
In this sense, the bootstrap distribution represents an (approximate) nonparametric, non-informative posterior distribution for our parameter. But this bootstrap distribution is obtained painlessly, without having to formally specify a prior and without having to sample from the posterior distribution. Hence we might think of the bootstrap distribution as a "poor man's" Bayes posterior.
|
14,092
|
Good resources (online or book) on the mathematical foundations of statistics
|
Maths:
Grinstead & Snell, Introduction to Probability (it's free)
Strang, Introduction to Linear Algebra
Strang, Calculus
Also check out Strang on MIT OpenCourseWare.
Statistical theory (it's more than just maths):
Cox, Principles of Statistical Inference
Cox & Hinkley, Theoretical Statistics
Geisser, Modes of Parametric Statistical Inference
And I second @Andre's Casella & Berger.
|
14,093
|
Good resources (online or book) on the mathematical foundations of statistics
|
Some important mathematical statistics topics are:
Exponential family and sufficiency.
Estimator construction.
Hypothesis testing.
References regarding mathematical statistics:
Mood, A. M., Graybill, F. A., & Boes, D. C. (1974). Introduction to theory of statistics. (B. C. Harrinson & M. Eichberg, Eds.) (3rd ed., p. 564). McGraw-Hill, Inc.
Casella, G., & Berger, R. L. (2002). Statistical Inference. (C. Crockett, Ed.) (2nd ed., p. 657). Pacific Grove, CA: Wadsworth Group, Thomson Learning Inc.
|
14,094
|
Good resources (online or book) on the mathematical foundations of statistics
|
Have a look at the Mathematical Biostatistics Bootcamp at Coursera https://www.coursera.org/#course/biostats.
|
14,095
|
Good resources (online or book) on the mathematical foundations of statistics
|
SEM is (in my opinion) very far removed from traditional probability theory and some basic statistical techniques that extend easily from it (such as point estimation, large sample theory, and Bayesian statistics). I think SEM is the result of a great deal of abstraction from such methods. I furthermore think that the reason why such abstractions were necessary was because of the overwhelming demand to better understand causal inference.
I think a book that would be perfect for someone of your background would be Judea Pearl's Causality. This book specifically addresses SEM as well as multivariate statistics, develops a theory of causality and inference, and is very philosophically sound. It's not a mathematical book, but draws heavily upon logic and counterfactuals, and develops a very precise language for defending statistical models.
I can say from a mathematical background that these results are very sound and do not require an extensive understanding of calculus. I also think it's unrealistic for someone of your pedigree to catch up on the necessary mathematics when you're already a graduate student; that's why there are statisticians!
|
14,096
|
Which diagnostics can validate the use of a particular family of GLM?
|
I have some tips:
(1) How residuals ought to compare to fits isn't always all that obvious, so it's good to be familiar with diagnostics for particular models. In logistic regression models, for example, the Hosmer-Lemeshow statistic is used to assess goodness of fit; leverage values tend to be small where the estimated odds are very large, very small or about even; & so on.
(2) Sometimes one family of models can be seen as a special case of another, so you can use a hypothesis test on a parameter to help you choose. Exponential vs Weibull, for example.
(3) Akaike's Information Criterion is useful in choosing between different models, which includes choosing between different families.
(4) Theoretical/empirical knowledge about what you're modelling narrows the field of plausible models.
But there's no automatic way of finding the 'right' family; real-life data can come from distributions as complicated as you like, & the complexity of models that are worth trying to fit increases with the amount of data you have. This is part & parcel of Box's dictum that no models are true but some are useful.
Re @gung's comment: it appears the commonly used Hosmer-Lemeshow test is (a) surprisingly sensitive to the choice of bins, & (b) generally less powerful than some other tests against some relevant classes of alternative hypothesis. That doesn't detract from point (1): it's also good to be up-to-date.
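To illustrate points (2) and (3), here is a rough scipy sketch comparing an exponential and a Weibull fit by AIC on toy data (the helper function, the fixed `loc=0`, and the simulated data are illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = 3.0 * rng.weibull(2.0, 500)   # toy positive data, Weibull with shape 2

def aic(dist, sample, **fit_kw):
    """Crude AIC: 2k - 2*log-likelihood at the fitted parameters."""
    params = dist.fit(sample, **fit_kw)
    loglik = np.sum(dist.logpdf(sample, *params))
    return 2 * len(params) - 2 * loglik

aic_expon = aic(stats.expon, data, floc=0)        # exponential family
aic_weibull = aic(stats.weibull_min, data, floc=0)  # Weibull family
# lower AIC -> preferred family; here the Weibull should win,
# consistent with the exponential being its shape = 1 special case
```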
|
14,097
|
Which diagnostics can validate the use of a particular family of GLM?
|
You may find it interesting to read the vignette (introductory manual) for the R package fitdistrplus. I recognize that you prefer to work in Stata, but I think the vignette will be sufficiently self-explanatory that you can get some insights into the process of inferring distributional families from data. You will probably be able to implement some of the ideas in Stata via your own code. In particular, I think the Cullen and Frey graph, if it is / could be implemented in Stata, may be helpful for you.
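If you end up implementing the Cullen and Frey idea yourself, the graph plots squared skewness against kurtosis for your sample (with reference points for candidate families); a minimal sketch of those two statistics using scipy on toy data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.gamma(2.0, 1.5, 1000)   # toy skewed, positive-valued data

skew_sq = stats.skew(data) ** 2               # x-axis of a Cullen-Frey graph
kurt = stats.kurtosis(data, fisher=False)     # y-axis: Pearson kurtosis (normal = 3)
```

Plotting `(skew_sq, kurt)` against the theoretical curves/points for candidate families is the essence of the graph.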
|
14,098
|
Should confidence intervals for linear regression coefficients be based on the normal or $t$ distribution?
|
(1) When the errors are normally distributed and their variance is not known, then $$\frac{\hat{\beta} - \beta_0}{{\rm se}(\hat{\beta})}$$ has a $t$-distribution under the null hypothesis that $\beta_0$ is the true regression coefficient. The default in R is to test $\beta_0 = 0$, so the $t$-statistics reported there are just $$\frac{\hat{\beta}}{{\rm se}(\hat{\beta})}$$
Note that, under some regularity conditions, the statistic above is always asymptotically normally distributed, regardless of whether the errors are normal or whether the error variance is known.
(2) The reason you're getting different results is that the percentiles of the normal distribution are different from the percentiles of the $t$-distribution. Therefore, the multiplier you're using in front of the standard error is different, which, in turn, gives different confidence intervals.
Specifically, recall that the confidence interval using the normal distribution is
$$ \hat{\beta} \pm z_{\alpha/2} \cdot {\rm se}(\hat{\beta}) $$
where $z_{\alpha/2}$ is the $\alpha/2$ quantile of the normal distribution. In the standard case of a $95\%$ confidence interval, $\alpha = .05$ and $z_{\alpha/2} \approx 1.96$. The confidence interval based on the $t$-distribution is
$$ \hat{\beta} \pm t_{\alpha/2,n-p} \cdot {\rm se}(\hat{\beta}) $$
where the multiplier $t_{\alpha/2,n-p}$ is based on the quantiles of the $t$-distribution with $n-p$ degrees of freedom where $n$ is the sample size and $p$ is the number of predictors. When $n$ is large, $t_{\alpha/2,n-p}$ and $z_{\alpha/2}$ are about the same.
Below is a plot of the $t$ multipliers for sample sizes ranging from $5$ to $300$ (I've assumed $p=1$ for this plot, but that qualitatively changes nothing). The $t$-multipliers are larger, but, as you can see below, they do converge to the $z$ (solid black line) multiplier as the sample size increases.
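The convergence of the multipliers can also be checked numerically. This is a minimal Python/SciPy sketch of my own (the answer's plot was presumably made in R); it prints the 95% multipliers for a few sample sizes with $p=1$ predictor, as in the plot described.

```python
# Compare the 95% t and z multipliers as the sample size n grows (p = 1).
from scipy import stats

alpha, p = 0.05, 1
z = stats.norm.ppf(1 - alpha / 2)  # normal multiplier, ~1.960

for n in (5, 30, 300):
    t = stats.t.ppf(1 - alpha / 2, df=n - p)  # t multiplier with n - p df
    print(f"n = {n:3d}: t = {t:.3f}  vs  z = {z:.3f}")
```

For $n=5$ the $t$ multiplier is about 2.78, noticeably wider than 1.96; by $n=300$ the two agree to about two decimal places, matching the convergence the answer describes.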
|
14,099
|
How to add two dependent random variables?
|
As vinux points out, one needs the joint distribution of $A$ and $B$, and
it is not obvious from OP Mesko's response "I know Distributive function of A and B"
that he is saying he knows the joint distribution of A and B: he may well
be saying that he knows the marginal distributions of A and B. However,
assuming that Mesko does know the joint distribution, the answer is given below.
From the convolution integral in OP Mesko's comment (which is wrong, by the way), it could be inferred that
Mesko is interested in jointly continuous random variables $A$ and $B$ with joint probability density function $f_{A,B}(a,b)$. In this case,
$$f_{A+B}(z) = \int_{-\infty}^{\infty} f_{A,B}(a,z-a) \mathrm da
= \int_{-\infty}^{\infty} f_{A,B}(z-b,b) \mathrm db.$$
When $A$ and $B$ are independent, the joint density function factors into the
product of the marginal density functions: $f_{A,B}(a,z-a)=f_{A}(a)f_{B}(z-a)$
and we get the more familiar
convolution formula for independent random variables. A similar result
applies for discrete random variables as well.
Things are more complicated if $A$ and $B$ are not jointly continuous, or
if one random variable is continuous and the other is discrete. However,
in all cases, one can always find the cumulative probability distribution
function $F_{A+B}(z)$ of $A+B$ as the total probability mass in the region of
the plane specified as $\{(a,b) \colon a+b \leq z\}$ and compute the probability
density function, or the probability mass function, or whatever, from the
distribution function. Indeed the above formula is obtained by writing
$F_{A+B}(z)$ as a double integral of the joint density function over the
specified region and then "differentiating under the integral sign."
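As a hypothetical illustration of the first formula, the sketch below evaluates $f_{A+B}(z) = \int f_{A,B}(a, z-a)\,\mathrm{d}a$ by simple quadrature for an assumed dependent joint density $f_{A,B}(a,b) = a + b$ on the unit square (my own example, not from the question; for this density the closed form is $z^2$ on $[0,1]$ and $1-(z-1)^2$ on $[1,2]$).

```python
# Sketch: numerical evaluation of f_{A+B}(z) = ∫ f_{A,B}(a, z - a) da
# for an illustrative dependent joint density f(a,b) = a + b on [0,1]^2.
import numpy as np

def joint(a, b):
    # Joint density: a + b inside the unit square, 0 outside
    inside = (0 <= a) & (a <= 1) & (0 <= b) & (b <= 1)
    return np.where(inside, a + b, 0.0)

def density_of_sum(z, n=300_000):
    # Riemann sum over a; the grid covers the full support of A
    a = np.linspace(-0.5, 1.5, n)
    da = a[1] - a[0]
    return float(np.sum(joint(a, z - a)) * da)

print(density_of_sum(0.5))  # closed form gives z^2 = 0.25
print(density_of_sum(1.5))  # closed form gives 1 - (z-1)^2 = 0.75
```

Replacing `joint` with any other joint density (dependent or not) reuses the same machinery, which is the point of the general formula above.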
|
14,100
|
How to add two dependent random variables?
|
To begin with, I don't know whether what I'm saying is correct, but I got stuck on the same problem and tried to solve it this way:
Express the joint distribution using the Heaviside step function:
$$
f_{A,B}(a,b)=(a+b)\,H(a)\,H(b)\,H(1-a)\,H(1-b)
$$
or equivalently
$$
f_{A,B}(a,b)=(a+b)(H(a)-H(a-1))(H(b)-H(b-1))
$$
Now you can perform the integral without caring about limits of integration.
[Figures A, B, C: the Wolfram representation of the joint density, the computed integral, and its plot — images not reproduced.]
The resulting density is
$$ f(z)=\begin{cases}
z^2 & \text{for } 0\leq z \leq 1\\
1-(z-1)^2 & \text{for } 1\leq z \leq 2\\
0 & \text{otherwise}
\end{cases} $$
and it is normalized, as you can easily check.
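A quick Monte Carlo cross-check of this result (my own illustrative sketch; the rejection-sampling scheme is an assumption, not part of the answer): drawing from $f_{A,B}(a,b)=a+b$ on the unit square, the empirical $P(A+B \le 1)$ should match $\int_0^1 z^2\,\mathrm{d}z = 1/3$.

```python
# Monte Carlo check of the piecewise density via rejection sampling.
import numpy as np

rng = np.random.default_rng(1)

def sample_joint(n):
    # Propose uniformly on [0,1]^2; accept with probability (a+b)/2,
    # since a+b is bounded above by 2 on the unit square.
    batches, total = [], 0
    while total < n:
        ab = rng.uniform(0.0, 1.0, size=(2 * n, 2))
        keep = rng.uniform(0.0, 2.0, size=2 * n) < ab.sum(axis=1)
        batches.append(ab[keep])
        total += int(keep.sum())
    return np.concatenate(batches)[:n]

z = sample_joint(200_000).sum(axis=1)
p_hat = float((z <= 1.0).mean())
print(p_hat)  # closed form: P(Z <= 1) = 1/3
```

With 200,000 samples the Monte Carlo error is on the order of 0.001, so the agreement with $1/3$ is a meaningful confirmation of the normalization and the left branch $z^2$.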
|