Why is leave-one-out cross-validation (LOOCV) variance about the mean estimate for error high? [duplicate]
It's like taking a test with just one question - it's a lot more hit-and-miss.
This is an intuitive explanation of the standard deviation of an instance versus that of a mean - the score on a batch of instances has less variance.
Here are some more details.
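The intuition above can be checked with a quick simulation (illustrative numbers only, not tied to any particular model): the spread of a single instance's score is much larger than the spread of a mean over a batch of instances.

```r
# Illustrative simulation: spread of one instance's "error score" vs. the
# mean score over a batch of 25 instances (the score values are arbitrary).
set.seed(1)
scores <- matrix(rnorm(1000 * 25, mean = 0.3, sd = 0.2)^2, ncol = 25)
sd_single <- sd(scores[, 1])        # variability of a single instance's score
sd_batch  <- sd(rowMeans(scores))   # variability of a 25-instance batch mean
sd_single / sd_batch                # roughly sqrt(25) = 5 for iid scores
```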
Weibull distribution parameters $k$ and $c$ for wind speed data
Because @zaynah posted in the comments that the data are thought to follow a Weibull distribution, I'll provide a short tutorial on how to estimate the parameters of such a distribution by maximum likelihood estimation (MLE). There is a similar post about wind speeds and the Weibull distribution on this site.
1. Download and install R; it's free.
2. Optional: download and install RStudio, a great IDE for R that provides many useful features such as syntax highlighting.
3. Install the packages MASS and car by typing install.packages(c("MASS", "car")). Load them with library(MASS) and library(car).
4. Import your data into R. If your data are in Excel, for example, save them as a delimited text file (.txt) and import them with read.table.
5. Use the function fitdistr to calculate the maximum likelihood estimates of your Weibull distribution: fitdistr(my.data, densfun="weibull", lower = 0). To see a fully worked-out example, see the link at the bottom of the answer.
6. Make a QQ-plot to compare your data with a Weibull distribution using the scale and shape parameters estimated in step 5: qqPlot(my.data, distribution="weibull", shape=, scale=)
Vito Ricci's tutorial on fitting distributions with R is a good starting point on the matter, and there are numerous posts on this site on the subject (see this post too).
To see a fully worked out example of how to use fitdistr, have a look at this post.
Let's look at an example in R:
# Load packages
library(MASS)
library(car)
# First, we generate 1000 random numbers from a Weibull distribution with
# scale = 1 and shape = 1.5
rw <- rweibull(1000, scale=1, shape=1.5)
# We can calculate a kernel density estimation to inspect the distribution
# Because the Weibull distribution has support [0, +Infinity), we truncate
# the density at 0
par(bg="white", las=1, cex=1.1)
plot(density(rw, bw=0.5, cut=0), las=1, lwd=2,
xlim=c(0,5),col="steelblue")
# Now, we can use fitdistr to calculate the parameters by MLE
# The option "lower = 0" is added because the parameters of the Weibull distribution need to be >= 0
fitdistr(rw, densfun="weibull", lower = 0)
shape scale
1.56788999 1.01431852
(0.03891863) (0.02153039)
The maximum likelihood estimates are close to those we arbitrarily set in the generation of the random numbers. Let's compare our data using a QQ-Plot with a hypothetical Weibull distribution with the parameters that we've estimated with fitdistr:
qqPlot(rw, distribution="weibull", scale=1.014, shape=1.568, las=1, pch=19)
The points are nicely aligned on the line and mostly within the 95%-confidence envelope. We would conclude that our data are compatible with a Weibull distribution. This was expected, of course, as we've sampled our values from a Weibull distribution.
Estimating the $k$ (shape) and $c$ (scale) of a Weibull distribution without MLE
This paper lists five methods to estimate the parameters of a Weibull distribution for wind speeds. I'll explain three of them here.
From means and standard deviation
The shape parameter $k$ is estimated as:
$$
k=\left(\frac{\hat{\sigma}}{\hat{v}}\right)^{-1.086}
$$
and the scale parameter $c$ is estimated as:
$$
c=\frac{\hat{v}}{\Gamma(1+1/k)}
$$
where $\hat{v}$ is the mean wind speed, $\hat{\sigma}$ the standard deviation, and $\Gamma$ the Gamma function.
Least-squares fit to observed distribution
If the observed wind speeds are divided into $n$ speed intervals $0-V_{1},V_{1}-V_{2},\ldots, V_{n-1}-V_{n}$, with frequencies of occurrence $f_{1}, f_{2},\ldots,f_{n}$ and cumulative frequencies $p_{1}=f_{1}, p_{2}=f_{1}+f_{2}, \ldots, p_{n}=p_{n-1}+f_{n}$, then you can fit a linear regression of the form $y=a+bx$ to the values
$$
x_{i} = \ln(V_{i})
$$
$$
y_{i} = \ln[-\ln(1-p_{i})]
$$
The Weibull parameters are related to the linear coefficients $a$ and $b$ by
$$
c=\exp\left(-\frac{a}{b}\right)
$$
$$
k=b
$$
Median and quartile wind speeds
If you don't have the complete observed wind speeds but the median $V_{m}$ and quartiles $V_{0.25}$ and $V_{0.75}$ $\left[p(V\leq V_{0.25})=0.25, p(V\leq V_{0.75})=0.75\right]$, then $c$ and $k$ can be computed by the relations
$$
k = \ln\left[\ln(0.25)/\ln(0.75)\right]/\ln(V_{0.75}/V_{0.25})\approx 1.573/\ln(V_{0.75}/V_{0.25})
$$
$$
c=V_{m}/\ln(2)^{1/k}
$$
Comparison of the four methods
Here is an example in R comparing the four methods:
library(MASS) # for "fitdistr"
set.seed(123)
#-----------------------------------------------------------------------------
# Generate 10000 random numbers from a Weibull distribution
# with shape = 1.5 and scale = 1
#-----------------------------------------------------------------------------
rw <- rweibull(10000, shape=1.5, scale=1)
#-----------------------------------------------------------------------------
# 1. Estimate k and c by MLE
#-----------------------------------------------------------------------------
fitdistr(rw, densfun="weibull", lower = 0)
shape scale
1.515380298 1.005562356
#-----------------------------------------------------------------------------
# 2. Estimate k and c using the least-squares fit
#-----------------------------------------------------------------------------
n <- 100 # number of bins
breaks <- seq(0, max(rw), length.out=n)
freqs <- as.vector(prop.table(table(cut(rw, breaks = breaks))))
cum.freqs <- c(0, cumsum(freqs))
xi <- log(breaks)
yi <- log(-log(1-cum.freqs))
# Fit the linear regression, keeping only the finite (x, y) pairs
ok <- is.finite(xi) & is.finite(yi)
least.squares <- lm(yi[ok] ~ xi[ok])
lin.mod.coef <- coefficients(least.squares)
k <- lin.mod.coef[2]
k
1.515115
c <- exp(-lin.mod.coef[1]/lin.mod.coef[2])
c
1.006004
#-----------------------------------------------------------------------------
# 3. Estimate k and c using the median and quartiles
#-----------------------------------------------------------------------------
med <- median(rw)
quarts <- quantile(rw, c(0.25, 0.75))
k <- log(log(0.25)/log(0.75))/log(quarts[2]/quarts[1])
k
1.537766
c <- med/log(2)^(1/k)
c
1.004434
#-----------------------------------------------------------------------------
# 4. Estimate k and c using mean and standard deviation.
#-----------------------------------------------------------------------------
k <- (sd(rw)/mean(rw))^(-1.086)
c <- mean(rw)/(gamma(1+1/k))
k
1.535481
c
1.006938
All methods yield very similar results. The maximum likelihood approach has the advantage that the standard errors of the Weibull parameters are directly given.
Using bootstrap to add pointwise confidence intervals to the PDF or CDF
We can use the non-parametric bootstrap to construct pointwise confidence intervals around the PDF and CDF of the estimated Weibull distribution. Here's an R script:
#-----------------------------------------------------------------------------
# 5. Bootstrapping the pointwise confidence intervals
#-----------------------------------------------------------------------------
set.seed(123)
rw.small <- rweibull(100,shape=1.5, scale=1)
xs <- seq(0, 5, len=500)
boot.pdf <- sapply(1:1000, function(i) {
  xi <- sample(rw.small, size=length(rw.small), replace=TRUE)
  MLE.est <- suppressWarnings(fitdistr(xi, densfun="weibull", lower = 0))
  dweibull(xs, shape=as.numeric(MLE.est$estimate["shape"]),
           scale=as.numeric(MLE.est$estimate["scale"]))
}
)
boot.cdf <- sapply(1:1000, function(i) {
  xi <- sample(rw.small, size=length(rw.small), replace=TRUE)
  MLE.est <- suppressWarnings(fitdistr(xi, densfun="weibull", lower = 0))
  pweibull(xs, shape=as.numeric(MLE.est$estimate["shape"]),
           scale=as.numeric(MLE.est$estimate["scale"]))
}
)
#-----------------------------------------------------------------------------
# Plot PDF
#-----------------------------------------------------------------------------
par(bg="white", las=1, cex=1.2)
plot(xs, boot.pdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.pdf),
xlab="x", ylab="Probability density")
for(i in 2:ncol(boot.pdf)) lines(xs, boot.pdf[, i], col=rgb(.6, .6, .6, .1))
# Add pointwise confidence bands
quants <- apply(boot.pdf, 1, quantile, c(0.025, 0.5, 0.975))
min.point <- apply(boot.pdf, 1, min, na.rm=TRUE)
max.point <- apply(boot.pdf, 1, max, na.rm=TRUE)
lines(xs, quants[1, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[3, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[2, ], col="darkred", lwd=2)
#lines(xs, min.point, col="purple")
#lines(xs, max.point, col="purple")
#-----------------------------------------------------------------------------
# Plot CDF
#-----------------------------------------------------------------------------
par(bg="white", las=1, cex=1.2)
plot(xs, boot.cdf[, 1], type="l", col=rgb(.6, .6, .6, .1), ylim=range(boot.cdf),
xlab="x", ylab="F(x)")
for(i in 2:ncol(boot.cdf)) lines(xs, boot.cdf[, i], col=rgb(.6, .6, .6, .1))
# Add pointwise confidence bands
quants <- apply(boot.cdf, 1, quantile, c(0.025, 0.5, 0.975))
min.point <- apply(boot.cdf, 1, min, na.rm=TRUE)
max.point <- apply(boot.cdf, 1, max, na.rm=TRUE)
lines(xs, quants[1, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[3, ], col="red", lwd=1.5, lty=2)
lines(xs, quants[2, ], col="darkred", lwd=2)
lines(xs, min.point, col="purple")
lines(xs, max.point, col="purple")
Using neural network for trading in stock exchange
There are severe flaws with this approach.
First, there are many gambles which usually win, but which are bad gambles. Suppose you have the chance to win \$1 $90\%$ of the time and lose \$100 $10\%$ of the time. This has a negative expected value, but the way you are training the neural network would teach it to recommend such reverse lottery tickets.
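A one-line sanity check of that gamble's expected value (numbers taken from the example above):

```r
# Win $1 with probability 0.90, lose $100 with probability 0.10
outcomes <- c(1, -100)
probs    <- c(0.90, 0.10)
ev <- sum(outcomes * probs)
ev  # -9.1: the bet pays off 90% of the time, yet loses money on average
```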
Second, you are missing a big point of the stock exchange, which is to manage risk. What determines the price of an investment is not just its return, it is the return versus the risk which can't be hedged away. Investments with high returns and high risks are not necessarily better than investments with low returns and low risk. If you can invest risk-free at $6\%$ and borrow money at $5\%$, this is more valuable than finding a very risky investment with a return of $60\%$. An investment with a negative rate of return may still be valuable if it is strongly negatively correlated with a risky investment with a high rate of return. So, the rate of return is insufficient for evaluating investments.
Third, you should realize that you are competing with other people who also have access to neural networks. There are a lot of commercial programs aimed at day traders based on neural networks. (These are made by people who find it more profitable to sell software to confused day traders than to use their own systems.) There are many proprietary systems, some of which may involve neural networks. To find value they overlook, you need to have some advantage, and you haven't mentioned any.
I'm a big fan of neural networks, but I think typical users of neural networks in the stock market do not understand the basics and burn money.
Using neural network for trading in stock exchange
A single human may never see this, but I'd still like to give my input as someone who has found themselves engrossed in both finance and computer science.
I will never say a neural network won't be successful in equity trading, but you need to think about the differences between how a human trades stocks and how a neural network will trade stocks. The average trader subconsciously takes into account hundreds of factors when making a simple buy, hold, or sell decision:
Recent news
Recent earning/financials
Economic indicators (interest rates, loan delinquencies, politics)
Industry competitors
And so on, and I'm sure over half of the people trading stocks don't beat the benchmarks. A neural network will really struggle to make a better decision than a human due to the lack of inputs. Looking at someone's face and saying "Oh, that's Bobby John" is a lot easier than making an investment decision, and some networks struggle to pick out faces. Possibly the 40+ nerds with 150 IQs from Caltech at Renaissance Technologies or D. E. Shaw & Co have figured out how to make neural networks trade stocks, but I wouldn't waste my time. Stick to getting programs to pick out basic trends or scrape financial data off of EDGAR.
Using neural network for trading in stock exchange
I realise this is an old thread, but just in case anyone stumbles on it: what the OP needed to do was squash his desired output range down into the 0-to-1 space, i.e. just remap -1 to 0.0, 0 to 0.5, and 1 to 1.0. Then you can use the standard logistic sigmoid activation function.
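As a minimal sketch, the remapping is just the affine transform $y' = (y+1)/2$ and its inverse (the function names here are made up for illustration):

```r
# Map a target in [-1, 1] onto [0, 1], and back; names are illustrative
to_unit   <- function(y) (y + 1) / 2
from_unit <- function(p) 2 * p - 1
to_unit(c(-1, 0, 1))   # 0.0 0.5 1.0, exactly the remapping described above
```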
How to tune smoothing in mgcv GAM model
The k argument effectively sets the dimensionality of the smoothing matrix for each term. gam() uses a GCV or UBRE score to select an optimal amount of smoothness, but it can only work within the dimensionality of the smoothing matrix. By default, te() smooths have k = 5^2 for 2d surfaces; I forget what it is for s(), so check the documentation. The current advice from Simon Wood, the author of mgcv, is that if the degree of smoothness selected by the model is at or close to the limit imposed by the value used for k, you should increase k and refit the model to see whether a more complex model is selected from the higher-dimensional smoothing matrix.
I don't know how locfit works, but you do need something that stops you from fitting a surface more complex than the data support (GCV and UBRE, or (RE)ML if you choose to use them [you can't, as you set scale = -1], are trying to do just that). In other words, you could fit very local features of the data, but are you fitting the noise in the sample you collected, or the mean of the probability distribution? gam() may be telling you something about what can be estimated from your data, assuming you've sorted out the basis dimensionality (above).
Another thing to look at is that the smoothers you are currently using are global in the sense that the smoothness selected is applied over the entire range of the smooth. Adaptive smoothers can spend the allotted smoothness "allowance" in parts of the data where the response is changing rapidly. gam() has capabilities for using adaptive smoothers.
See ?smooth.terms and ?adaptive.smooth to see what can be fitted using gam(). te() can combine most if not all of these smoothers (check the docs for which can and can't be included in tensor products) so you could use an adaptive smoother basis to try to capture the finer local scale in the parts of the data where the response is varying quickly.
I should add that you can get R to estimate a model with a fixed number of degrees of freedom for a smooth term, using the fx = TRUE argument to s() and te(). Basically, set k to what you want and fx = TRUE, and gam() will fit a regression spline with fixed degrees of freedom rather than a penalised regression spline.
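To illustrate, here is a minimal sketch using mgcv's built-in gamSim() simulated example data (not the OP's data), comparing a penalised smooth against a fixed-df regression spline obtained with fx = TRUE:

```r
library(mgcv)                                # ships with R as a recommended package
set.seed(1)
dat <- gamSim(1, n = 200, verbose = FALSE)   # simulated example data from mgcv
# Penalised fit: k bounds the basis dimension, GCV selects the smoothness
m_pen <- gam(y ~ s(x2, k = 20), data = dat)
# Fixed-df fit: fx = TRUE switches the penalty off entirely
m_fix <- gam(y ~ s(x2, k = 20, fx = TRUE), data = dat)
summary(m_pen)$edf   # effective df selected by GCV, below the k - 1 ceiling
summary(m_fix)$edf   # exactly k - 1 = 19: an unpenalised regression spline
```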
How to tune smoothing in mgcv GAM model
|
There are a number of options to make a gam less wiggly:
Set the default s(..., k = 10) to a smaller value.
Set the default s(..., bs = 'tp') to bs = 'ts'.
Set gam(..., select = TRUE).
Set the default gam(..., gamma = 1) to a larger value. Try values between 1 and 2.
Set the default s(..., m = 2) to m = 1.
Set the default method = "GCV.Cp" to method = "REML" (section 1.1; Wood, 2011).
Force monotonically increasing/decreasing curves. See scam package and other options.
Change some of the smoothed predictors + s(X1) to linear terms + X1.
Use fewer predictors.
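For concreteness, several of these knobs can be combined in a single call; a hypothetical sketch in which dat, y, X1 and X2 are placeholders:

```r
library(mgcv)

# Smaller basis dimension, shrinkage thin-plate splines (bs = 'ts'),
# an extra null-space penalty (select = TRUE), REML smoothness
# selection, and a mild inflation of the smoothness penalty (gamma):
m <- gam(y ~ s(X1, k = 5, bs = "ts") + s(X2, bs = "ts", m = 1),
         data   = dat,
         select = TRUE,
         method = "REML",
         gamma  = 1.4)
```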
|
14,608
|
Is it bad to have error bars constructed with standard deviation that spans to the negative scale while the variable itself shouldn't be negative?
|
No, in this case, it does not make sense to draw error bars using SDs.
Take a step back. Why do we draw error bars with SDs? As you write, it's to show where "much" of the data lies. This makes sense if your data come from a normal distribution: 68% of your data will lie within $\pm 1$ SD from the mean, so showing the mean with an error bar of $\pm 1$ SD will give you an interval that contains 68% of your data.
However, the number of visits to a doctor is a count, so it is discrete. And it can't be negative. Thus, it can't be normal. For high counts, you can often treat counts as normal, but not for a mean of 3 and an SD of 5. Using SD-based error bars is the wrong way of answering the original question, i.e., showing where "much" of the data falls.
Better: calculate the top and bottom ends of your interval directly, by calculating (e.g.) the 16% and the 84% quantile of your observations. The range between them will again contain 68% of your data, as in the normal case the interval around the mean $\pm 1$ SD.
Alternatively, you can fit a distribution. For instance, a mean of 3 and an SD of 5 are consistent with a negative binomial distribution with a mean of 3 and a size parameter of $\frac{3^2}{5^2-3}$ (see R's help page ?qnbinom - there are many different parameterizations of the negbin). For such a distribution, we can again calculate the parametric 16%/84% quantiles, which turns out to give us an interval $[0,6]$:
> qnbinom(pnorm(c(-1,1)),mu=3,size=3^2/(5^2-3))
[1] 0 6
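The empirical version of the same idea, sketched here on simulated counts (in real use, visits would be your observed data):

```r
set.seed(1)
# Simulated visit counts with mean 3 and SD 5 (negative binomial).
visits <- rnbinom(1000, mu = 3, size = 3^2 / (5^2 - 3))

# 16%/84% quantiles of the raw data: the interval between them covers
# roughly the central 68% of observations, like mean +/- 1 SD would
# for normal data, but it can never extend below zero.
quantile(visits, probs = pnorm(c(-1, 1)))
```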
|
14,609
|
What to do with random effects correlation that equals 1 or -1?
|
Singular random-effect covariance matrices
Obtaining a random effect correlation estimate of +1 or -1 means that the optimization algorithm hit "a boundary": correlations cannot be higher than +1 or lower than -1. Even if there are no explicit convergence errors or warnings, this potentially indicates some problems with convergence because we do not expect true correlations to lie on the boundary. As you said, this usually means that there are not enough data to estimate all the parameters reliably. Matuschek et al. 2017 say that in this situation the power can be compromised.
Another way to hit a boundary is to get a variance estimate of 0: Why do I get zero variance of a random effect in my mixed model, despite some variation in the data?
Both situations can be seen as obtaining a degenerate covariance matrix of random effects (in your example output covariance matrix is $4\times 4$); a zero variance or a perfect correlation means that the covariance matrix is not full rank and [at least] one of its eigenvalues is zero. This observation immediately suggests that there are other, more complex ways to get a degenerate covariance matrix: one can have a $4\times 4$ covariance matrix without any zeros or perfect correlations but nevertheless rank-deficient (singular). Bates et al. 2015 Parsimonious Mixed Models (unpublished preprint) recommend using principal component analysis (PCA) to check if the obtained covariance matrix is singular. If it is, they suggest to treat this situation the same way as the above singular situations.
So what to do?
If there is not enough data to estimate all the parameters of a model reliably, then we should consider simplifying the model. Taking your example model, X*Cond + (X*Cond|subj), there are various possible ways to simplify it:
Remove one of the random effects, usually the highest-order correlation:
X*Cond + (X+Cond|subj)
Get rid of all the correlation parameters:
X*Cond + (X*Cond||subj)
Update: as @Henrik notes, the || syntax will only remove correlations if all variables to the left of it are numerical. If categorical variables (such as Cond) are involved, one should rather use his convenient afex package (or cumbersome manual workarounds). See his answer for more details.
Get rid of some of the correlations parameters by breaking the term into several, e.g.:
X*Cond + (X+Cond|subj) + (0+X:Cond|subj)
Constrain the covariance matrix in some specific way, e.g. by setting one specific correlation (the one that hit the boundary) to zero, as you suggest. There is no built-in way in lme4 to achieve this. See @BenBolker's answer on SO for a demonstration of how to achieve this via some smart hacking.
Contrary to what you said, I don't think Matuschek et al. 2017 specifically recommend #4. The gist of Matuschek et al. 2017 and Bates et al. 2015 seems to be that one starts with the maximal model a la Barr et al. 2013 and then decreases the complexity until the covariance matrix is full rank. (Moreover, they would often recommend to reduce the complexity even further, in order to increase the power.) Update: In contrast, Barr et al. recommend to reduce complexity ONLY if the model did not converge; they are willing to tolerate singular covariance matrices. See @Henrik's answer.
If one agrees with Bates/Matuschek, then I think it is fine to try out different ways of decreasing the complexity in order to find the one that does the job while doing "the least damage". Looking at my list above, the original covariance matrix has 10 parameters; #1 has 6 parameters, #2 has 4 parameters, #3 has 7 parameters. Which model will get rid of the perfect correlations is impossible to say without fitting them.
But what if you are interested in this parameter?
The above discussion treats random effect covariance matrix as a nuisance parameter. You raise an interesting question of what to do if you are specifically interested in a correlation parameter that you have to "give up" in order to get a meaningful full-rank solution.
Note that fixing correlation parameter at zero will not necessarily yield BLUPs (ranef) that are uncorrelated; in fact, they might not even be affected that much at all (see @Placidia's answer for a demonstration). So one option would be to look at the correlations of BLUPs and report that.
Another, perhaps less attractive, option would be to treat subject as a fixed effect, Y ~ X*cond*subj, get the estimates for each subject and compute the correlation between them. This is equivalent to running separate Y ~ X*cond regressions for each subject and getting the correlation estimates from them.
See also the section on singular models in Ben Bolker's mixed model FAQ:
It is very common for overfitted mixed models to result in singular fits. Technically, singularity means that some of the $\theta$ (variance-covariance Cholesky decomposition) parameters corresponding to diagonal elements of the Cholesky factor are exactly zero, which is the edge of the feasible space, or equivalently that the variance-covariance matrix has some zero eigenvalues (i.e. is positive semidefinite rather than positive definite), or (almost equivalently) that some of the variances are estimated as zero or some of the correlations are estimated as +/-1.
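The PCA check that Bates et al. 2015 propose is implemented in lme4 as rePCA(); a sketch, assuming a model m fitted as in the question:

```r
library(lme4)

# m <- lmer(Y ~ X * Cond + (X * Cond | subj), data = dat)

# PCA of the estimated random-effects covariance factor: standard
# deviations that are (near) zero, or cumulative proportions of
# variance that reach 1 before the last component, indicate a
# singular (rank-deficient) covariance matrix.
summary(rePCA(m))

# Recent versions of lme4 can also test this directly:
isSingular(m, tol = 1e-4)
```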
|
14,610
|
What to do with random effects correlation that equals 1 or -1?
|
I agree with everything said in amoeba's answer which provides a great summary of the current discussion on this issue. I will try to add a few additional points and otherwise refer to the handout of my recent mixed model course which also summarizes these points.
Suppressing the correlation parameters (options 2 and 3 in amoeba's answer) via || works only for numerical covariates in lmer and not for factors. This is discussed in some detail with code by Reinhold Kliegl.
However, my afex package provides the functionality to suppress the correlations among factors as well, if argument expand_re = TRUE is set in the call to mixed() (see also function lmer_alt()). It essentially does so by implementing the approach discussed by Reinhold Kliegl (i.e., transforming the factors into numerical covariates and specifying the random-effects structure on those).
A simple example:
library("afex")
data("Machines", package = "MEMSS") # same data as in Kliegl code
# with correlation:
summary(lmer(score ~ Machine + (Machine | Worker), data=Machines))
# Random effects:
# Groups Name Variance Std.Dev. Corr
# Worker (Intercept) 16.6405 4.0793
# MachineB 34.5467 5.8776 0.48
# MachineC 13.6150 3.6899 -0.37 0.30
# Residual 0.9246 0.9616
# Number of obs: 54, groups: Worker, 6
## crazy results:
summary(lmer(score ~ Machine + (Machine || Worker), data=Machines))
# Random effects:
# Groups Name Variance Std.Dev. Corr
# Worker (Intercept) 0.2576 0.5076
# Worker.1 MachineA 16.3829 4.0476
# MachineB 74.1381 8.6103 0.80
# MachineC 19.0099 4.3600 0.62 0.77
# Residual 0.9246 0.9616
# Number of obs: 54, groups: Worker, 6
## as expected:
summary(lmer_alt(score ~ Machine + (Machine || Worker), data=Machines))
# Random effects:
# Groups Name Variance Std.Dev.
# Worker (Intercept) 16.600 4.0743
# Worker.1 re1.MachineB 34.684 5.8894
# Worker.2 re1.MachineC 13.301 3.6471
# Residual 0.926 0.9623
# Number of obs: 54, groups: Worker, 6
For those not knowing afex, the main functionality for mixed models is to provide p-values for the fixed effects, e.g.,:
(m1 <- mixed(score ~ Machine + (Machine || Worker), data=Machines, expand_re = TRUE))
# Mixed Model Anova Table (Type 3 tests, KR-method)
#
# Model: score ~ Machine + (Machine || Worker)
# Data: Machines
# Effect df F p.value
# 1 Machine 2, 5.98 20.96 ** .002
# ---
# Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '+' 0.1 ' ' 1
summary(m1)
# [...]
# Random effects:
# Groups Name Variance Std.Dev.
# Worker (Intercept) 27.4947 5.2435
# Worker.1 re1.Machine1 6.6794 2.5845
# Worker.2 re1.Machine2 13.8015 3.7150
# Residual 0.9265 0.9626
# Number of obs: 54, groups: Worker, 6
# [...]
Dale Barr from the Barr et al. (2013) paper is more cautious in recommending reducing the random-effects structure than presented in amoeba's answer. In a recent twitter exchange he wrote:
"reducing the model introduces unknown risk of anticonservativity, and should be done with caution, if at all." and
"My main concern is that people understand risks associated with model reduction and that minimizing this risk requires a more conservative approach than is commonly adopted (eg each slope tested at .05)."
So caution is advised.
As one of the reviewers I can also provide some insight on why the Bates et al. (2015) paper remained unpublished. The other two reviewers (who signed, but will remain unnamed here) and I had some criticism of the PCA approach (it seems unprincipled and there is no evidence that it is superior in terms of power). Furthermore, I believe all three of us criticized that the paper did not focus on the issue of how to specify the random-effects structure, but also tried to introduce GAMMs. Thus, the Bates et al. (2015) paper morphed into the Matuschek et al. (2017) paper, which addresses the issue of the random-effects structure with simulations, and the Baayen et al. (2017) paper, which introduces GAMMs.
My full review of the Bates et al. draft can be found here. IIRC, the other reviews had kind of similar main points.
|
14,611
|
What to do with random effects correlation that equals 1 or -1?
|
I too have had this problem when using maximum likelihood estimation - only I use the Goldstein IGLS algorithm as implemented in the MLwiN software, not lme4 in R. However, in each and every case the problem has resolved when I have switched to MCMC estimation using the same software. I have even had a correlation in excess of 3 which resolved when I changed estimation. Using IGLS, the correlation is calculated post estimation as the covariance divided by the square root of the product of the associated variances - and this does not take account of the uncertainty in each of the constituent estimates.
The IGLS software does not 'know' that the covariance implies a correlation and just calculates estimates of a constant, linear, quadratic etc. variance function. In contrast, the MCMC approach is built on the assumption of samples from a multivariate normal distribution, which corresponds to variances and covariances with good properties and full error propagation, so that the uncertainty in the estimation of the covariances is taken into account in the estimation of the variances and vice versa.
MLwiN begins the MCMC estimation chain with the IGLS estimates, and the variance-covariance matrix may need to be altered by setting the covariance to zero at the outset (so that it is a valid starting value) before starting the sampling.
For a worked example see
Developing multilevel models for analysing contextuality, heterogeneity and change using MLwiN 3, Volume 1 (updated September 2017); Volume 2 is also on RGate
https://www.researchgate.net/publication/320197425_Vol1Training_manualRevisedSept2017
Appendix to Chapter 10
|
14,612
|
The reason of superiority of Limited-memory BFGS over ADAM solver
|
There are a lot of reasons that this could be the case. Off the top of my head I can think of one plausible cause, but without knowing more about the problem it is difficult to suggest that it is the one.
An L-BFGS solver is a true quasi-Newton method in that it estimates the curvature of the parameter space via an approximation of the Hessian. So if your parameter space has plenty of long, nearly-flat valleys then L-BFGS would likely perform well. It has the downside of additional costs in performing a rank-two update to the (inverse) Hessian approximation at every step. While this is reasonably fast, it does begin to add up, particularly as the input space grows. This may account for the fact that ADAM outperforms L-BFGS for you as you get more data.
ADAM is a first order method that attempts to compensate for the fact that it doesn't estimate the curvature by adapting the step-size in every dimension. In some sense, this is similar to constructing a diagonal Hessian at every step, but they do it cleverly by simply using past gradients. In this way it is still a first order method, though it has the benefit of acting as though it is second order. The estimate is cruder than that of L-BFGS in that it is only along each dimension and doesn't account for what would be the off-diagonals in the Hessian. If your Hessian is nearly singular then these off-diagonals may play an important role in the curvature and ADAM is likely to underperform relative to L-BFGS.
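As a rough, self-contained sketch of the curvature point above (none of this code is from the original answer; the quadratic and the hyperparameters are my own toy choices): on an ill-conditioned quadratic with a strong off-diagonal term, SciPy's L-BFGS converges essentially exactly, while a minimal hand-rolled Adam loop, which only adapts a per-coordinate step size, needs far more iterations.

```python
import numpy as np
from scipy.optimize import minimize

# Ill-conditioned quadratic f(x) = 0.5 * x^T A x with minimum at the origin.
A = np.array([[100.0, 9.9],
              [9.9, 1.0]])          # positive definite, strongly coupled coordinates
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x0 = np.array([1.0, 1.0])

# L-BFGS: quasi-Newton, builds a low-rank curvature model from past steps.
res = minimize(f, x0, jac=grad, method="L-BFGS-B")

# Minimal Adam: first-order, per-coordinate step adaptation only
# (a "diagonal" view of curvature; the off-diagonal coupling is ignored).
def adam(grad, x0, lr=0.1, b1=0.9, b2=0.999, eps=1e-8, steps=2000):
    x = x0.astype(float)                       # work on a copy
    m = np.zeros_like(x)
    v = np.zeros_like(x)
    for t in range(1, steps + 1):
        g = grad(x)
        m = b1 * m + (1 - b1) * g              # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g ** 2         # second-moment estimate
        mhat = m / (1 - b1 ** t)               # bias corrections
        vhat = v / (1 - b2 ** t)
        x = x - lr * mhat / (np.sqrt(vhat) + eps)
    return x

x_adam = adam(grad, x0)
```

The exact iteration counts depend on the problem, but the qualitative gap is what the answer above is describing.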
|
The reason of superiority of Limited-memory BFGS over ADAM solver
|
There are a lot of reasons that this could be the case. Off the top of my head I can think of one plausible cause, but without knowing more about the problem it is difficult to suggest that it is the
|
The reason of superiority of Limited-memory BFGS over ADAM solver
There are a lot of reasons that this could be the case. Off the top of my head I can think of one plausible cause, but without knowing more about the problem it is difficult to suggest that it is the one.
An L-BFGS solver is a true quasi-Newton method in that it estimates the curvature of the parameter space via an approximation of the Hessian. So if your parameter space has plenty of long, nearly-flat valleys then L-BFGS would likely perform well. It has the downside of additional costs in performing a rank-two update to the (inverse) Hessian approximation at every step. While this is reasonably fast, it does begin to add up, particularly as the input space grows. This may account for the fact that ADAM outperforms L-BFGS for you as you get more data.
ADAM is a first order method that attempts to compensate for the fact that it doesn't estimate the curvature by adapting the step-size in every dimension. In some sense, this is similar to constructing a diagonal Hessian at every step, but they do it cleverly by simply using past gradients. In this way it is still a first order method, though it has the benefit of acting as though it is second order. The estimate is cruder than that of L-BFGS in that it is only along each dimension and doesn't account for what would be the off-diagonals in the Hessian. If your Hessian is nearly singular then these off-diagonals may play an important role in the curvature and ADAM is likely to underperform relative to L-BFGS.
|
The reason of superiority of Limited-memory BFGS over ADAM solver
There are a lot of reasons that this could be the case. Off the top of my head I can think of one plausible cause, but without knowing more about the problem it is difficult to suggest that it is the
|
14,613
|
The reason of superiority of Limited-memory BFGS over ADAM solver
|
In my opinion, they are two different heuristics for scaling the gradient; however, they are motivated differently.
Nowadays people try to find a trade-off between Adam, which converges fast with possibly bad generalization, and SGD, which converges slowly but results in better generalization.
Maybe you should also consider using DiffGrad, which is an extension of Adam but with better convergence properties.
@David: what I'm not understanding in your answer is that you mention that Adam does not account for the off-diagonals. However, for L-BFGS this is the case as well. It approximates the Hessian by a diagonal. Accounting for off-diagonals would mean that they have to be evaluated/stored and most importantly a non-diagonal matrix would have to be inverted.
|
The reason of superiority of Limited-memory BFGS over ADAM solver
|
In my opinion, they are two different heuristics to scale the gradient, however, they are motivated differently.
Nowadays people try to find a trade-off between Adam which converges fast with possibly
|
The reason of superiority of Limited-memory BFGS over ADAM solver
In my opinion, they are two different heuristics for scaling the gradient; however, they are motivated differently.
Nowadays people try to find a trade-off between Adam, which converges fast with possibly bad generalization, and SGD, which converges slowly but results in better generalization.
Maybe you should also consider using DiffGrad, which is an extension of Adam but with better convergence properties.
@David: what I'm not understanding in your answer is that you mention that Adam does not account for the off-diagonals. However, for L-BFGS this is the case as well. It approximates the Hessian by a diagonal. Accounting for off-diagonals would mean that they have to be evaluated/stored and most importantly a non-diagonal matrix would have to be inverted.
|
The reason of superiority of Limited-memory BFGS over ADAM solver
In my opinion, they are two different heuristics to scale the gradient, however, they are motivated differently.
Nowadays people try to find a trade-off between Adam which converges fast with possibly
|
14,614
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
Here's the deal:
Technically you did write true sentences (both models can approximate any 'not too crazy' function given enough parameters), but those sentences do not get you anywhere at all!
Why is that?
Well, take a closer look at the universal approximation theorem, or any other formal proof that a neural network can compute any f(x) if there are ENOUGH neurons.
All of the proofs of this kind which I have seen use only one hidden layer.
Take a quick look here http://neuralnetworksanddeeplearning.com/chap5.html for some intuition.
There are works showing that, in a sense, the number of neurons needed grows exponentially if you are using just one layer.
So, while in theory you are right, in practice you do not have an infinite amount of memory, so you don't really want to train a 2^1000-neuron net, do you? Even if you did have an infinite amount of memory, that net would overfit for sure.
To my mind, the most important point of ML is the practical point!
Let's expand a little on that.
The real big issue here isn't just how polynomials increase/decrease very quickly outside the training set. Not at all. As a quick example, any picture's pixels lie within a very specific range ([0, 255] for each RGB color), so you can rest assured that any new sample will be within your training set's range of values. No. The big deal is: this comparison is not useful to begin with(!).
I suggest that you experiment a bit with MNIST, and try to see the actual results you can come up with by using just one single layer.
Practical nets use way more than one hidden layer, sometimes dozens (well, ResNet even more...) of layers. For a reason. That reason is not proved, and in general, choosing an architecture for a neural net is a hot area of research. In other words, while we still need to know more, both models which you have compared (linear regression and an NN with just one hidden layer), for many datasets, are not useful whatsoever!
By the way, in case you get into ML, there is another useless theorem which is actually a current 'area of research' - PAC (probably approximately correct) learning/VC dimension. I will expand on that as a bonus:
If universal approximation basically states that, given an infinite number of neurons, we can approximate any function (thank you very much?), what PAC says in practical terms is that, given a (practically!) infinite number of labelled examples, we can get as close as we want to the best hypothesis within our model.
It was absolutely hilarious when I calculated the actual number of examples needed for a practical net to be within some practical desired error rate with some OK-ish probability :)
It was more than the number of electrons in the universe.
P.S. To boot, it also assumes that the samples are IID (that is never ever true!).
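To make the one-hidden-layer point a bit more concrete, here is a toy sketch (entirely my own construction, not from the answer): a single hidden layer of random ReLU features with a least-squares readout can approximate a smooth target, and widening the layer helps — but you pay for accuracy purely in width.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)[:, None]   # inputs in [0, 1]
y = np.sin(2.0 * np.pi * x).ravel()       # target function

def max_error_one_hidden_layer(n_hidden):
    """Random ReLU hidden layer + least-squares output layer."""
    W = rng.normal(scale=10.0, size=(1, n_hidden))  # input -> hidden weights
    b = rng.uniform(-10.0, 10.0, size=n_hidden)     # hidden biases
    H = np.maximum(x @ W + b, 0.0)                  # ReLU activations
    c, *_ = np.linalg.lstsq(H, y, rcond=None)       # fit the output weights
    return float(np.max(np.abs(H @ c - y)))

err_narrow = max_error_one_hidden_layer(5)     # 5 hidden units: crude fit
err_wide = max_error_one_hidden_layer(200)     # 200 hidden units: much closer
```

The exact errors depend on the random draw, but the wide layer is reliably far more accurate — and for high-dimensional inputs, the width needed to keep this up is exactly where the exponential blow-up mentioned above bites.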
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
Here's the deal:
Technically you did write true sentences (both models can approximate any 'not too crazy' function given enough parameters), but those sentences do not get you anywhere at all!
Why is
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
Here's the deal:
Technically you did write true sentences (both models can approximate any 'not too crazy' function given enough parameters), but those sentences do not get you anywhere at all!
Why is that?
Well, take a closer look at the universal approximation theorem, or any other formal proof that a neural network can compute any f(x) if there are ENOUGH neurons.
All of the proofs of this kind which I have seen use only one hidden layer.
Take a quick look here http://neuralnetworksanddeeplearning.com/chap5.html for some intuition.
There are works showing that, in a sense, the number of neurons needed grows exponentially if you are using just one layer.
So, while in theory you are right, in practice you do not have an infinite amount of memory, so you don't really want to train a 2^1000-neuron net, do you? Even if you did have an infinite amount of memory, that net would overfit for sure.
To my mind, the most important point of ML is the practical point!
Let's expand a little on that.
The real big issue here isn't just how polynomials increase/decrease very quickly outside the training set. Not at all. As a quick example, any picture's pixels lie within a very specific range ([0, 255] for each RGB color), so you can rest assured that any new sample will be within your training set's range of values. No. The big deal is: this comparison is not useful to begin with(!).
I suggest that you experiment a bit with MNIST, and try to see the actual results you can come up with by using just one single layer.
Practical nets use way more than one hidden layer, sometimes dozens (well, ResNet even more...) of layers. For a reason. That reason is not proved, and in general, choosing an architecture for a neural net is a hot area of research. In other words, while we still need to know more, both models which you have compared (linear regression and an NN with just one hidden layer), for many datasets, are not useful whatsoever!
By the way, in case you get into ML, there is another useless theorem which is actually a current 'area of research' - PAC (probably approximately correct) learning/VC dimension. I will expand on that as a bonus:
If universal approximation basically states that, given an infinite number of neurons, we can approximate any function (thank you very much?), what PAC says in practical terms is that, given a (practically!) infinite number of labelled examples, we can get as close as we want to the best hypothesis within our model.
It was absolutely hilarious when I calculated the actual number of examples needed for a practical net to be within some practical desired error rate with some OK-ish probability :)
It was more than the number of electrons in the universe.
P.S. To boot, it also assumes that the samples are IID (that is never ever true!).
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
Here's the deal:
Technically you did write true sentences (both models can approximate any 'not too crazy' function given enough parameters), but those sentences do not get you anywhere at all!
Why is
|
14,615
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
It is true that any function can be approximated arbitrarily close both by something that counts as a neural network and something that counts as a polynomial.
First of all, keep in mind that this is true for a lot of constructs. You could approximate any function by combining sines and cosines (Fourier transforms), or simply by adding a lot of "rectangles" (not really a precise definition, but I hope you get the point).
Second, much like Yoni's answer, whenever you are training a network, or fitting a regression with a lot of powers, the number of neurons, or the number of powers, are fixed. Then you apply some algorithm, maybe gradient descent or something, and find the best parameters with that. The parameters are the weights in a network, and the coefficients for a large polynomial. The maximum power you take in a polynomial, or the number of neurons used, are called the hyperparameters. In practice, you'll try a couple of those. You can make a case that a parameter is a parameter, sure, but that is not how this is done in practice.
The point though, with machine learning, you don't really want a function that fits through your data perfectly. That wouldn't be too hard to achieve actually. You want something that fits well, but also probably works for points that you haven't seen yet. See this picture for example, taken from the documentation for scikit-learn.
A line is too simple, but the best approximation is not on the right, it's in the middle, although the function on the right fits best. The function on the right would make some pretty weird (and probably suboptimal) predictions for new data points, especially if they fall near the wiggly bits on the left.
The ultimate reason neural networks with a couple of parameters work so well is that they can fit something but not really overfit it. This also has a lot to do with the way they are trained, with some form of stochastic gradient descent.
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
It is true that any function can be approximated arbitrarily close both by something that counts as a neural network and something that counts as a polynomial.
First of all, keep in mind that this is
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
It is true that any function can be approximated arbitrarily close both by something that counts as a neural network and something that counts as a polynomial.
First of all, keep in mind that this is true for a lot of constructs. You could approximate any function by combining sines and cosines (Fourier transforms), or simply by adding a lot of "rectangles" (not really a precise definition, but I hope you get the point).
Second, much like Yoni's answer, whenever you are training a network, or fitting a regression with a lot of powers, the number of neurons, or the number of powers, are fixed. Then you apply some algorithm, maybe gradient descent or something, and find the best parameters with that. The parameters are the weights in a network, and the coefficients for a large polynomial. The maximum power you take in a polynomial, or the number of neurons used, are called the hyperparameters. In practice, you'll try a couple of those. You can make a case that a parameter is a parameter, sure, but that is not how this is done in practice.
The point though, with machine learning, you don't really want a function that fits through your data perfectly. That wouldn't be too hard to achieve actually. You want something that fits well, but also probably works for points that you haven't seen yet. See this picture for example, taken from the documentation for scikit-learn.
A line is too simple, but the best approximation is not on the right, it's in the middle, although the function on the right fits best. The function on the right would make some pretty weird (and probably suboptimal) predictions for new data points, especially if they fall near the wiggly bits on the left.
The ultimate reason neural networks with a couple of parameters work so well is that they can fit something but not really overfit it. This also has a lot to do with the way they are trained, with some form of stochastic gradient descent.
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
It is true that any function can be approximated arbitrarily close both by something that counts as a neural network and something that counts as a polynomial.
First of all, keep in mind that this is
|
14,616
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
Maybe this paper can help you:
Polynomial Regression As an Alternative to Neural Nets
The abstract says:
Despite the success of neural networks (NNs), there is still a concern
among many over their "black box" nature. Why do they work? Here we
present a simple analytic argument that NNs are in fact essentially
polynomial regression models. This view will have various implications
for NNs, e.g. providing an explanation for why convergence problems
arise in NNs, and it gives rough guidance on avoiding overfitting. In
addition, we use this phenomenon to predict and confirm a
multicollinearity property of NNs not previously reported in the
literature. Most importantly, given this loose correspondence, one may
choose to routinely use polynomial models instead of NNs, thus
avoiding some major problems of the latter, such as having to set many
tuning parameters and dealing with convergence issues. We present a
number of empirical results; in each case, the accuracy of the
polynomial approach matches or exceeds that of NN approaches. A
many-featured, open-source software package, polyreg, is available.
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
Maybe this paper can help you:
Polynomial Regression As an Alternative to Neural Nets
The abstract says:
Despite the success of neural networks (NNs), there is still a concern
among many over their
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
Maybe this paper can help you:
Polynomial Regression As an Alternative to Neural Nets
The abstract says:
Despite the success of neural networks (NNs), there is still a concern
among many over their "black box" nature. Why do they work? Here we
present a simple analytic argument that NNs are in fact essentially
polynomial regression models. This view will have various implications
for NNs, e.g. providing an explanation for why convergence problems
arise in NNs, and it gives rough guidance on avoiding overfitting. In
addition, we use this phenomenon to predict and confirm a
multicollinearity property of NNs not previously reported in the
literature. Most importantly, given this loose correspondence, one may
choose to routinely use polynomial models instead of NNs, thus
avoiding some major problems of the latter, such as having to set many
tuning parameters and dealing with convergence issues. We present a
number of empirical results; in each case, the accuracy of the
polynomial approach matches or exceeds that of NN approaches. A
many-featured, open-source software package, polyreg, is available.
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
Maybe this paper can help you:
Polynomial Regression As an Alternative to Neural Nets
The abstract says:
Despite the success of neural networks (NNs), there is still a concern
among many over their
|
14,617
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
Since no answers have yet been provided (though I would accept the comment of user1952009 were it posted as an answer), let me share what I have learned in the meantime:
(1) It seems to me that my understanding is generally right, but the devil is in the details.
(2) One thing that was missing in "my understanding": how well will the parametrized hypothesis generalize to data outside the training set? The non-polynomial nature of neural network predictions may be better there than simple linear/polynomial regression (remember how polynomials increase/decrease very quickly outside the training set).
(3) A link which further explains the importance of being able to compute parameters quickly: http://www.heatonresearch.com/2017/06/01/hidden-layers.html
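Point (2) can be demonstrated in a few lines (toy numbers of my own choosing): a degree-9 polynomial fit to sin on [0, 1] is excellent inside the training range and explodes immediately outside it.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)

p9 = np.polyfit(x, y, 9)   # high-degree fit: excellent on [0, 1]

inside_err = abs(np.polyval(p9, 0.5) - np.sin(np.pi))  # tiny error inside the range
outside_val = np.polyval(p9, 3.0)                      # |sin| never exceeds 1 here...
```

A neural network's extrapolation is not trustworthy either, but at least with saturating or piecewise-linear activations it does not blow up polynomially fast.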
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
|
Since no answers have yet been provided (though I would accept the comment of user1952009 was it posted as an answer), let me share what I have learned in the meantime:
(1) It seems to me that my und
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
Since no answers have yet been provided (though I would accept the comment of user1952009 were it posted as an answer), let me share what I have learned in the meantime:
(1) It seems to me that my understanding is generally right, but the devil is in the details.
(2) One thing that was missing in "my understanding": how well will the parametrized hypothesis generalize to data outside the training set? The non-polynomial nature of neural network predictions may be better there than simple linear/polynomial regression (remember how polynomials increase/decrease very quickly outside the training set).
(3) A link which further explains the importance of being able to compute parameters quickly: http://www.heatonresearch.com/2017/06/01/hidden-layers.html
|
Artificial neural networks EQUIVALENT to linear regression with polynomial features?
Since no answers have yet been provided (though I would accept the comment of user1952009 was it posted as an answer), let me share what I have learned in the meantime:
(1) It seems to me that my und
|
14,618
|
What does pooled variance "actually" mean?
|
Put simply, the pooled variance is an (unbiased) estimate of the variance within each sample, under the assumption/constraint that those variances are equal.
This is explained, motivated, and analyzed in some detail in the Wikipedia entry for pooled variance.
It does not estimate the variance of a new "meta-sample" formed by concatenating the two individual samples, as you supposed. As you have already discovered, estimating that requires a completely different formula.
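A tiny numeric illustration of the distinction (the numbers are made up for the example):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([10.0, 12.0, 14.0])

# Pooled variance: degrees-of-freedom-weighted average of the two
# within-sample variances (under the equal-variance assumption).
pooled = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
         / (len(a) + len(b) - 2)

# Variance of the concatenated "meta-sample": a different quantity,
# inflated by the distance between the two sample means.
concat = np.concatenate([a, b]).var(ddof=1)
```

Here `pooled` is exactly 2.6, while the concatenated variance is roughly 28 — about ten times larger, because it also measures the spread between the two group means.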
|
What does pooled variance "actually" mean?
|
Put simply, the pooled variance is an (unbiased) estimate of the variance within each sample, under the assumption/constraint that those variances are equal.
This is explained, motivated, and analyzed
|
What does pooled variance "actually" mean?
Put simply, the pooled variance is an (unbiased) estimate of the variance within each sample, under the assumption/constraint that those variances are equal.
This is explained, motivated, and analyzed in some detail in the Wikipedia entry for pooled variance.
It does not estimate the variance of a new "meta-sample" formed by concatenating the two individual samples, as you supposed. As you have already discovered, estimating that requires a completely different formula.
|
What does pooled variance "actually" mean?
Put simply, the pooled variance is an (unbiased) estimate of the variance within each sample, under the assumption/constraint that those variances are equal.
This is explained, motivated, and analyzed
|
14,619
|
What does pooled variance "actually" mean?
|
Pooled variance is used to combine variances from different samples by taking their weighted average, to get the "overall" variance. The problem with your example is that it is a pathological case, since each of the sub-samples has variance equal to zero. Such a pathological case has very little in common with the data we usually encounter, since there is always some variability, and if there is no variability, we don't care about such variables since they carry no information. Note that this is a very simple method; there are more complicated ways of estimating variance in hierarchical data structures that are not prone to such problems.
As for your example in the edit, it shows that it is important to clearly state your assumptions before starting the analysis. Let's say that you have $n$ data points in $k$ groups; we would denote them as $x_{1,1},x_{2,1},\dots,x_{n-1,k},x_{n,k}$, where the $i$-th index in $x_{i,j}$ stands for cases and the $j$-th index stands for groups. There are several possible scenarios: you can assume that all the points come from the same distribution (for simplicity, let's assume a normal distribution),
$$
x_{i,j} \sim \mathcal{N}(\mu, \sigma^2) \tag{1}
$$
you can assume that each of the sub-samples has its own mean
$$
x_{i,j} \sim \mathcal{N}(\mu_j, \sigma^2) \tag{2}
$$
or, its own variance
$$
x_{i,j} \sim \mathcal{N}(\mu, \sigma^2_j) \tag{3}
$$
or that each of them has its own, distinct parameters
$$
x_{i,j} \sim \mathcal{N}(\mu_j, \sigma^2_j) \tag{4}
$$
Depending on your assumptions, a particular method may or may not be adequate for analyzing the data.
In the first case, you wouldn't be interested in estimating the within-group variances, since you would assume that they are all the same. Nonetheless, if you aggregated the global variance from the group variances, you would get the same result as by using the pooled variance, since the definition of variance is
$$
\mathrm{Var}(X) = \frac{1}{n-1} \sum_i (x_i - \bar{x})^2
$$
and in the pooled estimator you first multiply each group's variance by its $n_j - 1$, then add them together, and finally divide by $n_1 + n_2 - 2$.
In the second case, the means differ, but you have a common variance. This example is closest to your example in the edit. In this scenario, the pooled variance would correctly estimate the global variance, while if you estimated the variance on the whole dataset, you would obtain incorrect results, since you were not accounting for the fact that the groups have different means.
In the third case it doesn't make sense to estimate the "global" variance, since you assume that each of the groups has its own variance. You may still be interested in obtaining the estimate for the whole population, but in that case both (a) calculating the individual variances per group, and (b) calculating the global variance from the whole dataset, can give you misleading results. If you are dealing with this kind of data, you should think of using a more complicated model that accounts for the hierarchical nature of the data.
The fourth case is the most extreme and quite similar to the previous one. In this scenario, if you wanted to estimate the global mean and variance, you would need a different model and a different set of assumptions. In such a case, you would assume that your data has a hierarchical structure, and besides the within-group means and variances, there is a higher-level common variance, for example assuming the following model
$$
\begin{align}
x_{i,j} &\sim \mathcal{N}(\mu_j, \sigma^2_j) \\
\mu_j &\sim \mathcal{N}(\mu_0, \sigma^2_0) \\
\sigma^2_j &\sim \mathcal{IG}(\alpha, \beta)
\end{align} \tag{5}
$$
where each sample has its own mean and variance $\mu_j,\sigma^2_j$, which are themselves draws from common distributions. In such a case, you would use a hierarchical model that takes into consideration both the lower-level and upper-level variability. To read more about this kind of model, you can check the Bayesian Data Analysis book by Gelman et al. and their eight schools example. This is, however, a much more complicated model than the simple pooled variance estimator.
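The second case above can be checked with a quick simulation (the parameters are my own choices, not from the answer): two groups share $\sigma = 2$ but have different means; the pooled estimator recovers $\sigma^2$, while the naive whole-dataset variance also absorbs the gap between the means.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 2.0
g1 = rng.normal(0.0, sigma, 10_000)    # group 1: mu = 0
g2 = rng.normal(10.0, sigma, 10_000)   # group 2: mu = 10, same sigma

n1, n2 = len(g1), len(g2)
pooled = ((n1 - 1) * g1.var(ddof=1) + (n2 - 1) * g2.var(ddof=1)) / (n1 + n2 - 2)
whole = np.concatenate([g1, g2]).var(ddof=1)

# pooled ~ sigma**2 = 4, while whole ~ sigma**2 + (10 - 0)**2 / 4 = 29,
# because the whole-dataset estimate ignores that the groups have different means.
```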
|
What does pooled variance "actually" mean?
|
Pooled variance is used to combine together variances from different samples by taking their weighted average, to get the "overall" variance. The problem with your example is that it is a pathological
|
What does pooled variance "actually" mean?
Pooled variance is used to combine variances from different samples by taking their weighted average, to get the "overall" variance. The problem with your example is that it is a pathological case, since each of the sub-samples has variance equal to zero. Such a pathological case has very little in common with the data we usually encounter, since there is always some variability, and if there is no variability, we don't care about such variables since they carry no information. Note that this is a very simple method; there are more complicated ways of estimating variance in hierarchical data structures that are not prone to such problems.
As for your example in the edit, it shows that it is important to clearly state your assumptions before starting the analysis. Let's say that you have $n$ data points in $k$ groups; we would denote them as $x_{1,1},x_{2,1},\dots,x_{n-1,k},x_{n,k}$, where the $i$-th index in $x_{i,j}$ stands for cases and the $j$-th index stands for groups. There are several possible scenarios: you can assume that all the points come from the same distribution (for simplicity, let's assume a normal distribution),
$$
x_{i,j} \sim \mathcal{N}(\mu, \sigma^2) \tag{1}
$$
you can assume that each of the sub-samples has its own mean
$$
x_{i,j} \sim \mathcal{N}(\mu_j, \sigma^2) \tag{2}
$$
or, its own variance
$$
x_{i,j} \sim \mathcal{N}(\mu, \sigma^2_j) \tag{3}
$$
or that each of them has its own, distinct parameters
$$
x_{i,j} \sim \mathcal{N}(\mu_j, \sigma^2_j) \tag{4}
$$
Depending on your assumptions, a particular method may or may not be adequate for analyzing the data.
In the first case, you wouldn't be interested in estimating the within-group variances, since you would assume that they are all the same. Nonetheless, if you aggregated the global variance from the group variances, you would get the same result as by using the pooled variance, since the definition of variance is
$$
\mathrm{Var}(X) = \frac{1}{n-1} \sum_i (x_i - \bar{x})^2
$$
and in the pooled estimator you first multiply each group's variance by its $n_j - 1$, then add them together, and finally divide by $n_1 + n_2 - 2$.
In the second case, the means differ, but you have a common variance. This example is closest to your example in the edit. In this scenario, the pooled variance would correctly estimate the global variance, while if you estimated the variance on the whole dataset, you would obtain incorrect results, since you were not accounting for the fact that the groups have different means.
In the third case it doesn't make sense to estimate the "global" variance, since you assume that each of the groups has its own variance. You may still be interested in obtaining the estimate for the whole population, but in that case both (a) calculating the individual variances per group, and (b) calculating the global variance from the whole dataset, can give you misleading results. If you are dealing with this kind of data, you should think of using a more complicated model that accounts for the hierarchical nature of the data.
The fourth case is the most extreme and quite similar to the previous one. In this scenario, if you wanted to estimate the global mean and variance, you would need a different model and a different set of assumptions. In such a case, you would assume that your data has a hierarchical structure, and besides the within-group means and variances, there is a higher-level common variance, for example assuming the following model
$$
\begin{align}
x_{i,j} &\sim \mathcal{N}(\mu_j, \sigma^2_j) \\
\mu_j &\sim \mathcal{N}(\mu_0, \sigma^2_0) \\
\sigma^2_j &\sim \mathcal{IG}(\alpha, \beta)
\end{align} \tag{5}
$$
where each sample has its own mean and variance $\mu_j,\sigma^2_j$, which are themselves draws from common distributions. In such a case, you would use a hierarchical model that takes into consideration both the lower-level and upper-level variability. To read more about this kind of model, you can check the Bayesian Data Analysis book by Gelman et al. and their eight schools example. This is, however, a much more complicated model than the simple pooled variance estimator.
|
What does pooled variance "actually" mean?
Pooled variance is used to combine together variances from different samples by taking their weighted average, to get the "overall" variance. The problem with your example is that it is a pathological
|
14,620
|
What does pooled variance "actually" mean?
|
Through pooled variance we are not trying to estimate the variance of a bigger sample using smaller samples. Hence, the two examples you gave don't exactly address the question.
Pooled variance is required to get a better estimate of the population variance from two samples that have been randomly taken from that population and came up with different variance estimates.
For example, you are trying to gauge the variance in the smoking habits of males in London. You sample 300 males from London, twice. You end up getting two variances (probably a bit different!). Now, since you did a fair random sampling (to the best of your ability, as truly random sampling is almost impossible), you have every right to say that both variances are valid point estimates of the population variance (London males in this case).
But how is that possible, i.e. two different point estimates? Thus, we go ahead and find a common point estimate, which is the pooled variance. It is nothing but the weighted average of the two point estimates, where the weights are the degrees of freedom associated with each sample.
Hope this clarifies.
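The weighted average described above can be sketched as follows (sample sizes and population values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two random samples from the same population (true variance 9).
a = rng.normal(50.0, 3.0, size=300)
b = rng.normal(50.0, 3.0, size=300)

def pooled_variance(x, y):
    """Weighted average of the two sample variances,
    with degrees of freedom (n - 1) as the weights."""
    nx, ny = len(x), len(y)
    vx, vy = x.var(ddof=1), y.var(ddof=1)
    return ((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2)

vp = pooled_variance(a, b)
```

By construction, the pooled estimate always lies between the two individual point estimates.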
|
14,621
|
What does pooled variance "actually" mean?
|
The problem is that if you just concatenate the samples and estimate the variance of the result, you're assuming they come from the same distribution and therefore have the same mean. But in general we are interested in several samples with different means. Does this make sense?
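A quick numerical sketch of this point (the numbers are invented): with the same within-group spread but clearly different means, the variance of the concatenated data also absorbs the gap between the means, while the pooled variance does not.

```python
import numpy as np

rng = np.random.default_rng(2)

# Same within-group spread (sd = 1), different means (0 vs. 10).
x = rng.normal(0.0, 1.0, size=500)
y = rng.normal(10.0, 1.0, size=500)

pooled = ((len(x) - 1) * x.var(ddof=1) + (len(y) - 1) * y.var(ddof=1)) / (len(x) + len(y) - 2)
concatenated = np.concatenate([x, y]).var(ddof=1)
# `concatenated` also counts the 10-unit gap between the means,
# so it comes out near 26, while `pooled` stays near the true 1.
```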
|
14,622
|
What does pooled variance "actually" mean?
|
The use-case of pooled variance is when you have two samples from distributions that:
may have different means, but
which you expect to have an equal true variance.
An example of this is a situation where you measure the length of Alice's nose $n$ times for one sample, and measure the length of Bob's nose $m$ times for the second. These are likely to produce a bunch of different measurements on the scale of millimeters, because of measurement error. But you expect the variance in measurement error to be the same no matter which nose you measure.
In this case, taking the pooled variance would give you a better estimate of the variance in measurement error than taking the variance of one sample alone.
|
14,623
|
What does pooled variance "actually" mean?
|
Although I am very late to the conversation maybe I can add something helpful:
It seems to me that the OP wants to know why (what for) we would need a pooled variability estimate $\hat\sigma_{pooled}$ as a weighted average of two samples (be it variance or standard deviation).
As far as I am aware the main practical need for this kind of dispersion measure arises from wanting to compare means of (sub-)groups: so if I want to compare the average nose length for 1) people who did not undergo a gene therapy, 2) people who underwent gene therapy A and 3) people who underwent gene therapy B.
To be better able to compare the amount of the mean differences in length (mm) I divide the mean difference, say, $e=\bar x_{Control}-\bar x_{GTA}=30mm-28mm=2mm$ by the variability estimate (here standard deviation). Depending on the size of the square root of pooled variance (pooled standard deviation) we can better judge the size of the 2mm difference between those groups (e.g., $d=2mm/0.5mm=4$ vs. $d=2mm/4mm=0.5$ --> Does gene therapy A do something to the nose length? And if so, how much? When $d=4$ or $2 \pm 0.5mm$ there seems to be a "stable" or "consistent" or "big" (compared to the variability) difference between the mean nose lengths, when $d=0.5$ or $2 \pm 4mm$ it does not seem so much, relatively speaking. In case all values within both groups are the same and therefore there is no variability within the groups, $d$ would not be defined but the interpretation would be $2 \pm 0mm=2mm$ exactly).
This is the idea of effect size (first theoretically introduced by Neyman and Pearson as far as I know, but in one kind or another used well before, see Stigler, 1986, for example).
So what I am doing is comparing the mean difference between groups with the mean differences within those same groups, i.e. the weighted average of the variances (standard deviations). This makes more sense than comparing the mean difference between (sub-)groups with the mean difference within the "whole" group because, as you (Hanciong) have shown, the variance (and standard deviation) of the whole group contains the difference(s) of the group means as well.
The theoretical need for the measure arises from being able to use the $t$-distribution to find the probability for the observed mean difference or a more extreme one, given some expected value for the mean difference (p-value for e.g., Null-Hypothesis-Significance-Test, NHST, or Neyman-Pearson hypothesis test or Fisher hypothesis test, confidence intervals etc.): $p(e \ge e_{observed}|\mu_e=0)$.
As far as I know the p-value obtained by the $t$-distribution (and especially the $F$-distribution in cases with more than 2 means to compare) will give correct estimates for the probability only when both (or all) samples are drawn from populations with equal variances (homogeneity of variance, as pointed out in the other answers already; this should be described in (more) detail in most statistics textbooks). I think all distributions based on the normal distribution ($t$, $F$, $\chi^2$) assume a variance of more than 0 and less than $\infty$, so it would be impossible to find the p-value for a case with a within variability of 0 (in this case you would obviously not assume to have drawn your sample from a normal distribution).
(This also seems intuitively reasonable: if I want to compare two or more means then the precision of those means should be the same or at least comparable:
if I run my gene therapy A on people whose nose lengths are quite similar, say $\bar x \pm 0.5mm$ but have a group of people with high variability in nose lengths in my control group, say $\bar x \pm 4mm$ it doesn't seem fair to directly compare those means, because those means do not have the same "mean-meaning"; in fact the very much higher variance/standard deviation in my control group could be indicating further subgroups, maybe differences of nose lengths due to differences on some gene.)
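The standardized difference $d$ described above can be sketched as follows (the nose-length numbers are invented to match the example):

```python
import numpy as np

rng = np.random.default_rng(3)

control = rng.normal(30.0, 0.5, size=40)    # nose lengths, mm
therapy_a = rng.normal(28.0, 0.5, size=40)

def cohens_d(x, y):
    """Mean difference divided by the pooled standard deviation."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(sp2)

d = cohens_d(control, therapy_a)
# With a roughly 2 mm difference and sd around 0.5 mm, d is large.
```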
|
14,624
|
What is maxnorm constraint? How is it useful in Convolutional Neural Networks?
|
From http://cs231n.github.io/neural-networks-2/#reg:
Max norm constraints. Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal, and then enforcing the constraint by clamping the weight vector $\vec{w}$ of every neuron to satisfy $\Vert \vec{w} \Vert_2 < c.$ Typical values of $c$ are on orders of 3 or 4. Some people report improvements when using this form of regularization. One of its appealing properties is that network cannot βexplodeβ even when the learning rates are set too high because the updates are always bounded.
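The clamping step in that quote can be sketched in plain numpy (a hedged illustration, not the cs231n code; the example vector and value of c are made up):

```python
import numpy as np

def max_norm_clamp(w, c=3.0):
    """Project a weight vector back inside the L2 ball of radius c.
    Vectors already satisfying ||w||_2 <= c are left unchanged."""
    norm = np.linalg.norm(w)
    if norm > c:
        w = w * (c / norm)
    return w

w = np.array([3.0, 4.0])          # norm 5, exceeds c = 3
w = max_norm_clamp(w, c=3.0)      # rescaled so that ||w||_2 == 3
```

In training, this projection would be applied to each neuron's incoming weight vector right after the usual gradient update.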
|
14,625
|
What is maxnorm constraint? How is it useful in Convolutional Neural Networks?
|
I found an answer by McLawrence in another question to be very helpful. Reproduced below:
What does a weight constraint of max_norm do?
maxnorm(m) will, if the L2-norm of your weights exceeds m, scale your whole weight matrix by a factor that reduces the norm to m.
As you can find in the keras code in class MaxNorm(Constraint):
def __call__(self, w):
    norms = K.sqrt(K.sum(K.square(w), axis=self.axis, keepdims=True))
    desired = K.clip(norms, 0, self.max_value)
    w *= (desired / (K.epsilon() + norms))
    return w
Additionally, maxnorm has an axis argument, along which the norm is calculated. In your example you don't specify an axis, thus the norm is calculated over the whole weight matrix. If, for example, you want to constrain the norm of every convolutional filter, assuming that you are using tf dimension ordering, the weight matrix will have the shape (rows, cols, input_depth, output_depth). Calculating the norm over axis = [0, 1, 2] will constrain each filter to the given norm.
Why do it?
Constraining the weight matrix directly is another kind of regularization. If you use a simple L2 regularization term you penalize high weights with your loss function. With this constraint, you regularize directly.
As also linked in the keras code, this seems to work especially well in combination with a dropout layer. For more info see chapter 5.1 in this paper.
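To see the per-axis behavior concretely, here is a numpy re-implementation mirroring the keras `__call__` above (a sketch, not the keras API itself; the weight matrix and max value are made up):

```python
import numpy as np

def max_norm(w, max_value=2.0, axis=0, eps=1e-7):
    """Numpy analogue of the MaxNorm.__call__ shown above: rescale any slice
    along `axis` whose L2 norm exceeds max_value down to max_value, and leave
    slices already within the bound (almost) untouched."""
    norms = np.sqrt(np.sum(np.square(w), axis=axis, keepdims=True))
    desired = np.clip(norms, 0, max_value)
    return w * (desired / (eps + norms))

w = np.array([[3.0, 0.5],
              [4.0, 0.5]])        # column norms: 5.0 and ~0.707
w_clamped = max_norm(w, max_value=2.0, axis=0)
```

With `axis=0` each column (e.g. one neuron's incoming weights in a dense layer) is constrained separately: the first column is scaled down to norm 2, the second is left essentially as-is.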
|
14,626
|
Why do we estimate mean using MLE when we already know that mean is average of the data?
|
Why do we need to estimate mean using MLE when we already know that mean is average of the data?
The text book problem states that $x_1,x_2,\dots,x_N$ is from $$x\sim\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$
They tell you that $\sigma$ is known, but $\mu$ has to be estimated.
Is it really that obvious that a good estimate is $\hat\mu=\bar x$?!
Here, $\bar x=\frac{1}{N}\sum_{i=1}^Nx_i$.
It wasn't obvious to me, and I was quite surprised to see that it is in fact MLE estimate.
Also, consider this: what if $\mu$ were known and $\sigma$ unknown? In that case the MLE estimator is $$\hat\sigma^2=\frac{1}{N}\sum_{i=1}^N(x_i-\mu)^2$$
Notice how this estimator is not the same as the usual sample variance estimator! Don't "we already know" that the sample variance is given by the following equation?
$$s^2=\frac{1}{N-1}\sum_{i=1}^N(x_i-\bar x)^2$$
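A hedged numerical check of the first claim (treating $\sigma$ as known and scanning candidate values of $\mu$): the Gaussian log-likelihood is indeed maximized at the sample average, and numpy's default variance is the $1/N$ MLE version rather than the $1/(N-1)$ one.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(5.0, 2.0, size=100)
sigma = 2.0  # treated as known

def log_likelihood(mu):
    # Sum of log N(x_i; mu, sigma^2) over the sample.
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# Scan candidate values of mu; the maximizer sits at the sample mean.
grid = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 2001)
mu_hat = grid[np.argmax([log_likelihood(m) for m in grid])]
```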
|
14,627
|
Why do we estimate mean using MLE when we already know that mean is average of the data?
|
In this case, the average of your sample happens to also be the maximum likelihood estimator. So doing all the work to derive the MLE feels like an unnecessary exercise, as you get back to the intuitive estimate of the mean you would have used in the first place. Well, this wasn't "just by chance"; this example was specifically chosen to show that MLE estimators often lead to intuitive estimators.
But what if there were no intuitive estimator? For example, suppose you had a sample of iid gamma random variables and you were interested in estimating the shape and the rate parameters. Perhaps you could try to reason out an estimator from the properties you know about gamma distributions. But what would be the best way to do it? Using some combination of the estimated mean and variance? Why not use the estimated median instead of the mean? Or the log-mean? These all could be used to create some sort of estimator, but which will be a good one?
As it turns out, MLE theory gives us a great way of succinctly getting an answer to that question: take the values of the parameters that maximize the likelihood of the observed data (which seems pretty intuitive) and use those as your estimate. In fact, we have theory that states that under certain conditions, this will be approximately the best estimator. This is a lot better than trying to figure out a unique estimator for each type of data and then spending lots of time worrying whether it's really the best choice.
In short: while MLE doesn't provide new insight in the case of estimating the mean of normal data, it is in general a very, very useful tool.
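For the gamma example, one hedged way to get the MLE numerically (a sketch, not a standard library routine; true parameter values are made up): for a fixed shape $k$ the MLE of the rate is $k/\bar x$, so we can maximize the resulting profile log-likelihood over $k$ alone, here by a simple grid search.

```python
import numpy as np
from math import lgamma

rng = np.random.default_rng(5)
true_shape, true_rate = 4.0, 1.5
x = rng.gamma(true_shape, 1.0 / true_rate, size=5000)  # numpy takes scale = 1/rate

n, xbar, slx = len(x), x.mean(), np.log(x).sum()

def profile_loglik(k):
    # For fixed shape k, the MLE of the rate is k / xbar; substituting it
    # back gives the gamma log-likelihood as a function of k alone.
    rate = k / xbar
    return n * (k * np.log(rate) - lgamma(k)) + (k - 1) * slx - rate * x.sum()

grid = np.linspace(0.5, 10.0, 2000)
k_hat = grid[np.argmax([profile_loglik(k) for k in grid])]
rate_hat = k_hat / xbar
```

With 5000 observations, the grid maximizer should land close to the true shape 4 and rate 1.5; in practice one would use a proper optimizer rather than a grid.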
|
14,628
|
Why do we estimate mean using MLE when we already know that mean is average of the data?
|
It is a matter of confusing vocabulary, as illustrated by those quotes, straight from google:
average
noun: average; plural noun: averages
a number expressing the central or typical value in a set of data, in particular the mode, median, or (most commonly) the mean, which is
calculated by dividing the sum of the values in the set by their
number.
"the proportion of over-60s is above the EU average of 19 per cent"
synonyms: mean, median, mode, midpoint, centre
Not the best definition, I agree! Especially when suggesting mean as a synonym. I would think average is most appropriate for datasets or samples as in $\bar{x}$ and should not be used for distributions, as $\mu$ in $\mathfrak{N}(\mu,\sigma^2)$.
mean
In mathematics, mean has several different definitions depending on
the context.
In probability and statistics, mean and expected value are used
synonymously to refer to one measure of the central tendency either of
a probability distribution or of the random variable characterized by
that distribution. In the case of a discrete probability distribution
of a random variable X, the mean is equal to the sum over every
possible value weighted by the probability of that value; that is, it
is computed by taking the product of each possible value x of X and
its probability P(x), and then adding all these products together,
giving $\mu = \sum x P(x)$.
For a data set, the terms arithmetic mean, mathematical expectation,
and sometimes average are used synonymously to refer to a central
value of a discrete set of numbers: specifically, the sum of the
values divided by the number of values. The arithmetic mean of a set
of numbers $x_1, x_2, ..., x_n$ is typically denoted by $\bar{x}$,
pronounced "x bar". If the data set were based on a series of
observations obtained by sampling from a statistical population, the
arithmetic mean is termed the sample mean (denoted $\bar{x}$) to
distinguish it from the population mean (denoted $\mu$ or $\mu_x$).
As suggested by this Wikipedia entry, mean applies to both distributions and samples or datasets. The mean of a dataset or sample is also the mean of the empirical distribution associated with this sample. The entry also exemplifies the possibility of a confusion between the terms since it gives average and expectation as synonyms.
expectation
noun: expectation; plural noun: expectations
Mathematics:
another term for expected value.
I would restrict the use of expectation to an object obtained by an integral, as in $$\mathbb{E}[X]=\int_\mathcal{X} x\text{d}P(x)$$ but the average of a sample is once again the expectation associated with the empirical distribution derived from this sample.
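A tiny sketch of that last point: the expectation under the empirical distribution (which puts mass $1/n$ on each observed value) reduces to the ordinary average of the sample (the data values here are arbitrary).

```python
import numpy as np

x = np.array([2.0, 3.0, 7.0, 8.0])

# The empirical distribution puts probability 1/n on each observation,
# so the expectation integral becomes a finite weighted sum.
weights = np.full(len(x), 1.0 / len(x))
expectation = np.sum(weights * x)
```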
|
14,629
|
What fast algorithms exist for computing truncated SVD?
|
Very broadly speaking, there are two approaches to compute eigenvalue or singular value decompositions. One approach is to diagonalize the matrix and this essentially yields the whole eigenvalue / singular value decomposition (the whole eigenvalue spectrum) at the same time, see some overview here: What are efficient algorithms to compute singular value decomposition (SVD)? The alternative is to use an iterative algorithm that yields one (or several) eigenvectors at a time. Iterations can be stopped after the desired number of eigenvectors has been computed.
I don't think there are iterative algorithms specifically for SVD. This is because one can compute SVD of a $n\times m$ matrix $\mathbf B$ by doing an eigendecomposition of a square symmetric $(n+m)\times(n+m)$ matrix $$\mathbf A=\left(\begin{array}{cc}0 & \mathbf B\\\mathbf B^\top & 0\end{array}\right).$$ Therefore instead of asking what algorithms compute truncated SVD, you should be asking what iterative algorithms compute eigendecomposition: $$\text{algorithm for truncated SVD} \approx \text{iterative algorithm for eigendecomposition}.$$
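This reduction is easy to verify numerically; a small numpy sketch (random matrix for illustration) checking that the positive eigenvalues of $\mathbf A$ are exactly the singular values of $\mathbf B$:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 3))

# Square symmetric embedding [[0, B], [B^T, 0]]
A = np.block([[np.zeros((4, 4)), B],
              [B.T, np.zeros((3, 3))]])

sing = np.linalg.svd(B, compute_uv=False)   # singular values of B
eig = np.linalg.eigvalsh(A)                 # eigenvalues of A (ascending)

# Eigenvalues of A come in +/- pairs equal to the singular values of B
# (plus |n - m| zeros); the three largest match the singular values.
assert np.allclose(np.sort(eig)[-3:], np.sort(sing))
```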
The simplest iterative algorithm is called power iteration and is indeed very simple:
Initialize random $\mathbf x$.
Update $\mathbf x \gets \mathbf A\mathbf x$.
Normalize $\mathbf x \gets \mathbf x / \|\mathbf x\|$.
Goto step #2 unless converged.
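These four steps translate almost line for line into numpy (the Rayleigh quotient at the end, for reading off the eigenvalue, is an addition for illustration):

```python
import numpy as np

def power_iteration(A, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(A.shape[0])   # step 1: random initialization
    for _ in range(n_iter):
        x = A @ x                         # step 2: multiply
        x /= np.linalg.norm(x)            # step 3: normalize
    return x, x @ A @ x                   # Rayleigh quotient = eigenvalue

# Small symmetric test matrix; compare against a dense eigensolver
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
v, lam = power_iteration(A)
assert np.isclose(lam, np.linalg.eigvalsh(A)[-1])
```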
All the more complex algorithms are ultimately based on the power iteration idea, but do get quite sophisticated. Necessary math is given by Krylov subspaces. The algorithms are Arnoldi iteration (for square nonsymmetric matrices), Lanczos iteration (for square symmetric matrices), and variations thereof such as e.g. "implicitly restarted Lanczos method" and whatnot.
You can find this described in e.g. the following textbooks:
Golub & Van Loan, Matrix Computations
Trefethen & Bau, Numerical Linear Algebra
Demmel, Applied Numerical Linear Algebra
Saad, Numerical Methods for Large Eigenvalue Problems
All reasonable programming languages and statistics packages (Matlab, R, Python numpy, you name it) use the same Fortran libraries to perform eigen/singular-value decompositions. These are LAPACK and ARPACK. ARPACK stands for ARnoldi PACKage, and it's all about Arnoldi/Lanczos iterations. E.g. in Matlab there are two functions for SVD: svd performs full decomposition via LAPACK, and svds computes a given number of singular vectors via ARPACK and it is actually just a wrapper for an eigs call on the "square-ized" matrix.
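In Python the analogous pair is numpy's LAPACK-backed np.linalg.svd and scipy's ARPACK-backed scipy.sparse.linalg.svds; a quick sketch, assuming scipy is installed:

```python
import numpy as np
from scipy.sparse.linalg import svds   # ARPACK-based truncated SVD

rng = np.random.default_rng(0)
M = rng.standard_normal((100, 40))

U, s, Vt = svds(M, k=5)                       # 5 largest singular triplets
full = np.linalg.svd(M, compute_uv=False)     # full decomposition via LAPACK

# Sort both sides: svds does not guarantee descending order
assert np.allclose(np.sort(s), np.sort(full)[-5:])
```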
Update
Turns out there are variants of Lanczos algorithm that are specifically tailored to perform SVD of a rectangular matrix $\mathbf B$ without explicitly constructing a square matrix $\mathbf A$ first. The central term here is Lanczos bidiagonalization; as far as I understand, it is essentially a trick to perform all the steps of Lanczos iterations on $\mathbf A$ directly on $\mathbf B$ without ever constructing $\mathbf A$ and thus saving space and time.
There is a Fortran library for these methods too, it's called PROPACK:
The software package PROPACK contains a set of functions for computing the singular value decomposition of large and sparse or structured matrices. The SVD routines are based on the Lanczos bidiagonalization algorithm with partial reorthogonalization (BPRO).
However, PROPACK seems to be much less standard than ARPACK and is not natively supported in standard programming languages. It is written by Rasmus Larsen who has a large 90-page long 1998 paper Lanczos bidiagonalization with partial reorthogonalization with what seems a good overview. Thanks to @MichaelGrant via this Computational Science SE thread.
Among the more recent papers, the most popular seems to be Baglama & Reichel, 2005, Augmented implicitly restarted Lanczos bidiagonalization methods, which is probably around the state of the art. Thanks to @Dougal for giving this link in the comments.
Update 2
There is indeed an entirely different approach described in detail in the overview paper that you cited yourself: Halko et al. 2009, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. I don't know enough about it to comment.
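For what it's worth, the basic range-finder idea from that paper fits in a few lines of numpy; the sketch below is a heavily simplified illustration (no power iterations, made-up sizes), not the paper's full algorithm:

```python
import numpy as np

def randomized_svd(M, k, oversample=10, seed=0):
    """Sketch of a randomized truncated SVD via a random range finder."""
    rng = np.random.default_rng(seed)
    # Random projection: Q's columns approximately span the range of M
    Q, _ = np.linalg.qr(M @ rng.standard_normal((M.shape[1], k + oversample)))
    # Small dense SVD on the projected (k+oversample) x m matrix
    Ub, s, Vt = np.linalg.svd(Q.T @ M, full_matrices=False)
    return Q @ Ub[:, :k], s[:k], Vt[:k]

# Exactly rank-8 test matrix: the sketch recovers it exactly
rng = np.random.default_rng(1)
M = rng.standard_normal((200, 8)) @ rng.standard_normal((8, 100))
U, s, Vt = randomized_svd(M, k=8)
assert np.allclose(M, (U * s) @ Vt, atol=1e-8)
```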
|
What fast algorithms exist for computing truncated SVD?
|
Very broadly speaking, there are two approaches to compute eigenvalue or singular value decompositions. One approach is to diagonalize the matrix and this essentially yields the whole eigenvalue / sin
|
What fast algorithms exist for computing truncated SVD?
Very broadly speaking, there are two approaches to compute eigenvalue or singular value decompositions. One approach is to diagonalize the matrix and this essentially yields the whole eigenvalue / singular value decomposition (the whole eigenvalue spectrum) at the same time, see some overview here: What are efficient algorithms to compute singular value decomposition (SVD)? The alternative is to use an iterative algorithm that yields one (or several) eigenvectors at a time. Iterations can be stopped after the desired number of eigenvectors has been computed.
I don't think there are iterative algorithms specifically for SVD. This is because one can compute SVD of a $n\times m$ matrix $\mathbf B$ by doing an eigendecomposition of a square symmetric $(n+m)\times(n+m)$ matrix $$\mathbf A=\left(\begin{array}{cc}0 & \mathbf B\\\mathbf B^\top & 0\end{array}\right).$$ Therefore instead of asking what algorithms compute truncated SVD, you should be asking what iterative algorithms compute eigendecomposition: $$\text{algorithm for truncated SVD} \approx \text{iterative algorithm for eigendecomposition}.$$
The simplest iterative algorithm is called power iteration and is indeed very simple:
Initialize random $\mathbf x$.
Update $\mathbf x \gets \mathbf A\mathbf x$.
Normalize $\mathbf x \gets \mathbf x / \|\mathbf x\|$.
Goto step #2 unless converged.
All the more complex algorithms are ultimately based on the power iteration idea, but do get quite sophisticated. Necessary math is given by Krylov subspaces. The algorithms are Arnoldi iteration (for square nonsymmetric matrices), Lanczos iteration (for square symmetric matrices), and variations thereof such as e.g. "implicitly restarted Lanczos method" and whatnot.
You can find this described in e.g. the following textbooks:
Golub & Van Loan, Matrix Computations
Trefethen & Bau, Numerical Linear Algebra
Demmel, Applied Numerical Linear Algebra
Saad, Numerical Methods for Large Eigenvalue Problems
All reasonable programming languages and statistic packages (Matlab, R, Python numpy, you name it) use the same Fortran libraries to perform eigen/singular-value decompositions. These are LAPACK and ARPACK. ARPACK stands for ARnoldi PACKage, and it's all about Arnoldi/Lanczos iterations. E.g. in Matlab there are two functions for SVD: svd performs full decomposition via LAPACK, and svds computes a given number of singular vectors via ARPACK and it is actually just a wrapper for an eigs call on the "square-ized" matrix.
Update
Turns out there are variants of Lanczos algorithm that are specifically tailored to perform SVD of a rectangular matrix $\mathbf B$ without explicitly constructing a square matrix $\mathbf A$ first. The central term here is Lanczos bidiagonalization; as far as I understand, it is essentially a trick to perform all the steps of Lanczos iterations on $\mathbf A$ directly on $\mathbf B$ without ever constructing $\mathbf A$ and thus saving space and time.
There is a Fortran library for these methods too, it's called PROPACK:
The software package PROPACK contains a set of functions for computing the singular value decomposition of large and sparse or structured matrices. The SVD routines are based on the Lanczos bidiagonalization algorithm with partial reorthogonalization (BPRO).
However, PROPACK seems to be much less standard than ARPACK and is not natively supported in standard programming languages. It is written by Rasmus Larsen who has a large 90-page long 1998 paper Lanczos bidiagonalization with partial reorthogonalization with what seems a good overview. Thanks to @MichaelGrant via this Computational Science SE thread.
Among the more recent papers, the most popular seems to be Baglama & Reichel, 2005, Augmented implicitly restarted Lanczos bidiagonalization methods, which is probably around the state of the art. Thanks to @Dougal for giving this link in the comments.
Update 2
There is indeed an entirely different approach described in detail in the overview paper that you cited yourself: Halko et al. 2009, Finding structure with randomness: Probabilistic algorithms for constructing approximate matrix decompositions. I don't know enough about it to comment.
|
What fast algorithms exist for computing truncated SVD?
Very broadly speaking, there are two approaches to compute eigenvalue or singular value decompositions. One approach is to diagonalize the matrix and this essentially yields the whole eigenvalue / sin
|
14,630
|
What fast algorithms exist for computing truncated SVD?
|
I just stumbled on the thread via googling fast SVDs, so I'm trying to figure out things myself, but maybe you should look into adaptive cross approximation (ACA).
I don't really know what your problem is like or what you need, but if your matrix $M$ is calculated from smooth functions, and you just need an approximate separated representation $M=\sum_{i=0}^k U_i\cdot V^T_i$ rather than a "proper" SVD, the ACA algorithm has (almost) linear computational complexity (for an $N\times N$ matrix it is almost $O(N)$). So it's really fast; unfortunately, many people use the word "fast" lightly.
Again, it depends on your problem if that works. In many cases I personally encounter, the ACA is a very useful numerical tool.
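To make this concrete, here is a minimal full-pivoting ACA sketch in numpy on a smooth kernel (an illustration only; practical ACA uses partial pivoting so the full matrix is never formed):

```python
import numpy as np

def aca_full_pivot(M, tol=1e-8, max_rank=None):
    """Adaptive cross approximation with full pivoting (illustrative)."""
    R = M.astype(float).copy()            # explicit residual, for clarity
    U, V = [], []
    max_rank = max_rank or min(M.shape)
    for _ in range(max_rank):
        i, j = np.unravel_index(np.abs(R).argmax(), R.shape)
        if abs(R[i, j]) < tol:            # residual small enough: stop
            break
        u = R[:, j] / R[i, j]             # cross column, scaled by pivot
        v = R[i, :].copy()                # cross row
        R -= np.outer(u, v)               # peel off the rank-1 cross term
        U.append(u)
        V.append(v)
    return np.array(U).T, np.array(V)

# Smooth kernel sampled on a grid -> numerically low rank
x = np.linspace(0.0, 1.0, 60)
M = 1.0 / (x[:, None] + x[None, :] + 1.0)
U, V = aca_full_pivot(M)
assert U.shape[1] < 25                    # far fewer terms than 60
assert np.abs(M - U @ V).max() < 1e-6     # bounded by the stopping tol
```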
Note: I wanted to write this as a comment, but because I just created this account I don't have enough reputation for comments... But posting works.
|
What fast algorithms exist for computing truncated SVD?
|
I just stumbled on the thread via googling fast SVDs, so I'm trying to figure out things myself, but maybe you should look into adaptive cross approximation (ACA).
I don't really know what problem is
|
What fast algorithms exist for computing truncated SVD?
I just stumbled on the thread via googling fast SVDs, so I'm trying to figure out things myself, but maybe you should look into adaptive cross approximation (ACA).
I don't really know what problem is like or what you need, but if your matrix $M$ is calculated from smooth functions, and you just need an approximate separated representation $M=\sum_{i=0}^k U_i\cdot V^T_i$ and not really a "proper" SVD, ACA algorithm has (almost) linear computational complexity ($N\times N$ matrix then it is almost $O(N)$). So it's really fast; unfortunately many people use the word "fast" lightly.
Again, it depends on your problem if that works. In many cases I personally encounter, the ACA is a very useful numerical tool.
Note: I wanted to write this as a comment, but because I just created this account I don't have enough reputation for comments... But posting works.
|
What fast algorithms exist for computing truncated SVD?
I just stumbled on the thread via googling fast SVDs, so I'm trying to figure out things myself, but maybe you should look into adaptive cross approximation (ACA).
I don't really know what problem is
|
14,631
|
What fast algorithms exist for computing truncated SVD?
|
Here's a technique I have used successfully in the past for computing a truncated SVD (on the Netflix dataset). It is taken from this paper. In a collaborative filtering setting, I should note that most of the values are missing and the point is to predict them, so to use truncated SVD to solve such a problem, you have to use a technique that works under that condition. A short description:
Before you do anything, fit a simple model (e.g., global mean + column and row constant values), and only once you have done that should you move on to using truncated SVD to fit the residuals.
Initialize a random vector of length k (where that's the rank you're truncating to) to each row and column (to each movie and user in the Netflix case).
Hold the row vectors fixed and update the column vectors to minimize error w.r.t. the known entries in the matrix. The procedure is given in matlab code in the paper.
Hold the column vectors fixed and update the row vectors in an analogous way.
Repeat 3 & 4 until you converge or are getting good enough results.
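Steps 2-5 can be sketched in numpy as alternating least squares over the observed entries only; everything below (names, the small regularization term, the made-up rank-2 data) is illustrative, and step 1's baseline model is omitted:

```python
import numpy as np

def als_factorize(R, mask, k, n_iter=300, reg=1e-4, seed=0):
    """Alternating least squares on observed entries (mask==True) only."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = rng.standard_normal((n, k))      # step 2: random row vectors
    V = rng.standard_normal((m, k))      # step 2: random column vectors
    I = reg * np.eye(k)                  # small ridge term for stability
    for _ in range(n_iter):              # step 5: repeat until good enough
        for j in range(m):               # step 3: update columns, rows fixed
            Uj = U[mask[:, j]]
            V[j] = np.linalg.solve(Uj.T @ Uj + I, Uj.T @ R[mask[:, j], j])
        for i in range(n):               # step 4: update rows, columns fixed
            Vi = V[mask[i]]
            U[i] = np.linalg.solve(Vi.T @ Vi + I, Vi.T @ R[i, mask[i]])
    return U, V

# Rank-2 toy matrix with ~30% of entries hidden
rng = np.random.default_rng(1)
R = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = rng.random(R.shape) > 0.3
U, V = als_factorize(R, mask, k=2)
assert np.abs(R - U @ V.T)[mask].mean() < 0.05   # fits observed entries
```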
|
What fast algorithms exist for computing truncated SVD?
|
Here's a technique I have used successfully in the past for computing a truncated SVD (on the Netflix dataset). It is taken from this paper. In a collaborative filtering setting, I should note that mo
|
What fast algorithms exist for computing truncated SVD?
Here's a technique I have used successfully in the past for computing a truncated SVD (on the Netflix dataset). It is taken from this paper. In a collaborative filtering setting, I should note that most of the values are missing and the point is to predict them, so to use truncated SVD to solve such a problem, you have to use a technique that works under that condition. A short description:
Before you do anything, fit a simple model (e.g., global mean + column and row constant values), and only once you have done that should you move on to using truncated SVD to fit the residuals.
Initialize a random vector of length k (where that's the rank you're truncating to) to each row and column (to each movie and user in the Netflix case).
Hold the row vectors fixed and update the column vectors to minimize error w.r.t. the known entries in the matrix. The procedure is given in matlab code in the paper.
Hold the column vectors fixed and update the row vectors in an analogous way.
Repeat 3 & 4 until you converge or are getting good enough results.
|
What fast algorithms exist for computing truncated SVD?
Here's a technique I have used successfully in the past for computing a truncated SVD (on the Netflix dataset). It is taken from this paper. In a collaborative filtering setting, I should note that mo
|
14,632
|
Which one is better maximum likelihood or marginal likelihood and why?
|
Each of these will give different results with a different interpretation. The first finds the pair $\beta$,$\theta$ which is most probable, while the second finds the $\beta$ which is (marginally) most probable. Imagine that your distribution looks like this:
            $\beta=1$   $\beta=2$
$\theta=1$     0.0         0.2
$\theta=2$     0.1         0.2
$\theta=3$     0.3         0.2
Then the maximum likelihood answer is $\beta=1$ ($\theta=3$), while the maximum marginal likelihood answer is $\beta=2$ (since, marginalizing over $\theta$, $P(\beta=2)=0.6$).
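This can be checked directly from the table (a small numpy transcription; note the 0-based indices):

```python
import numpy as np

# Joint probability table P(theta, beta): rows theta=1..3, cols beta=1..2
P = np.array([[0.0, 0.2],
              [0.1, 0.2],
              [0.3, 0.2]])

# Joint ML: the single most probable cell
theta_hat, beta_hat = np.unravel_index(P.argmax(), P.shape)
# Marginal ML for beta: collapse over theta first, then maximize
beta_marg = P.sum(axis=0).argmax()

assert (theta_hat, beta_hat) == (2, 0)   # theta=3, beta=1 (0-based indices)
assert beta_marg == 1                    # beta=2, with P(beta=2)=0.6
```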
I'd say that in general, the marginal likelihood is often what you want - if you really don't care about the values of the $\theta$ parameters, then you should just collapse over them. But probably in practice these methods will not yield very different results - if they do, then it may point to some underlying instability in your solution, e.g. multiple modes with different combinations of $\beta$,$\theta$ that all give similar predictions.
|
Which one is better maximum likelihood or marginal likelihood and why?
|
Each of these will give different results with a different interpretation. The first finds the pair $\beta$,$\theta$ which is most probable, while the second finds the $\beta$ which is (marginally) mo
|
Which one is better maximum likelihood or marginal likelihood and why?
Each of these will give different results with a different interpretation. The first finds the pair $\beta$,$\theta$ which is most probable, while the second finds the $\beta$ which is (marginally) most probable. Imagine that your distribution looks like this:
            $\beta=1$   $\beta=2$
$\theta=1$     0.0         0.2
$\theta=2$     0.1         0.2
$\theta=3$     0.3         0.2
Then the maximum likelihood answer is $\beta=1$ ($\theta=3$), while the maximum marginal likelihood answer is $\beta=2$ (since, marginalizing over $\theta$, $P(\beta=2)=0.6$).
I'd say that in general, the marginal likelihood is often what you want - if you really don't care about the values of the $\theta$ parameters, then you should just collapse over them. But probably in practice these methods will not yield very different results - if they do, then it may point to some underlying instability in your solution, e.g. multiple modes with different combinations of $\beta$,$\theta$ that all give similar predictions.
|
Which one is better maximum likelihood or marginal likelihood and why?
Each of these will give different results with a different interpretation. The first finds the pair $\beta$,$\theta$ which is most probable, while the second finds the $\beta$ which is (marginally) mo
|
14,633
|
Which one is better maximum likelihood or marginal likelihood and why?
|
I'm grappling with this question myself right now. Here's a result that may be helpful. Consider the linear model
$$y = X\beta + \epsilon, \quad \epsilon \sim N(0,\sigma^2)$$
where $y \in \mathbb{R}^n, \beta \in \mathbb{R}^p,$ and $\beta$ and $\sigma^2$ are the parameters of interest. The joint likelihood is
$$L(\beta,\sigma^2) = (2 \pi \sigma^2)^{-n/2} \exp\left(-\frac{||y-X\beta||^2}{2\sigma^2}\right)$$
Optimizing the joint likelihood yields
$$\hat{\beta} = X^+ y$$
$$\hat{\sigma}^2 = \frac{1}{n}||r||^2$$
where $X^+$ is the pseudoinverse of $X$ and $r=y-X\hat{\beta}$ is the fit residual vector. Note that in $\hat{\sigma}^2$ we have $1/n$ instead of the familiar degrees-of-freedom corrected ratio $1/(n-p)$. This estimator is known to be biased in the finite-sample case.
Now suppose instead of optimizing over both $\beta$ and $\sigma^2$, we integrate $\beta$ out and estimate $\sigma^2$ from the resulting integrated likelihood:
$$\hat{\sigma}^2 = \text{max}_{\sigma^2} \int_{\mathbb{R}^p} L(\beta,\sigma^2) d\beta$$
Using elementary linear algebra and the Gaussian integral formula, you can show that
$$\hat{\sigma}^2 = \frac{1}{n-p} ||r||^2$$
This has the degrees-of-freedom correction which makes it unbiased and generally favored over the joint ML estimate.
From this result one might ask if there is something inherently advantageous about the integrated likelihood, but I do not know of any general results that answer that question. The consensus seems to be that integrated ML is better at accounting for uncertainty in most estimation problems. In particular, if you are estimating a quantity that depends on other parameter estimates (even implicitly), then integrating over the other parameters will better account for their uncertainties.
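The bias factor $(n-p)/n$ is easy to confirm by simulation; an illustrative Monte Carlo sketch (all sizes made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, sigma2 = 30, 5, 2.0
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p)
Xp = np.linalg.pinv(X)                   # pseudoinverse X^+

ml, integ = [], []
for _ in range(2000):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    r = y - X @ (Xp @ y)                 # residual with beta-hat = X^+ y
    ml.append(r @ r / n)                 # joint ML estimate of sigma^2
    integ.append(r @ r / (n - p))        # integrated-likelihood estimate

# Joint ML is biased low by (n-p)/n; the integrated version is unbiased
assert abs(np.mean(integ) - sigma2) < 0.1
assert abs(np.mean(ml) - sigma2 * (n - p) / n) < 0.1
```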
|
Which one is better maximum likelihood or marginal likelihood and why?
|
I'm grappling with this question myself right now. Here's a result that may be helpful. Consider the linear model
$$y = X\beta + \epsilon, \quad \epsilon \sim N(0,\sigma^2)$$
where $y \in \mathbb{R}^n
|
Which one is better maximum likelihood or marginal likelihood and why?
I'm grappling with this question myself right now. Here's a result that may be helpful. Consider the linear model
$$y = X\beta + \epsilon, \quad \epsilon \sim N(0,\sigma^2)$$
where $y \in \mathbb{R}^n, \beta \in \mathbb{R}^p,$ and $\beta$ and $\sigma^2$ are the parameters of interest. The joint likelihood is
$$L(\beta,\sigma^2) = (2 \pi \sigma^2)^{-n/2} \exp\left(-\frac{||y-X\beta||^2}{2\sigma^2}\right)$$
Optimizing the joint likelihood yields
$$\hat{\beta} = X^+ y$$
$$\hat{\sigma}^2 = \frac{1}{n}||r||^2$$
where $X^+$ is the pseudoinverse of $X$ and $r=y-X\hat{\beta}$ is the fit residual vector. Note that in $\hat{\sigma}^2$ we have $1/n$ instead of the familiar degrees-of-freedom corrected ratio $1/(n-p)$. This estimator is known to be biased in the finite-sample case.
Now suppose instead of optimizing over both $\beta$ and $\sigma^2$, we integrate $\beta$ out and estimate $\sigma^2$ from the resulting integrated likelihood:
$$\hat{\sigma}^2 = \text{max}_{\sigma^2} \int_{\mathbb{R}^p} L(\beta,\sigma^2) d\beta$$
Using elementary linear algebra and the Gaussian integral formula, you can show that
$$\hat{\sigma}^2 = \frac{1}{n-p} ||r||^2$$
This has the degrees-of-freedom correction which makes it unbiased and generally favored over the joint ML estimate.
From this result one might ask if there is something inherently advantageous about the integrated likelihood, but I do not know of any general results that answer that question. The consensus seems to be that integrated ML is better at accounting for uncertainty in most estimation problems. In particular, if you are estimating a quantity that depends on other parameter estimates (even implicitly), then integrating over the other parameters will better account for their uncertainties.
|
Which one is better maximum likelihood or marginal likelihood and why?
I'm grappling with this question myself right now. Here's a result that may be helpful. Consider the linear model
$$y = X\beta + \epsilon, \quad \epsilon \sim N(0,\sigma^2)$$
where $y \in \mathbb{R}^n
|
14,634
|
Which one is better maximum likelihood or marginal likelihood and why?
|
This is usually not a matter of choice. If we are interested in estimating $\beta$ (e.g. when $\beta$ is a model hyperparameter and $\theta$ is a latent variable) and there is not a single value for $\theta$ but instead the distribution of $\theta$ is known, we need to integrate out $\theta$. You can think of the marginal likelihood as the weighted average of the likelihood for different values of $\theta_i$, weighted by their probability density $p(\theta_i)$. Now that $\theta$ has disappeared, using training samples as $data$, you can optimize the marginal likelihood w.r.t. $\beta$.
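A tiny numeric illustration of this weighted average (all numbers and the Gaussian model below are made up): with a known prior over $\theta$, the marginal likelihood of $\beta$ is the prior-weighted mixture of per-$\theta$ likelihoods, which we then maximize over a grid:

```python
import numpy as np

def normal_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# Data assumed generated with mean beta + theta, theta a latent offset
data = np.array([1.8, 2.2, 2.0, 1.9])
thetas = np.array([-0.5, 0.0, 0.5])       # possible latent values
p_theta = np.array([0.25, 0.5, 0.25])     # known distribution of theta

def marginal_likelihood(beta):
    # Weighted average of the likelihood over theta, weights p(theta_i)
    liks = [normal_pdf(data, beta + t, 1.0).prod() for t in thetas]
    return np.dot(p_theta, liks)

betas = np.linspace(0.0, 4.0, 401)
beta_hat = betas[np.argmax([marginal_likelihood(b) for b in betas])]
# With a symmetric prior, the maximizer sits at the sample mean
assert abs(beta_hat - data.mean()) < 0.02
```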
|
Which one is better maximum likelihood or marginal likelihood and why?
|
This is usually not a matter of choice. If we are interested in the estimation of $\beta$ (e.g. when $\beta$ is a model hyperparameter and $\theta$ is a latent variable) and there is not a single val
|
Which one is better maximum likelihood or marginal likelihood and why?
This is usually not a matter of choice. If we are interested in estimating $\beta$ (e.g. when $\beta$ is a model hyperparameter and $\theta$ is a latent variable) and there is not a single value for $\theta$ but instead the distribution of $\theta$ is known, we need to integrate out $\theta$. You can think of the marginal likelihood as the weighted average of the likelihood for different values of $\theta_i$, weighted by their probability density $p(\theta_i)$. Now that $\theta$ has disappeared, using training samples as $data$, you can optimize the marginal likelihood w.r.t. $\beta$.
|
Which one is better maximum likelihood or marginal likelihood and why?
This is usually not a matter of choice. If we are interested in the estimation of $\beta$ (e.g. when $\beta$ is a model hyperparameter and $\theta$ is a latent variable) and there is not a single val
|
14,635
|
In R how to compute the p-value for area under ROC
|
In your situation it would be fine to plot a ROC curve, and to calculate the area under that curve, but this should be thought of as supplemental to your main analysis, rather than the primary analysis itself. Instead, you want to fit a logistic regression model.
The logistic regression model will come standard with a test of the model as a whole. (Actually, since you have only one variable, that p-value will be the same as the p-value for your test result variable.) That p-value is the one you are after. The model will allow you to calculate the predicted probability of an observation being diseased. A Receiver Operating Characteristic tells you how the sensitivity and specificity will trade off, if you use different thresholds to convert the predicted probability into a predicted classification. Since the predicted probability will be a function of your test result variable, it is also telling you how they trade off if you use different test result values as your threshold.
If you are not terribly familiar with logistic regression there are some resources available on the internet (besides the Wikipedia page linked above):
I discuss some basics in my answer here: Interpretation of simple predictions to odds ratios in logistic regression; and (although written in a different context) I provide an overview of what logistic regression is and how it relates to OLS (regular) regression in my answer here: Difference between logit and probit models.
You can also read through some of the threads categorized under our logistic tag.
For how to fit a logistic regression model in R, the UCLA stats help website is generally excellent and has a relevant page here.
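The workflow described above — fit a logistic model, read off the p-value for the test-result coefficient — can be sketched without packages; the Newton-Raphson fit and the simulated data below are illustrative, not from the answer:

```python
import math
import numpy as np

def logistic_fit(x, y, n_iter=25):
    """Newton-Raphson logistic regression with Wald p-values (sketch)."""
    X = np.column_stack([np.ones_like(x), x])     # intercept + predictor
    b = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        H = X.T @ (X * (p * (1 - p))[:, None])    # Fisher information
        b += np.linalg.solve(H, X.T @ (y - p))    # Newton step
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    z = b / se
    # Two-sided Wald p-values: 2*(1 - Phi(|z|)) = erfc(|z|/sqrt(2))
    pvals = np.array([math.erfc(abs(zi) / math.sqrt(2)) for zi in z])
    return b, pvals

# Simulated test results with a genuine effect on disease status
rng = np.random.default_rng(0)
test_result = rng.standard_normal(200)
diseased = (rng.random(200) < 1 / (1 + np.exp(-2 * test_result))).astype(float)
b, pvals = logistic_fit(test_result, diseased)
assert pvals[1] < 1e-4          # the test result is clearly predictive
```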
|
In R how to compute the p-value for area under ROC
|
In your situation it would be fine to plot a ROC curve, and to calculate the area under that curve, but this should be thought of as supplemental to your main analysis, rather than the primary analysi
|
In R how to compute the p-value for area under ROC
In your situation it would be fine to plot a ROC curve, and to calculate the area under that curve, but this should be thought of as supplemental to your main analysis, rather than the primary analysis itself. Instead, you want to fit a logistic regression model.
The logistic regression model will come standard with a test of the model as a whole. (Actually, since you have only one variable, that p-value will be the same as the p-value for your test result variable.) That p-value is the one you are after. The model will allow you to calculate the predicted probability of an observation being diseased. A Receiver Operating Characteristic tells you how the sensitivity and specificity will trade off, if you use different thresholds to convert the predicted probability into a predicted classification. Since the predicted probability will be a function of your test result variable, it is also telling you how they trade off if you use different test result values as your threshold.
If you are not terribly familiar with logistic regression there are some resources available on the internet (besides the Wikipedia page linked above):
I discuss some basics in my answer here: Interpretation of simple predictions to odds ratios in logistic regression; and (although written in a different context) I provide an overview of what logistic regression is and how it relates to OLS (regular) regression in my answer here: Difference between logit and probit models.
You can also read through some of the threads categorized under our logistic tag.
For how to fit a logistic regression model in R, the UCLA stats help website is generally excellent and has a relevant page here.
|
In R how to compute the p-value for area under ROC
In your situation it would be fine to plot a ROC curve, and to calculate the area under that curve, but this should be thought of as supplemental to your main analysis, rather than the primary analysi
|
14,636
|
In R how to compute the p-value for area under ROC
|
Basically you want to test H0 = "The AUC is equal to 0.5".
This is in fact equivalent to saying H0 = "The distributions of the ranks in the two groups are equal".
The latter is the null hypothesis of the Mann-Whitney (Wilcoxon) test (see for instance Gold, 1999).
In other words, you can safely use a Mann-Whitney-Wilcoxon test to answer your question (see for instance Mason & Graham, 2002). This is exactly what the verification package mentioned by Franck Dernoncourt does.
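The equivalence is easy to verify numerically: with simulated scores (illustrative), the AUC computed as $P(\text{score}_{pos} > \text{score}_{neg})$ equals the Mann-Whitney $U$ statistic divided by $n_1 n_2$:

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.normal(1.0, 1.0, 30)   # scores for the diseased group
neg = rng.normal(0.0, 1.0, 40)   # scores for the healthy group

# AUC = P(positive score > negative score), ties counted as 1/2
comp = pos[:, None] - neg[None, :]
auc = ((comp > 0).sum() + 0.5 * (comp == 0).sum()) / (len(pos) * len(neg))

# Mann-Whitney U from the rank sum of the pooled sample
pooled = np.concatenate([pos, neg])
ranks = pooled.argsort().argsort() + 1.0   # ranks 1..N (no ties here)
U = ranks[:len(pos)].sum() - len(pos) * (len(pos) + 1) / 2

assert np.isclose(auc, U / (len(pos) * len(neg)))
```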
|
In R how to compute the p-value for area under ROC
|
Basically you want to test H0 = "The AUC is equal to 0.5".
This is in fact equivalent as saying H0 = "The distribution of the ranks in the two groups are equal".
The latter is the null hypothesis of t
|
In R how to compute the p-value for area under ROC
Basically you want to test H0 = "The AUC is equal to 0.5".
This is in fact equivalent to saying H0 = "The distributions of the ranks in the two groups are equal".
The latter is the null hypothesis of the Mann-Whitney (Wilcoxon) test (see for instance Gold, 1999).
In other words, you can safely use a Mann-Whitney-Wilcoxon test to answer your question (see for instance Mason & Graham, 2002). This is exactly what the verification package mentioned by Franck Dernoncourt does.
|
In R how to compute the p-value for area under ROC
Basically you want to test H0 = "The AUC is equal to 0.5".
This is in fact equivalent as saying H0 = "The distribution of the ranks in the two groups are equal".
The latter is the null hypothesis of t
|
14,637
|
In R how to compute the p-value for area under ROC
|
You can use roc.area() from the package verification:
install.packages("verification")
library("verification")
# Data used from Mason and Graham (2002).
a<- c(1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990,
1991, 1992, 1993, 1994, 1995)
d<- c(.928,.576, .008, .944, .832, .816, .136, .584, .032, .016, .28, .024, 0, .984, .952)
A<- data.frame(a,d)
names(A)<- c("year", "p2")
# NOTE: roc.area() needs the observed 0/1 outcomes as its first argument,
# but the data frame built above has no "event" column -- supply the binary
# event vector from Mason and Graham (2002) as A$event before this call.
# For model without ties
roc.area(A$event, A$p2)
It will return $p.value
[1] 0.0069930071
|
In R how to compute the p-value for area under ROC
|
You can use roc.area() from the package verification:
install.packages("verification")
library("verification")
# Data used from Mason and Graham (2002).
a<- c(1981, 1982, 1983, 1984, 1985, 1986, 1987
|
In R how to compute the p-value for area under ROC
You can use roc.area() from the package verification:
install.packages("verification")
library("verification")
# Data used from Mason and Graham (2002).
a<- c(1981, 1982, 1983, 1984, 1985, 1986, 1987, 1988, 1989, 1990,
1991, 1992, 1993, 1994, 1995)
d<- c(.928,.576, .008, .944, .832, .816, .136, .584, .032, .016, .28, .024, 0, .984, .952)
A<- data.frame(a,d)
names(A)<- c("year", "p2")
# NOTE: roc.area() needs the observed 0/1 outcomes as its first argument,
# but the data frame built above has no "event" column -- supply the binary
# event vector from Mason and Graham (2002) as A$event before this call.
# For model without ties
roc.area(A$event, A$p2)
It will return $p.value
[1] 0.0069930071
|
In R how to compute the p-value for area under ROC
You can use roc.area() from the package verification:
install.packages("verification")
library("verification")
# Data used from Mason and Graham (2002).
a<- c(1981, 1982, 1983, 1984, 1985, 1986, 1987
|
14,638
|
In R how to compute the p-value for area under ROC
|
Two ROC curves can be compared in pROC using roc.test(). This also produces a p-value. In addition, using roc(..., auc=TRUE, ci=TRUE) will give you the lower and higher confidence intervals along with the AUC in the output while creating the ROC object, which may be useful.
The following is working example code that tests whether the miles per gallon or the weight of a car is a better predictor of the kind of transmission it comes equipped with (automatic or manual):
library(pROC)
roc_object_1 <- roc(mtcars$am, mtcars$mpg, auc=T, ci=T) #gives AUC and CI
roc_object_2 <- roc(mtcars$am, mtcars$wt, auc=T, ci=T) #gives AUC and CI
roc.test(roc_object_1, roc_object_2) #gives p-value
The weight is a significantly better predictor than the fuel consumption, it seems. However, this is comparing two curves, and not a single curve against a number such as 0.5. Looking at the confidence interval to see whether it contains the number 0.5 tells us whether it is significantly different, but it doesn't produce a p-value.
|
In R how to compute the p-value for area under ROC
|
Two ROC curves can be compared in pROC using roc.test(). This also produces a p-value. In addition, using roc(..., auc=TRUE, ci=TRUE) will give you the lower and higher confidence intervals along with
|
In R how to compute the p-value for area under ROC
Two ROC curves can be compared in pROC using roc.test(). This also produces a p-value. In addition, using roc(..., auc=TRUE, ci=TRUE) will give you the lower and higher confidence intervals along with the AUC in the output while creating the ROC object, which may be useful.
The following is working example code that tests whether the miles per gallon or the weight of a car is a better predictor of the kind of transmission it comes equipped with (automatic or manual):
library(pROC)
roc_object_1 <- roc(mtcars$am, mtcars$mpg, auc=T, ci=T) #gives AUC and CI
roc_object_2 <- roc(mtcars$am, mtcars$wt, auc=T, ci=T) #gives AUC and CI
roc.test(roc_object_1, roc_object_2) #gives p-value
The weight is a significantly better predictor than the fuel consumption, it seems. However, this is comparing two curves, and not a single curve against a number such as 0.5. Looking at the confidence interval to see whether it contains the number 0.5 tells us whether it is significantly different, but it doesn't produce a p-value.
|
In R how to compute the p-value for area under ROC
Two ROC curves can be compared in pROC using roc.test(). This also produces a p-value. In addition, using roc(..., auc=TRUE, ci=TRUE) will give you the lower and higher confidence intervals along with
|
14,639
|
Machine learning curse of dimensionality explained?
|
Translating that paragraph:
Let there be a set of features that describe a data point. Maybe you're looking at the weather. That set of features might include things like temperature, humidity, time of day, etc. So each data point might have one feature (if you're only looking at temperature), or it might have 2 features (if you're looking at temperature and humidity), and so on.
What this paragraph is saying is that the more dimensions your data has (the more features it has), the more difficult it is to build an estimator. If you have just one feature, or 1-dimensional data, then graphing it gives a line: imagining a line between, say, 0-50 degrees C, it only takes 50 random points before each data point is about 1 degree from some other data point.
Now think about 2 dimensions, say humidity and temperature: it's trickier to find a d such that all the points are within d units of each other. Imagine temperature is still between 0-50 but now humidity is also between 0-100%. How many random points does it take to get all the points within 1 or 2 units of each other? Now it's 100 * 50, or ~5,000! Now imagine 3 dimensions, and so on. You start needing far more points to ensure that every point is within d of some other point. To make your life easier, try assuming d is 1 and see what happens. Hope that helps!
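The effect described above can be checked numerically. This sketch (illustrative, not from the original answer) holds the number of random points fixed and measures how far apart nearest neighbours drift in the unit hypercube as dimensions are added:

```python
import numpy as np

def mean_nn_distance(n_points, dim, seed=0):
    """Average distance from each random point in [0,1]^dim to its nearest neighbour."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_points, dim))
    # Pairwise Euclidean distances; mask out the zero self-distances
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return d.min(axis=1).mean()

for dim in (1, 2, 10):
    print(dim, round(mean_nn_distance(500, dim), 3))
```

With 500 points held fixed, the average nearest-neighbour distance grows sharply with dimension, so ever more samples are needed to keep every point within a fixed d of another.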
|
14,640
|
Machine learning curse of dimensionality explained?
|
matty-d has already provided a very good answer, but I found another answer that explains this problem equally as well, from a Quora user Kevin Lacker:
Let's say you have a straight line 100 yards long and you dropped a
penny somewhere on it. It wouldn't be too hard to find. You walk along
the line and it takes two minutes.
Now let's say you have a square 100 yards on each side and you dropped
a penny somewhere on it. It would be pretty hard, like searching
across two football fields stuck together. It could take days.
Now a cube 100 yards across. That's like searching a 30-story building
the size of a football stadium. Ugh.
The difficulty of searching through the space gets a lot harder as
you have more dimensions. You might not realize this intuitively when
it's just stated in mathematical formulas, since they all have the
same "width". That's the curse of dimensionality. It gets to have a
name because it is unintuitive, useful, and yet simple.
|
14,641
|
Machine learning curse of dimensionality explained?
|
That example can give some intuition for the problem, but it is not a rigorous proof at all: it is only an example where many samples are needed to get a "good" coverage of the space. There could be (and indeed there are, e.g. hexagons already in 2D) much more efficient coverings than a regular grid (the sophisticated area of low-discrepancy sequences is devoted to this). Proving that even with such better coverings there is still some curse of dimensionality is quite another issue. In fact, in certain function spaces there are even ways to circumvent this apparent problem.
|
14,642
|
$t$-tests vs $z$-tests?
|
The names "$t$-test" and "$z$-test" are typically used to refer to the special case when $X$ is normal $\mbox{N}(\mu,\sigma^2)$, $\hat{b}=\bar{x}$ and $C=\mu_{0}$. You can however of course construct tests of "$t$-test type" in other settings as well (bootstrap comes to mind), using the same type of reasoning.
Either way, the difference is in the $\mbox{s.e.}(\hat{b})$ part:
In a $z$-test, the standard deviation of $\hat{b}$ is assumed to be
known without error. In the special case mentioned above, this means that $\mbox{s.e.}(\bar{x})=\sigma/\sqrt{n}$.
In a $t$-test, it is estimated using the data. In the special case mentioned above, this means that $\mbox{s.e.}(\bar{x})=\hat{\sigma}/\sqrt{n}$, where $\hat{\sigma}=\sqrt{\frac{1}{n-1}\sum_{i=1}^n(x_i-\bar{x})^2}$ is an estimator of $\sigma$.
The choice between a $t$-test and a $z$-test, therefore, depends on whether or not $\sigma$ is known prior to collecting the data.
The reason that the distributions of the two statistics differ is that the $t$-statistic contains more unknowns. This causes it to be more variable, so that its distribution has heavier tails. As the sample size $n$ grows, the estimator $\hat{\sigma}$ comes very close to the true $\sigma$, so that $\sigma$ essentially is known. So when the sample size is large, the $\mbox{N}(0,1)$ quantiles can be used also for the $t$-test.
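The convergence of the $t$ quantiles to the normal ones is easy to verify numerically. A sketch (illustrative, not part of the original answer) computing both test statistics on the same hypothetical sample and comparing critical values:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sigma, mu0, n = 2.0, 5.0, 10
x = rng.normal(5.0, sigma, n)

# z-statistic: sigma assumed known without error
z = (x.mean() - mu0) / (sigma / np.sqrt(n))
# t-statistic: sigma estimated from the data (ddof=1 gives the n-1 denominator)
t = (x.mean() - mu0) / (x.std(ddof=1) / np.sqrt(n))

# Heavier tails: the t critical value exceeds the normal one,
# but converges to it as the degrees of freedom grow
crit_z = stats.norm.ppf(0.975)
crit_t9 = stats.t.ppf(0.975, df=n - 1)
crit_t999 = stats.t.ppf(0.975, df=999)
print(crit_z, crit_t9, crit_t999)
```

With 9 degrees of freedom the $t$ critical value (about 2.26) is noticeably larger than the normal 1.96; with 999 degrees of freedom the two are nearly identical.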
|
14,643
|
Negative coefficient in ordered logistic regression
|
You're on the right track, but always have a look at the documentation of the software you're using to see what model is actually fit. Assume a situation with a categorical dependent variable $Y$ with ordered categories $1, \ldots, g, \ldots, k$ and predictors $X_{1}, \ldots, X_{j}, \ldots, X_{p}$.
"In the wild", you can encounter three equivalent choices for writing the theoretical proportional-odds model with different implied parameter meanings:
$\text{logit}(p(Y \leqslant g)) = \ln \frac{p(Y \leqslant g)}{p(Y > g)} = \beta_{0_g} + \beta_{1} X_{1} + \dots + \beta_{p} X_{p} \quad(g = 1, \ldots, k-1)$
$\text{logit}(p(Y \leqslant g)) = \ln \frac{p(Y \leqslant g)}{p(Y > g)} = \beta_{0_g} - (\beta_{1} X_{1} + \dots + \beta_{p} X_{p}) \quad(g = 1, \ldots, k-1)$
$\text{logit}(p(Y \geqslant g)) = \ln \frac{p(Y \geqslant g)}{p(Y < g)} = \beta_{0_g} + \beta_{1} X_{1} + \dots + \beta_{p} X_{p} \quad(g = 2, \ldots, k)$
(Models 1 and 2 have the restriction that in the $k-1$ separate binary logistic regressions, the $\beta_{j}$ do not vary with $g$, and $\beta_{0_1} < \ldots < \beta_{0_g} < \ldots < \beta_{0_{k-1}}$; model 3 has the same restriction on the $\beta_{j}$, and requires that $\beta_{0_2} > \ldots > \beta_{0_g} > \ldots > \beta_{0_k}$.)
In model 1, a positive $\beta_{j}$ means that an increase in predictor $X_{j}$ is associated with increased odds for a lower category in $Y$.
Model 1 is somewhat counterintuitive, so model 2 or model 3 seems to be preferred in software. Here, a positive $\beta_{j}$ means that an increase in predictor $X_{j}$ is associated with increased odds for a higher category in $Y$.
Models 1 and 2 lead to the same estimates for the $\beta_{0_g}$, but their estimates for the $\beta_{j}$ have opposite signs.
Models 2 and 3 lead to the same estimates for the $\beta_{j}$, but their estimates for the $\beta_{0_g}$ have opposite signs.
Assuming your software uses model 2 or 3, you can say "with a 1 unit increase in $X_1$, ceteris paribus, the predicted odds of observing '$Y = \text{Good}$' vs. observing '$Y = \text{Neutral OR Bad}$' change by a factor of $e^{\hat{\beta}_{1}} = 0.607$.", and likewise "with a 1 unit increase in $X_1$, ceteris paribus, the predicted odds of observing '$Y = \text{Good OR Neutral}$' vs. observing '$Y = \text{Bad}$' change by a factor of $e^{\hat{\beta}_{1}} = 0.607$." Note that in the empirical case, we only have the predicted odds, not the actual ones.
Here are some additional illustrations for model 1 with $k = 4$ categories. First, the assumption of a linear model for the cumulative logits with proportional odds. Second, the implied probabilities of observing at most category $g$. The probabilities follow logistic functions with the same shape.
For the category probabilities themselves, the depicted model implies the following ordered functions:
P.S. To my knowledge, model 2 is used in SPSS as well as in R functions MASS::polr() and ordinal::clm(). Model 3 is used in R functions rms::lrm() and VGAM::vglm(). Unfortunately, I don't know about SAS and Stata.
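The sign-flip relations between the parameterizations can be checked with a toy coefficient (hypothetical value, not from the original question; $e^{-0.5} \approx 0.607$ matches the odds factor quoted above):

```python
import math

beta1 = -0.5  # hypothetical slope under model 2 or 3 conventions

# Odds of a *higher* Y category change by this factor per 1-unit increase in X1
odds_factor = math.exp(beta1)          # about 0.607

# Under model 1 the same fit reports the slope with the opposite sign,
# i.e. the odds of a *lower* category change by the reciprocal factor
beta1_model1 = -beta1
odds_factor_lower = math.exp(beta1_model1)

print(round(odds_factor, 3), round(odds_factor_lower, 3))
```

The two factors are reciprocals, which is why negating the slopes (models 1 vs. 2) leaves the fitted model unchanged and only flips the direction of the interpretation.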
|
14,644
|
Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables?
|
If we start with the premise that all variables have been centred (standard practice in PCA), then the total variance in the data is just the sum of squares:
$$T=\sum_{i}(A_{i}^{2}+B_{i}^{2}+C_{i}^{2}+D_{i}^{2}+E_{i}^{2}+F_{i}^{2})$$
This is equal to the trace of the covariance matrix of the variables, which equals the sum of the eigenvalues of the covariance matrix. This is the same quantity that PCA speaks of in terms of "explaining the data" - i.e. you want your PCs to explain the greatest proportion of the diagonal elements of the covariance matrix. Now if we make this an objective function for a set of predicted values like so:
$$S=\sum_{i}\left(\left[A_{i}-\hat{A}_{i}\right]^{2}+\dots+\left[F_{i}-\hat{F}_{i}\right]^{2}\right)$$
Then the first principal component minimises $S$ among all rank 1 fitted values $(\hat{A}_{i},\dots,\hat{F}_{i})$. So it would seem like the appropriate quantity you are after is
$$P=1-\frac{S}{T}$$
To use your example $A+2B+5C$, we need to turn this equation into rank 1 predictions. First you need to normalise the weights to have sum of squares 1. So we replace $(1,2,5,0,0,0)$ (sum of squares $30$) with $\left(\frac{1}{\sqrt{30}},\frac{2}{\sqrt{30}},\frac{5}{\sqrt{30}},0,0,0\right)$. Next we "score" each observation according to the normalised weights:
$$Z_{i}=\frac{1}{\sqrt{30}}A_{i}+\frac{2}{\sqrt{30}}B_{i}+\frac{5}{\sqrt{30}}C_{i}$$
Then we multiply the scores by the weight vector to get our rank 1 prediction.
$$\begin{pmatrix}
\hat{A}_{i} \\
\hat{B}_{i} \\
\hat{C}_{i} \\
\hat{D}_{i} \\
\hat{E}_{i} \\
\hat{F}_{i}\end{pmatrix}
=Z_{i}\times\begin{pmatrix}
\frac{1}{\sqrt{30}} \\
\frac{2}{\sqrt{30}} \\
\frac{5}{\sqrt{30}} \\
0 \\
0 \\
0\end{pmatrix}$$
Then we plug these estimates into $S$ to calculate $P$. You can also put this into matrix-norm notation, which may suggest a different generalisation. If we set $O$ as the $N\times q$ matrix of observed values of the variables ($q=6$ in your case), and $E$ as a corresponding matrix of predictions, we can define the proportion of variance explained as:
$$\frac{||O||_{2}^{2}-||O-E||_{2}^{2}}{||O||_{2}^{2}}$$
where $||.||_{2}$ is the Frobenius matrix norm. So you could "generalise" this to some other kind of matrix norm, and you will get a different measure of "variation explained", although it won't be "variance" per se unless it is a sum of squares.
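The whole recipe (normalise the weights, score, build the rank-1 prediction, compute $P$) is a few lines of NumPy. A sketch on hypothetical random centred data (not from the original answer), which also checks $P$ against the equivalent closed form $\w^\top \S \w / \mathrm{tr}(\S)$:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 6))
X -= X.mean(axis=0)                      # centre every variable

w = np.array([1.0, 2.0, 5.0, 0, 0, 0])
w /= np.linalg.norm(w)                   # normalise to sum of squares 1

z = X @ w                                # scores Z_i
X_hat = np.outer(z, w)                   # rank-1 prediction

T = (X ** 2).sum()                       # total variance (sum of squares)
S = ((X - X_hat) ** 2).sum()             # residual sum of squares
P = 1 - S / T

# Closed form: w' G w / tr(G) with G = X'X (the covariance scaling cancels)
G = X.T @ X
P_closed = (w @ G @ w) / np.trace(G)
print(round(P, 6), round(P_closed, 6))
```

The two numbers agree because the rank-1 fit is an orthogonal projection, so the residual sum of squares decomposes by Pythagoras.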
|
14,645
|
Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables?
|
Let's say I choose some linear combination of these variables -- e.g. $A+2B+5C$, could I work out how much variance in the data this describes?
This question can be understood in two different ways, leading to two different answers.
A linear combination corresponds to a vector, which in your example is $[1, 2, 5, 0, 0, 0]$. This vector, in turn, defines an axis in the 6D space of the original variables. What you are asking is, how much variance does projection on this axis "describe"? The answer is given via the notion of "reconstruction" of original data from this projection, and measuring the reconstruction error (see Wikipedia on Fraction of variance unexplained). Turns out, this reconstruction can be reasonably done in two different ways, yielding two different answers.
Approach #1
Let $\newcommand{\S}{\boldsymbol \Sigma} \newcommand{\w}{\mathbf w} \newcommand{\v}{\mathbf v}\newcommand{\X}{\mathbf X} \X$ be the centered dataset ($n$ rows correspond to samples, $d$ columns correspond to variables), let $\S$ be its covariance matrix, and let $\w$ be a unit vector from $\mathbb R^d$. The total variance of the dataset is the sum of all $d$ variances, i.e. the trace of the covariance matrix: $T = \mathrm{tr}(\S)$. The question is: what proportion of $T$ does $\w$ describe? The two answers given by @todddeluca and @probabilityislogic are both equivalent to the following: compute projection $\X \w$, compute its variance and divide by $T$: $$R^2_\mathrm{first} = \frac{\mathrm{Var}(\X \w)}{T} = \frac{\w^\top \S \w}{\mathrm{tr}(\S)}.$$
This might not be immediately obvious, because e.g. @probabilityislogic suggests to consider the reconstruction $\X \w \w^\top$ and then to compute $$\frac{\|\X\|^2 - \|\X-\X \w \w^\top\|^2}{\|\X\|^2},$$ but with a little algebra this can be shown to be an equivalent expression.
Approach #2
Okay. Now consider the following example: $\X$ is a $d=2$ dataset with covariance matrix $$\S = \left(\begin{array}{cc}1&0.99\\0.99&1\end{array}\right)$$ and $\mathbf w = (1, 0)^\top$ is simply the $x$-axis vector:
The total variance is $T=2$. The variance of the projection onto $\w$ (shown in red dots) is equal to $1$. So according to the above logic, the explained variance is equal to $1/2$. And in some sense it is: red dots ("reconstruction") are far away from the corresponding blue dots, so a lot of the variance is "lost".
On the other hand, the two variables have $0.99$ correlation and so are almost identical; saying that one of them describes only $50\%$ of the total variance is weird, because each of them contains "almost all the information" about the second one. We can formalize it as follows: given projection $\X\w$, find a best possible reconstruction $\X\w\v^\top$ with $\v$ not necessarily the same as $\w$, and then compute the reconstruction error and plug it into the expression for the proportion of explained variance: $$R^2_\mathrm{second}=\frac{\|\X\|^2 - \|\X-\X \w \v^\top\|^2}{\|\X\|^2},$$ where $\v$ is chosen such that $\|\X-\X \w \v^\top\|^2$ is minimal (i.e. $R^2$ is maximal). This is exactly equivalent to computing $R^2$ of multivariate regression predicting original dataset $\X$ from the $1$-dimensional projection $\X\w$.
It is a matter of straightforward algebra to use regression solution for $\v$ to find that the whole expression simplifies to $$R^2_\mathrm{second}=\frac{\|\S \w\|^2}{\w^\top \S \w \cdot \mathrm{tr}(\S)}.$$ In the example above this is equal to $0.9901$, which seems reasonable.
Note that if (and only if) $\w$ is one of the eigenvectors of $\S$, i.e. one of the principal axes, with eigenvalue $\lambda$ (so that $\S \w = \lambda \w$), then both approaches to compute $R^2$ coincide and reduce to the familiar PCA expression $$R^2_\mathrm{PCA} = R^2_\mathrm{first} = R^2_\mathrm{second} = \lambda/\mathrm{tr}(\S) = \lambda/\sum \lambda_i.$$
PS. See my answer here for an application of the derived formula to the special case of $\w$ being one of the basis vectors: Variance of the data explained by a single variable.
Appendix. Derivation of the formula for $R^2_\mathrm{second}$
Finding $\v$ minimizing the reconstruction $\|\X-\X \w \v^\top\|^2$ is a regression problem (with $\X \w$ as univariate predictor and $\X$ as multivariate response). Its solution is given by $$\v^\top = \left((\X \w)^\top (\X \w)\right)^{-1}(\X \w)^\top \X = (\w^\top \S \w)^{-1} \w^\top \S.$$
Next, the $R^2$ formula can be simplified as $$R^2=\frac{\|\X\|^2 - \|\X-\X \w \v^\top\|^2}{\|\X\|^2} = \frac{\|\X \w \v^\top\|^2}{\|\X\|^2}$$ due to the Pythagoras theorem, because the hat matrix in regression is an orthogonal projection (but it is also easy to show directly).
Plugging now the equation for $\v$, we obtain for the numerator: $$\|\X \w \v^\top\|^2 = \mathrm{tr}\left(\X \w \v^\top (\X \w \v^\top)^\top\right) = \mathrm{tr}(\X\w\w^\top\S\S\w\w^\top\X^\top)/(\w^\top\S\w)^2=\mathrm{tr}(\w^\top\S\S\w)/(\w^\top\S\w) = \|\S\w\|^2 / (\w^\top\S\w).$$
The denominator is equal to $\|\X\|^2 = \mathrm{tr}(\S)$ resulting in the formula given above.
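A quick numerical check of the two formulas on the $2\times 2$ example above (illustrative Python/NumPy, not part of the original answer):

```python
import numpy as np

S = np.array([[1.0, 0.99],
              [0.99, 1.0]])
w = np.array([1.0, 0.0])

T = np.trace(S)
r2_first = (w @ S @ w) / T                  # projection variance / total, about 0.5
Sw = S @ w
r2_second = (Sw @ Sw) / ((w @ S @ w) * T)   # best-reconstruction version, about 0.9901

print(r2_first, r2_second)
```

This reproduces the two answers discussed above: the naive reconstruction credits the projection with half the variance, while the optimal reconstruction recognises that the near-duplicate variable carries almost all the information.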
|
Next, the $R^2$ formula can be simplified as $$R^2=\frac{\|\X\|^2 - \|\X-\X \w \v^\top\|^2}{\|\X\|^2} = \frac{\|\X \w \v^\top\|^2}{\|\X\|^2}$$ due to the Pythagoras theorem, because the hat matrix in regression is an orthogonal projection (but it is also easy to show directly).
Plugging now the equation for $\v$, we obtain for the numerator: $$\|\X \w \v^\top\|^2 = \mathrm{tr}\left(\X \w \v^\top (\X \w \v^\top)^\top\right) = \mathrm{tr}(\X\w\w^\top\S\S\w\w^\top\X^\top)/(\w^\top\S\w)^2=\mathrm{tr}(\w^\top\S\S\w)/(\w^\top\S\w) = \|\S\w\|^2 / (\w^\top\S\w).$$
The denominator is equal to $\|\X\|^2 = \mathrm{tr}(\S)$ resulting in the formula given above.
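As a quick numerical check (an illustrative sketch in plain Python, not part of the original answer), the following code evaluates both formulas on the $2\times 2$ covariance matrix from Approach #2, and confirms that the two notions coincide when the direction is an eigenvector of the covariance matrix.

```python
import math

# Covariance matrix from the example in Approach #2
S = [[1.0, 0.99], [0.99, 1.0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def r2_first(S, w):
    # variance of the projection over total variance: w'Sw / tr(S)
    tr = sum(S[i][i] for i in range(len(S)))
    return dot(w, matvec(S, w)) / tr

def r2_second(S, w):
    # best-reconstruction version: ||Sw||^2 / (w'Sw * tr(S))
    tr = sum(S[i][i] for i in range(len(S)))
    Sw = matvec(S, w)
    return dot(Sw, Sw) / (dot(w, matvec(S, w)) * tr)

w = [1.0, 0.0]
print(r2_first(S, w))   # 0.5
print(r2_second(S, w))  # approx 0.99005

# For an eigenvector of S (here (1,1)/sqrt(2), eigenvalue 1.99)
# both expressions reduce to lambda / tr(S) = 0.995
u = [1.0 / math.sqrt(2)] * 2
print(r2_first(S, u), r2_second(S, u))
```

With exact arithmetic the second value is $1.9801/2 = 0.99005$, matching the figure quoted in the answer up to rounding.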
|
Principal component analysis "backwards": how much variance of the data is explained by a given line
Let's say I choose some linear combination of these variables -- e.g. $A+2B+5C$, could I work out how much variance in the data this describes?
This question can be understood in two different ways,
|
14,646
|
Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables?
|
Let the total variance, $T$, in a data set of vectors be the sum of squared errors (SSE) between the vectors in the data set and the mean vector of the data set,
$$T = \sum_{i} (x_i-\bar{x}) \cdot (x_i-\bar{x})$$
where $\bar{x}$ is the mean vector of the data set, $x_i$ is the ith vector in the data set, and $\cdot$ is the dot product of two vectors. Said another way, the total variance is the SSE between each $x_i$ and its predicted value, $f(x_i)$, when we set $f(x_i)=\bar{x}$.
Now let the predictor of $x_i$, $f(x_i)$, be the projection of vector $x_i$ onto a unit vector $c$.
$$ f_c(x_i) = (c \cdot x_i)c$$
Then the $SSE$ for a given $c$ is $$SSE_c = \sum_i (x_i - f_c(x_i)) \cdot (x_i - f_c(x_i))$$
I think that if you choose $c$ to minimize $SSE_c$ (with the data centered so that $\bar{x}=0$), then $c$ is the first principal component.
If instead you choose $c$ to be the normalized version of the vector $(1, 2, 5, ...)$, then $T-SSE_c$ is the variance in the data described by using $c$ as a predictor.
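These last two claims can be checked numerically. The Python sketch below is my own illustration (the toy data set is made up): it centers a 2-D data set, scans unit directions $c$ on a grid to find the one minimizing $SSE_c$, and compares it with the analytic first principal axis of the $2\times 2$ scatter matrix. Note that the data must be centered for the minimizer to be the first principal component.

```python
import math

# Made-up 2-D data set, then centered so the mean vector is 0
pts = [(t / 10.0, 0.5 * t / 10.0 + 0.3 * math.sin(0.7 * t)) for t in range(40)]
mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
ctr = [(x - mx, y - my) for x, y in pts]

T = sum(a * a + b * b for a, b in ctr)  # total variance, as an SSE

def sse(theta):
    # SSE of projecting onto c = (cos theta, sin theta);
    # by Pythagoras, SSE_c = T - sum_i (c . x_i)^2
    c, s = math.cos(theta), math.sin(theta)
    return T - sum((c * a + s * b) ** 2 for a, b in ctr)

# grid search over directions in [0, pi)
best = min(range(1800), key=lambda k: sse(k * math.pi / 1800))
theta_scan = best * math.pi / 1800

# analytic first principal axis of the 2x2 scatter matrix
sxx = sum(a * a for a, _ in ctr)
syy = sum(b * b for _, b in ctr)
sxy = sum(a * b for a, b in ctr)
theta_pc = 0.5 * math.atan2(2 * sxy, sxx - syy)

d = (theta_scan - theta_pc) % math.pi
print(min(d, math.pi - d))  # close to 0: the SSE minimizer is the first PC
```

For any other fixed direction $c$, the quantity $T - SSE_c$ computed the same way is the "variance described" in the sense of this answer.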
|
Principal component analysis "backwards": how much variance of the data is explained by a given line
|
Let the total variance, $T$, in a data set of vectors be the sum of squared errors (SSE) between the vectors in the data set and the mean vector of the data set,
$$T = \sum_{i} (x_i-\bar{x}) \cdot (x
|
Principal component analysis "backwards": how much variance of the data is explained by a given linear combination of the variables?
Let the total variance, $T$, in a data set of vectors be the sum of squared errors (SSE) between the vectors in the data set and the mean vector of the data set,
$$T = \sum_{i} (x_i-\bar{x}) \cdot (x_i-\bar{x})$$
where $\bar{x}$ is the mean vector of the data set, $x_i$ is the ith vector in the data set, and $\cdot$ is the dot product of two vectors. Said another way, the total variance is the SSE between each $x_i$ and its predicted value, $f(x_i)$, when we set $f(x_i)=\bar{x}$.
Now let the predictor of $x_i$, $f(x_i)$, be the projection of vector $x_i$ onto a unit vector $c$.
$$ f_c(x_i) = (c \cdot x_i)c$$
Then the $SSE$ for a given $c$ is $$SSE_c = \sum_i (x_i - f_c(x_i)) \cdot (x_i - f_c(x_i))$$
I think that if you choose $c$ to minimize $SSE_c$, then $c$ is the first principal component.
If instead you choose $c$ to be the normalized version of the vector $(1, 2, 5, ...)$, then $T-SSE_c$ is the variance in the data described by using $c$ as a predictor.
|
Principal component analysis "backwards": how much variance of the data is explained by a given line
Let the total variance, $T$, in a data set of vectors be the sum of squared errors (SSE) between the vectors in the data set and the mean vector of the data set,
$$T = \sum_{i} (x_i-\bar{x}) \cdot (x
|
14,647
|
How does the power of a logistic regression and a t-test compare?
|
If I have computed correctly, logistic regression asymptotically has the same power as the t-test. To see this, write down its log likelihood and compute the expectation of its Hessian at its global maximum (its negative estimates the variance-covariance matrix of the ML solution). Don't bother with the usual logistic parameterization: it's simpler just to parameterize it with the two probabilities in question. The details will depend on exactly how you test the significance of a logistic regression coefficient (there are several methods).
That these tests have similar powers should not be too surprising, because the chi-square theory for ML estimates is based on a normal approximation to the log likelihood, and the t-test is based on a normal approximation to the distributions of proportions. The crux of the matter is that both methods make the same estimates of the two proportions and both estimates have the same standard errors.
An actual analysis might be more convincing. Let's adopt some general terminology for the values in a given group (A or B):
$p$ is the probability of a 1.
$n$ is the size of each set of draws.
$m$ is the number of sets of draws.
$N = m n$ is the amount of data.
$k_{ij}$ (equal to $0$ or $1$) is the value of the $j^\text{th}$ result in the $i^\text{th}$ set of draws.
$k_i$ is the total number of ones in the $i^\text{th}$ set of draws.
$k$ is the total number of ones.
Logistic regression is essentially the ML estimator of $p$. Its logarithm is given by
$$\log(\mathbb{L}) = k \log(p) + (N-k) \log(1-p).$$
Its derivatives with respect to the parameter $p$ are
$$\frac{\partial \log(\mathbb{L})}{ \partial p} = \frac{k}{p} - \frac{N-k}{1-p} \text{ and}$$
$$-\frac{\partial^2 \log(\mathbb{L})}{\partial p^2} = \frac{k}{p^2} + \frac{N-k}{(1-p)^2}.$$
Setting the first to zero yields the ML estimate ${\hat{p} = k/N}$ and plugging that into the reciprocal of the second expression yields the variance $\hat{p}(1 - \hat{p})/N$, which is the square of the standard error.
The t statistic will be obtained from estimators based on the data grouped by sets of draws; namely, as the difference of the means (one from group A and the other from group B) divided by the standard error of that difference, which is obtained from the standard deviations of the means. Let's look at the mean and standard deviation for a given group, then. The mean equals $k/N$, which is identical to the ML estimator $\hat{p}$. The standard deviation in question is the standard deviation of the draw means; that is, it is the standard deviation of the set of $k_i/n$. Here is the crux of the matter, so let's explore some possibilities.
Suppose the data aren't grouped into draws at all: that is, $n = 1$ and $m = N$. The $k_{i}$ are the draw means. Their sample variance equals $N/(N-1)$ times $\hat{p}(1 - \hat{p})$. From this it follows that the standard error is identical to the ML standard error apart from a factor of $\sqrt{N/(N-1)}$, which is essentially $1$ when $N = 1800$. Therefore--apart from this tiny difference--any tests based on logistic regression will be the same as a t-test and we will achieve essentially the same power.
When the data are grouped, the (true) variance of the $k_i/n$ equals $p(1-p)/n$ because the statistics $k_i$ represent the sum of $n$ Bernoulli($p$) variables, each with variance $p(1-p)$. Therefore the expected standard error of the mean of $m$ of these values is the square root of $p(1-p)/n/m = p(1-p)/N$, just as before.
The second result above indicates the power of the test should not vary appreciably with how the draws are apportioned (that is, with how $m$ and $n$ are varied subject to $m n = N$), apart perhaps from a fairly small effect from the adjustment in the sample variance (unless you were so foolish as to use extremely few sets of draws within each group).
Limited simulations to compare $p = 0.70$ to $p = 0.74$ (with 10,000 iterations apiece) involving $m = 900, n = 1$ (essentially logistic regression); $m = n = 30$; and $m = 2, n = 450$ (maximizing the sample variance adjustment) bear this out: the power (at $\alpha = 0.05$, one-sided) in the first two cases is 0.59 whereas in the third, where the adjustment factor makes a material change (there are now just two degrees of freedom instead of 1798 or 58), it drops to 0.36. Another test comparing $p = 0.50$ to $p = 0.52$ gives powers of 0.22, 0.21, and 0.15, respectively: again, we observe only a slight drop from no grouping into draws (=logistic regression) to grouping into 30 groups and a substantial drop down to just two groups.
The morals of this analysis are:
You don't lose much when you partition your $N$ data values into a large number $m$ of relatively small groups of "draws".
You can lose appreciable power using small numbers of groups ($m$ is small, $n$--the amount of data per group--is large).
You're best off not grouping your $N$ data values into "draws" at all. Just analyze them as-is (using any reasonable test, including logistic regression and t-testing).
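The variance claims underlying this analysis are easy to verify by simulation. The Python sketch below is my own illustration (standard library only; the sizes $m=n=30$, $p=0.7$ and the number of replications are chosen arbitrarily): it checks that the sample variance of the set means averages to $p(1-p)/n$, and that the grand mean has standard error $\sqrt{p(1-p)/N}$.

```python
import math
import random
import statistics

random.seed(1)
p, m, n = 0.7, 30, 30   # m sets of n Bernoulli(p) draws each
N = m * n
reps = 3000

grand_means = []
set_mean_vars = []
for _ in range(reps):
    # one simulated group: m set means, each the mean of n Bernoulli draws
    set_means = [sum(random.random() < p for _ in range(n)) / n
                 for _ in range(m)]
    grand_means.append(sum(set_means) / m)
    set_mean_vars.append(statistics.variance(set_means))

# Var(k_i / n) should be p(1-p)/n = 0.007
print(sum(set_mean_vars) / reps)
# SD of the grand mean should be sqrt(p(1-p)/N), about 0.0153
print(statistics.stdev(grand_means))
```

Both empirical values land on the theoretical ones, consistent with the claim that the standard error of the overall mean depends on $N$ but not on how the draws are apportioned into sets.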
|
How does the power of a logistic regression and a t-test compare?
|
If I have computed correctly, logistic regression asymptotically has the same power as the t-test. To see this, write down its log likelihood and compute the expectation of its Hessian at its global
|
How does the power of a logistic regression and a t-test compare?
If I have computed correctly, logistic regression asymptotically has the same power as the t-test. To see this, write down its log likelihood and compute the expectation of its Hessian at its global maximum (its negative estimates the variance-covariance matrix of the ML solution). Don't bother with the usual logistic parameterization: it's simpler just to parameterize it with the two probabilities in question. The details will depend on exactly how you test the significance of a logistic regression coefficient (there are several methods).
That these tests have similar powers should not be too surprising, because the chi-square theory for ML estimates is based on a normal approximation to the log likelihood, and the t-test is based on a normal approximation to the distributions of proportions. The crux of the matter is that both methods make the same estimates of the two proportions and both estimates have the same standard errors.
An actual analysis might be more convincing. Let's adopt some general terminology for the values in a given group (A or B):
$p$ is the probability of a 1.
$n$ is the size of each set of draws.
$m$ is the number of sets of draws.
$N = m n$ is the amount of data.
$k_{ij}$ (equal to $0$ or $1$) is the value of the $j^\text{th}$ result in the $i^\text{th}$ set of draws.
$k_i$ is the total number of ones in the $i^\text{th}$ set of draws.
$k$ is the total number of ones.
Logistic regression is essentially the ML estimator of $p$. Its logarithm is given by
$$\log(\mathbb{L}) = k \log(p) + (N-k) \log(1-p).$$
Its derivatives with respect to the parameter $p$ are
$$\frac{\partial \log(\mathbb{L})}{ \partial p} = \frac{k}{p} - \frac{N-k}{1-p} \text{ and}$$
$$-\frac{\partial^2 \log(\mathbb{L})}{\partial p^2} = \frac{k}{p^2} + \frac{N-k}{(1-p)^2}.$$
Setting the first to zero yields the ML estimate ${\hat{p} = k/N}$ and plugging that into the reciprocal of the second expression yields the variance $\hat{p}(1 - \hat{p})/N$, which is the square of the standard error.
The t statistic will be obtained from estimators based on the data grouped by sets of draws; namely, as the difference of the means (one from group A and the other from group B) divided by the standard error of that difference, which is obtained from the standard deviations of the means. Let's look at the mean and standard deviation for a given group, then. The mean equals $k/N$, which is identical to the ML estimator $\hat{p}$. The standard deviation in question is the standard deviation of the draw means; that is, it is the standard deviation of the set of $k_i/n$. Here is the crux of the matter, so let's explore some possibilities.
Suppose the data aren't grouped into draws at all: that is, $n = 1$ and $m = N$. The $k_{i}$ are the draw means. Their sample variance equals $N/(N-1)$ times $\hat{p}(1 - \hat{p})$. From this it follows that the standard error is identical to the ML standard error apart from a factor of $\sqrt{N/(N-1)}$, which is essentially $1$ when $N = 1800$. Therefore--apart from this tiny difference--any tests based on logistic regression will be the same as a t-test and we will achieve essentially the same power.
When the data are grouped, the (true) variance of the $k_i/n$ equals $p(1-p)/n$ because the statistics $k_i$ represent the sum of $n$ Bernoulli($p$) variables, each with variance $p(1-p)$. Therefore the expected standard error of the mean of $m$ of these values is the square root of $p(1-p)/n/m = p(1-p)/N$, just as before.
The second result above indicates the power of the test should not vary appreciably with how the draws are apportioned (that is, with how $m$ and $n$ are varied subject to $m n = N$), apart perhaps from a fairly small effect from the adjustment in the sample variance (unless you were so foolish as to use extremely few sets of draws within each group).
Limited simulations to compare $p = 0.70$ to $p = 0.74$ (with 10,000 iterations apiece) involving $m = 900, n = 1$ (essentially logistic regression); $m = n = 30$; and $m = 2, n = 450$ (maximizing the sample variance adjustment) bear this out: the power (at $\alpha = 0.05$, one-sided) in the first two cases is 0.59 whereas in the third, where the adjustment factor makes a material change (there are now just two degrees of freedom instead of 1798 or 58), it drops to 0.36. Another test comparing $p = 0.50$ to $p = 0.52$ gives powers of 0.22, 0.21, and 0.15, respectively: again, we observe only a slight drop from no grouping into draws (=logistic regression) to grouping into 30 groups and a substantial drop down to just two groups.
The morals of this analysis are:
You don't lose much when you partition your $N$ data values into a large number $m$ of relatively small groups of "draws".
You can lose appreciable power using small numbers of groups ($m$ is small, $n$--the amount of data per group--is large).
You're best off not grouping your $N$ data values into "draws" at all. Just analyze them as-is (using any reasonable test, including logistic regression and t-testing).
|
How does the power of a logistic regression and a t-test compare?
If I have computed correctly, logistic regression asymptotically has the same power as the t-test. To see this, write down its log likelihood and compute the expectation of its Hessian at its global
|
14,648
|
How does the power of a logistic regression and a t-test compare?
|
Here is code in R that illustrates the simulation of whuber's answer. Feedback on improving my R code is more than welcome.
N <- 900 # Total number of data points (N = m * n)
m <- 30; # Number of sets of draws
n <- 30; # Size of each set of draws
p_null <- 0.70; # Null hypothesis
p_alternate <- 0.74 # Alternate hypothesis
tot_iter <- 10000;
set.seed(1); # Initialize random seed
null_rejected <- 0; # Set counter to 0
for (iter in 1:tot_iter)
{
draws1 <- matrix(0,m,n);
draws2 <- matrix(0,m,n);
means1 <- matrix(0,m);
means2 <- matrix(0,m);
for (obs in 1:m)
{
draws1[obs,] <- rbinom(n,1,p_null);
draws2[obs,] <- rbinom(n,1,p_alternate);
means1[obs,] <- mean(draws1[obs,]);
means2[obs,] <- mean(draws2[obs,]);
}
if (t.test(means1,means2,alternative="l")$p.value <= 0.05)
{
null_rejected <- null_rejected + 1;
}
}
power <- null_rejected / tot_iter
|
How does the power of a logistic regression and a t-test compare?
|
Here is code in R that illustrates the simulation of whuber's answer. Feedback on improving my R code is more than welcome.
N <- 900 # Total number data points
m <- 30; # Size of
|
How does the power of a logistic regression and a t-test compare?
Here is code in R that illustrates the simulation of whuber's answer. Feedback on improving my R code is more than welcome.
N <- 900 # Total number of data points (N = m * n)
m <- 30; # Number of sets of draws
n <- 30; # Size of each set of draws
p_null <- 0.70; # Null hypothesis
p_alternate <- 0.74 # Alternate hypothesis
tot_iter <- 10000;
set.seed(1); # Initialize random seed
null_rejected <- 0; # Set counter to 0
for (iter in 1:tot_iter)
{
draws1 <- matrix(0,m,n);
draws2 <- matrix(0,m,n);
means1 <- matrix(0,m);
means2 <- matrix(0,m);
for (obs in 1:m)
{
draws1[obs,] <- rbinom(n,1,p_null);
draws2[obs,] <- rbinom(n,1,p_alternate);
means1[obs,] <- mean(draws1[obs,]);
means2[obs,] <- mean(draws2[obs,]);
}
if (t.test(means1,means2,alternative="l")$p.value <= 0.05)
{
null_rejected <- null_rejected + 1;
}
}
power <- null_rejected / tot_iter
|
How does the power of a logistic regression and a t-test compare?
Here is code in R that illustrates the simulation of whuber's answer. Feedback on improving my R code is more than welcome.
N <- 900 # Total number data points
m <- 30; # Size of
|
14,649
|
Good introduction into different kinds of entropy
|
Cover and Thomas's book Elements of Information Theory is a good source on entropy and its applications, although I don't know that it addresses exactly the issues you have in mind.
|
Good introduction into different kinds of entropy
|
Cover and Thomas's book Elements of Information Theory is a good source on entropy and its applications, although I don't know that it addresses exactly the issues you have in mind.
|
Good introduction into different kinds of entropy
Cover and Thomas's book Elements of Information Theory is a good source on entropy and its applications, although I don't know that it addresses exactly the issues you have in mind.
|
Good introduction into different kinds of entropy
Cover and Thomas's book Elements of Information Theory is a good source on entropy and its applications, although I don't know that it addresses exactly the issues you have in mind.
|
14,650
|
Good introduction into different kinds of entropy
|
These lecture notes on information theory by O. Johnson contain a good introduction to different kinds of entropy.
|
Good introduction into different kinds of entropy
|
These lecture notes on information theory by O. Johnson contain a good introduction to different kinds of entropy.
|
Good introduction into different kinds of entropy
These lecture notes on information theory by O. Johnson contain a good introduction to different kinds of entropy.
|
Good introduction into different kinds of entropy
These lecture notes on information theory by O. Johnson contain a good introduction to different kinds of entropy.
|
14,651
|
Good introduction into different kinds of entropy
|
If you're interested in the mathematical statistics around entropy, you may consult this book
http://www.renyi.hu/~csiszar/Publications/Information_Theory_and_Statistics:_A_Tutorial.pdf
it is freely available!
|
Good introduction into different kinds of entropy
|
If your interested in the mathematical statistic around entropy, you may consult this book
http://www.renyi.hu/~csiszar/Publications/Information_Theory_and_Statistics:_A_Tutorial.pdf
it is freely ava
|
Good introduction into different kinds of entropy
If you're interested in the mathematical statistics around entropy, you may consult this book
http://www.renyi.hu/~csiszar/Publications/Information_Theory_and_Statistics:_A_Tutorial.pdf
it is freely available!
|
Good introduction into different kinds of entropy
If your interested in the mathematical statistic around entropy, you may consult this book
http://www.renyi.hu/~csiszar/Publications/Information_Theory_and_Statistics:_A_Tutorial.pdf
it is freely ava
|
14,652
|
Good introduction into different kinds of entropy
|
Grünwald and Dawid's paper Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory discusses generalisations of the traditional notion of entropy. Given a loss, its associated entropy function is the mapping from a distribution to the minimal achievable expected loss for that distribution. The usual entropy function is the generalised entropy associated with the log loss. Other choices of losses yield different entropies, such as the Rényi entropy.
|
Good introduction into different kinds of entropy
|
Grünwald and Dawid's paper Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory discusses generalisations of the traditional notion of entropy. Given a loss, its associa
|
Good introduction into different kinds of entropy
Grünwald and Dawid's paper Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory discusses generalisations of the traditional notion of entropy. Given a loss, its associated entropy function is the mapping from a distribution to the minimal achievable expected loss for that distribution. The usual entropy function is the generalised entropy associated with the log loss. Other choices of losses yield different entropies, such as the Rényi entropy.
|
Good introduction into different kinds of entropy
Grünwald and Dawid's paper Game theory, maximum entropy, minimum discrepancy and robust Bayesian decision theory discusses generalisations of the traditional notion of entropy. Given a loss, its associa
|
14,653
|
Good introduction into different kinds of entropy
|
Entropy (as a concept) is only one thing -- the amount of information needed to describe some system; there are merely many generalizations of it. Sample entropy is only an entropy-like descriptor used in heart rate analysis.
|
Good introduction into different kinds of entropy
|
The entropy is only one (as a concept) -- the amount of information needed to describe some system; there are only many its generalizations. Sample entropy is only some entropy-like descriptor used in
|
Good introduction into different kinds of entropy
Entropy (as a concept) is only one thing -- the amount of information needed to describe some system; there are merely many generalizations of it. Sample entropy is only an entropy-like descriptor used in heart rate analysis.
|
Good introduction into different kinds of entropy
The entropy is only one (as a concept) -- the amount of information needed to describe some system; there are only many its generalizations. Sample entropy is only some entropy-like descriptor used in
|
14,654
|
Good introduction into different kinds of entropy
|
Jaynes shows how to derive Shannon's entropy from basic principles in his book.
One idea is that if you approximate $n!$ by $n^n$, entropy is the rewriting of the following quantity
$$\frac{1}{n}\log \frac{n!}{(n p_1)!\cdots (n p_d)!}$$
The quantity inside the log is the number of different length n observation sequences over $d$ outcomes that are matched by distribution $p$, so it's a kind of a measure of explanatory power of the distribution.
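By Stirling's approximation, this combinatorial quantity converges to the Shannon entropy $-\sum_i p_i \log p_i$ as $n \to \infty$. A quick Python check (my own illustration, with arbitrary $n$ and $p$), using `math.lgamma` for $\log k!$:

```python
import math

def multinomial_log_count(n, p):
    """(1/n) * log( n! / prod_i (n*p_i)! ), assuming each n*p_i is an integer."""
    counts = [round(n * pi) for pi in p]
    return (math.lgamma(n + 1)
            - sum(math.lgamma(c + 1) for c in counts)) / n

def shannon_entropy(p):
    # natural-log entropy: -sum_i p_i log p_i
    return -sum(pi * math.log(pi) for pi in p)

p = [0.2, 0.3, 0.5]
for n in (100, 10_000, 1_000_000):
    print(n, multinomial_log_count(n, p), shannon_entropy(p))
# the first computed column approaches the entropy as n grows
```

The discrepancy shrinks like $O(\log n / n)$, which is the size of the term dropped when approximating $n!$ by $n^n e^{-n}$.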
|
Good introduction into different kinds of entropy
|
Jaynes shows how to derive Shannon's entropy from basic principles in his book.
One idea is that if you approximate $n!$ by $n^n$, entropy is the rewriting of the following quantity
$$\frac{1}{n}\log
|
Good introduction into different kinds of entropy
Jaynes shows how to derive Shannon's entropy from basic principles in his book.
One idea is that if you approximate $n!$ by $n^n$, entropy is the rewriting of the following quantity
$$\frac{1}{n}\log \frac{n!}{(n p_1)!\cdots (n p_d)!}$$
The quantity inside the log is the number of different length n observation sequences over $d$ outcomes that are matched by distribution $p$, so it's a kind of a measure of explanatory power of the distribution.
|
Good introduction into different kinds of entropy
Jaynes shows how to derive Shannon's entropy from basic principles in his book.
One idea is that if you approximate $n!$ by $n^n$, entropy is the rewriting of the following quantity
$$\frac{1}{n}\log
|
14,655
|
Does it make sense to use Logistic regression with binary outcome and predictor?
|
In this case you can collapse your data to
$$
\begin{array}{c|cc} X \backslash Y & 0 & 1 \\ \hline 0 & S_{00} & S_{01} \\ 1 & S_{10} & S_{11} \end{array}
$$
where $S_{ij}$ is the number of instances for $x = i$ and $y =j$ with $i,j \in \{0,1\}$. Suppose there are $n$ observations overall.
If we fit the model $p_i = g^{-1}(x_i^T \beta) = g^{-1}(\beta_0 + \beta_1 1_{x_i = 1})$ (where $g$ is our link function) we'll find that $\hat \beta_0$ is the logit of the proportion of successes when $x_i = 0$ and $\hat \beta_0 + \hat \beta_1$ is the logit of the proportion of successes when $x_i = 1$. In other words,
$$
\hat \beta_0 = g\left(\frac{S_{01}}{S_{00} + S_{01}}\right)
$$
and
$$
\hat \beta_0 + \hat \beta_1 = g\left(\frac{S_{11}}{S_{10} + S_{11}}\right).
$$
Let's check this in R.
n <- 54
set.seed(123)
x <- rbinom(n, 1, .4)
y <- rbinom(n, 1, .6)
tbl <- table(x=x,y=y)
mod <- glm(y ~ x, family=binomial())
# all the same at 0.5757576
binomial()$linkinv( mod$coef[1])
mean(y[x == 0])
tbl[1,2] / sum(tbl[1,])
# all the same at 0.5714286
binomial()$linkinv( mod$coef[1] + mod$coef[2])
mean(y[x == 1])
tbl[2,2] / sum(tbl[2,])
So the logistic regression coefficients are exactly transformations of proportions coming from the table.
The upshot is that we certainly can analyze this dataset with a logistic regression if we have data coming from a series of Bernoulli random variables, but it turns out to be no different than directly analyzing the resulting contingency table.
I want to comment on why this works from a theoretical perspective. When we're fitting a logistic regression, we are using the model that $Y_i | x_i \stackrel{\perp}{\sim} \text{Bern}(p_i)$. We then decide to model the mean as a transformation of a linear predictor in $x_i$, or in symbols $p_i = g^{-1}\left( \beta_0 + \beta_1 x_i\right)$. In our case we only have two unique values of $x_i$, and therefore there are only two unique values of $p_i$, say $p_0$ and $p_1$. Because of our independence assumption we have
$$
\sum \limits_{i : x_i = 0} Y_i = S_{01} \sim \text{Bin} \left(n_0, p_0\right)
$$
and
$$
\sum \limits_{i : x_i = 1} Y_i = S_{11} \sim \text{Bin} \left(n_1, p_1\right).
$$
Note how we're using the fact that the $x_i$, and in turn $n_0$ and $n_1$, are nonrandom: if this was not the case then these would not necessarily be binomial.
This means that
$$
S_{01} / n_0 = \frac{S_{01}}{S_{00} + S_{01}} \to_p p_0 \hspace{2mm} \text{ and } \hspace{2mm} S_{11} / n_1 = \frac{S_{11}}{S_{10} + S_{11}} \to_p p_1.
$$
The key insight here: our Bernoulli RVs are $Y_i | x_i = j \sim \text{Bern}(p_j)$ while our binomial RVs are $S_{j1} \sim \text{Bin}(n_j, p_j)$, but both have the same probability of success. That's the reason why these contingency table proportions are estimating the same thing as an observation-level logistic regression. It's not just some coincidence with the table: it's a direct consequence of the distributional assumptions we have made.
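The equivalence can also be verified without `glm`, by fitting the two-parameter logistic model directly with Newton-Raphson. The Python sketch below is my own illustration with a made-up data set; `fit_logistic` is a hypothetical helper, not a library function. It confirms that the fitted group probabilities are exactly the contingency-table proportions.

```python
import math

def fit_logistic(x, y, iters=30):
    """Newton-Raphson for the model logit P(y=1) = b0 + b1*x, with binary x."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            pi = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - pi                # gradient of the log-likelihood
            g1 += (yi - pi) * xi
            w = pi * (1.0 - pi)          # weights for the observed information
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01      # solve the 2x2 Newton system
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

# made-up data: 6/10 successes when x = 0, 7/12 successes when x = 1
x = [0] * 10 + [1] * 12
y = [1] * 6 + [0] * 4 + [1] * 7 + [0] * 5

b0, b1 = fit_logistic(x, y)

def invlogit(z):
    return 1.0 / (1.0 + math.exp(-z))

print(invlogit(b0))       # 0.6,  i.e. S01 / (S00 + S01)
print(invlogit(b0 + b1))  # 7/12, i.e. S11 / (S10 + S11)
```

Since the model is saturated (one free parameter per group), the maximum-likelihood fit must reproduce the group proportions exactly, which is what the printed values show.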
|
Does it make sense to use Logistic regression with binary outcome and predictor?
|
In this case you can collapse your data to
$$
\begin{array}{c|cc} X \backslash Y & 0 & 1 \\ \hline 0 & S_{00} & S_{01} \\ 1 & S_{10} & S_{11} \end{array}
$$
where $S_{ij}$ is the number of instances f
|
Does it make sense to use Logistic regression with binary outcome and predictor?
In this case you can collapse your data to
$$
\begin{array}{c|cc} X \backslash Y & 0 & 1 \\ \hline 0 & S_{00} & S_{01} \\ 1 & S_{10} & S_{11} \end{array}
$$
where $S_{ij}$ is the number of instances for $x = i$ and $y =j$ with $i,j \in \{0,1\}$. Suppose there are $n$ observations overall.
If we fit the model $p_i = g^{-1}(x_i^T \beta) = g^{-1}(\beta_0 + \beta_1 1_{x_i = 1})$ (where $g$ is our link function) we'll find that $\hat \beta_0$ is the logit of the proportion of successes when $x_i = 0$ and $\hat \beta_0 + \hat \beta_1$ is the logit of the proportion of successes when $x_i = 1$. In other words,
$$
\hat \beta_0 = g\left(\frac{S_{01}}{S_{00} + S_{01}}\right)
$$
and
$$
\hat \beta_0 + \hat \beta_1 = g\left(\frac{S_{11}}{S_{10} + S_{11}}\right).
$$
Let's check this in R.
n <- 54
set.seed(123)
x <- rbinom(n, 1, .4)
y <- rbinom(n, 1, .6)
tbl <- table(x=x,y=y)
mod <- glm(y ~ x, family=binomial())
# all the same at 0.5757576
binomial()$linkinv( mod$coef[1])
mean(y[x == 0])
tbl[1,2] / sum(tbl[1,])
# all the same at 0.5714286
binomial()$linkinv( mod$coef[1] + mod$coef[2])
mean(y[x == 1])
tbl[2,2] / sum(tbl[2,])
So the logistic regression coefficients are exactly transformations of proportions coming from the table.
The upshot is that we certainly can analyze this dataset with a logistic regression if we have data coming from a series of Bernoulli random variables, but it turns out to be no different than directly analyzing the resulting contingency table.
I want to comment on why this works from a theoretical perspective. When we're fitting a logistic regression, we are using the model that $Y_i | x_i \stackrel{\perp}{\sim} \text{Bern}(p_i)$. We then decide to model the mean as a transformation of a linear predictor in $x_i$, or in symbols $p_i = g^{-1}\left( \beta_0 + \beta_1 x_i\right)$. In our case we only have two unique values of $x_i$, and therefore there are only two unique values of $p_i$, say $p_0$ and $p_1$. Because of our independence assumption we have
$$
\sum \limits_{i : x_i = 0} Y_i = S_{01} \sim \text{Bin} \left(n_0, p_0\right)
$$
and
$$
\sum \limits_{i : x_i = 1} Y_i = S_{11} \sim \text{Bin} \left(n_1, p_1\right).
$$
Note how we're using the fact that the $x_i$, and in turn $n_0$ and $n_1$, are nonrandom: if this was not the case then these would not necessarily be binomial.
This means that
$$
S_{01} / n_0 = \frac{S_{01}}{S_{00} + S_{01}} \to_p p_0 \hspace{2mm} \text{ and } \hspace{2mm} S_{11} / n_1 = \frac{S_{11}}{S_{10} + S_{11}} \to_p p_1.
$$
The key insight here: our Bernoulli RVs are $Y_i | x_i = j \sim \text{Bern}(p_j)$ while our binomial RVs are $S_{j1} \sim \text{Bin}(n_j, p_j)$, but both have the same probability of success. That's the reason why these contingency table proportions are estimating the same thing as an observation-level logistic regression. It's not just some coincidence with the table: it's a direct consequence of the distributional assumptions we have made.
|
14,656
|
Does it make sense to use Logistic regression with binary outcome and predictor?
|
When you have more than one predictor and all the predictors are binary variables, you could fit a model using Logic Regression [1] (note it's "Logic", not "Logistic"). It's useful when you believe interaction effects among your predictors are prominent. There's an implementation in R (the LogicReg package).
[1] Ruczinski, I., Kooperberg, C., & LeBlanc, M. (2003). Logic regression. Journal of Computational and graphical Statistics, 12(3), 475-511.
|
14,657
|
difference between convex and concave functions
|
A convex function has a single 'valley': every local minimum is a global minimum - a nice property, as an optimization algorithm won't get stuck in a local minimum that isn't the global one. Take $x^2 - 1$, for example.
A non-convex function is wavy - it has some 'valleys' (local minima) that aren't as deep as the overall deepest 'valley' (the global minimum). Optimization algorithms can get stuck in a local minimum, and it can be hard to tell when this happens. Take $x^4 + x^3 -2x^2 -2x$, for example.
A concave function is the negative of a convex function. Take $-x^2$, for example.
'Non-concave function' isn't a widely used term, and it's sufficient to say it's a function that isn't concave - though I've seen it used to refer to non-convex functions. I wouldn't really worry about this one.
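One rough way to see these claims numerically (my own Python sketch, not part of the original answer) is to test the defining chord condition of convexity, $f(ta+(1-t)b) \le tf(a)+(1-t)f(b)$, on a grid of points for each example function:

```python
# Check convexity numerically via the chord condition
#   f(t*a + (1-t)*b) <= t*f(a) + (1-t)*f(b)
# on a grid of points (a rough test, not a proof).
def is_convex_on_grid(f, lo=-2.0, hi=2.0, n=41):
    pts = [lo + i * (hi - lo) / (n - 1) for i in range(n)]
    for a in pts:
        for b in pts:
            for t in (0.25, 0.5, 0.75):
                mid = t * a + (1 - t) * b
                if f(mid) > t * f(a) + (1 - t) * f(b) + 1e-9:
                    return False
    return True

print(is_convex_on_grid(lambda x: x**2 - 1))                    # True: convex
print(is_convex_on_grid(lambda x: x**4 + x**3 - 2*x**2 - 2*x))  # False: non-convex
print(is_convex_on_grid(lambda x: -x**2))                       # False: concave
```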
|
14,658
|
difference between convex and concave functions
|
To define a convex function, you need a convex set $X$ as the domain and $\mathbb{R}$ as the codomain.
A function is convex if it satisfies the following property:
$$\forall x_1, x_2 \in X, \forall t \in [0,1], f(tx_1+(1-t)x_2) \le tf(x_1) +(1-t)f(x_2)$$
You should read through the Wikipedia page on convex functions.
In one dimension, you can visualize the definition as follows: pick any two points on the graph and connect them with a straight line; that line always lies on or above the graph.
Personally, I find the following criterion for checking convexity incredibly useful:
"A continuous, twice differentiable function of several variables is convex on a convex set if and only if its Hessian matrix of second partial derivatives is positive semidefinite on the interior of the convex set."
A function is non-convex if the function is not a convex function.
A function, $g$ is concave if $-g$ is a convex function.
A function is non-concave if the function is not a concave function.
Notice that a function can be both convex and concave at the same time, a straight line is both convex and concave.
A non-convex function need not be a concave function. For example, $f(x)=x(x-1)(x+1)$ defined on $[-1,1]$ is neither convex nor concave.
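In one dimension the Hessian criterion quoted above reduces to checking the sign of the second derivative. A rough numeric sketch (the function names are my own) using central differences:

```python
# Approximate f'' by central differences and classify a smooth 1-D
# function on [lo, hi] by the sign of its second derivative
# (a numeric sketch, not a proof).
def second_deriv(f, x, h=1e-4):
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def classify(f, lo, hi, n=201):
    step = (hi - lo) / (n - 1)
    vals = [second_deriv(f, lo + i * step) for i in range(1, n - 1)]
    convex = all(v >= -1e-6 for v in vals)
    concave = all(v <= 1e-6 for v in vals)
    if convex and concave:
        return "both"          # e.g. a straight line
    if convex:
        return "convex"
    if concave:
        return "concave"
    return "neither"

print(classify(lambda x: x * x, -1, 1))                  # convex
print(classify(lambda x: 2 * x + 1, -1, 1))              # both
print(classify(lambda x: x * (x - 1) * (x + 1), -1, 1))  # neither
```

Note how the straight line comes out both convex and concave, and the cubic example comes out neither, matching the two remarks above.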
|
14,659
|
Intuition behind the hazard rate
|
Let $X$ denote the time of death (or time of failure if you
prefer a less morbid description). Suppose that $X$ is a continuous random
variable whose density function $f(t)$ is nonzero only on
$(0,\infty)$. Now, notice that, loosely speaking, $f(t)$ must
decay away to $0$ as $t \to \infty$, because if $f(t)$ does not decay away
as stated, then
$\displaystyle \int_{-\infty}^\infty f(t)\,\mathrm dt = 1$ cannot hold.
Thus, your notion that $f(T)$ is the probability of death at time $T$
(actually, it is $f(T)\Delta t$ that is (approximately)
the probability of death in the short interval $(T, T+\Delta t]$
of length $\Delta t$) leads to implausible and
unbelievable conclusions such as
You are more likely to die within the next month when you are thirty
years old than when you are ninety-eight years old.
whenever $f(t)$ is such that $f(30) > f(98)$.
The reason why $f(T)$ (or $f(T)\Delta t$) is the "wrong" probability
to look at is that the value of $f(T)$ is of interest only
to those who are alive at age $T$ (and still mentally alert enough
to read stats.SE on a regular basis!) What ought to be looked at
is the probability of a $T$-year old dying within the next month,
that is,
\begin{align}P\{X \in (T, T+\Delta t] \mid X \geq T\}
&= \frac{P\{\left(X \in (T, T+\Delta t]\right) \cap \left(X\geq T\right)\}}{P\{X\geq T\}} &
\\ \scriptstyle{ \text{ definition of conditional probability}}\\
&= \frac{P\{X \in (T, T+\Delta t]\}}{P\{X\geq T\}}\\
&= \frac{f(T)\Delta t}{1-F(T)}
& \\ \scriptstyle{
\text{because }X\text{ is a continuous rv}}
\end{align}
Choosing $\Delta t$ to be a fortnight, a week, a day, an hour, a minute,
etc. we come to the conclusion that the (instantaneous) hazard
rate for a $T$-year old is
$$h(T) = \frac{f(T)}{1-F(T)}$$
in the sense that the approximate probability of death in the
next femtosecond
$(\Delta t)$ of a $T$-year old is $\displaystyle
\frac{f(T)\Delta t}{1-F(T)}.$
Note that in contrast to the density $f(t)$ integrating to $1$, the
integral
$\displaystyle \int_0^\infty h(t)\, \mathrm dt$ must diverge. This is because the CDF $F(t)$ is related to the hazard rate through
$$F(t) = 1 - \exp\left(-\int_0^t h(\tau)\, \mathrm d\tau\right)$$
and since $\lim_{t\to \infty}F(t) = 1$, it must be that
$$\lim_{t\to \infty} \int_0^t h(\tau)\, \mathrm d\tau = \infty;$$ that is, the integral of the hazard rate must diverge.
Typical hazard rates are increasing functions of time, but
constant hazard rates (exponential lifetimes) are possible. Both of these kinds of hazard rates obviously have divergent integrals. A less common scenario (for those who believe that things improve with age, like fine wine does) is a hazard rate that decreases with time but slowly enough that
the integral diverges.
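As a concrete sanity check of these relations (a Python sketch of my own; the rate parameter is made up), take an exponential lifetime: the hazard $f(t)/(1-F(t))$ comes out constant, and the CDF is recovered from the integrated hazard.

```python
import math

rate = 0.7                                       # made-up exponential rate
f = lambda t: rate * math.exp(-rate * t)         # density
F = lambda t: 1.0 - math.exp(-rate * t)          # CDF
h = lambda t: f(t) / (1.0 - F(t))                # hazard rate

print([round(h(t), 6) for t in (0.5, 1.0, 5.0)])  # [0.7, 0.7, 0.7]: constant hazard

# F(t) = 1 - exp(-integral of h from 0 to t); for a constant hazard that
# integral is simply rate * t
t = 2.0
print(abs((1.0 - math.exp(-rate * t)) - F(t)) < 1e-12)  # True
```

The cumulative hazard $\mathrm{rate}\cdot t$ grows without bound, illustrating the divergence argument above.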
|
14,660
|
Intuition behind the hazard rate
|
Imagine that you are interested in the incidence of (first) marriage for men. To look at the incidence of marriage at age 20, say, you would select a sample of people who are not married at that age and see if they get married within the next year (before they turn 21).
Then you could get a rough estimate for
$$ P(\mathrm{marry\,\, before\,\, 21}| \mathrm{not\,\, married \,\,at\,\, 20}) $$
as the proportion of individuals who got married from your sample of single 20 year olds, i.e.
$$
\frac{N(\mathrm{married \,\,before \,\,21\,\, and \,\, not\,\,married\,\, at \,\, 20})}{N(\mathrm{not\,\, married\,\, at\,\, 20})}
$$
So basically this is just using the definition of conditional probability,
$$
P(X|Y) = \frac{P(X,Y)}{P(Y)}.
$$
Now imagine we make the age unit smaller and smaller, down to days for example. That is, what is the incidence of marriage at an age of 7300 days? Then you would do the same, but survey all individuals who are 7300 days old and look at who gets married before the end of the day. If $T$ is a random variable for age at marriage, then we could write
$$
P(T < 7301 \mid T \geq 7300) = \frac{P(T \in [7300, 7301))}{P(T \geq 7300)}
$$
by the same logic as before.
The hazard would then be the instantaneous marriage rate at age $t$ for a not-yet-married individual.
We can write this as
$$
h(t) dt= P(T \in [t, t+dt) | T\geq t) = \frac{P(T \in[t, t+dt))}{P(T \geq t)}
$$
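That estimator is easy to sketch in Python (the simulated ages at marriage below are entirely made up): the discrete hazard at age $a$ is the number marrying during year $a$ divided by the number still unmarried at the start of year $a$.

```python
import random

random.seed(0)
# made-up ages at (first) marriage, uniform on 18..40 for illustration
ages = [random.randint(18, 40) for _ in range(100_000)]

def discrete_hazard(ages, a):
    at_risk = sum(1 for t in ages if t >= a)   # still unmarried at age a
    events = sum(1 for t in ages if t == a)    # married during year a
    return events / at_risk

# for this toy model the true hazards are 1/21 at age 20 and 1/2 at age 39
print(round(discrete_hazard(ages, 20), 3))  # close to 1/21
print(round(discrete_hazard(ages, 39), 3))  # close to 0.5: the risk set has shrunk
```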
|
14,661
|
Intuition behind the hazard rate
|
$f(x)$ is not the probability of death, but the probability density; the expected number of times you die within the next unit of time if the probability density remained constant during that unit of time.
Notice there is a problem: your probability of dying when you have already died before is rather problematic. So it makes more sense to compute the probability of dying conditional on having survived thus far. $1-F(t)$ is the probability of having survived until $t$, so dividing the probability density by that probability gets us the expected number of times we will die within the next unit of time conditional on not having died before. That is the hazard rate.
|
14,662
|
Intuition behind the hazard rate
|
"Death of a person is a tragedy, deaths of millions is statistics"
- Joseph Stalin
Hazard rate is just a renormalization of the probability space that takes pallid impersonal statistics on input and converts it into your own chances to live another day.
Suppose you're an average young man in the Wild West. You decide to pursue a questionable career of a train robber.
Assume that the chance of an average guy surviving his first train robbery is $\frac{1}{2}$. After that you get slightly more experienced and for your second train robbery your chance of survival is $\frac{2}{3}$. Now, you're even more experienced and for the third stint the chance of survival is $\frac{3}{4}$.
So the night before your third robbery you might ask yourself, whether it is worth the risk of dying with 25% chance tomorrow, or should you rather give up on train robberies altogether and move on to start a career in finance?
What you need in order to answer this question is your chance of dying tomorrow - which is the hazard rate.
Unfortunately, it's impossible to get data about your personal odds in real life. What you can do instead is take a look at the cumulative distribution function $F(t)$ of a train robber's life expectancy, or rather its counterpart $S(t) = 1-F(t)$, called the survival function.
The probability mass function (the discrete-case analogue of the continuous probability density function) of dying at your third robbery is $p(\xi = 3) = \frac{1}{2}\cdot\frac{2}{3}\cdot\frac{1}{4} = \frac{1}{12}$. We can more or less reformulate this as a continuous problem: $p(3 \leq \xi < 4) = F_\xi(4) - F_\xi(3) \approx f_\xi(3)\,dx$, where $\xi$ is a random variable indicating the number of robberies an average train robber survives, $dx=1$, $F_\xi(x)$ is the cumulative distribution function and $f_\xi(x)$ is the probability density function.
So you see, the probability density function/probability mass function answers the wrong question. It says that, out of all repeat train robbers, the fraction that dies at their third robbery is $\frac{1}{12}$. But the question you want to ask is: if I go for my third robbery tomorrow, what are my chances to survive it? - and the answer you want is $\frac{3}{4}$.
Now, let's start formalizing this. For a discrete-time variable, Hazard function is your chance to die during your next robbery number $t$:
$\underbrace{S(t) - S(t+1)}_\text{fraction of train robbers who die at t} = \underbrace{\lambda(t)}_\text{hazard function at t} \cdot \underbrace{S(t)}_\text{fraction of survivors by t} \cdot \delta t$
Thus, hazard function is defined as:
$\lambda(t) = \frac{-\delta S(t)}{\delta t \cdot S(t)}$
Or, in continuous-time case:
$\lambda(t) = \frac{-\partial S(t)}{\partial t \cdot S(t)} = \frac{f(t)}{S(t)}$
Cumulative hazard rate $\Lambda(t)$ is a funny thing. It essentially enumerates and sums up all the chances of death you escaped by the current moment. So, for instance, at your first train robbery you had a chance to die of $1/2$, at the second - $1/3$, at the third - $1/4$.
So by the time you start contemplating your fourth robbery, the "number of deaths" you deserved by now $\Lambda(t) = 1/2 + 1/3 + 1/4 = 1.083333$, so in a fair world you would have already been more than dead, exercising your luck so readily...
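The toy numbers above can be verified directly (a short Python sketch):

```python
# per-robbery death chances from the story
death = [1 / 2, 1 / 3, 1 / 4]

# survival function: S[t] = probability of surviving the first t robberies
S = [1.0]
for q in death:
    S.append(S[-1] * (1 - q))
print([round(s, 4) for s in S])       # [1.0, 0.5, 0.3333, 0.25]

# the discrete hazard (S(t) - S(t+1)) / S(t) recovers the death chances
hazard = [(S[t] - S[t + 1]) / S[t] for t in range(len(death))]
print([round(v, 4) for v in hazard])  # [0.5, 0.3333, 0.25]

# cumulative hazard accumulated before the fourth robbery
print(round(sum(death), 6))           # 1.083333
```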
|
14,663
|
What is "Targeted Maximum Likelihood Expectation"?
|
I agree that van der Laan has a tendency to invent new names for already existing ideas (e.g. the super-learner), but TMLE is not one of them as far as I know. It is actually a very clever idea, and I have seen nothing from the Machine Learning community which looks similar (although I might just be ignorant). The ideas come from the theory of semiparametric-efficient estimating equations, which is something that I think statisticians think much more about than ML people.
The idea essentially is this. Suppose $P_0$ is a true data generating mechanism, and interest is in a particular functional $\Psi(P_0)$. Associated with such a functional is often an estimating equation
$$
\sum_i \varphi(Y_i \mid \theta) = 0,
$$
where $\theta = \theta(P)$ is determined in some way by $P$, and contains enough information to identify $\Psi$. $\varphi$ will be such that $E_{P} \varphi(Y \mid \theta) = 0$. Solving this equation in $\theta$ may, for example, be much easier than estimating all of $P_0$. This estimating equation is efficient in the sense that any efficient estimator of $\Psi(P_0)$ is asymptotically equivalent to one which solves this equation. (Note: I'm being a little loose with the term "efficient", since I'm just describing the heuristic.) The theory behind such estimating equations is quite elegant, with this book being the canonical reference. This is where one might find standard definitions of "least favorable submodels"; these aren't terms van der Laan invented.
However, estimating $P_0$ using machine learning techniques will not, in general, satisfy this estimating equation. Estimating, say, the density of $P_0$ is an intrinsically difficult problem, perhaps much harder than estimating $\Psi(P_0)$, but machine learning techniques will typically go ahead and estimate $P_0$ with some $\hat P$, and then use a plug-in estimate $\Psi(\hat P)$. van der Laan would criticize this estimator as not being targeted and hence may be inefficient - perhaps, it may not even be $\sqrt n$-consistent at all! Nevertheless, van der Laan recognizes the power of machine learning, and knows that to estimate the effects he is interested in will ultimately require some density estimation. But he doesn't care about estimating $P_0$ itself; the density estimation is only done for the purpose of getting at $\Psi$.
The idea of TMLE is to start with the initial density estimate $\hat p$ and then consider a new model like this:
$$
\hat p_{1, \epsilon} = \frac{\hat p \exp(\epsilon \ \varphi(Y \mid \theta))}{\int \hat p \exp(\epsilon \ \varphi(y \mid \theta)) \ dy}
$$
where $\epsilon$ is called a fluctuation parameter. Now we do maximum likelihood on $\epsilon$. If it happens to be the case that $\epsilon = 0$ is the MLE then one can easily verify by taking the derivative that $\hat p$ solves the efficient estimating equation, and hence is efficient for estimating $\Psi$! On the other hand, if $\epsilon \ne 0$ at the MLE, we have a new density estimator $\hat p_1$ which fits the data better than $\hat p$ (after all, we did MLE, so it has a higher likelihood). Then, we iterate this procedure and look at
$$
\hat p_{2, \epsilon} \propto \hat p_{1, \hat \epsilon} \exp(\epsilon \ \varphi(Y \mid \theta)).
$$
and so on until we get something, in the limit, which satisfies the efficient estimating equation.
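Here is a toy numerical sketch of a single fluctuation step (entirely my own construction for illustration, with a made-up dataset and a deliberately biased initial density; this is not code from the TMLE literature). The target is $\Psi(P) = E[Y]$ with $\varphi(y \mid \theta) = y - \theta$; after tilting $\hat p$ by $\exp(\epsilon\,(y - \hat\theta))$ and choosing $\epsilon$ by maximum likelihood, the updated density's mean lands on the sample mean - the fit has been "targeted" at the functional.

```python
import math

data = [0.2, 0.9, 1.4, 2.1, 2.8, 0.5, 1.7]   # made-up observations
ybar = sum(data) / len(data)

# discretized support [0, 5] and an (intentionally off-center) initial density
grid = [i * 0.01 for i in range(501)]
p0 = [math.exp(-((y - 0.5) ** 2) / 0.8) for y in grid]
Z = sum(p0)
p0 = [v / Z for v in p0]
theta0 = sum(y * p for y, p in zip(grid, p0))   # biased initial estimate of E[Y]

def tilt(eps):
    """Fluctuated density p_eps proportional to p0 * exp(eps * (y - theta0))."""
    w = [p * math.exp(eps * (y - theta0)) for y, p in zip(grid, p0)]
    z = sum(w)
    return [v / z for v in w]

def loglik(eps):
    p = tilt(eps)
    # evaluate the tilted density at each observation via its grid cell
    return sum(math.log(p[min(500, round(y / 0.01))]) for y in data)

# crude grid search for the MLE of the fluctuation parameter epsilon
eps_hat = max((k * 0.01 for k in range(-300, 301)), key=loglik)

theta1 = sum(y * p for y, p in zip(grid, tilt(eps_hat)))
print(round(theta0, 3), round(theta1, 3), round(ybar, 3))
# theta1 is (essentially) the sample mean, unlike the badly biased theta0
```

Setting the derivative of the log-likelihood in $\epsilon$ to zero gives exactly $E_{\hat p_\epsilon}[Y] = \bar y$, which is the estimating-equation condition described above for this simple functional.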
|
What is "Targeted Maximum Likelihood Expectation"?
|
I agree that van der Laan has a tendency to invent new names for already existing ideas (e.g. the super-learner), but TMLE is not one of them as far as I know. It is actually a very clever idea, and I
|
What is "Targeted Maximum Likelihood Expectation"?
I agree that van der Laan has a tendency to invent new names for already existing ideas (e.g. the super-learner), but TMLE is not one of them as far as I know. It is actually a very clever idea, and I have seen nothing from the Machine Learning community which looks similar (although I might just be ignorant). The ideas come from the theory of semiparametric-efficient estimating equations, which is something that I think statisticians think much more about than ML people.
The idea essentially is this. Suppose $P_0$ is a true data generating mechanism, and interest is in a particular functional $\Psi(P_0)$. Associated with such a functional is often an estimating equation
$$
\sum_i \varphi(Y_i \mid \theta) = 0,
$$
where $\theta = \theta(P)$ is determined in some way by $P$, and contains enough information to identify $\Psi$. $\varphi$ will be such that $E_{P} \varphi(Y \mid \theta) = 0$. Solving this equation in $\theta$ may, for example, be much easier than estimating all of $P_0$. This estimating equation is efficient in the sense that any efficient estimator of $\Psi(P_0)$ is asymptotically equivalent to one which solves this equation. (Note: I'm being a little loose with the term "efficient", since I'm just describing the heuristic.) The theory behind such estimating equations is quite elegant, with this book being the canonical reference. This is where one might find standard definitions of "least favorable submodels"; these aren't terms van der Laan invented.
However, estimating $P_0$ using machine learning techniques will not, in general, satisfy this estimating equation. Estimating, say, the density of $P_0$ is an intrinsically difficult problem, perhaps much harder than estimating $\Psi(P_0)$, but machine learning techniques will typically go ahead and estimate $P_0$ with some $\hat P$, and then use a plug-in estimate $\Psi(\hat P)$. van der Laan would criticize this estimator as not being targeted and hence may be inefficient - perhaps, it may not even be $\sqrt n$-consistent at all! Nevertheless, van der Laan recognizes the power of machine learning, and knows that to estimate the effects he is interested in will ultimately require some density estimation. But he doesn't care about estimating $P_0$ itself; the density estimation is only done for the purpose of getting at $\Psi$.
The idea of TMLE is to start with the initial density estimate $\hat p$ and then consider a new model like this:
$$
\hat p_{1, \epsilon} = \frac{\hat p \exp(\epsilon \ \varphi(Y \mid \theta))}{\int \hat p \exp(\epsilon \ \varphi(y \mid \theta)) \ dy}
$$
where $\epsilon$ is called a fluctuation parameter. Now we do maximum likelihood on $\epsilon$. If it happens to be the case that $\epsilon = 0$ is the MLE then one can easily verify by taking the derivative that $\hat p$ solves the efficient estimating equation, and hence is efficient for estimating $\Psi$! On the other hand, if $\epsilon \ne 0$ at the MLE, we have a new density estimator $\hat p_1$ which fits the data better than $\hat p$ (after all, we did MLE, so it has a higher likelihood). Then, we iterate this procedure and look at
$$
\hat p_{2, \epsilon} \propto \hat p_{1, \hat \epsilon} \exp(\epsilon \ \varphi(Y \mid \theta)).
$$
and so on until we get something, in the limit, which satisfies the efficient estimating equation.
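To spell out the derivative check mentioned above (a sketch in my notation, not van der Laan's): the log of the fluctuated density is $\log \hat p_{1,\epsilon}(Y) = \log \hat p(Y) + \epsilon\,\varphi(Y \mid \theta) - \log \int \hat p \exp(\epsilon\,\varphi(y \mid \theta))\,dy$, and differentiating the average log-likelihood at $\epsilon = 0$ gives
$$
\frac{\partial}{\partial \epsilon}\bigg|_{\epsilon=0} \frac{1}{n}\sum_{i=1}^n \log \hat p_{1,\epsilon}(Y_i)
= \frac{1}{n}\sum_{i=1}^n \varphi(Y_i \mid \theta) - E_{\hat p}\,\varphi(Y \mid \theta),
$$
since the normalizing constant contributes $-E_{\hat p}\,\varphi$. By the defining property $E_P\,\varphi(Y \mid \theta(P)) = 0$ applied to $\hat P$, the second term vanishes, so $\epsilon = 0$ being the MLE means exactly that $\frac{1}{n}\sum_i \varphi(Y_i \mid \theta) = 0$, i.e. the empirical version of the efficient estimating equation holds.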
|
What is "Targeted Maximum Likelihood Expectation"?
I agree that van der Laan has a tendency to invent new names for already existing ideas (e.g. the super-learner), but TMLE is not one of them as far as I know. It is actually a very clever idea, and I
|
14,664
|
What's the difference between multiple R and R squared?
|
Capital $R^2$ (as opposed to $r^2$) should generally be the multiple $R^2$ in a multiple regression model. In bivariate linear regression, there is no multiple $R$, and $R^2=r^2$. So one difference is applicability: "multiple $R$" implies multiple regressors, whereas "$R^2$" doesn't necessarily.
Another simple difference is interpretation. In multiple regression, the multiple $R$ is the coefficient of multiple correlation, whereas its square is the coefficient of determination. $R$ can be interpreted somewhat like a bivariate correlation coefficient, the main difference being that the multiple correlation is between the dependent variable and a linear combination of the predictors, not just any one of them, and not just the average of those bivariate correlations. $R^2$ can be interpreted as the percentage of variance in the dependent variable that can be explained by the predictors; as above, this is also true if there is only one predictor.
|
What's the difference between multiple R and R squared?
|
Capital $R^2$ (as opposed to $r^2$) should generally be the multiple $R^2$ in a multiple regression model. In bivariate linear regression, there is no multiple $R$, and $R^2=r^2$. So one difference is
|
What's the difference between multiple R and R squared?
Capital $R^2$ (as opposed to $r^2$) should generally be the multiple $R^2$ in a multiple regression model. In bivariate linear regression, there is no multiple $R$, and $R^2=r^2$. So one difference is applicability: "multiple $R$" implies multiple regressors, whereas "$R^2$" doesn't necessarily.
Another simple difference is interpretation. In multiple regression, the multiple $R$ is the coefficient of multiple correlation, whereas its square is the coefficient of determination. $R$ can be interpreted somewhat like a bivariate correlation coefficient, the main difference being that the multiple correlation is between the dependent variable and a linear combination of the predictors, not just any one of them, and not just the average of those bivariate correlations. $R^2$ can be interpreted as the percentage of variance in the dependent variable that can be explained by the predictors; as above, this is also true if there is only one predictor.
|
What's the difference between multiple R and R squared?
Capital $R^2$ (as opposed to $r^2$) should generally be the multiple $R^2$ in a multiple regression model. In bivariate linear regression, there is no multiple $R$, and $R^2=r^2$. So one difference is
|
14,665
|
What's the difference between multiple R and R squared?
|
Multiple R actually can be viewed as the correlation between response and the fitted values. As such it is always positive. Multiple R-squared is its squared version.
Let me illustrate using a small example:
set.seed(32)
n <- 100
x1 <- runif(n)
x2 <- runif(n)
y <- 4 + x1 - 2*x2 + rnorm(n)
fit <- lm(y ~ x1 + x2)
summary(fit) # Multiple R-squared: 0.2347
(R <- cor(y, fitted(fit))) # 0.4845068
R^2 # 0.2347469
There is no need to make a big fuss around "multiple" or not. This formula always applies, even in an ANOVA setting. In the case where there is only one covariable $X$, $R$ multiplied by the sign of the slope is the same as the correlation between $X$ and the response.
|
What's the difference between multiple R and R squared?
|
Multiple R actually can be viewed as the correlation between response and the fitted values. As such it is always positive. Multiple R-squared is its squared version.
Let me illustrate using a small
|
What's the difference between multiple R and R squared?
Multiple R actually can be viewed as the correlation between response and the fitted values. As such it is always positive. Multiple R-squared is its squared version.
Let me illustrate using a small example:
set.seed(32)
n <- 100
x1 <- runif(n)
x2 <- runif(n)
y <- 4 + x1 - 2*x2 + rnorm(n)
fit <- lm(y ~ x1 + x2)
summary(fit) # Multiple R-squared: 0.2347
(R <- cor(y, fitted(fit))) # 0.4845068
R^2 # 0.2347469
There is no need to make a big fuss around "multiple" or not. This formula always applies, even in an ANOVA setting. In the case where there is only one covariable $X$, $R$ multiplied by the sign of the slope is the same as the correlation between $X$ and the response.
|
What's the difference between multiple R and R squared?
Multiple R actually can be viewed as the correlation between response and the fitted values. As such it is always positive. Multiple R-squared is its squared version.
Let me illustrate using a small
|
14,666
|
What's the difference between multiple R and R squared?
|
I simply explain to my students that:
the multiple R can be thought of as the absolute value of the correlation coefficient (or the correlation coefficient without the negative sign)!
The R-squared is simply the square of the multiple R. It can be thought of as the percentage of variation explained by the independent variable(s)
It is easy to grasp the concept and the difference this way.
|
What's the difference between multiple R and R squared?
|
I simply explain to my students that:
the multiple R can be thought of as the absolute value of the correlation coefficient (or the correlation coefficient without the negative sign)!
The R-squared is
|
What's the difference between multiple R and R squared?
I simply explain to my students that:
the multiple R can be thought of as the absolute value of the correlation coefficient (or the correlation coefficient without the negative sign)!
The R-squared is simply the square of the multiple R. It can be thought of as the percentage of variation explained by the independent variable(s)
It is easy to grasp the concept and the difference this way.
|
What's the difference between multiple R and R squared?
I simply explain to my students that:
the multiple R can be thought of as the absolute value of the correlation coefficient (or the correlation coefficient without the negative sign)!
The R-squared is
|
14,667
|
How do I find values not given in (interpolate in) statistical tables?
|
This answer is in two main parts: firstly, using linear interpolation, and secondly, using transformations for more accurate interpolation. The approaches discussed here are suitable for hand calculation when you have limited tables available, but if you're implementing a computer routine to produce p-values, there are much better approaches (if tedious when done by hand) that should be used instead.
If you knew that the 10% (one tailed) critical value for a z-test was 1.28 and the 20% critical value was 0.84, a rough guess at the 15% critical value would be half-way between - (1.28+0.84)/2 = 1.06 (the actual value is 1.0364), and the 12.5% value could be guessed at halfway between that and the 10% value (1.28+1.06)/2 = 1.17 (actual value 1.15+). This is exactly what linear interpolation does - but instead of 'half-way between', it looks at any fraction of the way between two values.
Univariate linear interpolation
Let's look at the case of simple linear interpolation.
So we have some function (say of $x$) that we think is approximately linear near the value we're trying to approximate, and we have a value of the function either side of the value we want, for example, like so:
\begin{array}{ c c }
x & y\\
8 & 9.3\\
16 & y_{16}\\
20 & 15.6\\
\end{array}
The two $x$ values whose $y$'s we know are 12 (20-8) apart. See how the $x$-value (the one that we want an approximate $y$-value for) divides that difference of 12 up in the ratio 8:4 (16-8 and 20-16)? That is, it's 2/3 of the distance from the first $x$-value to the last. If the relationship were linear, the corresponding range of y-values would be in the same ratio.
So $\frac{y_{16} - 9.3}{15.6 - 9.3}$ should be about the same as $\frac{16-8}{20-8}$.
That is $\frac{y_{16} - 9.3}{15.6 - 9.3} \approx \frac{16-8}{20-8}$
rearranging:
$y_{16} \approx 9.3 + (15.6 - 9.3) \frac{16-8}{20-8} = 13.5$
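The same calculation can be written as a tiny function (a generic sketch in Python rather than the R used elsewhere on this page; the numbers are the ones from the table above):

```python
def lerp(x0, y0, x1, y1, x):
    """Linearly interpolate the y-value at x from the bracketing points (x0, y0), (x1, y1)."""
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

y16 = lerp(8, 9.3, 20, 15.6, 16)
print(round(y16, 1))  # 13.5
```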
An example with statistical tables: if we have a t-table with the following critical values for 12 df:
\begin{array}{ c c }
(2\text{-tail})& \\
Ξ± & t\\
0.01 & 3.05\\
0.02 & 2.68\\
0.05 & 2.18\\
0.10 & 1.78
\end{array}
We want the critical value of t with 12 df and a two-tail alpha of 0.025. That is, we interpolate between
the 0.02 and the 0.05 row of that table:
\begin{array}{ c c }
Ξ± & t\\
0.02 & 2.68\\
0.025 & \text{?}\\
0.05 & 2.18\\
\end{array}
The value at "$\text{?}$" is the $t_{0.025}$ value that we wish to use linear interpolation to approximate. (By $t_{0.025}$ I actually mean the $1-0.025/2$ point of the inverse cdf of a $t_{12}$ distribution.)
As before, $0.025$ divides the interval from $0.02$ to $0.05$ in the ratio $(0.025-0.02)$ to $(0.05-0.025)$ (i.e. $1:5$) and the unknown $t$-value should divide the $t$ range $2.68$ to $2.18$ in the same ratio; equivalently, $0.025$ occurs $(0.025-0.02)/(0.05-0.02) = 1/6$th of the way along the $x$-range, so the unknown $t$-value should occur $1/6$th of the way along the $t$-range.
That is $\frac{t_{0.025}-2.68}{2.18-2.68} \approx \frac{0.025-0.02}{0.05-0.02}$ or equivalently
$t_{0.025} \approx 2.68 + (2.18-2.68) \frac{0.025-0.02}{0.05-0.02} = 2.68 - 0.5 \frac{1}{6} \approx 2.60 $
The actual answer is $2.56$ ... which is not particularly close because the function we're approximating isn't very close to linear in that range (nearer $\alpha = 0.5$ it is).
Better approximations via transformation
We can replace linear interpolation by other functional forms; in effect, we transform to a scale where linear interpolation works better. In this case, in the tail, many tabulated critical values are more nearly linear in the $\log$ of the significance level. After we take $\log$s, we simply apply linear interpolation as before. Let's try that on the above example:
\begin{array}{ c c }
Ξ± & \log(Ξ±)& t\\
0.02 & -3.912 & 2.68\\
0.025& -3.689 & t_{0.025}\\
0.05 & -2.996 & 2.18\\
\end{array}
Now
\begin{eqnarray}
\frac{t_{0.025}-2.68}{2.18-2.68} &\approx& \frac{\log(0.025)-\log(0.02)}{\log(0.05)-\log(0.02)} \\
&=& \frac{-3.689 - -3.912}{-2.996 - -3.912}\\
\end{eqnarray}
or equivalently
\begin{eqnarray}
t_{0.025} &\approx& 2.68 + (2.18-2.68) \frac{-3.689 - -3.912}{-2.996 - -3.912}\\
&=& 2.68 - 0.5 \cdot 0.243 \approx 2.56
\end{eqnarray}
Which is correct to the quoted number of figures. This is because - when we transform the x-scale logarithmically - the relationship is almost linear:
Indeed, visually the curve (grey) lies neatly on top of the straight line (blue).
In some cases, the logit of the significance level ($\text{logit}(\alpha)=\log(\frac{Ξ±}{1-Ξ±})=\log(\frac{1}{1-Ξ±}-1)$) may work well over a wider range but is usually not necessary (we usually only care about accurate critical values when $\alpha$ is small enough that $\log$ works quite well).
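The log-scale calculation above is easy to script (a Python sketch; it is just ordinary linear interpolation applied after taking logs of the significance levels):

```python
import math

def lerp(x0, y0, x1, y1, x):
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# interpolate the t critical value between alpha = 0.02 and 0.05 on the log(alpha) scale
t_0025 = lerp(math.log(0.02), 2.68, math.log(0.05), 2.18, math.log(0.025))
print(round(t_0025, 2))  # 2.56
```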
Interpolation across different degrees of freedom
$t$, chi-square and $F$ tables also have degrees of freedom, where not every df ($\nu$-) value is tabulated. The critical values mostly$^\dagger$ aren't accurately represented by linear interpolation in the df. Indeed, often it's more nearly the case that the tabulated values are linear in the reciprocal of df, $1/\nu$.
(In old tables you'd often see a recommendation to work with $120/\nu$ - the constant on the numerator makes no difference, but was more convenient in pre-calculator days because 120 has a lot of factors, so $120/\nu$ is often an integer, making the calculation a bit simpler.)
Here's how inverse interpolation performs on 5% critical values of $F_{4,\nu}$ between $\nu = 60$ and $120$. That is, only the endpoints participate in the interpolation in $1/\nu$. For example, to compute the critical value for $\nu=80$, we take (and note that here $F$ represents the inverse of the cdf):
$$F_{4,80,.95} \approx F_{4,60,.95} + \frac{1/80 - 1/60}{1/120 - 1/60} \cdot (F_{4,120,.95}-F_{4,60,.95})$$
(Compare with diagram here)
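In code, interpolation in $1/\nu$ is again just linear interpolation after a transformation. A Python sketch (the tabulated 5% critical values below, $F_{4,60,.95} \approx 2.53$ and $F_{4,120,.95} \approx 2.45$, are standard table entries; the exact value at $\nu = 80$ is about 2.486):

```python
def lerp(x0, y0, x1, y1, x):
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# interpolate the 5% critical value of F(4, nu) in 1/nu between nu = 60 and nu = 120
f_80 = lerp(1 / 60, 2.53, 1 / 120, 2.45, 1 / 80)
print(round(f_80, 2))  # 2.49
```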
$^\dagger$ Mostly but not always. Here's an example where linear interpolation in df is better, and an explanation of how to tell from the table that linear interpolation is going to be accurate.
Here's a piece of a chi-squared table
Probability less than the critical value
df 0.90 0.95 0.975 0.99 0.999
______ __________________________________________________
40 51.805 55.758 59.342 63.691 73.402
50 63.167 67.505 71.420 76.154 86.661
60 74.397 79.082 83.298 88.379 99.607
70 85.527 90.531 95.023 100.425 112.317
Imagine we wish to find the 5% critical value (95th percentiles) for 57 degrees of freedom.
Looking closely, we see that the 5% critical values in the table progress almost linearly here:
(the green line joins the values for 50 and 60 df; you can see it touches the dots for 40 and 70)
So linear interpolation will do very well. But of course we don't have time to draw the graph; how to decide when to use linear interpolation and when to try something more complicated?
As well as the values either side of the one we seek, take the next nearest value (70 in this case). If the middle tabulated value (the one for df=60) lies close to the straight line joining the end values (50 and 70), then linear interpolation will be suitable. In this case the values are equispaced so it's especially easy: is $(x_{50,0.95}+x_{70,0.95})/2$ close to $x_{60,0.95}$?
We find that $(67.505+90.531)/2 = 79.018$, which when compared to the actual value for 60 df, 79.082, we can see is accurate to almost three full figures, which is usually pretty good for interpolation, so in this case, you'd stick with linear interpolation; with the finer step for the value we need we would now expect to have effectively 3 figure accuracy.
So we get: $\frac{x-67.505}{79.082-67.505} \approx \frac{57-50}{60-50}$ or
$x\approx 67.505+(79.082-67.505)\cdot \frac{57-50}{60-50}\approx 75.61$.
The actual value is 75.62375, so we indeed got 3 figures of accuracy and were only out by 1 in the fourth figure.
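Both the linearity check and the interpolation itself are one-liners (a Python sketch using the chi-squared table values above):

```python
def lerp(x0, y0, x1, y1, x):
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# linearity check: is the df=60 entry close to the midpoint of the df=50 and df=70 entries?
midpoint = (67.505 + 90.531) / 2  # 79.018, vs. the tabulated 79.082
# close enough, so interpolate the df=57 critical value linearly between df=50 and df=60
x57 = lerp(50, 67.505, 60, 79.082, 57)
print(round(x57, 2))  # 75.61
```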
More accurate interpolation still may be had by using methods of finite differences (in particular, via divided differences), but this is probably overkill for most hypothesis testing problems.
If your degrees of freedom go past the ends of your table, this question discusses that problem.
|
How do I find values not given in (interpolate in) statistical tables?
|
This answer is in two main parts: firstly, using linear interpolation, and secondly, using transformations for more accurate interpolation. The approaches discussed here are suitable for hand calculat
|
How do I find values not given in (interpolate in) statistical tables?
This answer is in two main parts: firstly, using linear interpolation, and secondly, using transformations for more accurate interpolation. The approaches discussed here are suitable for hand calculation when you have limited tables available, but if you're implementing a computer routine to produce p-values, there are much better approaches (if tedious when done by hand) that should be used instead.
If you knew that the 10% (one tailed) critical value for a z-test was 1.28 and the 20% critical value was 0.84, a rough guess at the 15% critical value would be half-way between - (1.28+0.84)/2 = 1.06 (the actual value is 1.0364), and the 12.5% value could be guessed at halfway between that and the 10% value (1.28+1.06)/2 = 1.17 (actual value 1.15+). This is exactly what linear interpolation does - but instead of 'half-way between', it looks at any fraction of the way between two values.
Univariate linear interpolation
Let's look at the case of simple linear interpolation.
So we have some function (say of $x$) that we think is approximately linear near the value we're trying to approximate, and we have a value of the function either side of the value we want, for example, like so:
\begin{array}{ c c }
x & y\\
8 & 9.3\\
16 & y_{16}\\
20 & 15.6\\
\end{array}
The two $x$ values whose $y$'s we know are 12 (20-8) apart. See how the $x$-value (the one that we want an approximate $y$-value for) divides that difference of 12 up in the ratio 8:4 (16-8 and 20-16)? That is, it's 2/3 of the distance from the first $x$-value to the last. If the relationship were linear, the corresponding range of y-values would be in the same ratio.
So $\frac{y_{16} - 9.3}{15.6 - 9.3}$ should be about the same as $\frac{16-8}{20-8}$.
That is $\frac{y_{16} - 9.3}{15.6 - 9.3} \approx \frac{16-8}{20-8}$
rearranging:
$y_{16} \approx 9.3 + (15.6 - 9.3) \frac{16-8}{20-8} = 13.5$
An example with statistical tables: if we have a t-table with the following critical values for 12 df:
\begin{array}{ c c }
(2\text{-tail})& \\
Ξ± & t\\
0.01 & 3.05\\
0.02 & 2.68\\
0.05 & 2.18\\
0.10 & 1.78
\end{array}
We want the critical value of t with 12 df and a two-tail alpha of 0.025. That is, we interpolate between
the 0.02 and the 0.05 row of that table:
\begin{array}{ c c }
Ξ± & t\\
0.02 & 2.68\\
0.025 & \text{?}\\
0.05 & 2.18\\
\end{array}
The value at "$\text{?}$" is the $t_{0.025}$ value that we wish to use linear interpolation to approximate. (By $t_{0.025}$ I actually mean the $1-0.025/2$ point of the inverse cdf of a $t_{12}$ distribution.)
As before, $0.025$ divides the interval from $0.02$ to $0.05$ in the ratio $(0.025-0.02)$ to $(0.05-0.025)$ (i.e. $1:5$) and the unknown $t$-value should divide the $t$ range $2.68$ to $2.18$ in the same ratio; equivalently, $0.025$ occurs $(0.025-0.02)/(0.05-0.02) = 1/6$th of the way along the $x$-range, so the unknown $t$-value should occur $1/6$th of the way along the $t$-range.
That is $\frac{t_{0.025}-2.68}{2.18-2.68} \approx \frac{0.025-0.02}{0.05-0.02}$ or equivalently
$t_{0.025} \approx 2.68 + (2.18-2.68) \frac{0.025-0.02}{0.05-0.02} = 2.68 - 0.5 \frac{1}{6} \approx 2.60 $
The actual answer is $2.56$ ... which is not particularly close because the function we're approximating isn't very close to linear in that range (nearer $\alpha = 0.5$ it is).
Better approximations via transformation
We can replace linear interpolation by other functional forms; in effect, we transform to a scale where linear interpolation works better. In this case, in the tail, many tabulated critical values are more nearly linear in the $\log$ of the significance level. After we take $\log$s, we simply apply linear interpolation as before. Let's try that on the above example:
\begin{array}{ c c }
Ξ± & \log(Ξ±)& t\\
0.02 & -3.912 & 2.68\\
0.025& -3.689 & t_{0.025}\\
0.05 & -2.996 & 2.18\\
\end{array}
Now
\begin{eqnarray}
\frac{t_{0.025}-2.68}{2.18-2.68} &\approx& \frac{\log(0.025)-\log(0.02)}{\log(0.05)-\log(0.02)} \\
&=& \frac{-3.689 - -3.912}{-2.996 - -3.912}\\
\end{eqnarray}
or equivalently
\begin{eqnarray}
t_{0.025} &\approx& 2.68 + (2.18-2.68) \frac{-3.689 - -3.912}{-2.996 - -3.912}\\
&=& 2.68 - 0.5 \cdot 0.243 \approx 2.56
\end{eqnarray}
Which is correct to the quoted number of figures. This is because - when we transform the x-scale logarithmically - the relationship is almost linear:
Indeed, visually the curve (grey) lies neatly on top of the straight line (blue).
In some cases, the logit of the significance level ($\text{logit}(\alpha)=\log(\frac{Ξ±}{1-Ξ±})=\log(\frac{1}{1-Ξ±}-1)$) may work well over a wider range but is usually not necessary (we usually only care about accurate critical values when $\alpha$ is small enough that $\log$ works quite well).
Interpolation across different degrees of freedom
$t$, chi-square and $F$ tables also have degrees of freedom, where not every df ($\nu$-) value is tabulated. The critical values mostly$^\dagger$ aren't accurately represented by linear interpolation in the df. Indeed, often it's more nearly the case that the tabulated values are linear in the reciprocal of df, $1/\nu$.
(In old tables you'd often see a recommendation to work with $120/\nu$ - the constant on the numerator makes no difference, but was more convenient in pre-calculator days because 120 has a lot of factors, so $120/\nu$ is often an integer, making the calculation a bit simpler.)
Here's how inverse interpolation performs on 5% critical values of $F_{4,\nu}$ between $\nu = 60$ and $120$. That is, only the endpoints participate in the interpolation in $1/\nu$. For example, to compute the critical value for $\nu=80$, we take (and note that here $F$ represents the inverse of the cdf):
$$F_{4,80,.95} \approx F_{4,60,.95} + \frac{1/80 - 1/60}{1/120 - 1/60} \cdot (F_{4,120,.95}-F_{4,60,.95})$$
(Compare with diagram here)
$^\dagger$ Mostly but not always. Here's an example where linear interpolation in df is better, and an explanation of how to tell from the table that linear interpolation is going to be accurate.
Here's a piece of a chi-squared table
Probability less than the critical value
df 0.90 0.95 0.975 0.99 0.999
______ __________________________________________________
40 51.805 55.758 59.342 63.691 73.402
50 63.167 67.505 71.420 76.154 86.661
60 74.397 79.082 83.298 88.379 99.607
70 85.527 90.531 95.023 100.425 112.317
Imagine we wish to find the 5% critical value (95th percentiles) for 57 degrees of freedom.
Looking closely, we see that the 5% critical values in the table progress almost linearly here:
(the green line joins the values for 50 and 60 df; you can see it touches the dots for 40 and 70)
So linear interpolation will do very well. But of course we don't have time to draw the graph; how to decide when to use linear interpolation and when to try something more complicated?
As well as the values either side of the one we seek, take the next nearest value (70 in this case). If the middle tabulated value (the one for df=60) lies close to the straight line joining the end values (50 and 70), then linear interpolation will be suitable. In this case the values are equispaced so it's especially easy: is $(x_{50,0.95}+x_{70,0.95})/2$ close to $x_{60,0.95}$?
We find that $(67.505+90.531)/2 = 79.018$, which when compared to the actual value for 60 df, 79.082, we can see is accurate to almost three full figures, which is usually pretty good for interpolation, so in this case, you'd stick with linear interpolation; with the finer step for the value we need we would now expect to have effectively 3 figure accuracy.
So we get: $\frac{x-67.505}{79.082-67.505} \approx \frac{57-50}{60-50}$ or
$x\approx 67.505+(79.082-67.505)\cdot \frac{57-50}{60-50}\approx 75.61$.
The actual value is 75.62375, so we indeed got 3 figures of accuracy and were only out by 1 in the fourth figure.
More accurate interpolation still may be had by using methods of finite differences (in particular, via divided differences), but this is probably overkill for most hypothesis testing problems.
If your degrees of freedom go past the ends of your table, this question discusses that problem.
|
How do I find values not given in (interpolate in) statistical tables?
This answer is in two main parts: firstly, using linear interpolation, and secondly, using transformations for more accurate interpolation. The approaches discussed here are suitable for hand calculat
|
14,668
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
|
With just three farms, there is no point in trying to pretend that you can fit a Gaussian distribution to three points. Analyze this simply as lm(response~as.factor(farm) + treat+other stuff), and don't bother with lmer; you won't be able to do much better than ANOVA, anyway.
Generally, hitting exactly zero is not that unusual. The variance estimate is a nonlinear function of the data, essentially the difference between the overall variance and the within-site variance. If the true variance is zero, this nonlinear statistic has a distribution that puts non-zero mass to the left of zero (this will also be true if the true value is a small positive quantity but the sampling variability is large enough to overshoot below zero). Due to the way the estimator is programmed, however (Cholesky factorization), it can only take non-negative values. So whenever the unconstrained best estimate would have been at zero (as in your balanced-by-design situation) or below it, the log-likelihood will be maximized at zero, with a negative gradient to the right of it. Self & Liang (1987) is the standard biostat reference for the problem; I prefer Andrews (1999), which is even more general.
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
|
With just three farms, there is no point in trying to pretend that you can fit a Gaussian distribution to three points. Analyze this simply as lm(response~as.factor(farm) + treat+other stuff), and don
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
With just three farms, there is no point in trying to pretend that you can fit a Gaussian distribution to three points. Analyze this simply as lm(response~as.factor(farm) + treat+other stuff), and don't bother with lmer; you won't be able to do much better than ANOVA, anyway.
Generally, hitting exactly zero is not that unusual. The variance estimate is a nonlinear function of the data, essentially the difference between the overall variance and the within-site variance. If the true variance is zero, this nonlinear statistic has a distribution that puts non-zero mass to the left of zero (this will also be true if the true value is a small positive quantity but the sampling variability is large enough to overshoot below zero). Due to the way the estimator is programmed, however (Cholesky factorization), it can only take non-negative values. So whenever the unconstrained best estimate would have been at zero (as in your balanced-by-design situation) or below it, the log-likelihood will be maximized at zero, with a negative gradient to the right of it. Self & Liang (1987) is the standard biostat reference for the problem; I prefer Andrews (1999), which is even more general.
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
With just three farms, there is no point in trying to pretend that you can fit a Gaussian distribution to three points. Analyze this simply as lm(response~as.factor(farm) + treat+other stuff), and don
|
14,669
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
|
Looks like the absence of a Farm effect was probably built into the experimental design; each farm has exactly half treated and half not.
> xtabs(~treat+farm, territory)
farm
treat 1 2 3
0 14 12 10
1 14 12 10
It can also be instructive to fit farm as a fixed effect and see what happens; we see that the Farm effect is very, very small compared with the built effect, so I wouldn't be too surprised that the fitted variance in the mixed model is zero.
> m2<-glm(treat~built+factor(farm),family=binomial,data=territory)
> library(car)
> Anova(m2)
Analysis of Deviance Table (Type II tests)
Response: treat
LR Chisq Df Pr(>Chisq)
built 0.50685 1 0.4765
factor(farm) 0.02008 2 0.9900
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
|
Looks like the absence of a Farm effect was probably built into the experimental design; each farm has exactly half treated and half not.
> xtabs(~treat+farm, territory)
farm
treat 1 2 3
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
Looks like the absence of a Farm effect was probably built into the experimental design; each farm has exactly half treated and half not.
> xtabs(~treat+farm, territory)
farm
treat 1 2 3
0 14 12 10
1 14 12 10
It can also be instructive to fit farm as a fixed effect and see what happens; we see that the Farm effect is very, very small compared with the built effect, so I wouldn't be too surprised that the fitted variance in the mixed model is zero.
> m2<-glm(treat~built+factor(farm),family=binomial,data=territory)
> library(car)
> Anova(m2)
Analysis of Deviance Table (Type II tests)
Response: treat
LR Chisq Df Pr(>Chisq)
built 0.50685 1 0.4765
factor(farm) 0.02008 2 0.9900
|
Random effect equal to 0 in generalized linear mixed model [duplicate]
Looks like the absence of a Farm effect was probably built into the experimental design; each farm has exactly half treated and half not.
> xtabs(~treat+farm, territory)
farm
treat 1 2 3
|
14,670
|
Why is regression about variance?
|
why would we care about "how much of the variance in the data is explained by the given regression model?"
To answer this it is useful to think about exactly what it means for a certain percentage of the variance to be explained by the regression model.
Let $Y_{1}, ..., Y_{n}$ be the outcome variable. The usual sample variance of the dependent variable in a regression model is $$ \frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \overline{Y})^2 $$ Now let $\widehat{Y}_i \equiv \widehat{f}({\boldsymbol X}_i)$ be the prediction of $Y_i$ based on a least squares linear regression model with predictor values ${\boldsymbol X}_i$. As proven here, this variance above can be partitioned as:
$$ \frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \overline{Y})^2 =
\underbrace{\frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \widehat{Y}_i)^2}_{{\rm residual \ variance}} + \underbrace{\frac{1}{n-1} \sum_{i=1}^{n} (\widehat{Y}_i - \overline{Y})^2}_{{\rm explained \ variance}}
$$
In least squares regression, the average of the predicted values is $\overline{Y}$, therefore the total variance is equal to the averaged squared difference between the
observed and the predicted values (residual variance) plus the sample variance of the predictions themselves (explained variance), which are only a function of the ${\boldsymbol X}$s. Therefore the "explained" variance may be thought of as the variance in $Y_i$ that is attributable to variation in ${\boldsymbol X}_i$. The proportion of the variance in $Y_i$ that is "explained" (i.e. the proportion of variation in $Y_i$ that is attributable to variation in ${\boldsymbol X}_i$) is sometimes referred to as $R^2$.
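The decomposition is easy to verify numerically. A minimal sketch (Python with NumPy rather than R; the simulated data and names are my own, and the fit is ordinary least squares with an intercept):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.uniform(size=n), rng.uniform(size=n)])
y = X @ np.array([4.0, 1.0, -2.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # least squares fit
yhat = X @ beta

total = np.sum((y - y.mean()) ** 2)
residual = np.sum((y - yhat) ** 2)
explained = np.sum((yhat - y.mean()) ** 2)

print(np.allclose(total, residual + explained))  # True: total = residual + explained
print(np.isclose(explained / total, np.corrcoef(y, yhat)[0, 1] ** 2))  # True: R^2 = cor(y, yhat)^2
```

The second check also ties this back to the "multiple R" discussion above: $R^2$ is the squared correlation between the observations and the fitted values.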
Now we use two extreme examples to make it clear why this variance decomposition is important:
(1) The predictors have nothing to do with the responses. In that case, the best unbiased predictor (in the least squares sense) for $Y_i$ is $\widehat{Y}_i = \overline{Y}$. Therefore the total variance in $Y_i$ is just equal to the residual variance and is unrelated to the variance in the predictors ${\boldsymbol X}_i$.
(2) The predictors are perfectly linearly related to the responses. In that case, the predictions are exactly correct and $\widehat{Y}_i = Y_i$. Therefore there is no residual variance and all of the variance in the outcome is the variance in the predictions themselves, which are only a function of the predictors. Therefore all of the variance in the outcome is simply due to variance in the predictors ${\boldsymbol X}_i$.
Situations with real data will often lie between the two extremes, as will the proportion of variance that can be attributed to these two sources. The more "explained variance" there is - i.e. the more of the variation in $Y_i$ that is due to variation in ${\boldsymbol X}_i$ - the better the predictions $\widehat{Y}_{i}$ are performing (i.e. the smaller the "residual variance" is), which is another way of saying that the least squares model fits well.
|
Why is regression about variance?
|
why would we care about "how much of the variance in the data is explained by the given regression model?"
To answer this it is useful to think about exactly what it means for a certain percentage of
|
Why is regression about variance?
why would we care about "how much of the variance in the data is explained by the given regression model?"
To answer this it is useful to think about exactly what it means for a certain percentage of the variance to be explained by the regression model.
Let $Y_{1}, ..., Y_{n}$ be the outcome variable. The usual sample variance of the dependent variable in a regression model is $$ \frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \overline{Y})^2 $$ Now let $\widehat{Y}_i \equiv \widehat{f}({\boldsymbol X}_i)$ be the prediction of $Y_i$ based on a least squares linear regression model with predictor values ${\boldsymbol X}_i$. As proven here, this variance above can be partitioned as:
$$ \frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \overline{Y})^2 =
\underbrace{\frac{1}{n-1} \sum_{i=1}^{n} (Y_i - \widehat{Y}_i)^2}_{{\rm residual \ variance}} + \underbrace{\frac{1}{n-1} \sum_{i=1}^{n} (\widehat{Y}_i - \overline{Y})^2}_{{\rm explained \ variance}}
$$
In least squares regression, the average of the predicted values is $\overline{Y}$; therefore the total variance is equal to the average squared difference between the
observed and the predicted values (residual variance) plus the sample variance of the predictions themselves (explained variance), which are only a function of the ${\boldsymbol X}$s. Therefore the "explained" variance may be thought of as the variance in $Y_i$ that is attributable to variation in ${\boldsymbol X}_i$. The proportion of the variance in $Y_i$ that is "explained" (i.e. the proportion of variation in $Y_i$ that is attributable to variation in ${\boldsymbol X}_i$) is sometimes referred to as $R^2$.
Now we use two extreme examples to make it clear why this variance decomposition is important:
(1) The predictors have nothing to do with the responses. In that case, the best unbiased predictor (in the least squares sense) for $Y_i$ is $\widehat{Y}_i = \overline{Y}$. Therefore the total variance in $Y_i$ is just equal to the residual variance and is unrelated to the variance in the predictors ${\boldsymbol X}_i$.
(2) The predictors are perfectly linearly related to the responses. In that case, the predictions are exactly correct and $\widehat{Y}_i = Y_i$. Therefore there is no residual variance, and all of the variance in the outcome is the variance in the predictions themselves, which are only a function of the predictors. Therefore all of the variance in the outcome is simply due to variance in the predictors ${\boldsymbol X}_i$.
Situations with real data will often lie between the two extremes, as will the proportion of variance that can be attributed to these two sources. The more "explained variance" there is - i.e. the more of the variation in $Y_i$ that is due to variation in ${\boldsymbol X}_i$ - the better the predictions $\widehat{Y}_{i}$ are performing (i.e. the smaller the "residual variance" is), which is another way of saying that the least squares model fits well.
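To make the decomposition concrete, here is an illustrative numerical check in Python (a sketch with invented simulated data and variable names, not part of any particular package):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(size=n)   # outcome = linear signal + noise

# Least squares fit of y on x (np.polyfit returns slope, then intercept)
b, a = np.polyfit(x, y, 1)
y_hat = a + b * x

total = np.sum((y - y.mean()) ** 2)          # total sum of squares
residual = np.sum((y - y_hat) ** 2)          # residual sum of squares
explained = np.sum((y_hat - y.mean()) ** 2)  # explained sum of squares

# The identity above: total = residual + explained (up to float rounding)
print(np.isclose(total, residual + explained))   # True
r_squared = explained / total
```

Here `r_squared = explained / total` is exactly the "proportion of variance explained" ($R^2$) discussed above.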
|
Why is regression about variance?
why would we care about "how much of the variance in the data is explained by the given regression model?"
To answer this it is useful to think about exactly what it means for a certain percentage of
|
14,671
|
Why is regression about variance?
|
I can't run with the big dogs of statistics who have answered before me, and perhaps my thinking is naive, but I look at it this way...
Imagine you're in a car and you're going down the road and turning the wheel left and right and pressing the gas pedal and the brakes frantically. Yet the car is moving along smoothly, unaffected by your actions. You'd immediately suspect that you weren't in a real car, and perhaps if we looked closely we'd determine that you're on a ride in Disney World. (If you were in a real car, you would be in mortal danger, but let's not go there.)
On the other hand, if you were driving down the road in a car and turning the wheel just slightly left or right immediately resulted in the car moving, tapping the brakes resulted in a strong deceleration, while pressing the gas pedal threw you back into the seat, you might suspect that you were in a high-performance sports car.
In general, you probably experience something between those two extremes. The degree to which your inputs (steering, brakes, gas) directly affect the car's motion gives you a clue as to the quality of the car. That is, the more of your car's variance in motion that is related to your actions the better the car, and the more that the car moves independently of your control the worse the car is.
In a similar manner, you're talking about creating a model for some data (let's call this data $y$), based on some other sets of data (let's call them $x_1, x_2, ..., x_i$). If $y$ doesn't vary, it's like a car that's not moving and there's really no point in discussing if the car (model) works well or not, so we'll assume $y$ does vary.
Just like the car, a good-quality model will have a good relationship between the results $y$ varying and the inputs $x_i$ varying. Unlike a car, the $x_i$ do not necessarily cause $y$ to change, but if the model is going to be useful the $x_i$ need to change in a close relationship to $y$. In other words, the $x_i$ explain much of the variance in $y$.
P.S. I wasn't able to come up with a Winnie The Pooh analogy, but I tried.
P.P.S. [EDIT:] Note that I'm addressing this particular question. Don't be confused into thinking that if you account for 100% of the variance your model will perform wonderfully. You also need to think about over-fitting, where your model is so flexible that it fits the training data very closely -- including its random quirks and oddities. To use the analogy, you want a car that has good steering and brakes, but you want it to work well out on the road, not just in the test track you're using.
|
Why is regression about variance?
|
I can't run with the big dogs of statistics who have answered before me, and perhaps my thinking is naive, but I look at it this way...
Imagine you're in a car and you're going down the road and turni
|
Why is regression about variance?
I can't run with the big dogs of statistics who have answered before me, and perhaps my thinking is naive, but I look at it this way...
Imagine you're in a car and you're going down the road and turning the wheel left and right and pressing the gas pedal and the brakes frantically. Yet the car is moving along smoothly, unaffected by your actions. You'd immediately suspect that you weren't in a real car, and perhaps if we looked closely we'd determine that you're on a ride in Disney World. (If you were in a real car, you would be in mortal danger, but let's not go there.)
On the other hand, if you were driving down the road in a car and turning the wheel just slightly left or right immediately resulted in the car moving, tapping the brakes resulted in a strong deceleration, while pressing the gas pedal threw you back into the seat, you might suspect that you were in a high-performance sports car.
In general, you probably experience something between those two extremes. The degree to which your inputs (steering, brakes, gas) directly affect the car's motion gives you a clue as to the quality of the car. That is, the more of your car's variance in motion that is related to your actions the better the car, and the more that the car moves independently of your control the worse the car is.
In a similar manner, you're talking about creating a model for some data (let's call this data $y$), based on some other sets of data (let's call them $x_1, x_2, ..., x_i$). If $y$ doesn't vary, it's like a car that's not moving and there's really no point in discussing if the car (model) works well or not, so we'll assume $y$ does vary.
Just like the car, a good-quality model will have a good relationship between the results $y$ varying and the inputs $x_i$ varying. Unlike a car, the $x_i$ do not necessarily cause $y$ to change, but if the model is going to be useful the $x_i$ need to change in a close relationship to $y$. In other words, the $x_i$ explain much of the variance in $y$.
P.S. I wasn't able to come up with a Winnie The Pooh analogy, but I tried.
P.P.S. [EDIT:] Note that I'm addressing this particular question. Don't be confused into thinking that if you account for 100% of the variance your model will perform wonderfully. You also need to think about over-fitting, where your model is so flexible that it fits the training data very closely -- including its random quirks and oddities. To use the analogy, you want a car that has good steering and brakes, but you want it to work well out on the road, not just in the test track you're using.
|
Why is regression about variance?
I can't run with the big dogs of statistics who have answered before me, and perhaps my thinking is naive, but I look at it this way...
Imagine you're in a car and you're going down the road and turni
|
14,672
|
How to get started with rating and ranking based on pairwise competition data?
|
Regarding "how to do it in R", the prefmod package http://cran.r-project.org/web/packages/prefmod/index.html is meant for preference analysis with paired comparisons, rankings and ratings. It fits Bradley-Terry models and pattern models with object and subject covariates. See my answer here How to fit BradleyβTerryβLuce model in R, without complicated formula? for a short intro, or this paper http://www.jstatsoft.org/v48/i10 for more info.
|
How to get started with rating and ranking based on pairwise competition data?
|
Regarding "how to do it in R", the prefmod package http://cran.r-project.org/web/packages/prefmod/index.html is meant for preference analysis with paired comparisons, rankings and ratings. It fits B
|
How to get started with rating and ranking based on pairwise competition data?
Regarding "how to do it in R", the prefmod package http://cran.r-project.org/web/packages/prefmod/index.html is meant for preference analysis with paired comparisons, rankings and ratings. It fits Bradley-Terry models and pattern models with object and subject covariates. See my answer here How to fit BradleyβTerryβLuce model in R, without complicated formula? for a short intro, or this paper http://www.jstatsoft.org/v48/i10 for more info.
|
How to get started with rating and ranking based on pairwise competition data?
Regarding "how to do it in R", the prefmod package http://cran.r-project.org/web/packages/prefmod/index.html is meant for preference analysis with paired comparisons, rankings and ratings. It fits B
|
14,673
|
How to get started with rating and ranking based on pairwise competition data?
|
I just finished a pretty good book on that subject. It discusses Elo as well as many other ranking methods like Massey's, Colley's, and Keener's. Most of the methods in the book use sports matches as the example and use both win/loss and margin of victory as inputs.
|
How to get started with rating and ranking based on pairwise competition data?
|
I just finished a pretty good book on that subject. It discusses ELO as well as many other ranking methods like Massey, Colley, and Keener's. Most of the methods in the book use sports matches as the
|
How to get started with rating and ranking based on pairwise competition data?
I just finished a pretty good book on that subject. It discusses Elo as well as many other ranking methods like Massey's, Colley's, and Keener's. Most of the methods in the book use sports matches as the example and use both win/loss and margin of victory as inputs.
|
How to get started with rating and ranking based on pairwise competition data?
I just finished a pretty good book on that subject. It discusses ELO as well as many other ranking methods like Massey, Colley, and Keener's. Most of the methods in the book use sports matches as the
|
14,674
|
How to get started with rating and ranking based on pairwise competition data?
|
Since asking this question, I've had lots of success with the PlayerRatings package for R. It makes creating Elo/Glicko ratings and the author's own method of performance ratings very easy.
|
How to get started with rating and ranking based on pairwise competition data?
|
Since asking this question, I've had lots of success with the PlayerRatings package for R. It makes creating Elo/Glicko ratings and the author's own method of performance ratings very easy.
|
How to get started with rating and ranking based on pairwise competition data?
Since asking this question, I've had lots of success with the PlayerRatings package for R. It makes creating Elo/Glicko ratings and the author's own method of performance ratings very easy.
|
How to get started with rating and ranking based on pairwise competition data?
Since asking this question, I've had lots of success with the PlayerRatings package for R. It makes creating Elo/Glicko ratings and the author's own method of performance ratings very easy.
|
14,675
|
How to get started with rating and ranking based on pairwise competition data?
|
This book does not work with margins but provides the theory of ranking teams based on paired comparisons: The Method of Paired Comparisons by Herbert A. David http://www.amazon.com/Method-Paired-Comparisons-Statistical-Monograph/dp/0852640137/ref=sr_1_1?s=books&ie=UTF8&qid=1340424897&sr=1-1&keywords=The+method+of+paired+comparisons
Regarding victory margins, I believe some of the computer methods used for the BCS combine paired-comparison methods such as the Bradley-Terry model with victory margins.
|
How to get started with rating and ranking based on pairwise competition data?
|
This book does not work with margins but provides the theory of rank teams based on paired comparisons. The Method of Paired Comparison by Herbert A. David http://www.amazon.com/Method-Paired-Comparis
|
How to get started with rating and ranking based on pairwise competition data?
This book does not work with margins but provides the theory of ranking teams based on paired comparisons: The Method of Paired Comparisons by Herbert A. David http://www.amazon.com/Method-Paired-Comparisons-Statistical-Monograph/dp/0852640137/ref=sr_1_1?s=books&ie=UTF8&qid=1340424897&sr=1-1&keywords=The+method+of+paired+comparisons
Regarding victory margins, I believe some of the computer methods used for the BCS combine paired-comparison methods such as the Bradley-Terry model with victory margins.
|
How to get started with rating and ranking based on pairwise competition data?
This book does not work with margins but provides the theory of rank teams based on paired comparisons. The Method of Paired Comparison by Herbert A. David http://www.amazon.com/Method-Paired-Comparis
|
14,676
|
Clustering of mixed type data with R
|
This may come in late but try klaR (http://cran.r-project.org/web/packages/klaR/index.html)
install.packages("klaR")
It uses the non-hierarchical k-modes algorithm, which is based on simple matching as a distance function, so the distance $\delta$ on a variable $m$ between two data points $x$ and $y$ is given by
$$
\delta(x_m,y_m) = \begin{cases}
1 & x_m \neq y_m,\\
0 & \text{otherwise}
\end{cases}
$$
There is a flaw in the package: if two data points have the same distance to a cluster center, the first one in your data is chosen rather than a random point, but you can easily modify that bit of the code.
To accommodate for mixed-variable clustering, you will need to go into the code and modify the distance function to identify numeric and non-numeric modes and variables.
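For reference, the matching distance itself is a one-liner. An illustrative sketch in plain Python (not the klaR implementation):

```python
def simple_matching_distance(x, y):
    """Number of variables on which two categorical records disagree."""
    assert len(x) == len(y)
    return sum(xm != ym for xm, ym in zip(x, y))

# Two records over three categorical variables: they differ only on the
# second variable, so the distance is 1.
d = simple_matching_distance(("red", "small", "yes"),
                             ("red", "large", "yes"))
print(d)  # 1
```

A k-modes step then assigns each record to the cluster whose mode minimizes this distance.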
|
Clustering of mixed type data with R
|
This may come in late but try klaR (http://cran.r-project.org/web/packages/klaR/index.html)
install.packages("klaR")
It uses the non-hierarchical k-modes algorithm, which is based on simple matching
|
Clustering of mixed type data with R
This may come in late but try klaR (http://cran.r-project.org/web/packages/klaR/index.html)
install.packages("klaR")
It uses the non-hierarchical k-modes algorithm, which is based on simple matching as a distance function, so the distance $\delta$ on a variable $m$ between two data points $x$ and $y$ is given by
$$
\delta(x_m,y_m) = \begin{cases}
1 & x_m \neq y_m,\\
0 & \text{otherwise}
\end{cases}
$$
There is a flaw in the package: if two data points have the same distance to a cluster center, the first one in your data is chosen rather than a random point, but you can easily modify that bit of the code.
To accommodate for mixed-variable clustering, you will need to go into the code and modify the distance function to identify numeric and non-numeric modes and variables.
|
Clustering of mixed type data with R
This may come in late but try klaR (http://cran.r-project.org/web/packages/klaR/index.html)
install.packages("klaR")
It uses the non-hierarchical k-modes algorithm, which is based on simple matching
|
14,677
|
Clustering of mixed type data with R
|
Another appealing way of handling variables of mixed types is to use the proximity/similarity matrix from Random Forests: http://cogns.northwestern.edu/cbmg/LiawAndWiener2002.pdf. This facilitates a unified way of treating all variables equally (nevertheless, be aware of the variable selection bias issue). On the other hand, there is really no universal gold standard for defining distance between variables of mixed types. It all depends on the application context.
|
Clustering of mixed type data with R
|
Another appealing way of handling variables of mixed types is to use the proximity/similarity matrix from Random Forests: http://cogns.northwestern.edu/cbmg/LiawAndWiener2002.pdf. This facilitates a unif
|
Clustering of mixed type data with R
Another appealing way of handling variables of mixed types is to use the proximity/similarity matrix from Random Forests: http://cogns.northwestern.edu/cbmg/LiawAndWiener2002.pdf. This facilitates a unified way of treating all variables equally (nevertheless, be aware of the variable selection bias issue). On the other hand, there is really no universal gold standard for defining distance between variables of mixed types. It all depends on the application context.
|
Clustering of mixed type data with R
Another appealing way of handling variables of mixed types is to use the proximity/similarity matrix from Random Forests: http://cogns.northwestern.edu/cbmg/LiawAndWiener2002.pdf. This facilitates a unif
|
14,678
|
Clustering of mixed type data with R
|
You might use multiple correspondence analysis to create continuous dimensions from the categorical variables and then use them with the numerical variables in a second step.
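As a hedged sketch of this two-step idea (plain Python with NumPy; a crude SVD-based stand-in for a real MCA routine, with invented toy data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy mixed data: one categorical and one numeric variable (invented).
cats = rng.choice(["a", "b", "c"], size=100)
nums = rng.normal(size=100)

# Step 1: continuous scores for the categorical variable.  Real MCA is
# correspondence analysis of the indicator matrix; here we simply center
# the one-hot indicators and keep the leading singular directions.
levels = sorted(set(cats))
indicator = np.array([[c == lvl for lvl in levels] for c in cats], float)
indicator -= indicator.mean(axis=0)
u, sv, vt = np.linalg.svd(indicator, full_matrices=False)
scores = u[:, :2] * sv[:2]               # two continuous dimensions

# Step 2: combine with the standardized numeric variable; the result can
# be fed to any numeric clustering routine (k-means, etc.).
features = np.column_stack([scores, (nums - nums.mean()) / nums.std()])
```

In practice you would use a dedicated MCA implementation for step 1; this sketch only shows how the continuous scores and the numeric variables are combined before clustering.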
|
Clustering of mixed type data with R
|
You might use multiple correspondence analysis to create continuous dimensions from the categorical variables and then use them with the numerical variables in a second step.
|
Clustering of mixed type data with R
You might use multiple correspondence analysis to create continuous dimensions from the categorical variables and then use them with the numerical variables in a second step.
|
Clustering of mixed type data with R
You might use multiple correspondence analysis to create continuous dimensions from the categorical variables and then use them with the numerical variables in a second step.
|
14,679
|
Clustering of mixed type data with R
|
Well, you certainly can. By making the categorical variables artificially numeric. Or using a distance-matrix based clustering (fpc can probably do that). The question you should first try to answer is: does it actually make sense?
|
Clustering of mixed type data with R
|
Well, you certainly can. By making the categorical variables artificially numeric. Or using a distance-matrix based clustering (fpc can probably do that). The question you should first try to answer i
|
Clustering of mixed type data with R
Well, you certainly can. By making the categorical variables artificially numeric. Or using a distance-matrix based clustering (fpc can probably do that). The question you should first try to answer is: does it actually make sense?
|
Clustering of mixed type data with R
Well, you certainly can. By making the categorical variables artificially numeric. Or using a distance-matrix based clustering (fpc can probably do that). The question you should first try to answer i
|
14,680
|
Clustering of mixed type data with R
|
You could use the universal similarity coefficient of Gower (see Sneath & Sokal 1973, pp 135-136), which for two OTUs $j$ and $k$ is
$$S_G = \frac{\sum_{i=1}^n{w_{i,j,k} s_{i,j,k}}}{\sum_{i=1}^n{w_{i,j,k}}}$$
for all characters $i$.
The weight $w_{i,j,k}$ is either 1 or 0, depending on whether the comparison is valid or not (missing data, absence of a binary character in both OTUs). More complicated weighting schemes have been published.
$s_{i,j,k}$ is calculated for
binary variables: 1 for concordance, 0 for discordance (equivalent to Jaccard's coefficient if $w_{i,j,k}$ is set to 0 for concordant absences)
multistate characters (nominal or ordinal): 1 for equality, 0 otherwise (equivalent to the simple matching coefficient)
cardinal character: $s_{i,j,k} = 1 - \frac{|X_{i,j} - X_{i,k}|}{R_i}$ with $R_i$ the range of character $i$ (either in the population or in the sample).
The nice thing about $S_G$ is that it can not only handle all types of data, but is also robust towards missing data. It also results in positive semi-definite similarity matrices, i.e., OTUs are represented by points in Euclidean space (at least if not too many data are missing).
The distance between OTUs can be represented by $\sqrt{1-S_G}$.
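To make the definition concrete, here is an illustrative sketch in plain Python (not a packaged implementation such as `daisy` in R's cluster package; the type labels and function name are invented for the example):

```python
def gower_similarity(x, y, types, ranges):
    """Gower similarity S_G between two records (OTUs).

    types[i] is 'binary', 'multistate', or 'cardinal'; ranges[i] is the
    range R_i of a cardinal character (ignored otherwise).  A None value
    marks missing data, i.e. an invalid comparison with weight w = 0.
    """
    num = den = 0.0
    for xi, yi, t, r in zip(x, y, types, ranges):
        if xi is None or yi is None:       # w = 0: skip invalid comparisons
            continue
        if t == 'cardinal':
            s = 1.0 - abs(xi - yi) / r     # s = 1 - |X_ij - X_ik| / R_i
        else:                              # binary or multistate characters
            s = 1.0 if xi == yi else 0.0
        num += s                           # w = 1 for valid comparisons
        den += 1.0
    return num / den

s = gower_similarity((1, 'a', 2.0), (1, 'b', 4.0),
                     ('binary', 'multistate', 'cardinal'),
                     (None, None, 10.0))
# s == (1 + 0 + 0.8) / 3 == 0.6, up to float rounding
```

The corresponding distance is then `(1 - s) ** 0.5`, as noted above.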
|
Clustering of mixed type data with R
|
You could use the universal similarity coefficient of Gower (see Sneath & Sokal 1973, pp 135-136), which for two OTUs $j$ and $k$ is
$$S_G = \frac{\sum_{i=1}^n{w_{i,j,k} s_{i,j,k}}}{\sum_{i=1}^n{w_{i,
|
Clustering of mixed type data with R
You could use the universal similarity coefficient of Gower (see Sneath & Sokal 1973, pp 135-136), which for two OTUs $j$ and $k$ is
$$S_G = \frac{\sum_{i=1}^n{w_{i,j,k} s_{i,j,k}}}{\sum_{i=1}^n{w_{i,j,k}}}$$
for all characters $i$.
The weight $w_{i,j,k}$ is either 1 or 0, depending on whether the comparison is valid or not (missing data, absence of a binary character in both OTUs). More complicated weighting schemes have been published.
$s_{i,j,k}$ is calculated for
binary variables: 1 for concordance, 0 for discordance (equivalent to Jaccard's coefficient if $w_{i,j,k}$ is set to 0 for concordant absences)
multistate characters (nominal or ordinal): 1 for equality, 0 otherwise (equivalent to the simple matching coefficient)
cardinal character: $s_{i,j,k} = 1 - \frac{|X_{i,j} - X_{i,k}|}{R_i}$ with $R_i$ the range of character $i$ (either in the population or in the sample).
The nice thing about $S_G$ is that it can not only handle all types of data, but is also robust towards missing data. It also results in positive semi-definite similarity matrices, i.e., OTUs are represented by points in Euclidean space (at least if not too many data are missing).
The distance between OTUs can be represented by $\sqrt{1-S_G}$.
|
Clustering of mixed type data with R
You could use the universal similarity coefficient of Gower (see Sneath & Sokal 1973, pp 135-136), which for two OTUs $j$ and $k$ is
$$S_G = \frac{\sum_{i=1}^n{w_{i,j,k} s_{i,j,k}}}{\sum_{i=1}^n{w_{i,
|
14,681
|
Clustering of mixed type data with R
|
If the categorical variables do not have too many possible values, you may consider creating binary variables out of those values. You can treat these binary variables as numeric variables and run your clustering. That's what I did for my project.
|
Clustering of mixed type data with R
|
If possible values of categorical variables are not too many, then you may think of creating binary variables out of those values. You can treat these binary variables as numeric variables and run yo
|
Clustering of mixed type data with R
If the categorical variables do not have too many possible values, you may consider creating binary variables out of those values. You can treat these binary variables as numeric variables and run your clustering. That's what I did for my project.
|
Clustering of mixed type data with R
If possible values of categorical variables are not too many, then you may think of creating binary variables out of those values. You can treat these binary variables as numeric variables and run yo
|
14,682
|
Clustering of mixed type data with R
|
k-prototypes clustering might be better suited here. It combines k-modes and k-means and is able to cluster mixed numerical/categorical data. For R, use the package 'clustMixType'.
https://cran.r-project.org/web/packages/clustMixType/clustMixType.pdf
|
Clustering of mixed type data with R
|
k-prototypes clustering might be better suited here. It combines k-modes and k-means and is able to cluster mixed numerical / categorical data. For R, use the Package 'clustMixType'.
https://cran.r-p
|
Clustering of mixed type data with R
k-prototypes clustering might be better suited here. It combines k-modes and k-means and is able to cluster mixed numerical/categorical data. For R, use the package 'clustMixType'.
https://cran.r-project.org/web/packages/clustMixType/clustMixType.pdf
|
Clustering of mixed type data with R
k-prototypes clustering might be better suited here. It combines k-modes and k-means and is able to cluster mixed numerical / categorical data. For R, use the Package 'clustMixType'.
https://cran.r-p
|
14,683
|
Clustering of mixed type data with R
|
The VarSelLCM package offers
Variable Selection for Model-Based Clustering of Mixed-Type Data Set with Missing Values
It is on CRAN, and described in more detail in a paper.
An advantage over some of the previous methods is that it offers help in choosing the number of clusters and handles missing data. The nice Shiny app it provides is also not to be frowned upon.
|
Clustering of mixed type data with R
|
VarSelLCM package offers
Variable Selection for Model-Based Clustering of Mixed-Type Data Set with Missing Values
On CRAN, and described more in paper.
Advantage over some of the previous methods is
|
Clustering of mixed type data with R
The VarSelLCM package offers
Variable Selection for Model-Based Clustering of Mixed-Type Data Set with Missing Values
It is on CRAN, and described in more detail in a paper.
An advantage over some of the previous methods is that it offers help in choosing the number of clusters and handles missing data. The nice Shiny app it provides is also not to be frowned upon.
|
Clustering of mixed type data with R
VarSelLCM package offers
Variable Selection for Model-Based Clustering of Mixed-Type Data Set with Missing Values
On CRAN, and described more in paper.
Advantage over some of the previous methods is
|
14,684
|
What is the difference between sample variance and sampling variance?
|
Sample variance refers to the variation of the observations (the data points) within a single sample. Sampling variance refers to the variation of a particular statistic (e.g. the mean) calculated from a sample, if the whole study (sample creation / data collection / statistic calculation) were repeated many times. Due to the central limit theorem, though, for some statistics you don't have to repeat the study many times in reality: you can deduce the sampling variance from a single sample if that sample is representative (this is the asymptotic approach). Or you could simulate repetition of the study from a single sample (this is the bootstrap approach).
An additional note on "sample variance": two quantities may be mixed up in this one term:
Estimate of the population variance based on this sample. This is what we usually use; it has denominator (degrees of freedom) n-1.
Variance of this sample itself. It has denominator n.
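The distinction is easy to see numerically. An illustrative sketch in Python (the simulation parameters are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
sample = rng.normal(loc=0.0, scale=2.0, size=50)

# "Sample variance": variation of the data points in one sample.
var_est = sample.var(ddof=1)   # estimate of population variance, denominator n-1
var_raw = sample.var(ddof=0)   # variance of this sample itself, denominator n

# "Sampling variance": variation of a statistic (here: the mean) if the
# whole study were repeated many times.
means = [rng.normal(loc=0.0, scale=2.0, size=50).mean() for _ in range(2000)]
sampling_var = np.var(means, ddof=1)

# var_est is near sigma^2 = 4, while sampling_var is near
# sigma^2 / n = 4 / 50 = 0.08: two very different quantities.
```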
|
What is the difference between sample variance and sampling variance?
|
Sample variance refers to variation of observations (the data points) in a single sample. Sampling variance refers to variation of a particular statistic (e.g. the mean) calculated in sample, if to re
|
What is the difference between sample variance and sampling variance?
Sample variance refers to the variation of the observations (the data points) within a single sample. Sampling variance refers to the variation of a particular statistic (e.g. the mean) calculated from a sample, if the whole study (sample creation / data collection / statistic calculation) were repeated many times. Due to the central limit theorem, though, for some statistics you don't have to repeat the study many times in reality: you can deduce the sampling variance from a single sample if that sample is representative (this is the asymptotic approach). Or you could simulate repetition of the study from a single sample (this is the bootstrap approach).
An additional note on "sample variance": two quantities may be mixed up in this one term:
Estimate of the population variance based on this sample. This is what we usually use; it has denominator (degrees of freedom) n-1.
Variance of this sample itself. It has denominator n.
|
What is the difference between sample variance and sampling variance?
Sample variance refers to variation of observations (the data points) in a single sample. Sampling variance refers to variation of a particular statistic (e.g. the mean) calculated in sample, if to re
|
14,685
|
What is the difference between sample variance and sampling variance?
|
The sample variance, $s^2$, is the variance of the sample, an estimate of the variance of the population from which the sample was drawn.
"Sampling variance" I would interpret as "the variance that is due to sampling", for example of an estimator (like the mean). And so I would consider these two terms to be quite different.
But "sampling variance" is a bit vague, and I would need to see some context to be sure. And I'd prefer to say "sampling variation" for the general idea.
[Many people (particularly in quantitative genetics) use the term "variance" in place of "variation", whereas I would reserve "variance" solely for the particular measure of variation.]
|
What is the difference between sample variance and sampling variance?
|
The sample variance, $s^2$, is the variance of the sample, an estimate of the variance of the population from which the sample was drawn.
"Sampling variance" I would interpret as "the variance that
|
What is the difference between sample variance and sampling variance?
The sample variance, $s^2$, is the variance of the sample, an estimate of the variance of the population from which the sample was drawn.
"Sampling variance" I would interpret as "the variance that is due to sampling", for example of an estimator (like the mean). And so I would consider these two terms to be quite different.
But "sampling variance" is a bit vague, and I would need to see some context to be sure. And I'd prefer to say "sampling variation" for the general idea.
[Many people (particularly in quantitative genetics) use the term "variance" in place of "variation", whereas I would reserve "variance" solely for the particular measure of variation.]
|
What is the difference between sample variance and sampling variance?
The sample variance, $s^2$, is the variance of the sample, an estimate of the variance of the population from which the sample was drawn.
"Sampling variance" I would interpret as "the variance that
|
14,686
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
The set $\Omega(d,n)$ of distinct identifiable outcomes in $n$ independent rolls of a die with $d=6$ faces has $d^n$ elements. When the die is fair, that means each outcome of one roll has probability $1/d$ and independence means each of these outcomes will therefore have probability $(1/d)^n:$ that is, they have a uniform distribution $\mathbb{P}_{d,n}.$
Suppose you have devised some procedure $t$ that either determines $m$ outcomes of a $c (=150)$-sided die--that is, an element of $\Omega(c,m)$--or else reports failure (which means you will have to repeat it in order to obtain an outcome). That is,
$$t:\Omega(d,n)\to\Omega(c,m)\cup\{\text{Failure}\}.$$
Let $F$ be the probability $t$ results in failure and note that $F$ is some integral multiple of $d^{-n},$ say
$$F = \Pr(t(\omega)=\text{Failure}) = N_F\, d^{-n}.$$
(For future reference, note that the expected number of times $t$ must be invoked before not failing is $1/(1-F).$)
The requirement that these outcomes in $\Omega(c,m)$ be uniform and independent conditional on $t$ not reporting failure means that $t$ preserves probability in the sense that for every event $\mathcal{A}\subset\Omega(c,m),$
$$\frac{\mathbb{P}_{d,n}\left(t^{*}\mathcal{A}\right)}{1-F}= \mathbb{P}_{c,m}\left(\mathcal{A}\right) \tag{1}$$
where
$$t^{*}\left(\mathcal A\right) = \{\omega\in\Omega\mid t(\omega)\in\mathcal{A}\}$$
is the set of die rolls that the procedure $t$ assigns to the event $\mathcal A.$
Consider an atomic event $\mathcal A = \{\eta\}\subset\Omega(c,m)$, which must have probability $c^{-m}.$ Let $t^{*}\left(\mathcal A\right)$ (the dice rolls associated with $\eta$) have $N_\eta$ elements. $(1)$ becomes
$$\frac{N_\eta d^{-n}}{1 - N_F d^{-n}} = \frac{\mathbb{P}_{d,n}\left(t^{*}\mathcal{A}\right)}{1-F}= \mathbb{P}_{c,m}\left(\mathcal{A}\right) = c^{-m}.\tag{2}$$
It is immediate that the $N_\eta$ are all equal to some integer $N.$ It remains only to find the most efficient procedures $t.$ The expected number of non-failures per roll of the $c$-sided die is
$$\frac{1}{m}\left(1 - F\right).$$
There are two immediate and obvious implications. One is that if we can keep $F$ small as $m$ grows large, then the effect of reporting a failure is asymptotically zero. The other is that for any given $m$ (the number of rolls of the $c$-sided die to simulate), we want to make $F$ as small as possible.
Let's take a closer look at $(2)$ by clearing the denominators:
$$N c^m = d^n - N_F \gt 0.$$
This makes it obvious that in a given context (determined by $c,d,n,m$), $F$ is made as small as possible by making $d^n-N_F$ equal the largest multiple of $c^m$ that is less than or equal to $d^n.$ We may write this in terms of the greatest integer function (or "floor") $\lfloor*\rfloor$ as
$$N = \bigg\lfloor \frac{d^n}{c^m} \bigg\rfloor.$$
Finally, it is clear that $N$ ought to be as small as possible for highest efficiency, because it measures redundancy in $t$. Specifically, the expected number of rolls of the $d$-sided die needed to produce one roll of the $c$-sided die is
$$N \times \frac{n}{m} \times \frac{1}{1-F}.$$
Thus, our search for high-efficiency procedures ought to focus on the cases where $d^n$ is equal to, or just barely greater than, some power $c^m.$
The analysis ends by showing that for given $d$ and $c,$ there is a sequence of multiples $(n,m)$ for which this approach approximates perfect efficiency. This amounts to finding $(n,m)$ for which $d^n/c^m \ge 1$ approaches $N=1$ in the limit (automatically guaranteeing $F\to 0$). One such sequence is obtained by taking $n=1,2,3,\ldots$ and determining
$$m = \bigg\lfloor \frac{n\log d}{\log c} \bigg\rfloor.\tag{3}$$
The proof is straightforward.
This all means that when we are willing to roll the original $d$-sided die a sufficiently large number of times $n,$ we can expect to simulate nearly $\log d / \log c = \log_c d$ outcomes of a $c$-sided die per roll. Equivalently,
It is possible to simulate a large number $m$ of independent rolls of a $c$-sided die with a fair $d$-sided die using an average of $\log(c)/\log(d) + \epsilon = \log_d(c) + \epsilon$ rolls per outcome where $\epsilon$ can be made arbitrarily small by choosing $m$ sufficiently large.
Examples and algorithms
In the question, $d=6$ and $c=150,$ whence
$$\log_d(c) = \frac{\log(c)}{\log(d)} \approx 2.796489.$$
Thus, the best possible procedure will require, on average, at least $2.796489$ rolls of a d6 to simulate each d150 outcome.
The analysis shows how to do this. We don't need to resort to number theory to carry it out: we can just tabulate the powers $d^n=6^n$ and the powers $c^m=150^m$ and compare them to find where $c^m \le d^n$ are close. This brute force calculation gives $(n,m)$ pairs
$$(n,m) \in \{(3,1), (14,5), \ldots\}$$
for instance, corresponding to the numbers
$$(6^n, 150^m) \in \{(216,150), (78364164096,75937500000), \ldots\}.$$
In the first case $t$ would associate $216-150=66$ of the outcomes of three rolls of a d6 to Failure and the other $150$ outcomes would each be associated with a single outcome of a d150.
In the second case $t$ would associate $78364164096-75937500000$ of the outcomes of 14 rolls of a d6 to Failure -- about 3.1% of them all -- and otherwise would output a sequence of 5 outcomes of a d150.
A simple algorithm to implement $t$ labels the faces of the $d$-sided die with the numerals $0,1,\ldots, d-1$ and the faces of the $c$-sided die with the numerals $0,1,\ldots, c-1.$ The $n$ rolls of the first die are interpreted as an $n$-digit number in base $d.$ This is converted to a number in base $c.$ If it has at most $m$ digits, the sequence of the last $m$ digits is the output. Otherwise, $t$ returns Failure by invoking itself recursively.
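As a concrete illustration, here is a minimal Python sketch of the procedure $t$ just described (the function names and the retry wrapper are my own, not from the answer): interpret $n$ fair d6 rolls as a base-$d$ number, report Failure when it needs more than $m$ base-$c$ digits, and otherwise emit the last $m$ digits.

```python
import random

def simulate_rolls(n, m, d=6, c=150, rng=random):
    """One attempt at the procedure t: interpret n fair d-sided rolls
    (faces 0..d-1) as an n-digit base-d number; if it fits in m base-c
    digits, return those digits (faces 0..c-1), otherwise return None
    to signal Failure."""
    value = 0
    for _ in range(n):
        value = value * d + rng.randrange(d)   # append one base-d digit
    if value >= c ** m:                        # needs more than m base-c digits
        return None
    digits = []
    for _ in range(m):                         # extract the last m base-c digits
        value, r = divmod(value, c)
        digits.append(r)
    return digits[::-1]

def draw_d150(n=3, m=1, rng=random):
    """Retry on Failure, as the procedure prescribes; faces reported as 1..150."""
    while True:
        out = simulate_rolls(n, m, rng=rng)
        if out is not None:
            return [x + 1 for x in out]
```

With $(n,m)=(3,1)$ this is exactly the first tabulated case: about $66/216$ of attempts fail and are retried.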
For much longer sequences, you can find suitable pairs $(n,m)$ by considering every other convergent $n/m$ of the continued fraction expansion of $x=\log(c)/\log(d).$ The theory of continued fractions shows that these convergents alternate between being less than $x$ and greater than it (assuming $x$ is not already rational). Choose those that are less than $x.$
In the question, the first few such convergents are
$$3, 14/5, 165/59, 797/285, 4301/1538, 89043/31841, 279235/99852, 29036139/10383070 \ldots.$$
In the last case, a sequence of 29,036,139 rolls of a d6 will produce a sequence of 10,383,070 rolls of a d150 with a failure rate less than $2\times 10^{-8},$ for an efficiency of $2.79649$--indistinguishable from the asymptotic limit.
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
The set $\Omega(d,n)$ of distinct identifiable outcomes in $n$ independent rolls of a die with $d=6$ faces has $d^n$ elements. When the die is fair, that means each outcome of one roll has probabilit
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
The set $\Omega(d,n)$ of distinct identifiable outcomes in $n$ independent rolls of a die with $d=6$ faces has $d^n$ elements. When the die is fair, that means each outcome of one roll has probability $1/d$ and independence means each of these outcomes will therefore have probability $(1/d)^n:$ that is, they have a uniform distribution $\mathbb{P}_{d,n}.$
Suppose you have devised some procedure $t$ that either determines $m$ outcomes of a $c (=150)$-sided die--that is, an element of $\Omega(c,m)$--or else reports failure (which means you will have to repeat it in order to obtain an outcome). That is,
$$t:\Omega(d,n)\to\Omega(c,m)\cup\{\text{Failure}\}.$$
Let $F$ be the probability $t$ results in failure and note that $F$ is some integral multiple of $d^{-n},$ say
$$F = \Pr(t(\omega)=\text{Failure}) = N_F\, d^{-n}.$$
(For future reference, note that the expected number of times $t$ must be invoked before not failing is $1/(1-F).$)
The requirement that these outcomes in $\Omega(c,m)$ be uniform and independent conditional on $t$ not reporting failure means that $t$ preserves probability in the sense that for every event $\mathcal{A}\subset\Omega(c,m),$
$$\frac{\mathbb{P}_{d,n}\left(t^{*}\mathcal{A}\right)}{1-F}= \mathbb{P}_{c,m}\left(\mathcal{A}\right) \tag{1}$$
where
$$t^{*}\left(\mathcal A\right) = \{\omega\in\Omega\mid t(\omega)\in\mathcal{A}\}$$
is the set of die rolls that the procedure $t$ assigns to the event $\mathcal A.$
Consider an atomic event $\mathcal A = \{\eta\}\subset\Omega(c,m)$, which must have probability $c^{-m}.$ Let $t^{*}\left(\mathcal A\right)$ (the dice rolls associated with $\eta$) have $N_\eta$ elements. $(1)$ becomes
$$\frac{N_\eta d^{-n}}{1 - N_F d^{-n}} = \frac{\mathbb{P}_{d,n}\left(t^{*}\mathcal{A}\right)}{1-F}= \mathbb{P}_{c,m}\left(\mathcal{A}\right) = c^{-m}.\tag{2}$$
It is immediate that the $N_\eta$ are all equal to some integer $N.$ It remains only to find the most efficient procedures $t.$ The expected number of non-failures per roll of the $c$-sided die is
$$\frac{1}{m}\left(1 - F\right).$$
There are two immediate and obvious implications. One is that if we can keep $F$ small as $m$ grows large, then the effect of reporting a failure is asymptotically zero. The other is that for any given $m$ (the number of rolls of the $c$-sided die to simulate), we want to make $F$ as small as possible.
Let's take a closer look at $(2)$ by clearing the denominators:
$$N c^m = d^n - N_F \gt 0.$$
This makes it obvious that in a given context (determined by $c,d,n,m$), $F$ is made as small as possible by making $d^n-N_F$ equal the largest multiple of $c^m$ that is less than or equal to $d^n.$ We may write this in terms of the greatest integer function (or "floor") $\lfloor*\rfloor$ as
$$N = \bigg\lfloor \frac{d^n}{c^m} \bigg\rfloor.$$
Finally, it is clear that $N$ ought to be as small as possible for highest efficiency, because it measures redundancy in $t$. Specifically, the expected number of rolls of the $d$-sided die needed to produce one roll of the $c$-sided die is
$$N \times \frac{n}{m} \times \frac{1}{1-F}.$$
Thus, our search for high-efficiency procedures ought to focus on the cases where $d^n$ is equal to, or just barely greater than, some power $c^m.$
The analysis ends by showing that for given $d$ and $c,$ there is a sequence of multiples $(n,m)$ for which this approach approximates perfect efficiency. This amounts to finding $(n,m)$ for which $d^n/c^m \ge 1$ approaches $N=1$ in the limit (automatically guaranteeing $F\to 0$). One such sequence is obtained by taking $n=1,2,3,\ldots$ and determining
$$m = \bigg\lfloor \frac{n\log d}{\log c} \bigg\rfloor.\tag{3}$$
The proof is straightforward.
This all means that when we are willing to roll the original $d$-sided die a sufficiently large number of times $n,$ we can expect to simulate nearly $\log d / \log c = \log_c d$ outcomes of a $c$-sided die per roll. Equivalently,
It is possible to simulate a large number $m$ of independent rolls of a $c$-sided die with a fair $d$-sided die using an average of $\log(c)/\log(d) + \epsilon = \log_d(c) + \epsilon$ rolls per outcome where $\epsilon$ can be made arbitrarily small by choosing $m$ sufficiently large.
Examples and algorithms
In the question, $d=6$ and $c=150,$ whence
$$\log_d(c) = \frac{\log(c)}{\log(d)} \approx 2.796489.$$
Thus, the best possible procedure will require, on average, at least $2.796489$ rolls of a d6 to simulate each d150 outcome.
The analysis shows how to do this. We don't need to resort to number theory to carry it out: we can just tabulate the powers $d^n=6^n$ and the powers $c^m=150^m$ and compare them to find where $c^m \le d^n$ are close. This brute force calculation gives $(n,m)$ pairs
$$(n,m) \in \{(3,1), (14,5), \ldots\}$$
for instance, corresponding to the numbers
$$(6^n, 150^m) \in \{(216,150), (78364164096,75937500000), \ldots\}.$$
In the first case $t$ would associate $216-150=66$ of the outcomes of three rolls of a d6 to Failure and the other $150$ outcomes would each be associated with a single outcome of a d150.
In the second case $t$ would associate $78364164096-75937500000$ of the outcomes of 14 rolls of a d6 to Failure -- about 3.1% of them all -- and otherwise would output a sequence of 5 outcomes of a d150.
A simple algorithm to implement $t$ labels the faces of the $d$-sided die with the numerals $0,1,\ldots, d-1$ and the faces of the $c$-sided die with the numerals $0,1,\ldots, c-1.$ The $n$ rolls of the first die are interpreted as an $n$-digit number in base $d.$ This is converted to a number in base $c.$ If it has at most $m$ digits, the sequence of the last $m$ digits is the output. Otherwise, $t$ returns Failure by invoking itself recursively.
For much longer sequences, you can find suitable pairs $(n,m)$ by considering every other convergent $n/m$ of the continued fraction expansion of $x=\log(c)/\log(d).$ The theory of continued fractions shows that these convergents alternate between being less than $x$ and greater than it (assuming $x$ is not already rational). Choose those that are less than $x.$
In the question, the first few such convergents are
$$3, 14/5, 165/59, 797/285, 4301/1538, 89043/31841, 279235/99852, 29036139/10383070 \ldots.$$
In the last case, a sequence of 29,036,139 rolls of a d6 will produce a sequence of 10,383,070 rolls of a d150 with a failure rate less than $2\times 10^{-8},$ for an efficiency of $2.79649$--indistinguishable from the asymptotic limit.
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
The set $\Omega(d,n)$ of distinct identifiable outcomes in $n$ independent rolls of a die with $d=6$ faces has $d^n$ elements. When the die is fair, that means each outcome of one roll has probabilit
|
14,687
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
For the case of $N=150$, rolling a d6 three times distinctly creates $6^3=216$ outcomes.
The desired result can be tabulated in this way:
Record a d6 three times sequentially. This produces results $a,b,c$. The result is uniform because all values of $a,b,c$ are equally likely (the dice are fair, and we are treating each roll as distinct).
Subtract 1 from each.
This is a senary number: each digit (place value) goes from 0 to 5 by powers of 6, so you can write the number in decimal using $$(a-1) \times 6^2 + (b-1) \times 6^1 + (c-1)\times 6^0$$
Add 1.
If the result exceeds 150, discard the result and roll again.
The probability of keeping a result is $p=\frac{150}{216}=\frac{25}{36}$. All rolls are independent, and we repeat the procedure until a "success" (a result in $1,2,\dots,150$) so the number of attempts to generate 1 draw between 1 and 150 is distributed as a geometric random variable, which has expectation $p^{-1}=\frac{36}{25}$. Therefore, using this method to generate 1 draw requires rolling $\frac{36}{25}\times 3 =4.32$ dice rolls on average (because each attempt rolls 3 dice).
Credit to @whuber for suggesting this in chat.
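The whole procedure can be sketched in a few lines of Python (illustrative names, not part of the original answer): roll three dice, assemble the senary number, and reject results above 150.

```python
import random

def draw_1_to_150(rng=random):
    """Rejection sampling: three d6 rolls give a uniform value in 1..216;
    keep it only if it lands in 1..150, otherwise roll again."""
    while True:
        a, b, c = (rng.randint(1, 6) for _ in range(3))
        x = (a - 1) * 36 + (b - 1) * 6 + (c - 1) + 1   # uniform on 1..216
        if x <= 150:
            return x
```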
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
For the case of $N=150$, rolling a d6 three times distinctly creates $6^3=216$ outcomes.
The desired result can be tabulated in this way:
Record a d6 three times sequentially. This produces results $
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
For the case of $N=150$, rolling a d6 three times distinctly creates $6^3=216$ outcomes.
The desired result can be tabulated in this way:
Record a d6 three times sequentially. This produces results $a,b,c$. The result is uniform because all values of $a,b,c$ are equally likely (the dice are fair, and we are treating each roll as distinct).
Subtract 1 from each.
This is a senary number: each digit (place value) goes from 0 to 5 by powers of 6, so you can write the number in decimal using $$(a-1) \times 6^2 + (b-1) \times 6^1 + (c-1)\times 6^0$$
Add 1.
If the result exceeds 150, discard the result and roll again.
The probability of keeping a result is $p=\frac{150}{216}=\frac{25}{36}$. All rolls are independent, and we repeat the procedure until a "success" (a result in $1,2,\dots,150$) so the number of attempts to generate 1 draw between 1 and 150 is distributed as a geometric random variable, which has expectation $p^{-1}=\frac{36}{25}$. Therefore, using this method to generate 1 draw requires rolling $\frac{36}{25}\times 3 =4.32$ dice rolls on average (because each attempt rolls 3 dice).
Credit to @whuber for suggesting this in chat.
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
For the case of $N=150$, rolling a d6 three times distinctly creates $6^3=216$ outcomes.
The desired result can be tabulated in this way:
Record a d6 three times sequentially. This produces results $
|
14,688
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
Here is an even simpler alternative to the answer by Sycorax for the case where $N=150$. Since $150 = 5 \times 5 \times 6$ you can perform the following procedure:
Generating uniform random number from 1 to 150:
Make three ordered rolls of 1D6 and denote these as $R_1, R_2, R_3$.
If either of the first two rolls is a six, reroll it until it is not 6.
The number $(R_1, R_2, R_3)$ is a uniform number using positional notation with a radix of 5-5-6. Thus, you can compute the desired number as:
$$X = 30 \cdot (R_1-1) + 6 \cdot (R_2-1) + (R_3-1) + 1.$$
This method can be generalised to larger $N$, but it becomes a bit more awkward when the value has one or more prime factors larger than $6$.
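A hypothetical Python rendering of these steps (names are mine), with the reroll handled by a small helper:

```python
import random

def roll_not_six(rng=random):
    """Roll a d6 until the result is not 6, giving a uniform value in 1..5."""
    while True:
        r = rng.randint(1, 6)
        if r != 6:
            return r

def draw_radix(rng=random):
    """Read (R1, R2, R3) as a positional number with radix 5-5-6."""
    r1 = roll_not_six(rng)        # uniform on 1..5
    r2 = roll_not_six(rng)        # uniform on 1..5
    r3 = rng.randint(1, 6)        # uniform on 1..6
    return 30 * (r1 - 1) + 6 * (r2 - 1) + (r3 - 1) + 1
```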
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
Here is an even simpler alternative to the answer by Sycorax for the case where $N=150$. Since $150 = 5 \times 5 \times 6$ you can perform the following procedure:
Generating uniform random number f
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
Here is an even simpler alternative to the answer by Sycorax for the case where $N=150$. Since $150 = 5 \times 5 \times 6$ you can perform the following procedure:
Generating uniform random number from 1 to 150:
Make three ordered rolls of 1D6 and denote these as $R_1, R_2, R_3$.
If either of the first two rolls is a six, reroll it until it is not 6.
The number $(R_1, R_2, R_3)$ is a uniform number using positional notation with a radix of 5-5-6. Thus, you can compute the desired number as:
$$X = 30 \cdot (R_1-1) + 6 \cdot (R_2-1) + (R_3-1) + 1.$$
This method can be generalised to larger $N$, but it becomes a bit more awkward when the value has one or more prime factors larger than $6$.
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
Here is an even simpler alternative to the answer by Sycorax for the case where $N=150$. Since $150 = 5 \times 5 \times 6$ you can perform the following procedure:
Generating uniform random number f
|
14,689
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
As an illustration of an algorithm to choose uniformly between $150$ values using six-sided dice, try this, which uses each roll to multiply the number of available values by $6$, making each of the new values equally likely:
After $0$ rolls, you have $1$ possibility, not enough to distinguish $150$ values
After $1$ roll, you have $6$ possibilities, not enough to distinguish $150$ values
After $2$ rolls, you have $36$ possibilities, not enough to distinguish $150$ values
After $3$ rolls, you have $216$ possibilities, enough to distinguish $150$ values but with $66$ remaining values; the probability you stop now is $\frac{150}{216}$
If you have not stopped, then after $4$ rolls you have $396$ remaining possibilities, enough to distinguish $150$ values two ways but with $96$ remaining values; the probability you stop now is $\frac{300}{1296}$
If you have not stopped, then after $5$ rolls you have $576$ remaining possibilities, enough to distinguish $150$ values three ways but with $126$ remaining values; the probability you stop now is $\frac{450}{7776}$
If you have not stopped, then after $6$ rolls you have $756$ remaining possibilities, enough to distinguish $150$ values five ways but with $6$ remaining values; the probability you stop now is $\frac{750}{46656}$
If you are on one of the $6$ remaining values after $6$ rolls then you are in a similar situation to the position after $1$ roll. So you can continue in the same way: the probability you stop after $7$ rolls is $\frac{0}{279936}$, after $8$ rolls is $\frac{150}{1679616}$ etc.
Add these up and you find that the expected number of rolls needed is about $3.39614$. It provides a uniform selection from the $150$, as you only select a value at a time when you can select each of the $150$ with equal probability.
Sycorax asked in the comments for a more explicit algorithm
First, I will work in base-$6$ with $150_{10}=410_6$
Second, rather than target values $1_6$ to $410_6$, I will subtract one so the target values are $0_6$ to $409_6$
Third, each die should have values $0_6$ to $5_6$, and rolling a die involves adding a base $6$ digit to the right hand side of the existing generated number. Generated numbers can have leading zeros, and their number of digits is the number of rolls so far
The algorithm is successive rolls of dice:
Roll the first three dice to generate a number from $000_6$ to $555_6$. Since $1000_6 \div 410_6 = 1_6 \text{ remainder } 150_6$ you take the generated value (which is also its remainder on division by $410_6$) if the generated value is strictly below $1000_6-150_6=410_6$ and stop;
If continuing, roll the fourth die so you have now generated a number from $4100_6$ to $5555_6$. Since $10000_6 \div 410_6 = 12_6 \text{ remainder } 240_6$ you take the remainder of the generated value on division by $410_6$ if the generated value is strictly below $10000_6-240_6=5320_6$ and stop;
If continuing, roll the fifth die so you have now generated a number from $53200_6$ to $55555_6$. Since $100000_6 \div 410_6 = 123_6 \text{ remainder } 330_6$ you take the remainder of the generated value on division by $410_6$ if the generated value is strictly below $100000_6-330_6=55230_6$ and stop;
If continuing, roll the sixth die so you have now generated a number from $552300_6$ to $555555_6$. Since $1000000_6 \div 410_6 = 1235_6 \text{ remainder } 10_6$ you take the remainder of the generated value on division by $410_6$ if the generated value is strictly below $1000000_6-10_6=555550_6$ and stop;
etc.
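The bookkeeping above can be expressed compactly in Python (a sketch with illustrative names): the variable `v` is always uniform over `size` equally likely values, and a draw is emitted as soon as `v` falls in the largest multiple of $150$ fitting in `size`.

```python
import random

def draw_progressive(rng=random, n_target=150):
    """Roll a d6 one die at a time; v stays uniform on {0, ..., size-1}.
    Stop once v lands below the largest multiple of n_target within size;
    otherwise carry the leftover values forward into the next roll."""
    v, size = 0, 1
    while True:
        v = 6 * v + rng.randint(1, 6) - 1
        size *= 6
        usable = (size // n_target) * n_target
        if v < usable:
            return v % n_target + 1    # uniform on 1..n_target
        v -= usable                    # still uniform on the leftover values
        size -= usable
```

Counting die rolls over many draws should reproduce the expected value of about $3.396$ rolls per draw.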
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
|
As an illustration of an algorithm to choose uniformly between $150$ values using six-sided dice, try this which uses each roll to multiply the available values by $6$ and making each of the new value
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
As an illustration of an algorithm to choose uniformly between $150$ values using six-sided dice, try this, which uses each roll to multiply the number of available values by $6$, making each of the new values equally likely:
After $0$ rolls, you have $1$ possibility, not enough to distinguish $150$ values
After $1$ roll, you have $6$ possibilities, not enough to distinguish $150$ values
After $2$ rolls, you have $36$ possibilities, not enough to distinguish $150$ values
After $3$ rolls, you have $216$ possibilities, enough to distinguish $150$ values but with $66$ remaining values; the probability you stop now is $\frac{150}{216}$
If you have not stopped, then after $4$ rolls you have $396$ remaining possibilities, enough to distinguish $150$ values two ways but with $96$ remaining values; the probability you stop now is $\frac{300}{1296}$
If you have not stopped, then after $5$ rolls you have $576$ remaining possibilities, enough to distinguish $150$ values three ways but with $126$ remaining values; the probability you stop now is $\frac{450}{7776}$
If you have not stopped, then after $6$ rolls you have $756$ remaining possibilities, enough to distinguish $150$ values five ways but with $6$ remaining values; the probability you stop now is $\frac{750}{46656}$
If you are on one of the $6$ remaining values after $6$ rolls then you are in a similar situation to the position after $1$ roll. So you can continue in the same way: the probability you stop after $7$ rolls is $\frac{0}{279936}$, after $8$ rolls is $\frac{150}{1679616}$ etc.
Add these up and you find that the expected number of rolls needed is about $3.39614$. It provides a uniform selection from the $150$, as you only select a value at a time when you can select each of the $150$ with equal probability.
Sycorax asked in the comments for a more explicit algorithm
First, I will work in base-$6$ with $150_{10}=410_6$
Second, rather than target values $1_6$ to $410_6$, I will subtract one so the target values are $0_6$ to $409_6$
Third, each die should have values $0_6$ to $5_6$, and rolling a die involves adding a base $6$ digit to the right hand side of the existing generated number. Generated numbers can have leading zeros, and their number of digits is the number of rolls so far
The algorithm is successive rolls of dice:
Roll the first three dice to generate a number from $000_6$ to $555_6$. Since $1000_6 \div 410_6 = 1_6 \text{ remainder } 150_6$ you take the generated value (which is also its remainder on division by $410_6$) if the generated value is strictly below $1000_6-150_6=410_6$ and stop;
If continuing, roll the fourth die so you have now generated a number from $4100_6$ to $5555_6$. Since $10000_6 \div 410_6 = 12_6 \text{ remainder } 240_6$ you take the remainder of the generated value on division by $410_6$ if the generated value is strictly below $10000_6-240_6=5320_6$ and stop;
If continuing, roll the fifth die so you have now generated a number from $53200_6$ to $55555_6$. Since $100000_6 \div 410_6 = 123_6 \text{ remainder } 330_6$ you take the remainder of the generated value on division by $410_6$ if the generated value is strictly below $100000_6-330_6=55230_6$ and stop;
If continuing, roll the sixth die so you have now generated a number from $552300_6$ to $555555_6$. Since $1000000_6 \div 410_6 = 1235_6 \text{ remainder } 10_6$ you take the remainder of the generated value on division by $410_6$ if the generated value is strictly below $1000000_6-10_6=555550_6$ and stop;
etc.
|
Draw integers independently & uniformly at random from 1 to $N$ using fair d6?
As an illustration of an algorithm to choose uniformly between $150$ values using six-sided dice, try this which uses each roll to multiply the available values by $6$ and making each of the new value
|
14,690
|
How can the central limit theorem hold for distributions which have limits on the random variable?
|
This is an excellent question, since it shows that you are thinking about the intuitive aspects of the theorems you are learning. That puts you ahead of most students who learn the CLT. Here I will try to supply you with an explanation for how it is possible for the CLT to hold for random variables with restricted support.
The classical central limit theorem applies to any sequence $X_1, X_2, X_3, ... \sim \text{IID Dist}(\mu, \sigma^2)$ consisting of independent and identically distributed random variables with arbitrary mean $\mu$ and finite non-zero variance $0 < \sigma^2 < \infty$. Now, suppose that you have such a sequence, and they are bounded by $x_{\text{min}} \leqslant X_i \leqslant x_{\text{max}}$, and therefore their support does not cover the whole real line.
The central limit theorem relates to the distribution of the sample mean $\bar{X}_n \equiv \tfrac{1}{n} \sum_{i=1}^n X_i$, and from the restricted support on the underlying random variables in the sequence, this statistic must also obey the bounds $x_{\text{min}} \leqslant \bar{X}_n \leqslant x_{\text{max}}$. So, the plot thickens - the sample mean that is the subject of the theorem is also bounded! How can the CLT hold if this is the case?
Central Limit Theorem (CLT): Letting $\Phi$ be the standard normal distribution function, we have:
$$\lim_{n \rightarrow \infty} \mathbb{P} \Big( \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \leqslant z \Big) = \Phi (z).$$
Approximation arising from CLT: For large $n$ we have the approximate distribution:
$$\bar{X}_n \sim \text{N} \Big( \mu, \frac{\sigma^2}{n} \Big).$$
Your issue stems from the fact that the distributional approximation arising out of this theorem approximates a distribution with bounded support by one with unbounded support, and hence, it cannot be correct. You are right about that --- the distributional approximation for large $n$ is only an approximation, and it does indeed mis-specify the probability that the sample mean is outside its bounds (by giving this positive probability).
However, the CLT is not a statement about a distributional approximation for finite $n$. It is about the limiting distribution of the standardised sample mean. The bounds on this quantity are:
$$z_{\text{min}} = \frac{x_{\text{min}} - \mu}{\sigma / \sqrt{n}} \leqslant \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \leqslant \frac{x_{\text{max}} - \mu}{\sigma / \sqrt{n}} = z_{\text{max}}.$$
For any finite sample size, the normal approximation gives a non-zero probability to values outside the support (which of course have a true probability of zero):
$$\begin{align}
P_n^\text{(erroneous)}
&\equiv \mathbb{P}(\bar{X}_n \notin [x_\min, x_\max] | \text{Normal Approx}) \\[6pt]
&= 1 - \Phi(z_\max) + \Phi(z_\min). \\[6pt]
\end{align}$$
Now, as $n \rightarrow \infty$ we have limits $z_{\text{min}} \rightarrow - \infty$ and $z_{\text{max}} \rightarrow \infty$ which means that the bounds of the standardised sample mean become wider and wider and converge in the limit to the whole real line. (Or to put it slightly more formally, for any point in the real line, the bounds will come to encompass that point for some sufficiently large $n$.) A consequence of this is that the probability ascribed to the parts outside the bounds by the normal distribution converges to zero as $n \rightarrow \infty$. That is, we have $\lim_{n \rightarrow \infty} P_n^\text{(erroneous)} = 0$.
Here we get at the heart of the issue regarding your misgivings about the CLT. It is true that for any finite $n$, a normal approximation to the distribution of the sample mean will give positive probability to subsets of values that are outside the bounds of the true support. However, when we take the limit $n \rightarrow \infty$ this erroneous positive probability converges to zero. The distributional approximation to the standardised sample mean converges to the true distribution of this quantity in the limit, even though the approximation does not hold exactly for finite $n$.
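This convergence is easy to check numerically. The sketch below (my own example, assuming $X_i \sim \text{Uniform}(0,1)$, so $\mu = 1/2$, $\sigma^2 = 1/12$, and support $[0,1]$) computes $P_n^{\text{(erroneous)}}$ using only the standard library:

```python
import math

def phi(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def p_erroneous(n, mu, sigma, x_min, x_max):
    """Probability the normal approximation puts outside [x_min, x_max]."""
    s = sigma / math.sqrt(n)                 # standard error of the mean
    z_min = (x_min - mu) / s
    z_max = (x_max - mu) / s
    return 1.0 - phi(z_max) + phi(z_min)

# Uniform(0, 1) parent: mu = 1/2, sigma = sqrt(1/12), support [0, 1].
probs = [p_erroneous(n, 0.5, math.sqrt(1 / 12), 0.0, 1.0)
         for n in (1, 4, 16, 64)]
```

The sequence of erroneous probabilities decreases rapidly toward zero as $n$ grows, exactly as the limit argument predicts.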
Using some statistical kung-fu to improve the approximation: You are right to have misgivings about the fact that the normal approximation from the CLT gives an erroneous non-zero probability to values outside the bounds of the true distribution. Is there anything that can be done about this?
Well, it turns out there is. You see, the normal distribution is not the only approximating distribution that arises from the CLT. In fact, any sequence of distributions that converges to the normal can also be used for the approximation. This is extremely useful in cases where you have a quantity that is known to have bounded support, and you also want to approximate its distribution with the CLT.
As an example, suppose you are interested in the scaled sample variance $S_n^2/\sigma^2$ for large $n$ (see related questions here and here). This quantity is always non-negative, yet it obeys a CLT result that says that its distribution converges to the normal distribution (so long as the kurtosis of the underlying population is finite). So, for large $n$ you can use the CLT to get the (not particularly wonderful) approximating distribution:
$$\frac{S_n^2}{\sigma^2} \overset{\text{Approx}}{\sim} \text{N} \Bigg( 1, \frac{1}{n} \bigg( \kappa - \frac{n-3}{n-1} \bigg) \Bigg),$$
which gives an erroneous non-zero probability to the negative values. However, following an alternative method used in O'Neill (2014) (Result 14, p. 285) you can use the asymptotically equivalent (and now wonderful) approximating distribution:
$$\frac{S_n^2}{\sigma^2} \overset{\text{Approx}}{\sim} \frac{\text{ChiSq} (DF_n)}{DF_n}
\quad \quad \quad \quad \quad
DF_n \equiv \frac{2n}{\kappa - (n-3)/(n-1)},$$
which reduces to the exact distribution for an underlying normal population, and does not give positive probability to the (impossible) negative values. Other asymptotically equivalent approximating distributions are also possible, so the point here is that the CLT always gives you a range of available asymptotic distributions, and we can choose the one that has other good properties (e.g., not giving positive probability to impossible values).
|
How can the central limit theorem hold for distributions which have limits on the random variable?
|
This is an excellent question, since it shows that you are thinking about the intuitive aspects of the theorems you are learning. That puts you ahead of most students who learn the CLT. Here I will
|
How can the central limit theorem hold for distributions which have limits on the random variable?
This is an excellent question, since it shows that you are thinking about the intuitive aspects of the theorems you are learning. That puts you ahead of most students who learn the CLT. Here I will try to supply you with an explanation for how it is possible for the CLT to hold for random variables with restricted support.
The classical central limit theorem applies to any sequence $X_1, X_2, X_3, ... \sim \text{IID Dist}(\mu, \sigma^2)$ consisting of independent and identically distributed random variables with arbitrary mean $\mu$ and finite non-zero variance $0 < \sigma^2 < \infty$. Now, suppose that you have such a sequence, and they are bounded by $x_{\text{min}} \leqslant X_i \leqslant x_{\text{max}}$, and therefore their support does not cover the whole real line.
The central limit theorem relates to the distribution of the sample mean $\bar{X}_n \equiv \tfrac{1}{n} \sum_{i=1}^n X_i$, and from the restricted support on the underlying random variables in the sequence, this statistic must also obey the bounds $x_{\text{min}} \leqslant \bar{X}_n \leqslant x_{\text{max}}$. So, the plot thickens - the sample mean that is the subject of the theorem is also bounded! How can the CLT hold if this is the case?
Central Limit Theorem (CLT): Letting $\Phi$ be the standard normal distribution function, we have:
$$\lim_{n \rightarrow \infty} \mathbb{P} \Big( \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \leqslant z \Big) = \Phi (z).$$
Approximation arising from CLT: For large $n$ we have the approximate distribution:
$$\bar{X}_n \sim \text{N} \Big( \mu, \frac{\sigma^2}{n} \Big).$$
Your issue stems from the fact that the distributional approximation arising out of this theorem approximates a distribution with bounded support by one with unbounded support, and hence, it cannot be correct. You are right about that --- the distributional approximation for large $n$ is only an approximation, and it does indeed mis-specify the probability that the sample mean is outside its bounds (by giving this positive probability).
However, the CLT is not a statement about a distributional approximation for finite $n$. It is about the limiting distribution of the standardised sample mean. The bounds on this quantity are:
$$z_{\text{min}} = \frac{x_{\text{min}} - \mu}{\sigma / \sqrt{n}} \leqslant \frac{\bar{X}_n - \mu}{\sigma / \sqrt{n}} \leqslant \frac{x_{\text{max}} - \mu}{\sigma / \sqrt{n}} = z_{\text{max}}.$$
For any finite sample size, the normal approximation gives a non-zero probability to values outside the support (which of course have a true probability of zero):
$$\begin{align}
P_n^\text{(erroneous)}
&\equiv \mathbb{P}(\bar{X}_n \notin [x_\min, x_\max] | \text{Normal Approx}) \\[6pt]
&= 1 - \Phi(z_\max) + \Phi(z_\min). \\[6pt]
\end{align}$$
Now, as $n \rightarrow \infty$ we have limits $z_{\text{min}} \rightarrow - \infty$ and $z_{\text{max}} \rightarrow \infty$ which means that the bounds of the standardised sample mean become wider and wider and converge in the limit to the whole real line. (Or to put it slightly more formally, for any point in the real line, the bounds will come to encompass that point for some sufficiently large $n$.) A consequence of this is that the probability ascribed to the parts outside the bounds by the normal distribution converges to zero as $n \rightarrow \infty$. That is, we have $\lim_{n \rightarrow \infty} P_n^\text{(erroneous)} = 0$.
Here we get at the heart of the issue regarding your misgivings about the CLT. It is true that for any finite $n$, a normal approximation to the distribution of the sample mean will give positive probability to subsets of values that are outside the bounds of the true support. However, when we take the limit $n \rightarrow \infty$ this erroneous positive probability converges to zero. The distributional approximation to the standardised sample mean converges to the true distribution of this quantity in the limit, even though the approximation does not hold exactly for finite $n$.
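To make $P_n^\text{(erroneous)}$ concrete, here is a small numerical sketch of my own (not from the original answer) for IID Uniform(0,1) variables, where $\mu = 1/2$, $\sigma = 1/\sqrt{12}$ and hence $z_{\max} = -z_{\min} = \sqrt{3n}$:

```python
from math import sqrt, erf

def normal_cdf(z):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def erroneous_prob_uniform(n):
    # X_i ~ Uniform(0,1): mu = 1/2, sigma = 1/sqrt(12),
    # so z_max = -z_min = (1 - 1/2) / (sigma / sqrt(n)) = sqrt(3 * n).
    z_max = sqrt(3.0 * n)
    # Probability the normal approximation places outside [0, 1]
    return 2.0 * (1.0 - normal_cdf(z_max))

for n in (1, 5, 20, 100):
    print(n, erroneous_prob_uniform(n))
```

Even at $n=1$ the misplaced probability is only about 8%, and it collapses towards zero extremely fast as $n$ grows, exactly as the argument above predicts.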
Using some statistical kung-fu to improve the approximation: You are right to have misgivings about the fact that the normal approximation from the CLT gives an erroneous non-zero probability to values outside the bounds of the true distribution. Is there anything that can be done about this?
Well, it turns out there is. You see, the normal distribution is not the only approximating distribution that arises from the CLT. In fact, any sequence of distributions that converges to the normal can also be used for the approximation. This is extremely useful in cases where you have a quantity that is known to have bounded support, and you also want to approximate its distribution with the CLT.
As an example, suppose you are interested in the scaled sample variance $S_n^2/\sigma^2$ for large $n$ (see related questions here and here). This quantity is always non-negative, yet it obeys a CLT result that says that its distribution converges to the normal distribution (so long as the kurtosis of the underlying population is finite). So, for large $n$ you can use the CLT to get the (not particularly wonderful) approximating distribution:
$$\frac{S_n^2}{\sigma^2} \overset{\text{Approx}}{\sim} \text{N} \Bigg( 1, \frac{1}{n} \bigg( \kappa - \frac{n-3}{n-1} \bigg) \Bigg),$$
which gives an erroneous non-zero probability to the negative values. However, following an alternative method used in O'Neill (2014) (Result 14, p. 285) you can use the asymptotically equivalent (and now wonderful) approximating distribution:
$$\frac{S_n^2}{\sigma^2} \overset{\text{Approx}}{\sim} \frac{\text{ChiSq} (DF_n)}{DF_n}
\quad \quad \quad \quad \quad
DF_n \equiv \frac{2n}{\kappa - (n-3)/(n-1)},$$
which reduces to the exact distribution for an underlying normal population, and does not give positive probability to the (impossible) negative values. Other asymptotically equivalent approximating distributions are also possible, so the point here is that the CLT always gives you a range of available asymptotic distributions, and we can choose the one that has other good properties (e.g., not giving positive probability to impossible values).
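As a quick sanity check on that degrees-of-freedom formula (my own sketch): for a normal population the kurtosis is $\kappa = 3$, and $DF_n$ collapses algebraically to $n-1$:

```python
def df_n(n, kappa):
    # O'Neill (2014), Result 14: DF_n = 2n / (kappa - (n-3)/(n-1))
    return 2.0 * n / (kappa - (n - 3) / (n - 1))

# With kappa = 3 (normal kurtosis) the formula collapses to n - 1,
# recovering the exact ChiSq(n-1)/(n-1) distribution of S^2/sigma^2.
for n in (2, 5, 10, 100):
    print(n, df_n(n, 3.0))
```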
|
How can the central limit theorem hold for distributions which have limits on the random variable?
This is an excellent question, since it shows that you are thinking about the intuitive aspects of the theorems you are learning. That puts you ahead of most students who learn the CLT. Here I will
|
14,691
|
How can the central limit theorem hold for distributions which have limits on the random variable?
|
Your source of confusion stems from two sources:
1) The CLT applies to the normalized sample means, i.e.:
$Z_n=\frac{S_n/n-\mu}{\sigma/\sqrt{n}}=\frac{S_n-n\mu}{\sigma\sqrt{n}}$,
which is centered around 0, hence admits negative values with positive probability. As an extreme example, if $n=1$ then $\frac{X_1-\mu}{\sigma}$ can be negative for Poisson $X_1$. In fact you can easily conclude that if $Z_n$ is never negative, then $X_i$ must be constant (hence $\sigma=0$).
2) The CLT for finite $n$ is only a local result around the mean. In other words, the approximation $P(Z_n\leq x) \approx \Phi(x)$ (where $\Phi$ is the standard normal CDF) tends to be more accurate for $x$ near 0. When $n$ isn't large enough relative to $x$, this approximation breaks down.
If you're, say, measuring the heights of people, then a standard normal approximation may imply that negative height has positive probability. This is false, since most adults have heights between 4 and 7 feet, so the approximation would break down beyond these limits if your $n$ is small.
Alternatively, if $P(X_i=1)=0.99999$ and $P(X_i=-1)=0.00001$, then it will take many realizations of $X_i$ to infer situations where $X_i$ is negative, so that $Z_n$ will mostly be positive, and you might (erroneously) conclude that it can never be negative.
|
How can the central limit theorem hold for distributions which have limits on the random variable?
|
Your source of confusion stems from two sources:
1) The CLT applies to the normalized sample means, i.e.:
$Z_n=\frac{S_n/n-\mu}{\sigma/\sqrt{n}}=\frac{S_n-n\mu}{\sigma\sqrt{n}}$,
which is centered aro
|
How can the central limit theorem hold for distributions which have limits on the random variable?
Your source of confusion stems from two sources:
1) The CLT applies to the normalized sample means, i.e.:
$Z_n=\frac{S_n/n-\mu}{\sigma/\sqrt{n}}=\frac{S_n-n\mu}{\sigma\sqrt{n}}$,
which is centered around 0, hence admits negative values with positive probability. As an extreme example, if $n=1$ then $\frac{X_1-\mu}{\sigma}$ can be negative for Poisson $X_1$. In fact you can easily conclude that if $Z_n$ is never negative, then $X_i$ must be constant (hence $\sigma=0$).
2) The CLT for finite $n$ is only a local result around the mean. In other words, the approximation $P(Z_n\leq x) \approx \Phi(x)$ (where $\Phi$ is the standard normal CDF) tends to be more accurate for $x$ near 0. When $n$ isn't large enough relative to $x$, this approximation breaks down.
If you're, say, measuring the heights of people, then a standard normal approximation may imply that negative height has positive probability. This is false, since most adults have heights between 4 and 7 feet, so the approximation would break down beyond these limits if your $n$ is small.
Alternatively, if $P(X_i=1)=0.99999$ and $P(X_i=-1)=0.00001$, then it will take many realizations of $X_i$ to infer situations where $X_i$ is negative, so that $Z_n$ will mostly be positive, and you might (erroneously) conclude that it can never be negative.
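To put numbers on that last example (my own sketch): the chance of ever observing $X_i = -1$ in $n$ draws is $1 - 0.99999^n$, which stays negligible unless $n$ is enormous:

```python
def prob_any_negative(n, p_neg=0.00001):
    # P(at least one X_i = -1 among n IID draws)
    return 1.0 - (1.0 - p_neg) ** n

for n in (10, 1000, 100000, 1000000):
    print(n, prob_any_negative(n))
```

Only around $n \approx 10^5$ does a negative realization become likely, which is why $Z_n$ looks strictly positive in any moderate sample.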
|
How can the central limit theorem hold for distributions which have limits on the random variable?
Your source of confusion stems from two sources:
1) The CLT applies to the normalized sample means, i.e.:
$Z_n=\frac{S_n/n-\mu}{\sigma/\sqrt{n}}=\frac{S_n-n\mu}{\sigma\sqrt{n}}$,
which is centered aro
|
14,692
|
Understanding which features were most important for logistic regression
|
The first thing to note is that you don't use logistic regression as a classifier. The fact that $Y$ is binary has absolutely nothing to do with using this maximum likelihood method to actually classify observations. Once you get past that, concentrate on the gold standard information measure which is a by-product of maximum likelihood: the likelihood ratio $\chi^2$ statistic. You can produce a chart showing the partial contribution of each predictor in terms of its partial $\chi^2$ statistic. These statistics have maximum information/power. You can use the bootstrap to show how hard it is to pick "winners" and "losers" by getting confidence intervals on the ranks of the predictive information provided by each predictor once the other predictors are accounted for. An example is in Section 5.4 of my course notes - click on Handouts.
If you have highly correlated features you can do a "chunk test" to combine their influence. A chart that does this is given in Figure 15.11 where size represents the combined contribution of 4 separate predictors.
|
Understanding which features were most important for logistic regression
|
The first thing to note is that you don't use logistic regression as a classifier. The fact that $Y$ is binary has absolutely nothing to do with using this maximum likelihood method to actually class
|
Understanding which features were most important for logistic regression
The first thing to note is that you don't use logistic regression as a classifier. The fact that $Y$ is binary has absolutely nothing to do with using this maximum likelihood method to actually classify observations. Once you get past that, concentrate on the gold standard information measure which is a by-product of maximum likelihood: the likelihood ratio $\chi^2$ statistic. You can produce a chart showing the partial contribution of each predictor in terms of its partial $\chi^2$ statistic. These statistics have maximum information/power. You can use the bootstrap to show how hard it is to pick "winners" and "losers" by getting confidence intervals on the ranks of the predictive information provided by each predictor once the other predictors are accounted for. An example is in Section 5.4 of my course notes - click on Handouts.
If you have highly correlated features you can do a "chunk test" to combine their influence. A chart that does this is given in Figure 15.11 where size represents the combined contribution of 4 separate predictors.
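A minimal sketch of the partial $\chi^2$ idea (my own illustration with synthetic data, not Harrell's code): fit the full logistic model by maximum likelihood, refit with each predictor dropped, and take twice the log-likelihood difference as that predictor's partial likelihood-ratio statistic.

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(beta, X, y):
    # Negative log-likelihood of a logistic regression model
    eta = X @ beta
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

def max_llf(X, y):
    # Maximised log-likelihood (small smooth convex problem, BFGS is fine)
    res = minimize(neg_log_lik, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return -res.fun

# Synthetic data: x0 strong, x1 weak, x2 irrelevant (made-up coefficients)
rng = np.random.default_rng(0)
n = 2000
X = rng.standard_normal((n, 3))
p = 1.0 / (1.0 + np.exp(-(2.0 * X[:, 0] + 0.5 * X[:, 1])))
y = rng.binomial(1, p).astype(float)

X_full = np.column_stack([np.ones(n), X])
llf_full = max_llf(X_full, y)

lr_stats = []
for j in range(3):
    X_red = np.delete(X_full, j + 1, axis=1)  # drop predictor j, keep intercept
    lr_stats.append(2.0 * (llf_full - max_llf(X_red, y)))
    print("x%d: partial LR chi-square = %.1f" % (j, lr_stats[-1]))
```

With this setup the strong predictor dominates the partial statistics, the weak one contributes a smaller but still large value, and the irrelevant one hovers near its null $\chi^2_1$ distribution.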
|
Understanding which features were most important for logistic regression
The first thing to note is that you don't use logistic regression as a classifier. The fact that $Y$ is binary has absolutely nothing to do with using this maximum likelihood method to actually class
|
14,693
|
Understanding which features were most important for logistic regression
|
The short answer is that there isn't a single, "right" way to answer this question.
For the best review of the issues see Ulrike Groemping's papers, e.g., Estimators of Relative Importance in Linear Regression Based on Variance Decomposition. The options she discusses range from simple heuristics to sophisticated, CPU intensive, multivariate solutions.
http://prof.beuth-hochschule.de/fileadmin/prof/groemp/downloads/amstat07mayp139.pdf
Groemping proposes her own approach in an R package called RELAIMPO that's also worth reading.
https://cran.r-project.org/web/packages/relaimpo/relaimpo.pdf
One quick and dirty heuristic that I've used is to sum up the chi-squares (F values, t-statistics) associated with each parameter then repercentage the individual values with that sum. The result would be a metric of rankable relative importance.
That said, I've never been a fan of "standardized beta coefficients" although they are frequently recommended by the profession and widely used. Here's the problem with them: the standardization is univariate and external to the model solution. In other words, this approach does not reflect the conditional nature of the model's results.
|
Understanding which features were most important for logistic regression
|
The short answer is that there isn't a single, "right" way to answer this question.
For the best review of the issues see Ulrike Groemping's papers, e.g., Estimators of Relative Importance in
|
Understanding which features were most important for logistic regression
The short answer is that there isn't a single, "right" way to answer this question.
For the best review of the issues see Ulrike Groemping's papers, e.g., Estimators of Relative Importance in Linear Regression Based on Variance Decomposition. The options she discusses range from simple heuristics to sophisticated, CPU intensive, multivariate solutions.
http://prof.beuth-hochschule.de/fileadmin/prof/groemp/downloads/amstat07mayp139.pdf
Groemping proposes her own approach in an R package called RELAIMPO that's also worth reading.
https://cran.r-project.org/web/packages/relaimpo/relaimpo.pdf
One quick and dirty heuristic that I've used is to sum up the chi-squares (F values, t-statistics) associated with each parameter then repercentage the individual values with that sum. The result would be a metric of rankable relative importance.
That said, I've never been a fan of "standardized beta coefficients" although they are frequently recommended by the profession and widely used. Here's the problem with them: the standardization is univariate and external to the model solution. In other words, this approach does not reflect the conditional nature of the model's results.
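The "repercentage" heuristic mentioned above is just a normalisation; with hypothetical per-predictor statistics (illustrative numbers only, not from a real fit):

```python
# Hypothetical per-predictor chi-square (or F / t^2) values from some fit
chi2 = {"age": 24.0, "income": 12.0, "region": 4.0}

total = sum(chi2.values())
relative = {name: 100.0 * value / total for name, value in chi2.items()}
print(relative)  # age: 60.0, income: 30.0, region: 10.0
```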
|
Understanding which features were most important for logistic regression
The short answer is that there isn't a single, "right" way to answer this question.
For the best review of the issues see Ulrike Groemping's papers, e.g., Estimators of Relative Importance in
|
14,694
|
Understanding which features were most important for logistic regression
|
A fairly robust way of doing this would be to try fitting the model N times where N is the number of features. Each time use N-1 of the features and leave one feature out. Then you can use your favourite validation metric to measure how much the inclusion or exclusion of each feature affects the performance of the model. Depending on the number of features you have this may be computationally expensive.
|
Understanding which features were most important for logistic regression
|
A fairly robust way of doing this would be to try fitting the model N times where N is the number of features. Each time use N-1 of the features and leave one feature out. Then you can use your favour
|
Understanding which features were most important for logistic regression
A fairly robust way of doing this would be to try fitting the model N times where N is the number of features. Each time use N-1 of the features and leave one feature out. Then you can use your favourite validation metric to measure how much the inclusion or exclusion of each feature affects the performance of the model. Depending on the number of features you have this may be computationally expensive.
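A sketch of that leave-one-feature-out scheme (my own illustration using scikit-learn and synthetic data; the estimator and scoring metric are arbitrary choices):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic data: 5 features, 3 informative (arbitrary illustrative setup)
X, y = make_classification(n_samples=500, n_features=5, n_informative=3,
                           n_redundant=0, random_state=0)

base = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()
drops = []
for j in range(X.shape[1]):
    X_reduced = np.delete(X, j, axis=1)
    score = cross_val_score(LogisticRegression(max_iter=1000), X_reduced, y, cv=5).mean()
    drops.append(base - score)  # accuracy lost by leaving feature j out
    print("feature %d: drop in CV accuracy = %+.3f" % (j, drops[-1]))
```

A large positive drop marks an important feature; a drop near zero (or negative) suggests the feature is dispensable for this model.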
|
Understanding which features were most important for logistic regression
A fairly robust way of doing this would be to try fitting the model N times where N is the number of features. Each time use N-1 of the features and leave one feature out. Then you can use your favour
|
14,695
|
Understanding which features were most important for logistic regression
|
You are correct in your observation that merely looking at the size of the estimated coefficient $|\hat{\beta_j}|$ is not very meaningful for the reason mentioned. But a simple adjustment is to multiply the coefficient estimate by the estimated standard deviation of the predictor $|\hat{\beta_j}| \hat{\sigma}_j$ and use this as a measure of importance. This is sometimes called a standardized beta coefficient and in logistic regression it represents the change in the estimated log odds of success caused by a one standard deviation change in $x_j$. One issue with this is that it breaks down when you're no longer dealing with numeric predictors.
Regarding your last point, of course it's possible that a variable might contribute a lot to the estimated log odds while not actually affecting the "true" log odds much, but I don't think this needs to be too much of a concern if we have any confidence in the procedure that produced the estimates.
|
Understanding which features were most important for logistic regression
|
You are correct in your observation that merely looking at the size of the estimated coefficient $|\hat{\beta_j}|$ is not very meaningful for the reason mentioned. But a simple adjustment is to multi
|
Understanding which features were most important for logistic regression
You are correct in your observation that merely looking at the size of the estimated coefficient $|\hat{\beta_j}|$ is not very meaningful for the reason mentioned. But a simple adjustment is to multiply the coefficient estimate by the estimated standard deviation of the predictor $|\hat{\beta_j}| \hat{\sigma}_j$ and use this as a measure of importance. This is sometimes called a standardized beta coefficient and in logistic regression it represents the change in the estimated log odds of success caused by a one standard deviation change in $x_j$. One issue with this is that it breaks down when you're no longer dealing with numeric predictors.
Regarding your last point, of course it's possible that a variable might contribute a lot to the estimated log odds while not actually affecting the "true" log odds much, but I don't think this needs to be too much of a concern if we have any confidence in the procedure that produced the estimates.
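A minimal numeric sketch of the adjustment (my own, with made-up coefficients): multiply each $|\hat{\beta}_j|$ by the sample standard deviation of its predictor.

```python
import numpy as np

rng = np.random.default_rng(1)
# Predictors on wildly different scales (made-up design matrix)
X = rng.standard_normal((200, 3)) * np.array([1.0, 10.0, 0.1])
# Hypothetical fitted log-odds coefficients (not from a real model)
beta_hat = np.array([0.8, 0.05, 9.0])

# |beta_j| * sd(x_j): change in log-odds per one-SD move in x_j
importance = np.abs(beta_hat) * X.std(axis=0)
print(importance)
```

Note how the raw coefficient sizes (9.0 versus 0.05) are misleading on their own: after rescaling, the predictors are far more comparable.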
|
Understanding which features were most important for logistic regression
You are correct in your observation that merely looking at the size of the estimated coefficient $|\hat{\beta_j}|$ is not very meaningful for the reason mentioned. But a simple adjustment is to multi
|
14,696
|
Understanding which features were most important for logistic regression
|
You are right about why you should not use the coefficients as a measure of relevance, but you absolutely can if you divide them by their standard error! If you have estimated the model with R, then it is already done for you! You can even remove the least important features from the model and see how it works.
A more heuristic approach to study how different changes in the variables alter the outcome is doing exactly that: try different inputs and study their estimated probabilities. However, as your model is quite simple, I would suggest against that.
|
Understanding which features were most important for logistic regression
|
You are right about why you should not use the coefficients as a measure of relevance, but you absolutely can if you divide them by their standard error! If you have estimated the model with R, then
|
Understanding which features were most important for logistic regression
You are right about why you should not use the coefficients as a measure of relevance, but you absolutely can if you divide them by their standard error! If you have estimated the model with R, then it is already done for you! You can even remove the least important features from the model and see how it works.
A more heuristic approach to study how different changes in the variables alter the outcome is doing exactly that: try different inputs and study their estimated probabilities. However, as your model is quite simple, I would suggest against that.
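What this answer describes is the Wald $z$-statistic that R's `summary()` already reports; given a coefficient vector and its standard errors (illustrative numbers, not from a real fit), the ranking is one division away:

```python
import numpy as np

# Hypothetical coefficients and standard errors from a fitted logistic model
coef = np.array([1.5, -0.4, 0.05])
se = np.array([0.5, 0.1, 0.5])

z = coef / se                      # Wald z-statistics
order = np.argsort(-np.abs(z))     # predictors from most to least "important"
print(z, order)
```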
|
Understanding which features were most important for logistic regression
You are right about why you should not use the coefficients as a measure of relevance, but you absolutely can if you divide them by their standard error! If you have estimated the model with R, then
|
14,697
|
Rand index calculation
|
I was pondering about the same, and I solved it like this. Suppose you have a co-occurrence matrix/contingency table where the rows are the ground truth clusters, and the columns are the clusters found by the clustering algorithm.
So, for the example in the book, it would look like:
| 1 | 2 | 3
--+---+---+---
x | 5 | 1 | 2
--+---+---+---
o | 1 | 4 | 0
--+---+---+---
◊ | 0 | 1 | 3
Now, you can very easily compute the TP + FP by taking the sum per column and 'choose 2' over all those values. So the sums are [6, 6, 5] and you do '6 choose 2' + '6 choose 2' + '5 choose 2'.
Now, indeed, similarly, you can get TP + FN by taking the sum over the rows (so, that is [8, 5, 4] in the example above), apply 'choose 2' over all those values, and take the sum of that.
The TP's themselves can be calculated by applying 'choose 2' to every cell in the matrix and taking the sum of everything (assuming that '1 choose 2' is 0).
In fact, here is some Python code that does exactly that:
import numpy as np
from scipy.special import comb  # scipy.misc.comb was removed in newer SciPy

# There is a comb function for Python which does 'n choose k'
# only you can't apply it to an array right away when exact=True
# So here we vectorize it...
def myComb(a, b):
    return comb(a, b, exact=True)

vComb = np.vectorize(myComb)

def get_tp_fp_tn_fn(cooccurrence_matrix):
    tp_plus_fp = vComb(cooccurrence_matrix.sum(0, dtype=int), 2).sum()
    tp_plus_fn = vComb(cooccurrence_matrix.sum(1, dtype=int), 2).sum()
    tp = vComb(cooccurrence_matrix.astype(int), 2).sum()
    fp = tp_plus_fp - tp
    fn = tp_plus_fn - tp
    tn = comb(cooccurrence_matrix.sum(), 2) - tp - fp - fn
    return [tp, fp, tn, fn]

if __name__ == "__main__":
    # The co-occurrence matrix from the example in
    # Introduction to Information Retrieval (Manning, Raghavan & Schutze, 2009)
    # also available on:
    # http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html
    cooccurrence_matrix = np.array([[5, 1, 2], [1, 4, 0], [0, 1, 3]])

    # Get the stats
    tp, fp, tn, fn = get_tp_fp_tn_fn(cooccurrence_matrix)
    print("TP: %d, FP: %d, TN: %d, FN: %d" % (tp, fp, tn, fn))

    # Print the measures:
    print("Rand index: %f" % (float(tp + tn) / (tp + fp + fn + tn)))
    precision = float(tp) / (tp + fp)
    recall = float(tp) / (tp + fn)
    print("Precision : %f" % precision)
    print("Recall    : %f" % recall)
    print("F1        : %f" % ((2.0 * precision * recall) / (precision + recall)))
If I run it I get:
$ python testCode.py
TP: 20, FP: 20, TN: 72, FN: 24
Rand index: 0.676471
Precision : 0.500000
Recall : 0.454545
F1 : 0.476190
I actually didn't check any other examples than this one, so I hope I did it right.... ;-)
|
Rand index calculation
|
I was pondering about the same, and I solved it like this. Suppose you have a co-occurrence matrix/contingency table where the rows are the ground truth clusters, and the columns are the clusters foun
|
Rand index calculation
I was pondering about the same, and I solved it like this. Suppose you have a co-occurrence matrix/contingency table where the rows are the ground truth clusters, and the columns are the clusters found by the clustering algorithm.
So, for the example in the book, it would look like:
| 1 | 2 | 3
--+---+---+---
x | 5 | 1 | 2
--+---+---+---
o | 1 | 4 | 0
--+---+---+---
◊ | 0 | 1 | 3
Now, you can very easily compute the TP + FP by taking the sum per column and 'choose 2' over all those values. So the sums are [6, 6, 5] and you do '6 choose 2' + '6 choose 2' + '5 choose 2'.
Now, indeed, similarly, you can get TP + FN by taking the sum over the rows (so, that is [8, 5, 4] in the example above), apply 'choose 2' over all those values, and take the sum of that.
The TP's themselves can be calculated by applying 'choose 2' to every cell in the matrix and taking the sum of everything (assuming that '1 choose 2' is 0).
In fact, here is some Python code that does exactly that:
import numpy as np
from scipy.special import comb  # scipy.misc.comb was removed in newer SciPy

# There is a comb function for Python which does 'n choose k'
# only you can't apply it to an array right away when exact=True
# So here we vectorize it...
def myComb(a, b):
    return comb(a, b, exact=True)

vComb = np.vectorize(myComb)

def get_tp_fp_tn_fn(cooccurrence_matrix):
    tp_plus_fp = vComb(cooccurrence_matrix.sum(0, dtype=int), 2).sum()
    tp_plus_fn = vComb(cooccurrence_matrix.sum(1, dtype=int), 2).sum()
    tp = vComb(cooccurrence_matrix.astype(int), 2).sum()
    fp = tp_plus_fp - tp
    fn = tp_plus_fn - tp
    tn = comb(cooccurrence_matrix.sum(), 2) - tp - fp - fn
    return [tp, fp, tn, fn]

if __name__ == "__main__":
    # The co-occurrence matrix from the example in
    # Introduction to Information Retrieval (Manning, Raghavan & Schutze, 2009)
    # also available on:
    # http://nlp.stanford.edu/IR-book/html/htmledition/evaluation-of-clustering-1.html
    cooccurrence_matrix = np.array([[5, 1, 2], [1, 4, 0], [0, 1, 3]])

    # Get the stats
    tp, fp, tn, fn = get_tp_fp_tn_fn(cooccurrence_matrix)
    print("TP: %d, FP: %d, TN: %d, FN: %d" % (tp, fp, tn, fn))

    # Print the measures:
    print("Rand index: %f" % (float(tp + tn) / (tp + fp + fn + tn)))
    precision = float(tp) / (tp + fp)
    recall = float(tp) / (tp + fn)
    print("Precision : %f" % precision)
    print("Recall    : %f" % recall)
    print("F1        : %f" % ((2.0 * precision * recall) / (precision + recall)))
If I run it I get:
$ python testCode.py
TP: 20, FP: 20, TN: 72, FN: 24
Rand index: 0.676471
Precision : 0.500000
Recall : 0.454545
F1 : 0.476190
I actually didn't check any other examples than this one, so I hope I did it right.... ;-)
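The counts above can be sanity-checked by brute force (a quick sketch of my own, not part of the original answer): enumerate every pair of the 17 points from the book's table and classify it directly.

```python
from itertools import combinations

# (cluster, class) for the 17 points in the book's example table
points = ([(1, "x")] * 5 + [(1, "o")] * 1 +
          [(2, "x")] * 1 + [(2, "o")] * 4 + [(2, "d")] * 1 +
          [(3, "x")] * 2 + [(3, "d")] * 3)

tp = fp = fn = tn = 0
for (c1, k1), (c2, k2) in combinations(points, 2):
    if c1 == c2 and k1 == k2:
        tp += 1          # same cluster, same class
    elif c1 == c2:
        fp += 1          # same cluster, different class
    elif k1 == k2:
        fn += 1          # different cluster, same class
    else:
        tn += 1          # different cluster, different class

print(tp, fp, tn, fn)                    # 20 20 72 24
print((tp + tn) / (tp + fp + tn + fn))   # Rand index ~ 0.676
```

This agrees pair-for-pair with the contingency-table shortcut, which is reassuring since the brute-force version makes no combinatorial assumptions at all.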
|
Rand index calculation
I was pondering about the same, and I solved it like this. Suppose you have a co-occurrence matrix/contingency table where the rows are the ground truth clusters, and the columns are the clusters foun
|
14,698
|
Rand index calculation
|
After having studied the other answers in this thread, here is my Python implementation, which takes arrays as inputs, sklearn-style:
import numpy as np
from scipy.special import comb  # scipy.misc.comb was removed in newer SciPy

def rand_index_score(clusters, classes):
    tp_plus_fp = comb(np.bincount(clusters), 2).sum()
    tp_plus_fn = comb(np.bincount(classes), 2).sum()
    A = np.c_[(clusters, classes)]
    tp = sum(comb(np.bincount(A[A[:, 0] == i, 1]), 2).sum()
             for i in set(clusters))
    fp = tp_plus_fp - tp
    fn = tp_plus_fn - tp
    tn = comb(len(A), 2) - tp - fp - fn
    return (tp + tn) / (tp + fp + fn + tn)
In [319]: clusters
Out[319]: [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
In [320]: classes
Out[320]: [0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 2, 1, 0, 2, 2, 2, 0]
In [321]: rand_index_score(clusters, classes)
Out[321]: 0.67647058823529416
|
Rand index calculation
|
After having studied the other answers in this thread, here is my Python implementation, which takes arrays as inputs, sklearn-style:
import numpy as np
from scipy.special import comb
def rand_index_sco
|
Rand index calculation
After having studied the other answers in this thread, here is my Python implementation, which takes arrays as inputs, sklearn-style:
import numpy as np
from scipy.special import comb  # scipy.misc.comb was removed in newer SciPy

def rand_index_score(clusters, classes):
    tp_plus_fp = comb(np.bincount(clusters), 2).sum()
    tp_plus_fn = comb(np.bincount(classes), 2).sum()
    A = np.c_[(clusters, classes)]
    tp = sum(comb(np.bincount(A[A[:, 0] == i, 1]), 2).sum()
             for i in set(clusters))
    fp = tp_plus_fp - tp
    fn = tp_plus_fn - tp
    tn = comb(len(A), 2) - tp - fp - fn
    return (tp + tn) / (tp + fp + fn + tn)
In [319]: clusters
Out[319]: [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
In [320]: classes
Out[320]: [0, 0, 1, 0, 0, 0, 0, 1, 1, 1, 2, 1, 0, 2, 2, 2, 0]
In [321]: rand_index_score(clusters, classes)
Out[321]: 0.67647058823529416
|
Rand index calculation
After having studied the other answers in this thread, here is my Python implementation, which takes arrays as inputs, sklearn-style:
import numpy as np
from scipy.special import comb
def rand_index_sco
|
14,699
|
Rand index calculation
|
I am not quite sure myself, but this is how I did the TN value:
TN = C(7,2) + C(10,2) + C(4,2)
C(7,2) - Cluster 1 - the test says "x", so count those that are NOT x (and are correctly clustered in clusters 2 & 3),
i.e. 4 "o"s + 3 "d"s (diamonds) = 7, giving C(7,2)
C(10,2) - Cluster 2: count those that are NOT "o" and are correctly clustered in clusters 1 and 3,
i.e. 5 "x" + (2 "x" + 3 "d") = 10, giving C(10,2)
C(4,2) - Cluster 3: count those that are NOT "x" and NOT "d" (diamond-shaped elements) and are correctly clustered in clusters 1 & 2,
i.e. 4 "o"s in cluster 2 = 4, giving C(4,2)
TN = C(7,2) + C(10,2) + C(4,2) = 21 + 45 + 6 = 72.
Then FN is:
FN = C(17,2) - (TP+FP) - TN = 136 - 40 - 72 = 24. (17 = total number of documents)
|
Rand index calculation
|
I am not quite sure myself, but this is how I did the TN value:
TN = C(7,2) + C(10,2) + C(4,2)
C(7,2) - Cluster 1 - the test says "x", so count those that are NOT x (and are correctly clustered in clusters 2 &
|
Rand index calculation
I am not quite sure myself, but this is how I did the TN value:
TN = C(7,2) + C(10,2) + C(4,2)
C(7,2) - Cluster 1 - the test says "x", so count those that are NOT x (and are correctly clustered in clusters 2 & 3),
i.e. 4 "o"s + 3 "d"s (diamonds) = 7, giving C(7,2)
C(10,2) - Cluster 2: count those that are NOT "o" and are correctly clustered in clusters 1 and 3,
i.e. 5 "x" + (2 "x" + 3 "d") = 10, giving C(10,2)
C(4,2) - Cluster 3: count those that are NOT "x" and NOT "d" (diamond-shaped elements) and are correctly clustered in clusters 1 & 2,
i.e. 4 "o"s in cluster 2 = 4, giving C(4,2)
TN = C(7,2) + C(10,2) + C(4,2) = 21 + 45 + 6 = 72.
Then FN is:
FN = C(17,2) - (TP+FP) - TN = 136 - 40 - 72 = 24. (17 = total number of documents)
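The binomial coefficients in that derivation are easy to verify with Python's standard library (my own check, not part of the answer):

```python
from math import comb

# C(7,2) + C(10,2) + C(4,2) = 21 + 45 + 6
tn = comb(7, 2) + comb(10, 2) + comb(4, 2)
print(tn)  # 72
```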
|
Rand index calculation
I am not quite sure myself, but this is how I did the TN value:
TN = C(7,2) + C(10,2) + C(4,2)
C(7,2) - Cluster 1 - the test says "x", so count those that are NOT x (and are correctly clustered in clusters 2 &
|
14,700
|
Rand index calculation
|
Taking the example of another question:
| 1 | 2 | 3
--+---+---+---
x | 5 | 1 | 2
--+---+---+---
o | 1 | 4 | 0
--+---+---+---
◊ | 0 | 1 | 3
The reasonable answer for FN:
FN = (c(8,2)-c(5,2)-c(2,2))+(c(5,2)-c(4,2))+(c(4,2)-c(3,2))=24
Explanation:
(c(8,2)-c(5,2)-c(2,2))
choose 2 from the 8 'x's (class a), minus the combinations of the same class within the same cluster ( c(5,2) for cluster 1 and c(2,2) for cluster 3 ),
(c(5,2)-c(4,2))
choose 2 from the 5 'o's (class b), minus the combinations of the same class within the same cluster ( c(4,2) for cluster 2 )
(c(4,2)-c(3,2))
choose 2 from the 4 '◊'s (class c), minus the combinations of the same class within the same cluster ( c(3,2) for cluster 3 )
I derived it like this.
|
Rand index calculation
|
Taking the example of another question:
| 1 | 2 | 3
--+---+---+---
x | 5 | 1 | 2
--+---+---+---
o | 1 | 4 | 0
--+---+---+---
◊ | 0 | 1 | 3
The reasonable answer for FN:
FN = (c(8,2)-c(5,2)-c(2,2))
|
Rand index calculation
Taking the example of another question:
| 1 | 2 | 3
--+---+---+---
x | 5 | 1 | 2
--+---+---+---
o | 1 | 4 | 0
--+---+---+---
◊ | 0 | 1 | 3
The reasonable answer for FN:
FN = (c(8,2)-c(5,2)-c(2,2))+(c(5,2)-c(4,2))+(c(4,2)-c(3,2))=24
Explanation:
(c(8,2)-c(5,2)-c(2,2))
choose 2 from the 8 'x's (class a), minus the combinations of the same class within the same cluster ( c(5,2) for cluster 1 and c(2,2) for cluster 3 ),
(c(5,2)-c(4,2))
choose 2 from the 5 'o's (class b), minus the combinations of the same class within the same cluster ( c(4,2) for cluster 2 )
(c(4,2)-c(3,2))
choose 2 from the 4 '◊'s (class c), minus the combinations of the same class within the same cluster ( c(3,2) for cluster 3 )
I derived it like this.
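The arithmetic in that FN expression can be checked directly with `math.comb` (my own verification sketch):

```python
from math import comb

fn = ((comb(8, 2) - comb(5, 2) - comb(2, 2))   # 28 - 10 - 1 = 17
      + (comb(5, 2) - comb(4, 2))              # 10 - 6  = 4
      + (comb(4, 2) - comb(3, 2)))             # 6  - 3  = 3
print(fn)  # 24
```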
|
Rand index calculation
Taking the example of another question:
| 1 | 2 | 3
--+---+---+---
x | 5 | 1 | 2
--+---+---+---
o | 1 | 4 | 0
--+---+---+---
◊ | 0 | 1 | 3
The reasonable answer for FN:
FN = (c(8,2)-c(5,2)-c(2,2))
|