idx | question | answer
|---|---|---|
33,801 | Find distribution and transform to normal distribution | Any continuous distribution can be turned into a normal distribution through a process called Gaussianization (Chen & Gopinath, 2001). For univariate distributions, Gaussianization is simple. If a random variable $Y$ has cumulative distribution function (CDF) $F_Y$ and $\Phi$ is the CDF of a standard normal, then
$$X = \Phi^{-1}(F_Y(Y))$$
will have a standard normal distribution. This is easy to see, since the CDF of $X$ is
$$ F_X(x) = P(X \leq x) = P(\Phi^{-1}(F_Y(Y)) \leq x) = P(Y \leq F_Y^{-1}(\Phi(x))) = F_Y(F_Y^{-1}(\Phi(x))) = \Phi(x).$$
If $Y$ is exponentially distributed with rate $\lambda$, then the data could be transformed via
$$X = \Phi^{-1}\left(1 - e^{-\lambda Y} \right),$$
which looks similar to a logarithm.
I don't use R, but I'm sure you can find implementations of the inverse CDF (also known as quantile function) of the normal, $\Phi^{-1}$.
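In R, the transformation is just a composition of the built-in CDF and quantile functions. A minimal sketch, with a simulated exponential sample standing in for real data:

```r
# Gaussianize exponential data: X = Phi^{-1}(F_Y(Y)) = qnorm(pexp(y, rate)).
set.seed(42)
lambda <- 2
y <- rexp(1e5, rate = lambda)       # simulated "observed" data
x <- qnorm(pexp(y, rate = lambda))  # same as qnorm(1 - exp(-lambda * y))
c(mean(x), sd(x))                   # approximately 0 and 1
```

With an estimated rather than known rate one would plug in $\hat\lambda = 1/\bar{y}$, and the result is then only approximately standard normal.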
33,802 | Find distribution and transform to normal distribution | How can I find out which distribution this is?
Here you can use the R package fitdistrplus. It reports fitting criteria such as AIC and BIC, and can fit candidate distributions such as the gamma or the normal. The available estimation methods are:
MAXIMUM LIKELIHOOD ESTIMATION
MOMENT MATCHING ESTIMATION
QUANTILE MATCHING ESTIMATION
MAXIMUM GOODNESS-OF-FIT ESTIMATION (Goodness-of-fit statistics and Goodness-of-fit criteria)
Then, finally, among several theoretical models you will find the one that best resembles your observed data.
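As a sketch of this workflow (simulated gamma data standing in for the real series; `fitdist` and `gofstat` are from fitdistrplus):

```r
library(fitdistrplus)
set.seed(1)
x <- rgamma(500, shape = 2, rate = 1)   # stand-in for the observed data
fits <- list(gamma = fitdist(x, "gamma", method = "mle"),
             lnorm = fitdist(x, "lnorm", method = "mle"),
             norm  = fitdist(x, "norm",  method = "mle"))
gofstat(fits)                    # AIC, BIC, and goodness-of-fit statistics
sapply(fits, function(f) f$aic)  # smallest AIC suggests the best candidate
```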
And how can I transform the data to a normal distribution?
Here you can use the Box-Cox transform:
Box_Cox_tran <- function(x, lambda, jacobian.adjusted = FALSE)
{
  bc1 <- function(x, lambda)
  {
    if (any(x[!is.na(x)] <= 0))
      stop("First argument must be strictly positive.")
    z <- if (abs(lambda) <= 1e-06)
      log(x)
    else ((x^lambda) - 1)/lambda
    if (jacobian.adjusted == TRUE) {
      z * (exp(mean(log(x), na.rm = TRUE)))^(1 - lambda)
    }
    else z
  }
  out <- x
  out <- if (is.matrix(out) | is.data.frame(out)) {
    if (is.null(colnames(out)))
      colnames(out) <- paste("Z", 1:dim(out)[2], sep = "")
    for (j in 1:ncol(out)) {
      out[, j] <- bc1(out[, j], lambda[j])
    }
    colnames(out) <- paste(colnames(out), round(lambda, 2),
                           sep = "^")
    out
  }
  else bc1(out, lambda)
  out
}
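A quick standalone sanity check of the core mapping (the transform is reimplemented inline here so the snippet runs by itself): $\lambda = 0$ falls back to the logarithm, and $\lambda = 1$ is just a shift of the raw data.

```r
# Minimal Box-Cox core, checked at its two reference values of lambda.
bc <- function(x, lambda) {
  if (abs(lambda) <= 1e-06) log(x) else (x^lambda - 1) / lambda
}
x <- c(1, 2, 4, 8)
all.equal(bc(x, 0), log(x))  # TRUE: lambda = 0 gives log(x)
all.equal(bc(x, 1), x - 1)   # TRUE: lambda = 1 gives x - 1
```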
Here is my working example:
# ---------------------------------------------------------------------------------------------------------------------------
# Objective three starts Here
# (3)= Bivariate modelling of annual maxima using traditional approach
# a) First transform observed seasonal maxima into a normal distribution using Box-Cox transformations (x to z)
# b) Finally, estimate the Pearson coefficient using the traditional bivariate normal distribution
# ---------------------------------------------------------------------------------------------------------------------------
rm(list=ls())
Sys.setenv(LANGUAGE="en") # to set the language from Polish to English
setwd("C:/Users/sdebele/Desktop/From_oldcomp/Old_Computer/Seasonal_APP/Data/Data_Winter&Summer")
# Loading the required package here
library(MASS)
library(geoR)
require(scales)
require(plyr)
require(car)
library(ggplot2)
require(alr3)
library(reshape2)
library(nortest)
require(AID)
require(distr)
require(fBasics)
# -----------------------------------------------------------------------------------------------------------------------------
# Here the Box-Cox Transformations equations
# x(lambda) = (x^lambda - 1)/lambda, if lambda is not zero
# else log(x) if lambda=0
#--------------------------------------------------------------------------------------------------------------------------------
# Here is the data for six gauging stations of the dependent variable (51.12% to 89.85%)
filenames=c("ZAPALOW.txt","GORLICZYNA.txt","SARZYNA.txt","OSUCHY.txt","HARASIUKI.txt","RUDJASTKOWSKA.txt")
# ---------------------------------------------------------------------------------------------------------------------------
# (1)= The ZAPALOW hydrological gauging station starts here
# --------------------------------------------------------------------------------------------------------------------------------
ZAPALOW=read.table(file=filenames[1],head=T,sep="\t")
newZAPALOW <- na.omit(ZAPALOW) # to eliminate the missing values from the data set
Years=newZAPALOW$Year
Winter=newZAPALOW$Winter
Summer=newZAPALOW$Sumer
source("Box_Cox_Transfom.R") # R_script containing the tranformation equations
# estimation of lambda using AID R package
# boxcoxnc(Summer, method="ac", lam=seq(-2,2,0.01), plotit=TRUE, rep=30, p.method="BY")
# boxcoxnc(Winter, method="ac", lam=seq(-2,2,0.01), plotit=TRUE, rep=30, p.method="BY")
Trans_Win=boxcoxnc(Winter)
Trans_Sum=boxcoxnc(Summer)
Winter_trans=Box_Cox_tran(Winter,Trans_Win$result[1,1],jacobian.adjusted=T)
Summer_trans=Box_Cox_tran(Summer,Trans_Sum$result[1,1],jacobian.adjusted=T)
newZAPALOW[,4]=Winter_trans
newZAPALOW[,5]=Summer_trans
colnames(newZAPALOW)= c("Year","Winter " ,"Summer","Winter_Trans","Summer_Trans")
par(mfrow=c(2,2))
par("lwd"=2)
## Plot histogram with overlayed normal distribution.
hist(newZAPALOW[,4],main="",xlab="Discharge",freq=FALSE,col="lightblue")
curve(dnorm(x,mean=mean(newZAPALOW[,4]),sd=sd(newZAPALOW[,4])), add=TRUE, col="darkred",lwd=2)
qq.plot(newZAPALOW[,4], dist= "norm", col=palette()[1], ylab="Sample Quantiles",
main="Normal Probability Plot", pch=19) # note: qq.plot() was renamed qqPlot() in newer versions of car
#b <- mydata[,c(2,3)] # select interesting columns
result <- shapiro.test(newZAPALOW[,4]) # checking for normality test
result$p.value
ad.test(newZAPALOW[,4]) # checking for normality test
## Plot histogram with overlayed normal distribution.
hist(newZAPALOW[,5],main="",xlab="Discharge",freq=FALSE,col="lightblue")
curve(dnorm(x,mean=mean(newZAPALOW[,5]),sd=sd(newZAPALOW[,5])), add=TRUE, col="darkred",lwd=2)
qq.plot(newZAPALOW[,5], dist= "norm", col=palette()[1], ylab="Sample Quantiles",
main="Normal Probability Plot", pch=19)
result <- shapiro.test(newZAPALOW[,5]) # checking for normality test
result$p.value
ad.test(newZAPALOW[,5]) # checking for normality test
write.table(newZAPALOW, "newZAPALOW_trans.txt", sep="\t")
For sure this will be helpful for you.
33,803 | Proof that the probability of one RV being larger than $n-1$ others is $\frac{1}{n}$ | Here's a slightly more notational proof, since sometimes people feel squeamish about intuitive proofs like glen_b's comment. (Sometimes for good reason, since it's not necessarily immediately obvious that his proof doesn't apply to discrete distributions.)
Suppose that $X_i$ are distributed iid according to some distribution $D$.
Let $M_i$ be the event that $X_i = \max(X_1, \dots, X_n)$.
Clearly, at least one of the $M_i$ must hold, so
$\Pr(M_1 \cup \dots \cup M_n) = 1$.
But, using the inclusion-exclusion principle
\begin{align*}
\Pr(M_1 \cup \dots \cup M_n)
&= \sum_i \Pr(M_i) - \sum_{i < j} \Pr(M_i \cap M_j) + \sum_{i < j < k} \Pr(M_i \cap M_j \cap M_k) - \dots
\end{align*}
If $D$ is continuous, then $\Pr(M_i \cap M_j) = 0$ for all $i \ne j$; all the latter terms drop. Also, since the $X_i$ are identically distributed clearly $\Pr(M_1) = \Pr(M_i)$ for all $i$. Thus
$$ \Pr(M_1 \cup \dots \cup M_n) = \sum_i \Pr(M_i) = n \Pr(M_1) = 1,$$
so $\Pr(M_1) = \frac{1}{n}$.
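The continuous case is easy to check by Monte Carlo (a sketch; any continuous distribution works in place of the normal):

```r
# Estimate P(X_n = max(X_1, ..., X_n)) for iid continuous draws.
set.seed(1)
n <- 4
reps <- 1e5
hits <- replicate(reps, which.max(rnorm(n)) == n)  # is the last draw the max?
mean(hits)  # close to 1/n = 0.25
```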
If $D$ is not continuous, the higher terms don't drop out. Taking @Yair Daon's example where $X_i$ is identically 1, every $M_i$ always holds, and the sum becomes
$$
\Pr(M_1 \cup \dots \cup M_n) = \sum_i 1 - \sum_{i<j} 1 + \sum_{i<j<k} 1 - \dots
= \sum_{k=1}^n (-1)^{k+1} \binom{n}{k} = 1
.$$
33,804 | Proof that the probability of one RV being larger than $n-1$ others is $\frac{1}{n}$ | For absolutely continuous random variables, this has a nice-looking proof.
We have an i.i.d. sample characterized by density $f$ and distribution function $F$. To avoid subscripts, denote by $Y \equiv X_{(n-1)}$ the maximum of the subsample of size $n-1$, and by $W \equiv X_n$ the $n$-th draw. Being a maximum order statistic, $Y$ has density function $f_Y(y) = (n-1)f(y)[F(y)]^{n-2}$. We want to calculate the probability that the $n$-th draw will be the maximum (we do not know the values of any draw),
$$P(Y \leq W) = \int_{-\infty}^{\infty} \int_{-\infty}^w f_{WY}(w,y){\rm d}y{\rm d}w$$
$$=\int_{-\infty}^{\infty} \int_{-\infty}^w f(w) f_Y(y){\rm d}y{\rm d}w$$
the decomposition of the joint density due to independence. $f_Y(y)$ is not a simple density, so we change the order of integration
$$P(Y \leq W) =\int_{-\infty}^{\infty} f_Y(y)\int_y^{\infty} f(w) {\rm d}w{\rm d}y$$ $$=\int_{-\infty}^{\infty} f_Y(y)[1-F(y)] {\rm d}y = 1-\int_{-\infty}^{\infty}f_Y(y)F(y) {\rm d}y$$
since we have integrated the density of $Y$ over the whole support. Writing out this density for the remaining integral we have
$$\int_{-\infty}^{\infty}f_Y(y)F(y) {\rm d}y = \int_{-\infty}^{\infty}(n-1)f(y)[F(y)]^{n-2}F(y){\rm d}y $$
$$=\frac {n-1}{n}\int_{-\infty}^{\infty}nf(y)[F(y)]^{n-1}{\rm d}y = \frac {n-1}{n}$$
since the integrand has become the density function of the maximum order statistic from a sample of size $n$, and so integrated over the whole support, equals unity too.
So,
$$P(Y \leq W) = 1- \frac {n-1}{n} = \frac 1n$$
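The key step, that $n f(y)[F(y)]^{n-1}$ integrates to one over the support, can be verified numerically (a sketch using a standard normal for $f$ and $F$):

```r
# The density of the maximum of n iid draws integrates to 1 over the support.
n <- 5
f_max <- function(y) n * dnorm(y) * pnorm(y)^(n - 1)
integrate(f_max, -Inf, Inf)$value  # 1, as used in the last step above
```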
33,805 | Proof that the probability of one RV being larger than $n-1$ others is $\frac{1}{n}$ | Assume that these are continuous random variables; then you'll be right on the money. Obviously they must also be independent, i.e. i.i.d. in this case.
Think of how many orderings of the sequence $X_1,\dots,X_n$ place the maximum in the last position: there are $(n-1)!$ of them. There are $n!$ orderings in total, so your probability is $\frac{(n-1)!}{n!} = \frac{1}{n}$.
For discrete distributions your claim will not be right, as @YairDaon showed.
33,806 | panel data - within-group estimate - individual fixed effects retrieved | You can and should use a well-specified random effects model. Always.
The Hausman test is said to suggest fixed effects models, but can and should be viewed "as a standard Wald test for the omission of the variables $\widetilde{\mathbf{X}}$" (Baltagi 2008, §4.3), where $\widetilde{\mathbf{X}}$ is a matrix of deviations from group means. If you do not omit $\widetilde{\mathbf{X}}$, a random effects model gives you the same population (fixed) effects as a fixed effects model, and the individual effects.
Mundlak (1978) argues that there is a unique estimator for the model
$$\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{Z}\boldsymbol{\alpha}+\mathbf{u}\qquad\qquad \mathbf{Z}=\mathbf{I}_{N}\otimes\mathbf{e}_T$$
where $\mathbf{I}_{N}$ is an identity matrix, $\otimes$ denotes Kronecker product, $\mathbf{e}_T$ is a vector of ones, so $\mathbf{Z}$ is the matrix of individual dummies, and $\boldsymbol{\alpha}=(\alpha_1,\dots,\alpha_N)$.
If $\alpha_i=\overline{\mathbf{X}}_{i*}\boldsymbol{\pi}+w_{i}$, $\boldsymbol{\pi}\ne\mathbf{0}$, averaging over $t$ for a given $i$, the model can be written as
$$\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{P}(\mathbf{X}\boldsymbol{\pi}+\mathbf{w})+\mathbf{u}\qquad\qquad
\mathbf{P}=\mathbf{I}_N\otimes\bar{\mathbf{J}}_T$$
where $\mathbf{P}$ is a matrix which averages the observations across time for each individual (Baltagi 2008, §2.1). Under the fixed effects model, the within estimator is
$$\hat{\boldsymbol{\beta}}_{w}=(\mathbf{X'QX})^{-1}\mathbf{X'Qy}\tag{1}$$
where $\mathbf{Q}=\mathbf{I}-\mathbf{P}$ is a matrix which obtains the deviations from individual means. Mundlak argues that under the random effects model, to get the same estimates the estimator should be
$$\begin{bmatrix} \hat{\boldsymbol{\beta}} \\ \hat{\boldsymbol{\pi}}\end{bmatrix}=
\left(\begin{bmatrix}\mathbf{X}' \\ \mathbf{X'P}\end{bmatrix}\boldsymbol{\Sigma}^{-1}\begin{bmatrix}\mathbf{X}&\mathbf{XP} \end{bmatrix}\right)^{-1}\begin{bmatrix}\mathbf{X}' \\ \mathbf{X'P} \end{bmatrix}\boldsymbol{\Sigma}^{-1}\mathbf{y}\tag{2}$$
where $\boldsymbol{\Sigma}$ is the covariance matrix of the error term,
while the "usual" estimator (the so-called "Balestra-Nerlove estimator") is
$$\hat{\boldsymbol{\beta}}=(\mathbf{X}'\boldsymbol{\Sigma}^{-1}\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{\Sigma}^{-1}\mathbf{y}$$
which is biased. According to Mundlak, since $(1)$ and $(2)$ obtain the same estimates for $\boldsymbol{\beta}$, $(2)$ is the within estimator, i.e. $(1)$ is the unique estimator and does not depend on the knowledge of the variance components.
However, the models
$$\begin{align}
\mathbf{y}&=\mathbf{X}\boldsymbol{\beta}+\mathbf{P}(\mathbf{X}\boldsymbol{\pi}+\mathbf{w})+\mathbf{u}\tag{FE} \\
\mathbf{y}&=\mathbf{X}\boldsymbol{\beta}+\mathbf{P}\mathbf{X}\boldsymbol{\pi}+(\mathbf{Pw}+\mathbf{u})\tag{RE}
\end{align}$$
are formally equivalent (Hsiao 2003, §4.3), so a random effects model obtains the same estimates ... as long as you do not omit $\widetilde{\mathbf{X}}$! Let's try.
Data generation (R code):
library(plm)  # plm() is used for the models below
set.seed(1234)
N <- 25 # individuals
T <- 5 # time
In <- diag(N) # identity matrix of order N
Int <- diag(N*T) # identity matrix of order N*T
Jt <- matrix(1, T, T) # matrix of ones of order T
Jtm <- Jt / T
P <- kronecker(In, Jtm) # averages the obs across time for each individual
s2a <- 0.3 # sigma^2_\alpha
s2u <- 0.6 # sigma^2_u
w <- rep(rnorm(N, 0, sqrt(s2a)), each = T)
u <- rnorm(N*T, 0, sqrt(s2u))
b <- c(1.5, -2)
p <- c(-0.7, 0.8)
X <- cbind(runif(N*T, 2, 5), runif(N*T, 4, 8))
XPX <- cbind(X, P %*% X) # [ X PX ]
y <- XPX %*% c(b,p) + (P %*% w + u) # y = Xb + PXp + Pw + u
ds <- data.frame(id=rep(1:N, each=T), wave=rep(1:T, N), y, split(X, col(X)))
Under a fixed effects model we get:
> fe.1 <- plm(y ~ X1 + X2, data=ds, model="within")
> summary(fe.1)$coefficients
Estimate Std. Error t-value Pr(>|t|)
X1 1.435987 0.07825464 18.35019 1.806239e-33
X2 -1.916447 0.06339342 -30.23100 1.757634e-51
while under a random effects model...
> re.1 <- plm(y ~ X1 + X2, data=ds, model="random")
> summary(re.1)$coefficients
Estimate Std. Error t-value Pr(>|t|)
(Intercept) 1.830633 0.51687109 3.541759 5.638216e-04
X1 1.405060 0.07927271 17.724390 1.505521e-35
X2 -1.874784 0.06372731 -29.418846 3.076414e-57
bias!
But what if we do not omit $\widetilde{\mathbf{X}}=\mathbf{QX}$?
> Q <- diag(N*T) - P
> X1.mean <- P %*% ds$X1
> X1.dev <- Q %*% ds$X1
> X2.mean <- P %*% ds$X2
> X2.dev <- Q %*% ds$X2
> re.2 <- plm(y ~ X1.mean + X1.dev + X2.mean + X2.dev, data=ds, model="random")
> summary(re.2)$coefficients
Estimate Std. Error t-value Pr(>|t|)
(Intercept) -0.04123108 2.30907450 -0.01785611 9.857833e-01
X1.mean 0.81279279 0.38146339 2.13072292 3.515287e-02
X1.dev 1.43598746 0.07824535 18.35236883 1.239171e-36
X2.mean -1.23071499 0.26379329 -4.66545216 8.072196e-06
X2.dev -1.91644653 0.06338590 -30.23458903 5.809240e-58
The estimates for X1.dev and X2.dev are equal to the within estimates for X1 and X2 (no room for Hausman tests!), and you get much more. You get what you need.
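The within/LSDV equivalence underlying these estimates can be seen with plain OLS, no plm needed. A sketch on simulated data (all names here are illustrative): regressing within-group deviations of $y$ on within-group deviations of $X$ reproduces the dummy-variable estimate exactly.

```r
# Within estimator via OLS on deviations from individual means.
set.seed(7)
N <- 20; T <- 5
id <- rep(1:N, each = T)
alpha <- rep(rnorm(N), each = T)     # individual effects
x <- rnorm(N * T) + alpha            # regressor correlated with the effects
y <- 1.5 * x + alpha + rnorm(N * T)
dev <- function(v) v - ave(v, id)    # the Q-transform: deviations from group means
b_within <- coef(lm(dev(y) ~ 0 + dev(x)))           # within estimate
b_lsdv <- coef(lm(y ~ 0 + x + factor(id)))["x"]     # LSDV estimate
all.equal(unname(b_within), unname(b_lsdv))  # TRUE
```

This is the Frisch-Waugh-Lovell theorem at work: partialling out the individual dummies is the same as demeaning within individuals.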
However this is just the tip of the iceberg. I recommend that you read at least Bafumi and Gelman (2006), Snijders and Berkhof (2008), Bell and Jones (2014).
References
Baltagi, Badi H. (2008), Econometric Analysis of Panel Data, John Wiley & Sons
Bafumi, Joseph and Andrew Gelman (2006), Fitting Multilevel Models When Predictors and Group Effects Correlate, http://www.stat.columbia.edu/~gelman/research/unpublished/Bafumi_Gelman_Midwest06.pdf
Bell, Andrew and Kelvyn Jones (2014), "Explaining Fixed Effects: Random Effects modelling of Time-Series Cross-Sectional and Panel Data", Political Science Research and Methods, http://dx.doi.org/10.7910/DVN/23415
Hsiao, Cheng (2003), Analysis of Panel Data, Cambridge University Press
Mundlak, Yair (1978), "On the Pooling of Time Series and Cross Section Data", Econometrica, 46(1), 69-85
Snijders, Tom A. B. and Johannes Berkhof (2008), "Diagnostic Checks for Multilevel Models", in: Jan de Leeuw and Erik Meijer (eds), Handbook of Multilevel Analysis, Springer, Chap. 3
The Hausman test is said to suggest fixed effects models, but can and should be viewed "as a standard Wald test for the omission o | panel data - within-group estimate - individual fixed effects retrieved
You can and should use a well-specified random effects model. Always.
The Hausman test is said to suggest fixed effects models, but can and should be viewed "as a standard Wald test for the omission of the variables $\widetilde{\mathbf{X}}$" (Baltagi 2008, §4.3), where $\widetilde{\mathbf{X}}$ is a matrix of deviations from group means. If you do not omit $\widetilde{\mathbf{X}}$, a random effects model gives you the same population (fixed) effects as a fixed effects model, and the individual effects.
Mundlak (1978) argues that there is a unique estimator for the model
$$\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{Z}\boldsymbol{\alpha}+\mathbf{u}\qquad\qquad \mathbf{Z}=\mathbf{I}_{N}\otimes\mathbf{e}_T$$
where $\mathbf{I}_{N}$ is an identity matrix, $\otimes$ denotes Kronecker product, $\mathbf{e}_T$ is a vector of ones, so $\mathbf{Z}$ is the matrix of individual dummies, and $\boldsymbol{\alpha}=(\alpha_1,\dots,\alpha_N)$.
If $\alpha_i=\overline{\mathbf{X}}_{i*}\boldsymbol{\pi}+w_{i}$, $\boldsymbol{\pi}\ne\mathbf{0}$, averaging over $t$ for a given $i$, the model can be written as
$$\mathbf{y}=\mathbf{X}\boldsymbol{\beta}+\mathbf{P}(\mathbf{X}\boldsymbol{\pi}+\mathbf{w})+\mathbf{u}\qquad\qquad
\mathbf{P}=\mathbf{I}_N\otimes\bar{\mathbf{J}}_T$$
where $\mathbf{P}$ is a matrix which averages the observations across time for each individual (Baltagi 2008, §2.1). Under the fixed effects model, the within estimator is
$$\hat{\boldsymbol{\beta}}_{w}=(\mathbf{X'QX})^{-1}\mathbf{X'Qy}\tag{1}$$
where $\mathbf{Q}=\mathbf{I}-\mathbf{P}$ is a matrix which obtains the deviations from individual means. Mundlak argues that under the random effects model, to get the same estimates the estimator should be
$$\begin{bmatrix} \hat{\boldsymbol{\beta}} \\ \hat{\boldsymbol{\pi}}\end{bmatrix}=
\left(\begin{bmatrix}\mathbf{X}' \\ \mathbf{X'P}\end{bmatrix}\boldsymbol{\Sigma}^{-1}\begin{bmatrix}\mathbf{X}&\mathbf{XP} \end{bmatrix}\right)^{-1}\begin{bmatrix}\mathbf{X}' \\ \mathbf{X'P} \end{bmatrix}\boldsymbol{\Sigma}^{-1}\mathbf{y}\tag{2}$$
where $\boldsymbol{\Sigma}^{-1}$ is the variance of the error term,
while the "usual" estimator (the so-called "Balestra-Nerlove estimator") is
$$\hat{\boldsymbol{\beta}}=(\mathbf{X}'\boldsymbol{\Sigma}^{-1}\mathbf{X})^{-1}\mathbf{X}'\boldsymbol{\Sigma}^{-1}\mathbf{y}$$
which is biased. According to Mundlak, since $(1)$ and $(2)$ obtain the same estimates for $\boldsymbol{\beta}$, $(2)$ is the within estimator, i.e. $(1)$ is the unique estimator and does not depend on the knowledge of the variance components.
However, the models
$$\begin{align}
\mathbf{y}&=\mathbf{X}\boldsymbol{\beta}+\mathbf{P}(\mathbf{X}\boldsymbol{\pi}+\mathbf{w})+\mathbf{u}\tag{FE} \\
\mathbf{y}&=\mathbf{X}\boldsymbol{\beta}+\mathbf{P}\mathbf{X}\boldsymbol{\pi}+(\mathbf{Pw}+\mathbf{u})\tag{RE}
\end{align}$$
are formally equivalent (Hsiao 2003, §4.3), so a random effects model obtains the same estimates ... as long as you do not omit $\widetilde{\mathbf{X}}$! Let's try.
Data generation (R code):
set.seed(1234)
N <- 25 # individuals
T <- 5 # time
In <- diag(N) # identity matrix of order N
Int <- diag(N*T) # identity matrix of order N*T
Jt <- matrix(1, T, T) # matrix of ones of order T
Jtm <- Jt / T
P <- kronecker(In, Jtm) # averages the obs across time for each individual
s2a <- 0.3 # sigma^2_\alpha
s2u <- 0.6 # sigma^2_u
w <- rep(rnorm(N, 0, sqrt(s2a)), each = T)
u <- rnorm(N*T, 0, sqrt(s2u))
b <- c(1.5, -2)
p <- c(-0.7, 0.8)
X <- cbind(runif(N*T, 2, 5), runif(N*T, 4, 8))
XPX <- cbind(X, P %*% X) # [ X PX ]
y <- XPX %*% c(b,p) + (P %*% w + u) # y = Xb + PXp + Pw + u
ds <- data.frame(id=rep(1:N, each=T), wave=rep(1:T, N), y, split(X, col(X)))
Under a fixed effects model we get:
> fe.1 <- plm(y ~ X1 + X2, data=ds, model="within")
> summary(fe.1)$coefficients
Estimate Std. Error t-value Pr(>|t|)
X1 1.435987 0.07825464 18.35019 1.806239e-33
X2 -1.916447 0.06339342 -30.23100 1.757634e-51
while under a random effects model...
> re.1 <- plm(y ~ X1 + X2, data=ds, model="random")
> summary(re.1)$coefficients
Estimate Std. Error t-value Pr(>|t|)
(Intercept) 1.830633 0.51687109 3.541759 5.638216e-04
X1 1.405060 0.07927271 17.724390 1.505521e-35
X2 -1.874784 0.06372731 -29.418846 3.076414e-57
bias!
But what if we do not omit $\widetilde{\mathbf{X}}=\mathbf{QX}$?
> Q <- diag(N*T) - P
> X1.mean <- P %*% ds$X1
> X1.dev <- Q %*% ds$X1
> X2.mean <- P %*% ds$X2
> X2.dev <- Q %*% ds$X2
> re.2 <- plm(y ~ X1.mean + X1.dev + X2.mean + X2.dev, data=ds, model="random")
> summary(re.2)$coefficients
Estimate Std. Error t-value Pr(>|t|)
(Intercept) -0.04123108 2.30907450 -0.01785611 9.857833e-01
X1.mean 0.81279279 0.38146339 2.13072292 3.515287e-02
X1.dev 1.43598746 0.07824535 18.35236883 1.239171e-36
X2.mean -1.23071499 0.26379329 -4.66545216 8.072196e-06
X2.dev -1.91644653 0.06338590 -30.23458903 5.809240e-58
The estimates for X1.dev and X2.dev are equal to the within estimates for X1 and X2 (no room for Hausman tests!), and you get much more. You get what you need.
However this is just the tip of the iceberg. I recommend that you read at least Bafumi and Gelman (2006), Snijders and Berkhof (2008), Bell and Jones (2014).
References
Baltagi, Badi H. (2008), Econometric Analysis of Panel Data, John Wiley & Sons
Bafumi, Joseph and Andrew Gelman (2006), Fitting Multilevel Models When Predictors and Group Effects Correlate, http://www.stat.columbia.edu/~gelman/research/unpublished/Bafumi_Gelman_Midwest06.pdf
Bell, Andrew and Kelvyn Jones (2014), "Explaining Fixed Effects: Random Effects modelling of Time-Series Cross-Sectional and Panel Data", Political Science Research and Methods, http://dx.doi.org/10.7910/DVN/23415
Hsiao, Cheng (2003), Analysis of Panel Data, Cambridge University Press
Mundlak, Yair (1978), "On the Pooling of Time Series and Cross Section Data", Econometrica, 46(1), 69-85
Snijders, Tom A. B. and Johannes Berkhof (2008), "Diagnostic Checks for Multilevel Models", in: Jan de Leeuw and Erik Meijer (eds), Handbook of Multilevel Analysis, Springer, Chap. 3
33,807 | panel data - within-group estimate - individual fixed effects retrieved | In addition to Andy W's answer, the procedure that was suggested to you is similar to the Fixed Effects Vector Decomposition (FEVD) proposed by Plümper and Troeger (2007). It's not quite the same, but very similar to their three-step method, which goes as follows:
1. estimate the unit fixed effects
2. decompose the fixed effects into the time-invariant factors and an error term
3. estimate 1. again by pooled OLS, including the time-invariant variables and the error term from 2.
This procedure was heavily criticized by Greene (2011) and Breusch et al. (2011), so I would be careful with this type of estimation strategy. The point about the lower/higher level effects mentioned by Andy W is one of the critique points in these two papers.
If it helps you, I have written another post in a related question on how to keep time-invariant variables in fixed effects regressions. I hope you will find this useful.
33,808 | panel data - within-group estimate - individual fixed effects retrieved | For 2, assuming that "individuals" are the cluster, no, you shouldn't cluster the standard errors on the first step, and the same logic then extends to your question 3. For 1, this is sometimes called the between effects estimator in economics. See a Stata FAQ on it, and Snijders and Bosker's Multilevel modeling book has a pretty brief section explaining it as well.
That being said, I personally see no reason for it in favor of random effects modeling. Like Andrew Gelman says, "If you get to the point of asking, just do it." All the Hausman test tells you is if the between estimators are equal to the within estimators, which is not a terribly interesting question in and of itself. Most study designs should dictate the use of fixed effects or random effects, and here it appears you are really interested in the random effects.
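For reference, the between effects estimator mentioned above is just OLS on the unit-level means; a minimal illustrative sketch (data and names made up):

```python
import numpy as np

rng = np.random.default_rng(2)
N, T = 100, 5
i = np.repeat(np.arange(N), T)

x = rng.normal(size=N * T)
y = 2.0 * x + rng.normal(size=N * T)

# Collapse to one observation per unit, then run OLS on the means.
x_bar = np.bincount(i, x) / T
y_bar = np.bincount(i, y) / T
B = np.column_stack([np.ones(N), x_bar])
b_between, *_ = np.linalg.lstsq(B, y_bar, rcond=None)
print(b_between[1])
```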
33,809 | When no model comparison, should I use REML vs ML? | When there is no model comparison, the difference between restricted
(or residual) maximum likelihood (REML) and maximum likelihood (ML)
is that REML can give you unbiased estimates of the variance
parameters. Recall that the ML estimate of a variance has a factor
$1/n$, whereas the unbiased estimate has $1/(n-p)$, where $n$ is the
sample size and $p$ is the number of mean parameters. So REML should
be used when you are interested in variance estimates and $n$ is not
large relative to $p$.
When there is model comparison, notice that REML cannot be used to
compare models with different mean structures, since REML transforms
the data and thus makes the likelihoods incomparable.
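The $1/n$ versus $1/(n-p)$ point is easy to see in the simplest case, an ordinary linear model with iid errors, where the REML variance estimate reduces to $\mathrm{RSS}/(n-p)$. A quick illustrative simulation:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 10, 4          # small n relative to p makes the bias visible
reps = 5000

ml = np.empty(reps)
reml = np.empty(reps)
for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = rng.normal(size=n)                      # true sigma^2 = 1, true betas = 0
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    ml[r] = rss / n                             # ML: biased by the factor (n - p)/n
    reml[r] = rss / (n - p)                     # REML-style: unbiased

print(ml.mean(), reml.mean())   # roughly 0.6 and 1.0
```

With $n=10$ and $p=4$ the ML estimate averages about $(n-p)/n = 0.6$ of the true variance, while the $1/(n-p)$ version is centered on the truth.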
33,810 | When no model comparison, should I use REML vs ML? | The restricted maximum likelihood (REML, aka RML) procedure separates the estimation of fixed and random parameters (Raudenbush & Bryk, 2002; Searle, Casella & McCulloch, 1992). Snijders and Bosker (2012) noted that when J-q-1 is equal or larger than 50 (J is the number of clusters and q is the number of level-2 predictors), the difference between ML and REML estimates are negligible. If J-q-1 is smaller than 50; ML estimates of the variance components are biased, generally downward (Hox, 2010).
Raudenbush and Bryk (2002) posit that ML estimates for level-2 variances and covariances will be smaller than REML by a factor of approximately (J-F)/J, where F is the total number of regression coefficients. In my experience, this adjustment works satisfactorily.
However, when deviance tests are the choice to compare models with different fixed effects but the same variance components, the REML deviance should not be used because it is a deviance for the variance components only. Instead the ML deviance should be used (Snijders & Bosker, 2012). If the models differ in both the fixed effects and the variance components, neither deviance can be used to conduct tests on the fixed effects.
33,811 | Is there any alternative to HMM? | There are some good answers here already, but I thought I'd chime in with one more, which has been used in areas related to gesture recognition.
This paper by Taylor, Hinton, and Roweis is similar to an HMM in the sense that it's a time series model with latent states, but:
1) the structure of the hidden states can be much more complex (many many more possible states) and
2) many of the connections are undirected (as in a conditional random field or a Markov random field) rather than directed.
Figure 2 shows a diagram of the basic model, and the authors have some great videos of what the model can learn on their websites.
33,812 | Is there any alternative to HMM? | HMMs are a special case of probabilistic graphical models (PGMs), which include a very broad range of more or less related (in terms of particular application) models.
There are at least two generic models that you could give a try:
Conditional Random Fields (CRF)
Bayesian networks
It is also worth noting that on the Coursera platform one can find a good introductory course regarding PGMs: https://www.coursera.org/course/pgm
"Neural networks", on the other hand, is quite a generic term which covers dozens of actual models. The most common understanding of this term, i.e. the multi-layer perceptron (also referred to as an artificial neural network or feedforward neural network), is a different concept, which is a regression method rather than an actual probabilistic model. On the other hand, there are some probabilistic versions of neural networks which can be used for similar tasks.
33,813 | Is there any alternative to HMM? | Recurrent neural networks are a discriminative model which can be used to solve many tasks that you'd typically use an HMM for. They are now (at least if the TIMIT benchmark is correct) the state of the art in speech recognition. They are successfully used in language modelling and many more areas. A good introductory text is Ilya Sutskever's PhD thesis.
33,814 | Is there any alternative to HMM? | I'm not sure if it qualifies under your criteria, but Kalman filters are much like HMMs with a continuous (Gaussian) latent state space.
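To make the analogy concrete, here is a minimal scalar Kalman filter; like the HMM forward recursion, it alternates a predict step and an observation update, but over a continuous Gaussian state (illustrative sketch, not from the source):

```python
import numpy as np

def kalman_1d(obs, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for a random-walk state observed with noise.
    q: state-noise variance, r: observation-noise variance."""
    x, p, est = x0, p0, []
    for z in obs:
        p = p + q                 # predict: state uncertainty grows
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update toward the observation
        p = (1 - k) * p
        est.append(x)
    return np.array(est)

rng = np.random.default_rng(4)
truth = np.cumsum(rng.normal(scale=0.1, size=500))   # latent random walk
obs = truth + rng.normal(scale=0.7, size=500)        # noisy measurements
est = kalman_1d(obs, q=0.01, r=0.49)
```

The filtered track should sit much closer to the latent walk than the raw observations do.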
33,815 | Is there any alternative to HMM? | You might be interested in looking into Echo State Networks, though they do not explicitly model transition probabilities between states. They are easy to implement and fast to train.
Here is an introductory article that you may find useful:
http://www.scholarpedia.org/article/Echo_state_network
33,816 | How can I sample from a distribution with incomputable CDF? | The CDF is readily invertible. A formula for the inversion leads to what has to be one of the simplest and most expedient possible solutions.
Begin by observing that the probability of outcome $k$, $0 \le k \le n$, is proportional to $e^{-b k}$. Thus, if we generate a uniform value $q$ between $0$ and $q_{\max}=\sum_{k=0}^{n} e^{-b k}$ = $(1 - e^{-b(n+1)})/(1 - e^{-b})$, we only need find the largest $k$ for which
$$q \ge \sum_{i=0}^{k} e^{-bi} = \frac{1 - e^{-(k+1)b}}{1 -e^{-b}}.$$
Simple algebra gives the solution
$$k = - \text{ceiling}\left(\frac{\log(1 - q (1-e^{-b}))}{b}\right).$$
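As a sanity check, the closed form agrees with brute-force inversion of the cumulative sums; a quick illustrative Python sketch:

```python
import numpy as np

b, n = 0.001, 3500
a = 1 - np.exp(-b)
q_max = (1 - np.exp(-b * (n + 1))) / a

rng = np.random.default_rng(5)
q = rng.uniform(0, q_max, size=100000)

# Closed form from the text: k = -ceiling(log(1 - q*a) / b)
k_formula = (-np.ceil(np.log(1 - q * a) / b)).astype(int)

# Brute force: smallest k whose cumulative weight reaches q
cum = np.cumsum(np.exp(-b * np.arange(n + 1)))
k_direct = np.searchsorted(cum, q)

print((k_formula == k_direct).mean())   # close to 1.0, up to floating-point ties
```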
Here is an R implementation constructed like all the other random-number generators: its first argument specifies how many iid values to generate and the rest of the arguments name the parameters ($b$ as b and $n$ as n.max):
rgeom.truncated <- function(n=1, b, n.max) {
a <- 1 - exp(-b)
q.max <- (1 - exp(-b*(n.max+1))) / a
q <- runif(n, 0, q.max)
return(-ceiling(log(1 - q*a) / b))
}
As an example of its use, let's generate a million random variates according to this distribution:
b <- 0.001
n.max <- 3500
n.sim <- 10^6
set.seed(17)
system.time(sim <- rgeom.truncated(n.sim, b,n.max))
($0.10$ seconds were needed.)
h <- hist(sim+1, probability=TRUE, breaks=50, xlab="Outcome+1")
pmf <- exp(-b * (0: n.max)); pmf <- pmf / sum(pmf)
lines(0:n.max, pmf, col="Red", lwd=2)
($1$ was added to each value in order to create a better histogram: R's hist procedure has an idiosyncrasy (=bug) in which the first bar is too high when the left endpoint is set at zero.) The red curve is the reference distribution that this simulation attempts to reproduce. Let's evaluate the goodness of fit with a chi-square test:
observed <- table(sim)
expected <- n.sim * pmf
chi.square <- (observed-expected)^2 / expected
pchisq(sum(chi.square), n.max, lower.tail=FALSE)
The p-value is $0.84$: a beautiful fit.
33,817 | How can I sample from a distribution with incomputable CDF? | You're dealing with a truncated geometric distribution with $p = 1-e^{-b}$. There are a variety of ways of approaching this.
I'd advise different options in different situations; some options would involve simulating from a geometric and regenerating when it's outside the range, taking the integer part of an appropriate truncated exponential (as here), or using any of several fast techniques tailored to discrete distributions over a finite range. Given that $n$ is large, taking the floor of a truncated exponential is likely to be relatively fast, but whether it's the best choice also depends on $b$.
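The first option (simulate from the untruncated geometric and redraw anything that lands above $n$) takes only a few lines; an illustrative Python sketch, efficient only when little mass lies above $n$:

```python
import numpy as np

def rtrunc_geom(size, b, n, rng):
    """Sample p(k) proportional to exp(-b*k) on {0,...,n} by rejection:
    draw Geometric(p = 1 - e^{-b}) and redraw values above n."""
    p = 1 - np.exp(-b)
    out = np.empty(size, dtype=np.int64)
    filled = 0
    while filled < size:
        draw = rng.geometric(p, size=size - filled) - 1   # NumPy's support starts at 1
        keep = draw[draw <= n]
        out[filled:filled + keep.size] = keep
        filled += keep.size
    return out

rng = np.random.default_rng(6)
s = rtrunc_geom(100000, b=0.5, n=10, rng=rng)
```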
Here's a related question on math.SE
Before I attempt specific suggestions, what's a typical range of values for $b$?
33,818 | How can I sample from a distribution with incomputable CDF? | First, note that $P(x)\propto e^{-bx}$ which, if $x$ were continuous, would be related to an exponential distribution. Then, what you can do is to simulate from a truncated exponential distribution and take the floor() (integer part) of the observations.
The cdf of a truncated exponential is
$$F(x;n,b)= \dfrac{1-e^{-bx}}{1-e^{-bn}}.$$
Then, if we make $F(x;n,b)=u$, we obtain that $x=-\dfrac{1}{b}\log[1-u(1-e^{-bn})]$. If $bn$ is large, then $e^{-bn}\approx 0$ which suggest to approximate $x\approx -\dfrac{1}{b}\log[1-u]$.
rweirdp <- function(ns,n,b){
u <- runif(ns)
samp <- - log(1-u*(1-exp(-n*b)))/b
return(floor(samp))
}
rweirdp(1000,10,1)
33,819 | How can I sample from a distribution with incomputable CDF? | A way to sample from the target distribution $p(k)\propto \exp\{-bk\}$ is to
run a Metropolis-Hastings experiment to determine the (interesting) support of the distribution, i.e. in which subset of $\{0,1,\ldots,n\}$ it concentrates;
metro=function(N,b,n){
x=sample(0:n,N,rep=TRUE)
for (t in 2:N){
x[t]=prop=x[t-1]+sample(c(-1,1),1)
if ((prop<0)||(prop>n)||(log(runif(1))>b*(x[t-1]-prop)))
x[t]=x[t-1]
}
return(x)
}
Use the support thus determined, $\{k_0,\ldots,k_1\}$ say, to compute the exact probabilities as $p(k)\propto \exp\{-bk+bk_0\}$ to avoid overflows.
Update: When thinking more about it, since $p(\cdot)$ is decreasing in $k$, the effective support of the distribution will always start at $k_0=0$. If $b$ is quite large, this support will end very quickly, in which case $n$ does not matter much, as large values of $k$ will never be visited. If $b$ is very small, the pdf is almost flat, which means that one can use a uniform distribution on $\{0,1,\ldots,n\}$ as an accept-reject proposal. And use logs in the acceptance step to avoid overflows.
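The accept-reject scheme just described, with a uniform proposal on $\{0,1,\ldots,n\}$ and the comparison done on the log scale, might look like this (illustrative Python sketch rather than R):

```python
import numpy as np

def ar_trunc(size, b, n, rng):
    """Accept-reject for p(k) proportional to exp(-b*k) on {0,...,n}
    with a uniform proposal.  The bound is attained at the mode k = 0,
    so we accept with probability exp(-b*k); comparing on the log scale
    avoids underflow for large b*k."""
    out = []
    while len(out) < size:
        k = rng.integers(0, n + 1, size=size)
        u = rng.uniform(size=size)
        out.extend(k[np.log(u) <= -b * k].tolist())
    return np.array(out[:size])

rng = np.random.default_rng(7)
s = ar_trunc(50000, b=0.0005, n=3500, rng=rng)
print(s.mean())
```

For small $b$ the acceptance rate stays high (here roughly one in two proposals is kept), so the scheme is practical in exactly the regime the update describes.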
33,820 | Is it reasonable to delete a large number of outliers from a dataset? | I would be more than suspicious if someone told me that 30% of my sample were outliers ...
Rather than blindly trusting a canned routine I would carefully analyze the data and try to find out why an outlier is an outlier. Is it a "bug" or a "feature"? Is it measurement error? Does your sample cover different sub-populations (mixture)?
Moreover, the detection of outliers involves the more or less arbitrary definition of a threshold, which separates "good" and "bad". You should assess if these thresholds are sensible. It could thus be a good idea to move the goalposts and to see what happens.
Also note that rather than dropping observations, you could use robust statistical techniques if you are concerned about outliers.
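As a small illustration of that last point (Python, with made-up numbers): robust summaries such as the median and the MAD barely move when a gross error is added, while the mean shifts substantially.

```python
import numpy as np

clean = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 10.0, 9.7, 10.3])
contaminated = np.append(clean, 50.0)   # one gross measurement error

print(clean.mean(), contaminated.mean())            # 10.0 vs ~14.4
print(np.median(clean), np.median(contaminated))    # 10.0 vs 10.0

# median absolute deviation: a robust alternative to the standard deviation
mad = np.median(np.abs(contaminated - np.median(contaminated)))
print(mad)   # 0.2, essentially unaffected by the outlier
```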
Is it reasonable to delete a large number of outliers from a dataset?
Absolutely not: Outliers are points that are distant from the bulk of other points in a distribution, and diagnosis of an "outlier" is generally done by comparison to some assumed distributional form. Although outliers can occasionally be caused by measurement error, diagnosis of outliers can also occur when the data follows a distribution with high kurtosis (i.e., fat tails), but the analyst compares the data points to an assumed distributional form with low kurtosis (e.g., the normal distribution).
The entire concept of an "outlier" really does far more harm than good. All that is really needed is to recognise that it's okay to remove data points that have been measured incorrectly, but it's not okay to remove data points that are legitimate observations (except for the limited purposes of sensitivity analysis). Unless the statistical analyst has evidence to conclude that an "outlier" has occurred due to measurement error, it is almost always the case that it is identified because the data follows a distribution with higher kurtosis (i.e., fatter tails) than the assumed distributional form. To conclude that this reflects some problem with the data is tantamount to claiming that reality must conform to your statistical assumptions, and when it does not, reality has made an unfortunate error, which you will rectify in your analysis by removing the parts of reality that are non-compliant with your assumptions.
In any case where an analyst identifies a large amount like 30% of the data as "outliers", it is likely either that the outlier test has been incorrectly applied, or the outlier test is based on a distributional assumption that assumes much thinner tails than the data, and is therefore falsified by the data. In either case, it is a sure sign that something has gone wrong. Personally, I would never trust any analysis that has removed a large proportion of the data as "outliers".
In view of this, I would suggest that you first consider whether there are any data points that have incorrect values due to measurement error. If you have good reason to think this has occurred, it is legitimate to remove these and note their removal in your analysis. (Bear in mind that unless the people making the observations are extremely incompetent, then realistically you should not have measurement errors for more than a small number of your points.) If you still find you have high numbers of "outliers" then this almost certainly means you are using a statistical model with a distribution that has thinner tails than is warranted by the data (e.g., you are assuming a normal distribution, but there is substantial excess kurtosis). Find the sample kurtosis of the residuals in your data and compare this to your assumed distributional form to check. If your assumed form does not match the data, consider replacing this with a distribution with higher kurtosis (e.g., you might replace the normal distribution with a t-distribution or generalised error distribution).
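The kurtosis check in the last paragraph can be sketched like this (Python, with simulated residuals; in practice you would plug in the residuals from your own fitted model):

```python
import numpy as np

def excess_kurtosis(x):
    """Sample excess kurtosis: ~0 for normal data, positive for fat tails."""
    z = (x - x.mean()) / x.std()
    return (z ** 4).mean() - 3.0

rng = np.random.default_rng(42)
normal_resid = rng.normal(size=100_000)
fat_tailed_resid = rng.laplace(size=100_000)   # theoretical excess kurtosis 3

print(excess_kurtosis(normal_resid))       # close to 0
print(excess_kurtosis(fat_tailed_resid))   # clearly positive
```

A markedly positive value for your residuals is exactly the signal described above: the assumed normal model has thinner tails than the data, and a t or generalised error distribution may fit better.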
Is it appropriate to identify and remove outliers because they cause problems?
It is very important that you consider the possibility that the categories of subject have a real difference in reaction times. If that is the case then anything that makes the difference go away will lead to potentially artifactual results. Don't assume that an inconvenient effect is a result of the presence of outliers.
Perhaps you could look for a relationship between reaction time and another outcome measure. The form of the relationship may differ between autistic subjects and normal subjects.
Is it appropriate to identify and remove outliers because they cause problems?
You should not exclude outliers just because they cause problems, nor should you use a subset of your data because the full data causes problems. Neither of these solved the "problem" in your case, but even if they did, it wouldn't be right.
You haven't given a lot of detail about what you are trying to do or how you are doing it, but can you add reaction time as a covariate?
Is it appropriate to identify and remove outliers because they cause problems?
It sounds like you need to explore your data a little more. Why don't you try some unsupervised techniques like clustering? Outliers would show up in their own groups. And you would think there'd be some kind of grouping of your controls.
Regardless, you can still have a thesis about not seeing an effect you expected to see. You'd have to explain how your data/method was not flawed. And add a section about what variables you might add to explain why your test subjects and controls are grouping together. This work still helps future researchers.
How to do ANOVA on data which is still not normal after transformations?
It's the residuals that should be normally distributed, not the marginal distribution of your response variable.
I would try using transformations, do the ANOVA, and check the residuals. If they look noticeably non-normal regardless of what transformation you use, I would switch to a non-parametric test such as the Friedman test.
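A quick simulation (Python, toy data) of why this distinction matters: with three groups at very different means, the marginal response is far from normal, yet the residuals are exactly the N(0,1) noise that ANOVA assumes.

```python
import numpy as np

rng = np.random.default_rng(1)
means = np.array([0.0, 10.0, 20.0])            # three treatment groups
groups = rng.integers(0, 3, size=3000)
y = means[groups] + rng.normal(size=3000)      # normal errors around each mean

# residuals = response minus fitted group means
fitted = np.array([y[groups == g].mean() for g in range(3)])[groups]
resid = y - fitted

print(y.std())      # large: dominated by the spread of group means
print(resid.std())  # ~1: just the error term
```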
How to do ANOVA on data which is still not normal after transformations?
I believe with negatively skewed data, you may have to reflect the data to become positively skewed before applying another data transformation (e.g. log or square root). However, this tends to make interpretation of your results difficult.
What is your sample size? Depending on how large it is exactly, parametric tests may give fairly good estimates.
Otherwise, for a non-parametric alternative, maybe you can try the Friedman test.
In addition, you may try conducting a MANOVA for repeated measures, with an explicit time variable included, as an alternative to a 4x3 Mixed ANOVA. A major difference is that the assumption of sphericity is relaxed (or rather, it is estimated for you), and that all time-points of your outcome variable are fitted at once.
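The reflection step can be sketched as follows (Python, toy data); note that reflection reverses the ordering of the variable, which is exactly why interpretation after back-transformation becomes awkward:

```python
import numpy as np

def skew(x):
    """Standardised third moment: negative for left-skewed data."""
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

rng = np.random.default_rng(7)
y = -rng.exponential(size=10_000)      # negatively (left-) skewed toy data

reflected = (y.max() + 1) - y          # now positively skewed and all > 0
transformed = np.log(reflected)        # a log/sqrt transform can now be applied

print(skew(y), skew(reflected))        # roughly -2 and +2
```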
How to do ANOVA on data which is still not normal after transformations?
A boxcox transformation (there's one in the MASS package) works as well on negatively as positively skewed data. FYI, you need to enter a formula in that function like y~1 and make sure all of y is positive first (if it's not just add a constant like abs(min(y))). You may have to adjust the lambda range in the function to find the peak of the curve. It will give you the best lambda value to choose and then you just apply this transform:
library(MASS)  # boxcox() lives here
b <- boxcox(y~1)
lambda <- b$x[b$y == max(b$y)]
yt <- (y^lambda-1)/lambda
See if your data are normal then.
#you can transform back with
ytb <- (yt*lambda+1)^(1/lambda)
#maybe put back the min
ytb <- ytb - abs(min(y))
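A quick numerical check of the forward/back transform pair (a Python/numpy mirror of the R snippet; the back-transform must be applied to the transformed values `yt`, and the lambda here is just an illustrative value rather than one chosen by `boxcox`):

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.exponential(size=1000) + 0.1    # strictly positive, as Box-Cox requires
lam = 0.3                               # in practice: the lambda boxcox() picks

yt = (y ** lam - 1) / lam               # forward Box-Cox transform
ytb = (yt * lam + 1) ** (1 / lam)       # back-transform

assert np.allclose(ytb, y)              # round-trip recovers the original data
```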
Transforming arbitrary distributions to distributions on $[0,1]$
It’s much easier to simultaneously construct $X_i$ and $Y_i$ having the desired properties,
by first letting $Y_i$ be i.i.d. Uniform$[0,1]$ and then taking $X_i = F^{-1}(Y_i)$. This is the basic method for generating random variables with arbitrary distributions.
The other direction, where you are first given $X_i$ and then asked to construct $Y_i$, is more difficult, but is still possible for all distributions. You just have to be careful with how you define $Y_i$.
Attempting to define $Y_i$ as $Y_i = F(X_i)$ fails to produce uniformly distributed $Y_i$ when $F$ has jump discontinuities. You have to spread the point masses in the distribution of $X_i$ across the gaps created by the jumps.
Let $$D = \{x : F(x) \neq \lim_{z \to x^-} F(z)\}$$ denote the set of jump discontinuities of $F$. ($\lim_{z\to x^-}$ denotes the limit from the left. All distribution functions are right-continuous, so the main issue is left discontinuities.)
Let $U_i$ be i.i.d. Uniform$[0,1]$ random variables, and define
$$Y_i =
\begin{cases}
F(X_i), & \text{if }X_i \notin D \\
U_i F(X_i) + (1-U_i) \lim_{z \to X_i^-} F(z), & \text{otherwise.}
\end{cases}
$$
The second part of the definition fills in the gaps uniformly.
The quantile function $F^{-1}$ is not a genuine inverse when $F$ is not 1-to-1. Note that if $X_i \in D$ then $F^{-1}(Y_i) = X_i$, because the pre-image of the gap is the corresponding point of discontinuity. For the continuous parts where $X_i \notin D$, the flat sections of $F$ correspond to intervals where $X_i$ has 0 probability so they don’t really matter when considering $F^{-1}(Y_i)$.
The second part of your question follows from similar reasoning, combined with the first part's conclusion that $X_i = F^{-1}(Y_i)$ holds with probability 1. The empirical CDFs are defined as
$$G_n(y) = \frac{1}{n} \sum_{i=1}^n 1_{\{Y_i \leq y\}}$$
$$F_n(x) = \frac{1}{n} \sum_{i=1}^n 1_{\{X_i \leq x\}}$$
so
$$
\begin{align}
G_n(F(x))
&= \frac{1}{n} \sum_{i=1}^n 1_{\{Y_i \leq F(x) \}}
= \frac{1}{n} \sum_{i=1}^n 1_{\{F^{-1}(Y_i) \leq x \}}
= \frac{1}{n} \sum_{i=1}^n 1_{\{X_i \leq x \}}
= F_n(x)
\end{align}
$$
with probability 1.
It should be easy to convince yourself that $Y_i$ has a Uniform$[0,1]$ distribution by looking at pictures. Doing so rigorously is tedious, but can be done. We have to verify that $P(Y_i \leq u) = u$ for all $u \in (0,1)$. Fix such $u$ and let $x^* = \inf\{x : F(x) \geq u \}$; this is just the value of the quantile function at $u$, defined this way to deal with flat sections. We’ll consider two separate cases.
First suppose that $F(x^*) = u$. Then
$$
Y_i \leq u
\iff Y_i \leq F(x^*)
\iff F(X_i) \leq F(x^*).
$$
Since $F$ is a non-decreasing function and $F(x^*) = u$,
$$
F(X_i) \leq F(x^*) \iff X_i \leq x^* .
$$
Thus,
$$
P[Y_i \leq u]
= P[X_i \leq x^*]
= F(x^*)
= u .
$$
Now suppose that $F(x^*) \neq u$. Then necessarily $F(x^*) > u$, and $u$ falls inside one of the gaps. Moreover, $x^* \in D$, because otherwise $F(x^*) = u$ and we have a contradiction.
Let $u^* = F(x^*)$ be the upper part of the gap. Then by the previous case,
$$
\begin{align}
P[Y_i \leq u]
&= P[Y_i \leq u^*] - P[u < Y_i \leq u^*]\\
&= u^* - P[u < Y_i \leq u^*].
\end{align}
$$
By the way $Y_i$ is defined, $P(Y_i = u^*) = 0$ and
$$
\begin{align}
P[u < Y_i \leq u^*]
&= P[u < Y_i < u^*] \\
&= P[u < Y_i < u^* , X_i = x^*] \\
&= u^* - u .
\end{align}
$$
Thus, $P[Y_i \leq u] = u$.
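A numerical sanity check of the randomised construction above, for the simplest discrete case: a Bernoulli($p$) variable, whose jumps are at $0$ (where $F(0^-)=0$, $F(0)=1-p$) and at $1$ (where $F(1^-)=1-p$, $F(1)=1$). (Python, purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(0)
p, n = 0.3, 200_000
x = (rng.random(n) < p).astype(int)   # Bernoulli(p); every point is in D
u = rng.random(n)                     # the auxiliary U_i

# Y_i = U_i*F(X_i) + (1 - U_i)*F(X_i^-), spreading each mass over its gap
y = np.where(x == 0, u * (1 - p), (1 - p) + u * p)

# Y should now be Uniform[0,1]
print(y.mean(), y.var())   # close to 1/2 and 1/12
```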
Transforming arbitrary distributions to distributions on $[0,1]$
This is merely saying that $F(x) = \Pr[X \le x] = \Pr[F(X) \le F(x)]$ which is exactly what it means for $F(X)$ to have a uniform distribution.
OK, let's go a little slower.
For continuous distributions, forget for a moment that the CDF $F$ is a CDF and think of it as just a nonlinear way to re-express the values of $X$. In fact, to make the distinction clear, suppose that $G$ is any monotonically increasing way of re-expressing $X$. Let $Y$ be the name of its re-expressed value. $G^{-1}$, by definition, is the "back transform": it expresses $Y$ back in terms of the original $X$.
What is the distribution of $Y$? As always, we discover this by picking an arbitrary value that $Y$ might take on, say $y$, and ask for the chance that $Y$ is less than or equal to $y$. Back-transform this question in terms of the original way of expressing $X$: we are inquiring about the chance that $X$ is less than or equal to $x = G^{-1}(y)$. Now take $G$ to be $F$ and remember that $F$ is the CDF of $X$: by definition, the chance that $X$ is less than or equal to any $x$ is $F(x)$. In this case,
$$F(x) = F(G^{-1}(y)) = F(F^{-1}(y)) = y.$$
We have established that the CDF of $Y$ is $\Pr[Y \le y] = y$, the uniform distribution on $[0,1]$.
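A quick numerical check of this fact (Python; the exponential is used purely as an example of a continuous $F$):

```python
import numpy as np

rng = np.random.default_rng(5)
lam = 2.0
x = rng.exponential(scale=1/lam, size=100_000)
y = 1.0 - np.exp(-lam * x)     # Y = F(X) for the Exponential(lam) CDF

# the empirical CDF of Y should be close to the identity on [0,1]
for q in (0.1, 0.5, 0.9):
    print(q, (y <= q).mean())
```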
It can help to look at this graphically. Draw the graph of $F$. As $X$ ranges over the reals, $F$ ranges between $0$ and $1$. The function $F$ is constructed specifically so that the distribution of $F(x)$ is uniform. That is, if you want to pick a random value for $X$, pick a uniformly random value along the y-axis between $0$ and $1$ and find the value of $X$ where $F(X)$ equals that random height.
In the continuous case we have $X = F^{-1}(Y)$ so clearly $\Pr[X = F^{-1}(Y)] = 1$. In the discontinuous case there's no difficulty, either, provided we define $F^{-1}$ appropriately. But if there's a jump in $X$ from $x_0$ to $x_1 \gt x_0$, all we can say in general is that the event where $x_0 \le X \lt x_1$ has zero probability, not that it's impossible for $X$ to lie in this interval. For this reason we cannot assert that $X = F^{-1}(Y)$ everywhere, but we can assert that the event that this equality does not hold has probability zero, because it consists of at most a countable number of jumps.
(For a practical example of why this arcane technical distinction is important, consider a gambling problem. Suppose you will play Roulette repeatedly with a fixed bet until either you are broke or you double your money. Let $X$ be the random variable representing your net gain if you ever go broke or double your money. Otherwise, define $X$ to be any number you want. $X$ has a Bernoulli distribution because there is some chance $p$ you will go broke, the chance of doubling your money is $1-p$ (work it out!), and the chance of playing forever is zero. Nevertheless, playing forever is a possibility: it is part of the mathematical set of possible outcomes.)
As a simple exercise in learning to reason with the uniform probability transform, graph $F$ for a Bernoulli($p$) variable. The graph equals $0$ for all $x \lt 0$, jumps to $1-p$ at $0$, is horizontal again for $0 \lt x \lt 1$, then jumps to $1$ at $x=1$ and stays there for all greater $x$. A uniform variate on the interval $[0,1]$ on the y-axis will cover the initial jump with a probability $1-p$; $F^{-1}$ maps this down to $x = 0$. Otherwise, the variate covers the final jump with the remaining probability $p$ and $F^{-1}$ maps this down to $1$. We see, then, how a uniform distribution of $Y$ reproduces this simple discrete distribution function. Illustrations of CDFs of discrete and continuous/discrete distributions appear on this page I wrote for a stats course long ago.
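That exercise, done numerically (Python): a uniform height that lands in the initial jump maps back to $0$, and otherwise to $1$:

```python
import numpy as np

rng = np.random.default_rng(11)
p = 0.25
u = rng.random(100_000)        # uniform heights on the y-axis

# F^{-1} for Bernoulli(p): the jump of size 1-p at x=0 catches u <= 1-p,
# the jump of size p at x=1 catches the rest
x = np.where(u <= 1 - p, 0, 1)

print(x.mean())   # close to p
```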
OK, let's go a little slower.
For continuous distributio | Transforming arbitrary distributions to distributions on $[0,1]$
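The uniform probability transform discussed in this answer is easy to check numerically: push a sample through its own CDF and the result should be uniform on $[0,1]$. A quick Python sketch (my own illustration, not part of the original answer; the exponential distribution, its scale, and the seed are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = stats.expon.rvs(scale=2.0, size=5000, random_state=rng)

# Apply the variable's own CDF: U = F(X) should be Uniform(0, 1)
u = stats.expon.cdf(x, scale=2.0)

# Kolmogorov-Smirnov test of u against the uniform distribution
stat, p = stats.kstest(u, "uniform")
```

Any continuous $F$ works here; for a discrete variable such as the Bernoulli above, $F(X)$ is no longer uniform, which is exactly the point of the graphical exercise.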
33,830 | What is the benefit of regression with student-t residuals over OLS regression? [duplicate] | Here's one reason:
If you fit the parameters using maximum likelihood, with an assumption of t-distributed errors the fitted line is less impacted by points that are far away from the bulk of the data (in the y-direction).
If you have heavy tailed errors, it's easy to get points that pull the line up (away from the bulk of the data) if you fit by least squares. Using a heavy-tailed error distribution down-weights those points so they don't pull the line about so much.
However, it's important to note that this robustness to heavy tails in the y-values doesn't offer protection against observations with high influence. If that's likely to be an issue, you need something that's robust against influential outliers. [With a designed experiment where you control the predictor-values this may not be an issue, but you don't always have designed experiments.]
(There are other reasons you might want to use a suitable model for errors, like more efficient estimates of parameters and the possibility of more suitable small-sample inference.)
The point about least-squares being BLUE is a bit misleading. Yes, it is best among linear unbiased estimators, but if your conditional distribution is very heavy tailed all linear estimators may be very poor, even arbitrarily so, and having the best of a very poor collection of estimators is no consolation. If you want a reasonable level of efficiency in the presence of heavy tails, linear estimators are not a good choice.
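The down-weighting described in this answer is easy to see in a small simulation (my own sketch, not from the original answer; the data, seed, and the fixed df=2 are assumptions): fit a line by least squares and by maximum likelihood with t-distributed errors, with one gross y-outlier present.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(0)
n = 60
x = np.linspace(0.0, 10.0, n)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, n)
y[-1] += 40.0  # one point far from the bulk in the y-direction

# Least-squares fit: the outlier pulls the slope up
b_ols, a_ols = np.polyfit(x, y, 1)

# ML fit with t-distributed errors (df fixed at 2 for this sketch)
def nll(theta):
    a, b, log_s = theta
    return -np.sum(stats.t.logpdf(y - a - b * x, df=2, scale=np.exp(log_s)))

res = optimize.minimize(nll, x0=[a_ols, b_ols, 0.0], method="Nelder-Mead")
a_t, b_t = res.x[0], res.x[1]
# b_t stays much closer to the true slope 0.5 than b_ols does
```

The heavy-tailed likelihood assigns the outlier a plausible (if unlikely) residual, so it no longer dominates the fit the way it does under squared error.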
33,831 | What is the benefit of regression with student-t residuals over OLS regression? [duplicate] | However, since the OLS estimator is BLUE (by Gauss-Markov), it should have lower variance (and therefore MSE) than a regression that assumes student-t residuals fit via maximum likelihood.
The estimator that assumes student-t residuals is not a linear estimator. OLS is the Best among Linear Unbiased Estimators. So, that does not include the student-t residuals estimator. The variance of the estimator that uses student-t residuals can be smaller than the variance of the OLS estimator.
The student-t estimator has the advantage that it is more robust to violations of the assumption that the errors are Gaussian. In cases with relatively many extreme values it can have a lower variance than the OLS estimator.
An extreme and simple example is the estimation of the location parameter of a shifted t-distribution with 1 degree of freedom (which is the same as the Cauchy distribution). In that case the OLS estimator is the mean of the sample, which has undefined variance. The MLE of the location, on the other hand, although more difficult to compute, has finite variance.
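This Cauchy comparison can be checked by simulation (my own sketch; the sample sizes and seed are arbitrary): compare the spread of the sample mean with the spread of the maximum-likelihood location estimate over repeated Cauchy samples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
reps, n = 200, 50
means = np.empty(reps)
mles = np.empty(reps)
for i in range(reps):
    x = stats.cauchy.rvs(loc=0.0, scale=1.0, size=n, random_state=rng)
    means[i] = x.mean()               # no finite variance in theory
    mles[i] = stats.cauchy.fit(x)[0]  # ML estimate of the location

# The ML estimates cluster tightly around 0; the means are wildly dispersed
```

Because the mean of $n$ Cauchy variates is itself Cauchy with the same scale, averaging buys nothing, while the MLE concentrates as $n$ grows.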
An answer to this post suggests that prediction intervals will be wrong if one fails to use t-distributed errors. Is that really the only benefit?
The linear estimator, a weighted sum of all the $y$, has the distribution of a sum of the distributions of the $y$. Except in extreme cases, this will approach a normal distribution quickly, so normal-based prediction intervals wouldn't be so wrong.
33,832 | What is the benefit of regression with student-t residuals over OLS regression? [duplicate] | The weakness of the OLS regression is that it is extremely sensitive to outliers (the influence of the outlying points on the loss function grows as the square of their distance from the regression line). One of the possible alternatives for making regression more robust (i.e., less sensitive to the outliers, see Robust regression) is using loss functions that give less weight to the outlying points, such as Huber loss or Tukey loss (see also M-estimator).
Student-t distribution with its long tails is also a potential candidate with distinct advantages and disadvantages:
It is more difficult to work with analytically than Huber or Tukey loss functions
It is a real probability distribution (unlike Huber and Tukey, which are somewhat ad-hoc choices), and it fits smoothly into, e.g., Bayesian methods.
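The "influence grows as the square of the distance" point in this answer comes down to the derivative of the loss: for squared error it grows without bound in the residual, while Huber-type losses cap it. A small sketch (my own; the threshold 1.345 is the conventional tuning constant, chosen here as an assumption):

```python
import numpy as np

def squared_loss_grad(r):
    # d/dr of r^2/2 is r: influence grows linearly, so outliers dominate
    return np.asarray(r, dtype=float)

def huber_grad(r, delta=1.345):
    # influence is capped at +/- delta once |r| exceeds delta
    r = np.asarray(r, dtype=float)
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))
```

A point with residual 100 contributes a gradient of 100 under squared loss but only 1.345 under Huber loss, which is why the fitted line is not dragged toward it.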
33,833 | What is the benefit of regression with student-t residuals over OLS regression? [duplicate] | The Student-t distribution is the distribution of small samples. By contrast, one of the important assumptions of OLS is normality, which needs a large number of observations in the sample... and in order to cope with outliers you'd better increase the size of your sample data to approach normality (though, of course, you can simply remove the outliers from your analysis - the other way to cope with robust variables)... to check the data for normality you can use the Shapiro-Wilk normality test or the Kolmogorov–Smirnov test...
In any case, you cannot separate OLS regression from testing residuals - because in order to trust the R^2 score of the regression curve you get from OLS, you also have to check the residuals for normality after fitting...
So OLS has rather strict requirements on the structure and size of the sample - normality (of the data and of the residuals)... but if you cannot establish normality you'd better use MLE (maximum likelihood estimation) instead of OLS, to find a distribution more appropriate to your empirical data... Also, OLS requires f(x) to be unambiguously specified in an a priori analysis of the dependences in your data; otherwise MLE is advised to find the theoretical curve that best suits your empirical distribution...
But to show that your regression coefficients for each X are meaningful - yes, you use the Student-t criterion for each coefficient of the fitted regression model, in addition to the F-criterion for the model as a whole...
import statsmodels.api as sm
...  # define the response y and the design matrix x here
model = sm.OLS(y, x)
results = model.fit()
print(results.summary())
Be careful with terms: Student-t tests are used in different statistical investigations with different meanings, and the Student-t distribution can describe a "small sample"...
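The residual-normality check mentioned in this answer can be sketched as follows (my own example with made-up data; with skewed errors the Shapiro-Wilk test rejects normality):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 10.0, 200)
# Skewed (exponential) errors, so the normality assumption fails
y = 3.0 + 2.0 * x + rng.exponential(scale=2.0, size=200)

# Simple least-squares fit, then Shapiro-Wilk on the residuals
slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

stat, p = stats.shapiro(resid)
# A small p-value says the residuals are not plausibly normal
```

Had the errors been generated as Gaussian noise instead, the test would typically fail to reject.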
33,834 | Why does a function being smoother make it more likely? | While the author mentions it as an "example", it is true that, generally, smoother functions are often preferred in modelling the characteristics of the "true" underlying function, and therefore may be assigned a higher "prior probability", as the author maintains. Why is this? You may learn more about it by reading this similar question here, but essentially, there is no real justification for it, just the conventional belief that most things occurring in nature tend to change gradually rather than in a non-continuous way. Practically, smoother functions are desired because they are more easily differentiated and may have convenient mathematical properties. More on that discussion here.
However, though I would say that smooth functions are still widely anchored in statistical methods, in my experience over the years we have been working more and more with non-smooth functions. Examples I can think of include real-world optimization problems, interpolation problems, and many applications of deep neural networks (an easy one to see is the common ReLU activation function).
In any case, while this question easily spurs debate, I think opportunities to ponder underlying principles are great!
33,835 | Why does a function being smoother make it more likely? | One intuitive way to view it is that a smooth function can be described with less information than a less smooth function. If we restrict ourselves to vector spaces of functions, the dimension of the vector space (finite or infinite) is the number of coefficients we need to give for a complete specification of the function. For a linear function we need two coefficients, the slope and intercept. So for random linear function, we must specify a 2-dim joint distribution on the slope and intercept. For more wiggly functions we need higher-dimensional joint distributions, so intuitively the probability mass is more "spread out" over a larger volume in parameter space (some would say state space), so total probability is spread over "more functions", and then probability densities must be lower. That implies, in particular, if those functions serve as parameter in some likelihood function, likelihood will be more spread out, the densities will be lower and vary slower, so in particular, Fisher information will be lower.
Let us see how this works out for some simulated spline functions$^\dagger$. First I show a plot of the spline basis functions for a natural spline with 5 degrees of freedom:
Then we can simulate some actual spline functions by drawing standard normal coefficients randomly:
If we had chosen, say, more interior knots, we would have got wigglier, less smooth, functions, which need more information to be described.
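The coefficient-counting argument can also be sketched in Python (my own translation, using SciPy's B-splines rather than R's ns; the knots and seed are arbitrary): the number of coefficients that must be drawn is exactly the "amount of information" needed to pin down one random function, and each extra interior knot adds one more.

```python
import numpy as np
from scipy.interpolate import BSpline

x = np.linspace(0.0, 10.0, 100)
k = 3                                  # cubic pieces
interior = [2.5, 5.0, 7.5]             # more interior knots -> wigglier functions
t = np.concatenate([[0.0] * (k + 1), interior, [10.0] * (k + 1)])
n_coef = len(t) - k - 1                # 7 coefficients specify one function here

rng = np.random.default_rng(4)
f = BSpline(t, rng.normal(size=n_coef), k)  # one random spline function
y = f(x)
```

Drawing the coefficients from a higher-dimensional joint distribution spreads the probability mass over "more functions", which is the intuition in the text above.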
$^\dagger$Splines are piecewise polynomials; knots are the points where we shift from one polynomial to the next. We could have used other functions as an example, even just polynomials. See "Splines - basis functions - clarification" and "Interpretation of a spline", and search this site.
For reference, the actual R code used:
library(splines)
x <- 1:20
S <- ns(x, knots=c(5, 10, 15),
intercept=TRUE, Boundary.knots=c(0, 21))
# Plot the basis functions:
library(ggplot2)
library(reshape2)
pframe <- melt(as.data.frame(S), measure.vars=1:5)
pframe$x <- rep(x, 5)
ggplot( pframe, aes(x=x, y=value, group=variable, color=variable)) +
geom_line() + ggtitle("Natural spline basis functions")
# Then we can simulate some coefficients and plot the resulting functions:
# First we choose the coefficients as iid standard normal:
set.seed(7*11*13)# My public seed
N <- 5
n <- length(x)
simfuns <- data.frame(x=rep(x, N), Y=as.numeric(NA), group=rep(1:N, each=n))
for (i in 1:N) simfuns$Y[((i-1)*n+1):(i*n)] <- S %*% rnorm(5)
ggplot(simfuns, aes(x=x, y=Y, group=group, color=group)) +
geom_line() + ggtitle("Some simulated spline functions:")
33,836 | Why does a function being smoother make it more likely? | I disagree with the other answers here asserting that there is no good reason for this, and that it is merely a simplifying assumption. From a metaphysical perspective, causal effects in nature generally operate in a roughly "smooth" manner, and so small changes in the input quantities in a causal system generally result in small changes in the output. Of course, this is not always the case; there are some causal systems that exhibit large changes with threshold effects, and there are some chaotic systems where small changes in inputs may lead to large and unpredictable changes in outputs. However, as a general rule, causal changes exhibit smoothness between inputs and outputs. This is a metaphysical property of nature, and not merely a modelling or statistical convention. One can certainly note that this is not true in all cases, but it is true in most applications where we model related variables.
For example, when you throw a ball in the air (without any obstruction above you), the force you impart to the ball affects the height it reaches in a smooth manner. If you throw it slightly harder it will go slightly higher, and so forth. Similarly, if a gust of upward or downward wind affects the trajectory of the ball, the wind-speed and angle will affect the height the ball reaches in a roughly smooth manner. If you have a slightly stronger wind it will affect the ball height slightly more, and so forth. I have given physical examples for simplicity, but similar outcomes occur in a range of areas including economics, finance, psychology, etc.
There is an "anthropomorphic" philosophical argument that can be made here. If the universe were such that causal laws tended not to be "smooth" then it would be a very chaotic place, and it is unlikely that life could exist; a fortiori intelligent life. Hence, our presence as living cognisant observers asking this question constitutes a form of selection bias that virtually necessitates smooth "well behaved" causal laws.
There is also a complication here in what we even regard to be "the function" we are estimating in the first place. Real-life problems involve a finite set of outcomes in nature, even in large populations, so the use of a mathematical function over a continuum is already an abstraction that goes beyond the observable data. We generally posit that natural forces can exist on a continuum and that physical/natural laws can likewise be properly described by continuous functions (e.g., this is the methodology in physics), but this can become more tenuous when we are looking at phenomena that are specific rather than general. Thus, your question is pregnant with some deeper metaphysical and epistemological questions about the validity of approximating finite sets of outcomes in nature by infinite/continuous mathematical descriptions.$^\dagger$
As you can see, a seemingly simple question like this opens up a lot of interesting philosophical doors. If you would like to learn more about these issues, I recommend reading some material on finitism by Doron Zeilberger and some material on the anthropic principle by Nick Bostrom.
$^\dagger$ Indeed, you should not take for granted the use of continuous functions in mathematics at all. There are a number of philosophers/mathematicians who object to the use of infinite mathematical objects (see e.g., finitism, ultrafinitism). To these practitioners, the very notion that there is a function on a "continuum" is already flawed.
I disagree with the other answers here asserting that there is no good reason for this, and that it is merely a simplifying assumption. From a metaphysical perspective, causal effects in nature generally operate in a roughly "smooth" manner, and so small changes in the input quantities in a causal system generally result in small changes in the output. Of course, this is not always the case; there are some causal systems that exhibit large changes with threshold effects, and there are some chaotic systems where small chnages in inputs may lead to large and unpredictable changes in outputs. However, as a general rule, causal changes exhibit smoothness between inputs and outputs. This is a metaphysical property of nature, and not merely a modelling or statistical convention. One can certainly note that this is not true in all cases, but it is true in most applications where we model related variables.
For example, when you throw a ball in the air (without any obstruction above you), the force you impart to the ball affects the height it reaches in a smooth manner. If you throw it slightly harder it will go slightly higher, and so forth. Similarly, if a gust of upward or downward wind affects the trajectory of the ball, the wind-speed and angle will affect the height the ball reaches in a roughly smooth manner. If you have a slightly stronger wind it will affect the ball height slightly more, and so forth. I have given physical examples for simplicity, but similar outcomes occur in a range of areas including economics, finance, psychology, etc.
There is an "anthropomorphic" philosophical argument that can be made here. If the universe were such that causal laws tended not to be "smooth" then it would be a very chaotic place, and it is unlikely that life could exist; a fortiori intelligent life. Hence, our presence as living cognisant observers asking this question constitutes a form of selection bias that virtually necessitates smooth "well behaved" causal laws.
There is also a complication here in what we even regard to be "the function" we are estimating in the first place. Real-life problems involve a finite set of outcomes in nature, even in large populations, so the use of a mathematical function over a continuum is already an abstraction that goes beyond the observable data. We generally posit that natural forces can exist on a continuum and that physical/natural laws can likewise be properly described by continuous functions (e.g., this is the methodology in physics), but this can become more tenuous when we are looking at phenomena that are specific rather than general. Thus, your question is pregnant with some deeper metaphysical and epistemological quesstions about the validity of approximating finite sets of outcomes in nature by infinite/continuous mathematical descriptions.$^\dagger$
As you can see, a seemingly simple question like this opens up a lot of interesting philosophical doors. If you would like to learn more about these issues, I recommend reading some material on finitism by Doron Zeilberger and some material on the anthropic principle by Nick Bostrom.
$^\dagger$ Indeed, you should not take for granted the use of continuous functions in mathematics at all. There are a number of philosophers/mathematicians who object to the use of infinite mathematical objects (see e.g., finitism, ultrafinitism). To these practitioners, the very notion that there is a function on a "continuum" is already flawed. | Why does a function being smoother make it more likely?
33,837 | Permutation tests and exchangeability [duplicate] | One situation in which exchangeability does not
hold occurs when we're testing whether means of two
groups are equal, but suspect variances may be
unequal.
To be specific, let's look at the following situation:
x1 is a sample of size $n_1 = 10$ from a normal
population with $\mu_1=100$ and $\sigma_1=20$ and
x2 is a sample of size $n_2 = 50$ from a normal
population with $\mu_2=100$ and $\sigma_2=4.$
Inappropriate pooled t test. Suppose we try to use a pooled 2-sample t test of $H_0:\mu_1=\mu_2$ vs $H_a:\mu_1\ne\mu_2.$ Then
the true rejection rate (about $36\%)$ of an alleged test at level
$\alpha=0.05=5\%$ is much larger than $5\%,$
as shown by the following simulation in R: a monumental 'false discovery' rate. (The pooled test assumes the two samples are from populations with equal variances.)
set.seed(2020)
pv = replicate(10^5, t.test(rnorm(10,100,20),
rnorm(50,100,4), var.eq=T)$p.val)
mean(pv <= .05)
[1] 0.35981
Welch t test, not assuming equal variances. Such situations with unequal variances validate the
preference of many statisticians for the Welch two-sample t test, which does not assume equal variances
in the two populations. The Welch test (with intended $\alpha=5\%)$ has a true
significance level very nearly $5\%.$
set.seed(2020)
pv = replicate(10^5, t.test(rnorm(10,100,20),
rnorm(50,100,4))$p.val)
mean(pv <= .05)
[1] 0.05056
Flawed permutation test with non-exchangeable samples. A permutation test using the difference in sample
means as metric is no 'cure' for lack of exchangeability caused by heteroscedasticity.
set.seed(620)
m = 10^5; pv = numeric(m)
for(i in 1:m) {
  x1 = rnorm(10, 100, 20); x2 = rnorm(50, 100, 5)  # note: sd 5 here (vs 4 above); qualitatively the same
  x = c(x1, x2)
  d.obs = mean(x[1:10]) - mean(x[11:60])
  d.prm = numeric(2000)
  for(j in 1:2000) {
    x.prm = sample(x)
    d.prm[j] = mean(x.prm[1:10]) - mean(x.prm[11:60]) }
  pv[i] = mean(abs(d.prm) >= abs(d.obs))
}
mean(pv <= .05)
[1] 0.3634
So the rejection rate of the permutation test, with
difference in means as its metric and an intended $\alpha = 0.05,$ is about as high
as for the pooled t test.
Note: A permutation test with the Welch t statistic as metric treats samples with unequal variances as exchangeable (even if data may not be normal). Its significance level would be substantially correct.
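To illustrate that note, here is a short sketch (my addition, not part of the original answer) of a permutation test that uses the Welch t statistic as its metric, on a single pair of samples drawn as in the setup above. With only 2000 permutations the p-value is indicative rather than precise:

```r
# Permutation test with the Welch t statistic as metric (sketch).
# Group labels are permuted; the Welch statistic is recomputed each time.
set.seed(620)
x1 = rnorm(10, 100, 20); x2 = rnorm(50, 100, 4)
x = c(x1, x2)
t.obs = t.test(x[1:10], x[11:60])$stat        # observed Welch statistic
t.prm = replicate(2000, { x.p = sample(x)
                          t.test(x.p[1:10], x.p[11:60])$stat })
p.val = mean(abs(t.prm) >= abs(t.obs))        # two-sided permutation p-value
p.val
```

Repeating this over many simulated data sets (as in the loops above) would show a rejection rate near the nominal 5%, unlike the difference-in-means metric.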
33,838 | Permutation tests and exchangeability [duplicate] | Another important case is tests for interaction. The null hypothesis of additivity does not imply exchangeability. In a linear, constant variance model you can permute residuals (Anderson, 2001); in generalised linear models it's more complicated.
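The residual-permutation idea for the linear, constant-variance case can be sketched as follows (my illustration in the spirit of Anderson, 2001, with simulated data; not the original author's code). Residuals from the additive model fitted under the null are permuted, added back to the fitted values, and the interaction F statistic is recomputed:

```r
# Residual-permutation test for an interaction in a linear model (sketch,
# after Anderson, 2001).  Data simulated for illustration; additivity holds.
set.seed(1)
n <- 100
a <- gl(2, n/2)                               # factor A
b <- rep(gl(2, n/4), 2)                       # factor B
y <- 1 + (a == "2") + (b == "2") + rnorm(n)   # additive truth: no interaction
reduced <- lm(y ~ a + b)                      # fit under H0 (additivity)
f.obs <- anova(lm(y ~ a * b))["a:b", "F value"]
f.prm <- replicate(999, {
  y.star <- fitted(reduced) + sample(resid(reduced))  # permute H0 residuals
  anova(lm(y.star ~ a * b))["a:b", "F value"]
})
p.val <- mean(c(f.obs, f.prm) >= f.obs)       # upper-tail permutation p-value
p.val
```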
33,839 | Permutation tests and exchangeability [duplicate] | There are many, many situations where exchangeability of values in a sequence does not hold. One general scenario is when you have a time-series of values that are autocorrelated, so that values near each other in time are statistically related. For example, if we produce a random walk, the values in the random walk are not exchangeable, and this will be extremely obvious by comparing a plot of the random walk to a plot of a random permutation of that random walk.
#Generate and plot a one-dimensional random walk
set.seed(1);
n <- 10000;
MOVES <- sample(c(-1, 1), size = n, replace = TRUE);
WALK <- cumsum(MOVES);
plot(WALK, type = 'p',
main = 'Plot of a Random Walk',
xlab = 'Time', ylab = 'Value');
#Plot a random permutation of the random walk
PERM <- sample(WALK, size = n, replace = FALSE);
plot(PERM, type = 'p',
main = 'Plot of a Randomly Permuted Random Walk',
xlab = 'Time', ylab = 'Value');
We can see from these plots that the random permutation jumbles the order of the points so that values near each other in time are no longer near to each other in value. Any moderately sensible runs test will easily detect that the first plot involves a vector of values that is not exchangeable.
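A quick numeric check of the same point (my addition): the lag-1 autocorrelation of the walk is very strong, while for the permuted version the ordering carries essentially no information:

```r
# Lag-1 correlation of the walk versus its random permutation (sketch)
set.seed(1)
n    <- 10000
WALK <- cumsum(sample(c(-1, 1), size = n, replace = TRUE))
PERM <- sample(WALK)
cor(WALK[-1], WALK[-n])   # close to 1: strong serial dependence
cor(PERM[-1], PERM[-n])   # close to 0: order is uninformative
```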
33,840 | Interpreting a generalised linear mixed model with binomial data | The interpretation is the same as for a generalised linear model, except that the estimates of the fixed effects are conditional on the random effects.
Since this is a generalized linear mixed model, the coefficient estimates are not interpreted in the same way as for a linear model. In this case you have a binary outcome with a logit link, so the raw estimates are on the log-odds scale.
The estimated coefficient for the intercept, 5.03046, is the log odds of RespYN being 1 (or whatever non-reference value it is coded as) when Length is equal to zero, and Treatment and Gender take their reference value. A value of zero for Length might not make sense in your sample, since presumably it will never be negative and is always far above zero, and if so, you might want to consider centering it so that a zero value for the centered variable is more meaningful.
The estimate for Length of -0.05896 means that a 1 unit increase in Length is associated with a 0.05896 decrease in the log-odds of RespYN being 1, compared to RespYN being 0. If we exponentiate this number then we obtain the odds ratio of 0.9427445, which means that for a 1 unit increase in Length we expect to see (approximately) a 6% decrease in the odds of RespYN being 1.
The estimate for TreatmentPo of -4.06399 means that Treatment = Po is associated with 4.06399 lower log-odds than the other treatment group of RespYN being 1, compared to RespYN being 0. This can be exponentiated as above to obtain an odds ratio. The same analysis applies to Gender.
How do I prove that the treatment is causing/not causing the response?
Nothing can be proven with statistics, especially with observational studies. You can say that, while controlling for Gender, Length and the repeated measures within Anim_ID, you have found evidence that the association of Treatment with the outcome is not zero. You could also say that, if the association of Treatment with the outcome is actually zero, then the probability of observing the data that you have, or data more extreme, is less than 0.0000000000000002.
Lastly, I notice that you have specified random intercepts for Anim_ID in your model formula, yet the model output says that Cockroach_ID is the grouping variable. This is rather odd; normally they would be the same. Moreover, although the convergence code of zero indicates that the optimizer reported successful convergence, the estimated variance for the random effect is zero. This means that potentially there is no variation within Anim_ID. It would be a good idea to fit a model with glm() (ie without random intercepts but with Anim_ID as a fixed effect) and see how the model estimates compare.
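The exponentiation step described above takes one line of R (the coefficient value is the one quoted from the model output):

```r
# Odds ratio for a 1-unit increase in Length, from the log-odds estimate
or_length <- exp(-0.05896)
or_length                  # 0.9427445: odds multiply by ~0.94 per unit
(1 - or_length) * 100      # ~5.7 percent decrease in the odds per unit
```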
33,841 | Interpreting a generalised linear mixed model with binomial data | A couple of extra notes on top of what @RobertLong already answered:
As Robert also noted, the interpretation of the coefficients from generalized linear mixed models are conditional on the random effects. Most often this is not the interpretation you are looking for. For more info on this check here.
You have fitted the model with the default Laplace approximation. This may be inaccurate, especially for dichotomous data. It would be better to fit the model with adaptive Gaussian quadrature by specifying a value much greater than one for the nAGQ argument of glmer().
33,842 | How to calculate Helmert Coding | I think you are generally trying to understand how Helmert Contrasts
work. I think the answer provided by Peter Flom is great, but I'd
like to take a bit of a different approach and show you how Helmert
Contrasts end up comparing means of factor "levels." I think
this should improve your understanding.
To start the understanding, it's instructive to review the general
model structure. We can assume the following standard multiple regression
model:
\begin{eqnarray*}
\hat{\mu}_{i}=E(Y_{i}) & = & \hat{\beta}_{0}+\hat{\beta}_{1}X_{1}+\hat{\beta}_{2}X_{2}+\hat{\beta}_{3}X_{3}
\end{eqnarray*}
where $i=$ {$H$ for Hispanic, $A$ for Asian, $B$ for Black, and
$W$ for White}.
Contrasts are purposefully chosen methods of coding or ways to numerically
represent factor levels (e.g. Hispanic, Asian, Black,
and White) so that when you regress them onto your dependent
variable, you will obtain estimated beta coefficients that represent
useful comparisons without doing any additional work. You may be familiar
with the traditional treatment contrasts or dummy coding for example,
which assigns a value of 0 or 1 to each observation depending on whether
or not the observation is a Hispanic, Asian, Black, or White. That
coding appears as:
Race        X1  X2  X3
Hispanic     0   0   0
Asian        1   0   0
Black        0   1   0
White        0   0   1
So, if an observation corresponds to someone who is Hispanic, then,
$X_{1}=X_{2}=X_{3}=0$. If the observation corresponds to someone
who is black, then $X_{1}=0,\,X_{2}=1,\,X_{3}=0$. Recall with this
coding, then the estimate corresponding to $\hat{\beta}_{0}$ corresponds
to the estimated mean response for Hispanics only. Then $\hat{\beta}_{1}$
would represent the difference in the estimated mean response between
Asian and Hispanic (i.e. $\hat{\mu}_{A}-\hat{\mu}_{H})$, $\hat{\beta}_{2}$ would
represent the difference in the estimated mean response between Black
and Hispanic (i.e. $\hat{\mu}_{B}-\hat{\mu}_{H})$, and $\hat{\beta}_{3}$ would
represent the difference in estimated mean response between White
and Hispanic (i.e. $\hat{\mu}_{W}-\hat{\mu}_{H})$.
With this in mind recall that we can use the same model as presented
above, but use Helmert codings to obtain useful comparisons of these
mean responses of the races. If instead of treatment contrasts, we
use Helmert contrasts, then the resulting estimated coefficients change
meaning. Instead of $\hat{\beta}_{1}$ corresponding to the difference
in the mean response between Asian and Hispanic, under the Helmert
coding you presented, it would represent the difference between the
mean response for Hispanic and the "mean of the mean" response for the Asian, Black and White group (i.e. $\hat{\mu}_{H}-\frac{\hat{\mu}_{A}+\hat{\mu}_{B}+\hat{\mu}_{W}}{3}$).
To see how this coding ``turns'' into these estimates, we can simply
set up the Helmert matrix (only I'm going to include the constant
column which is sometimes excluded in texts) and augment it with the
estimated mean response for each race, $\hat{\mu}_{i}$, then use
Gauss-Jordan Elimination to put the matrix in row-reduced echelon
form. This will allow us to simply read off the interpretations of
each estimated parameter from the model. I'll demonstrate this below:
\begin{eqnarray*}
\begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
1 & -\frac{1}{4} & \frac{2}{3} & 0 & | & \mu_{A}\\
1 & -\frac{1}{4} & -\frac{1}{3} & \frac{1}{2} & | & \mu_{B}\\
1 & -\frac{1}{4} & -\frac{1}{3} & -\frac{1}{2} & | & \mu_{W}
\end{bmatrix} & \sim & \begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
0 & 1 & -\frac{2}{3} & 0 & | & \mu_{H}-\mu_{A}\\
0 & -1 & -\frac{1}{3} & \frac{1}{2} & | & \mu_{B}-\mu_{H}\\
0 & -1 & -\frac{1}{3} & -\frac{1}{2} & | & \mu_{W}-\mu_{H}
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
0 & 1 & -\frac{2}{3} & 0 & | & \mu_{H}-\mu_{A}\\
0 & 0 & 1 & -\frac{1}{2} & | & \mu_{A}-\mu_{B}\\
0 & 0 & -1 & -\frac{1}{2} & | & \mu_{W}-\mu_{A}
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
0 & 1 & -\frac{2}{3} & 0 & | & \mu_{H}-\mu_{A}\\
0 & 0 & 1 & -\frac{1}{2} & | & \mu_{A}-\mu_{B}\\
0 & 0 & 0 & 1 & | & \mu_{B}-\mu_{W}
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & 0 & 0 & 0 & | & \mu_{H}-\frac{3}{4}\left\{ \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\right\} \\
0 & 1 & 0 & 0 & | & \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\\
0 & 0 & 1 & 0 & | & \mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\\
0 & 0 & 0 & 1 & | & \mu_{B}-\mu_{W}
\end{bmatrix}
\end{eqnarray*}
So, now we simply read off the pivot positions. This implies that:
\begin{eqnarray*}
\hat{\beta}_{0} & = & \mu_{H}-\frac{3}{4}\left\{ \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\right\} \\
& = & \frac{1}{4}\hat{\mu}{}_{H}+\frac{1}{4}\hat{\mu}{}_{A}+\frac{1}{4}\hat{\mu}{}_{B}+\frac{1}{4}\hat{\mu}{}_{W}
\end{eqnarray*}
that:
\begin{eqnarray*}
\hat{\beta}_{1} & = & \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\\
& = & \hat{\mu}{}_{H}-\hat{\mu}{}_{A}+\frac{2}{3}\hat{\mu}{}_{A}-\frac{1}{3}\left(\hat{\mu}{}_{B}+\hat{\mu}{}_{W}\right)\\
& = & \hat{\mu}{}_{H}-\frac{\hat{\mu}{}_{A}+\hat{\mu}{}_{B}+\hat{\mu}{}_{W}}{3}
\end{eqnarray*}
that:
\begin{eqnarray*}
\hat{\beta}_{2} & = & \mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\\
& = & \mu_{A}-\frac{\mu_{B}+\mu_{W}}{2}
\end{eqnarray*}
and finally that:
\begin{eqnarray*}
\hat{\beta}_{3} & = & \hat{\mu}{}_{B}-\hat{\mu}{}_{W}
\end{eqnarray*}
As you can see, by using the Helmert contrasts, we end up with betas
that represent the difference between the estimated mean at the current
level/race and the mean of the subsequent levels/races.
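One can confirm this reading numerically (a quick base-R check I have added): inverting the constant-plus-Helmert design matrix recovers exactly the weights that each $\hat{\beta}$ places on the four group means.

```r
# Each row of the inverse gives the weights the corresponding beta places
# on (mu_H, mu_A, mu_B, mu_W)
X <- cbind(1, matrix(c(3/4, -1/4, -1/4, -1/4,
                       0,    2/3, -1/3, -1/3,
                       0,    0,    1/2, -1/2), ncol = 3))
round(solve(X), 3)
# row 1: 0.25  0.25  0.25  0.25   (grand mean of the group means)
# row 2: 1    -1/3  -1/3  -1/3    (mu_H minus mean of the rest)
# row 3: 0     1    -1/2  -1/2    (mu_A minus mean of B and W)
# row 4: 0     0     1    -1      (mu_B - mu_W)
```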
Let's take a look at this in R to drive the point home:
hsb2 = read.table('https://stats.idre.ucla.edu/stat/data/hsb2.csv', header=T, sep=",")
hsb2$race.f = factor(hsb2$race, labels=c("Hispanic", "Asian", "African-Am", "Caucasian"))
cellmeans = tapply(hsb2$write, hsb2$race.f, mean)
cellmeans
Hispanic Asian African-Am Caucasian
46.45833 58.00000 48.20000 54.05517
helmert2 = matrix(c(3/4, -1/4, -1/4, -1/4, 0, 2/3, -1/3, -1/3, 0, 0, 1/2,
-1/2), ncol = 3)
contrasts(hsb2$race.f) = helmert2
model.helmert2 =lm(write ~ race.f, hsb2)
model.helmert2
Call:
lm(formula = write ~ race.f, data = hsb2)
Coefficients:
(Intercept) race.f1 race.f2 race.f3
51.678 -6.960 6.872 -5.855
#B0=51.678 should correspond to the mean of the means of the races:
cellmeans = tapply(hsb2$write, hsb2$race.f, mean)
mean(cellmeans)
[1] 51.67838
#B1=-6.960 should correspond to the difference between the mean for Hispanics
#and the mean for (Asian, Black, White):
mean(cellmeans[c("Hispanic")]) - mean(cellmeans[c("Asian", "African-Am","Caucasian")])
[1] -6.960057
#B2=6.872 should correspond to the difference between the mean for Asian and
#the mean for (Black, White):
mean(cellmeans[c("Asian")]) - mean(cellmeans[c("African-Am","Caucasian")])
[1] 6.872414
#B3=-5.855 should correspond to the difference between the mean for Black
#and the mean for (White):
mean(cellmeans[c("African-Am")]) - mean(cellmeans[c("Caucasian")])
[1] -5.855172
If you are looking for a method to create a Helmert matrix or are trying to understand how Helmert matrices are generated, you may use this code too that I put together:
#Example with Race Data from OPs example
hsb2 = read.table('https://stats.idre.ucla.edu/stat/data/hsb2.csv', header=T, sep=",")
hsb2$race.f = factor(hsb2$race, labels=c("Hispanic", "Asian", "African-Am", "Caucasian"))
levels<-length(levels(hsb2$race.f))
categories<-seq(levels, 2)
basematrix=matrix(-1, nrow=levels, ncol=levels)
diag(basematrix[1:levels, 2:levels])<-seq(levels-1, 1)
sub.basematrix<-basematrix[,2:levels]
sub.basematrix[upper.tri(sub.basematrix-1)]<-0
contrasts<-sub.basematrix %*% diag(1/categories)
rownames(contrasts)<-levels(hsb2$race.f)
contrasts
[,1] [,2] [,3]
Hispanic 0.75 0.0000000 0.0
Asian -0.25 0.6666667 0.0
African-Am -0.25 -0.3333333 0.5
Caucasian -0.25 -0.3333333 -0.5
Here is an example with five levels of a factor:
levels<-5
categories<-seq(levels, 2)
basematrix=matrix(-1, nrow=levels, ncol=levels)
diag(basematrix[1:levels, 2:levels])<-seq(levels-1, 1)
sub.basematrix<-basematrix[,2:levels]
sub.basematrix[upper.tri(sub.basematrix-1)]<-0
contrasts<-sub.basematrix %*% diag(1/categories)
contrasts
[,1] [,2] [,3] [,4]
[1,] 0.8 0.00 0.0000000 0.0
[2,] -0.2 0.75 0.0000000 0.0
[3,] -0.2 -0.25 0.6666667 0.0
[4,] -0.2 -0.25 -0.3333333 0.5
[5,] -0.2 -0.25 -0.3333333 -0.5
work. I think the answer provided by Peter Flom is great, but I'd
like to take a bit of a different approach and show you how Helme | How to calculate Helmert Coding
I think you are generally trying to understand how Helmert Contrasts
work. I think the answer provided by Peter Flom is great, but I'd
like to take a bit of a different approach and show you how Helmert
Contrasts end up comparing means of factor "levels." I think
this should improve your understanding.
To start the understanding, it's instructive to review the general
model structure. We can assume the following standard multiple regression
model:
\begin{eqnarray*}
\hat{\mu}_{i}=E(Y_{i}) & = & \hat{\beta}_{0}+\hat{\beta}_{1}X_{1}+\hat{\beta}_{2}X_{2}+\hat{\beta}_{3}X_{3}
\end{eqnarray*}
where $i=$ {$H$ for Hispanic, $A$ for Asian, $B$ for Black, and
$W$ for White}.
Contrasts are purposefully chosen methods of coding or ways to numerically
represent factor levels (e.g. Hispanic, Asian, Black,
and White) so that when you regress them onto your dependent
variable, you will obtain estimated beta coefficients that represent
useful comparisons without doing any additional work. You may be familiar
with the traditional treatment contrasts or dummy coding for example,
which assigns a value of 0 or 1 to each observation depending on whether
or not the observation is a Hispanic, Asian, Black, or White. That
coding appears as:
So, if an observation corresponds to someone who is Hispanic, then,
$X_{1}=X_{2}=X_{3}=0$. If the observation corresponds to someone
who is black, then $X_{1}=0,\,X_{2}=1,\,X_{3}=0$. Recall with this
coding, then the estimate corresponding to $\hat{\beta}_{0}$ corresponds
to the estimated mean response for Hispanics only. Then $\hat{\beta}_{1}$
would represent the difference in the estimated mean response between
Asian and Hispanic (i.e. $\hat{\mu}_{A}-\hat{\mu}_{H})$, $\hat{\beta}_{2}$ would
represent the difference in the estimated mean response between Black
and Hispanic (i.e. $\hat{\mu}_{B}-\hat{\mu}_{H})$, and $\hat{\beta}_{3}$ would
represent the difference in estimated mean response between White
and Hispanic (i.e. $\hat{\mu}_{W}-\hat{\mu}_{H})$.
With this in mind recall that we can use the same model as presented
above, but use Helmert codings to obtain useful comparisons of these
mean responses of the races. If instead of treatment contrasts, we
use Helmert contrasts, then the resulting estimated coefficients change
meaning. Instead of $\hat{\beta}_{1}$ corresponding to the difference
in the mean response between Asian and Hispanic, under the Helmert
coding you presented, it would represent the difference between the
mean response for Hispanic and and the "mean of the mean" response for the Asian, Black and White group (i.e. $\hat{\mu}_{H}-\frac{\hat{\mu}_{A}+\hat{\mu}_{B}+\hat{\mu}_{W}}{3}$).
To see how this coding ``turns'' into these estimates. We can simply
set up the Helmert matrix (only I'm going to include the constant
column which is sometimes excluded in texts) and augment it with the
estimated mean response for each race, $\hat{\mu}_{i}$, then use
Gauss-Jordan Elimination to put the matrix in row-reduced echelon
form. This will allow us to simply read-off the interpretations of
each estimated parameter from the model. I'll demonstrate this below:
\begin{eqnarray*}
\begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
1 & -\frac{1}{4} & \frac{2}{3} & 0 & | & \mu_{A}\\
1 & -\frac{1}{4} & -\frac{1}{3} & \frac{1}{2} & | & \mu_{B}\\
1 & -\frac{1}{4} & -\frac{1}{3} & -\frac{1}{2} & | & \mu_{W}
\end{bmatrix} & \sim & \begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
0 & 1 & -\frac{2}{3} & 0 & | & \mu_{H}-\mu_{A}\\
0 & -1 & -\frac{1}{3} & \frac{1}{2} & | & \mu_{B}-\mu_{H}\\
0 & -1 & -\frac{1}{3} & -\frac{1}{2} & | & \mu_{W}-\mu_{H}
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
0 & 1 & -\frac{2}{3} & 0 & | & \mu_{H}-\mu_{A}\\
0 & 0 & 1 & -\frac{1}{2} & | & \mu_{A}-\mu_{B}\\
0 & 0 & -1 & -\frac{1}{2} & | & \mu_{W}-\mu_{A}
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & \frac{3}{4} & 0 & 0 & | & \mu_{H}\\
0 & 1 & -\frac{2}{3} & 0 & | & \mu_{H}-\mu_{A}\\
0 & 0 & 1 & -\frac{1}{2} & | & \mu_{A}-\mu_{B}\\
0 & 0 & 0 & 1 & | & \mu_{B}-\mu_{W}
\end{bmatrix}\\
& \sim & \begin{bmatrix}1 & 0 & 0 & 0 & | & \mu_{H}-\frac{3}{4}\left\{ \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\right\} \\
0 & 1 & 0 & 0 & | & \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\\
0 & 0 & 1 & 0 & | & \mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\\
0 & 0 & 0 & 1 & | & \mu_{B}-\mu_{W}
\end{bmatrix}
\end{eqnarray*}
So, now we simply read off the pivot positions. This implies that:
\begin{eqnarray*}
\hat{\beta}_{0} & = & \mu_{H}-\frac{3}{4}\left\{ \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\right\} \\
& = & \frac{1}{4}\hat{\mu}{}_{H}+\frac{1}{4}\hat{\mu}{}_{A}+\frac{1}{4}\hat{\mu}{}_{B}+\frac{1}{4}\hat{\mu}{}_{W}
\end{eqnarray*}
that:
\begin{eqnarray*}
\hat{\beta}_{1} & = & \mu_{H}-\mu_{A}+\frac{2}{3}\left[\mu_{A}-\mu_{B}+\frac{1}{2}\left(\mu_{B}-\mu_{W}\right)\right]\\
& = & \hat{\mu}{}_{H}-\hat{\mu}{}_{A}+\frac{2}{3}\hat{\mu}{}_{A}-\frac{1}{3}\left(\hat{\mu}{}_{B}-\hat{\mu}{}_{W}\right)\\
& = & \hat{\mu}{}_{H}-\frac{\hat{\mu}{}_{A}+\hat{\mu}{}_{B}+\hat{\mu}{}_{W}}{3}
\end{eqnarray*}
that:
\begin{eqnarray*}
\hat{\beta}_{2} & = & \hat{\mu}_{A}-\hat{\mu}_{B}+\frac{1}{2}\left(\hat{\mu}_{B}-\hat{\mu}_{W}\right)\\
 & = & \hat{\mu}_{A}-\frac{\hat{\mu}_{B}+\hat{\mu}_{W}}{2}
\end{eqnarray*}
and finally that:
\begin{eqnarray*}
\hat{\beta}_{3} & = & \hat{\mu}_{B}-\hat{\mu}_{W}
\end{eqnarray*}
As you can see, by using the Helmert contrasts, we end up with betas
that represent the difference between the estimated mean at the current
level/race and the mean of the subsequent levels/races.
Let's take a look at this in R to drive the point home:
hsb2 = read.table('https://stats.idre.ucla.edu/stat/data/hsb2.csv', header=T, sep=",")
hsb2$race.f = factor(hsb2$race, labels=c("Hispanic", "Asian", "African-Am", "Caucasian"))
cellmeans = tapply(hsb2$write, hsb2$race.f, mean)
cellmeans
Hispanic Asian African-Am Caucasian
46.45833 58.00000 48.20000 54.05517
helmert2 = matrix(c(3/4, -1/4, -1/4, -1/4, 0, 2/3, -1/3, -1/3, 0, 0, 1/2,
-1/2), ncol = 3)
contrasts(hsb2$race.f) = helmert2
model.helmert2 =lm(write ~ race.f, hsb2)
model.helmert2
Call:
lm(formula = write ~ race.f, data = hsb2)
Coefficients:
(Intercept) race.f1 race.f2 race.f3
51.678 -6.960 6.872 -5.855
#B0=51.678 should correspond to the mean of the means of the races:
cellmeans = tapply(hsb2$write, hsb2$race.f, mean)
mean(cellmeans)
[1] 51.67838
#B1=-6.960 should correspond to the difference between the mean for Hispanics
#and the mean for (Asian, Black, White):
mean(cellmeans[c("Hispanic")]) - mean(cellmeans[c("Asian", "African-Am", "Caucasian")])
[1] -6.960057
#B2=6.872 should correspond to the difference between the mean for Asians and
#the mean for (Black, White):
mean(cellmeans[c("Asian")]) - mean(cellmeans[c("African-Am", "Caucasian")])
[1] 6.872414
#B3=-5.855 should correspond to the difference between the mean for Blacks and
#the mean for Whites:
mean(cellmeans[c("African-Am")]) - mean(cellmeans[c("Caucasian")])
[1] -5.855172
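As a cross-check outside R (my addition, in Python with NumPy): since there are four groups and four parameters, the model is saturated, so the coefficients can be recovered exactly by solving the 4-by-4 system formed by the intercept and the three Helmert contrast columns against the cell means:

```python
import numpy as np

# Cell means from the hsb2 data (Hispanic, Asian, African-Am, Caucasian)
mu = np.array([46.45833, 58.00000, 48.20000, 54.05517])

# Design matrix rows: intercept plus the three Helmert contrast columns
X = np.array([
    [1.0,  3/4,  0.0,  0.0],
    [1.0, -1/4,  2/3,  0.0],
    [1.0, -1/4, -1/3,  1/2],
    [1.0, -1/4, -1/3, -1/2],
])

# With as many groups as parameters, the fitted cell means equal the
# observed ones, so beta = X^{-1} mu
beta = np.linalg.solve(X, mu)
print(beta)  # approximately (51.678, -6.960, 6.872, -5.855), as in lm()
```

The solution reproduces the `lm()` coefficients: the grand mean of the cell means, followed by each level-versus-later-levels difference.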
If you are looking for a way to create a Helmert matrix, or are trying to understand how Helmert matrices are generated, you can also use this code that I put together:
#Example with Race Data from OPs example
hsb2 = read.table('https://stats.idre.ucla.edu/stat/data/hsb2.csv', header=T, sep=",")
hsb2$race.f = factor(hsb2$race, labels=c("Hispanic", "Asian", "African-Am", "Caucasian"))
levels<-length(levels(hsb2$race.f))
categories<-seq(levels, 2)
basematrix=matrix(-1, nrow=levels, ncol=levels)
diag(basematrix[1:levels, 2:levels])<-seq(levels-1, 1)
sub.basematrix<-basematrix[,2:levels]
sub.basematrix[upper.tri(sub.basematrix)]<-0
contrasts<-sub.basematrix %*% diag(1/categories)
rownames(contrasts)<-levels(hsb2$race.f)
contrasts
[,1] [,2] [,3]
Hispanic 0.75 0.0000000 0.0
Asian -0.25 0.6666667 0.0
African-Am -0.25 -0.3333333 0.5
Caucasian -0.25 -0.3333333 -0.5
Here is an example with five levels of a factor:
levels<-5
categories<-seq(levels, 2)
basematrix=matrix(-1, nrow=levels, ncol=levels)
diag(basematrix[1:levels, 2:levels])<-seq(levels-1, 1)
sub.basematrix<-basematrix[,2:levels]
sub.basematrix[upper.tri(sub.basematrix)]<-0
contrasts<-sub.basematrix %*% diag(1/categories)
contrasts
[,1] [,2] [,3] [,4]
[1,] 0.8 0.00 0.0000000 0.0
[2,] -0.2 0.75 0.0000000 0.0
[3,] -0.2 -0.25 0.6666667 0.0
[4,] -0.2 -0.25 -0.3333333 0.5
[5,] -0.2 -0.25 -0.3333333 -0.5
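The same generation scheme can be sketched in Python (the function `helmert_contrasts` is my naming and implementation, not part of the original R code): column $j$ gives weight $(k-1)/k$ to the current level and $-1/k$ to each of the $k-1$ subsequent levels, where $k$ is the number of levels still being compared.

```python
import numpy as np

def helmert_contrasts(levels):
    """Forward-Helmert contrasts with fractional weights: column j
    compares level j against the mean of all subsequent levels."""
    C = np.zeros((levels, levels - 1))
    for j in range(levels - 1):
        k = levels - j            # levels remaining in this comparison
        C[j, j] = (k - 1) / k     # weight on the current level
        C[j + 1:, j] = -1.0 / k   # weight on each later level
    return C

print(helmert_contrasts(4))  # matches the 4-level matrix above
print(helmert_contrasts(5))  # matches the 5-level matrix above
```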
33,843 | How to calculate Helmert Coding | With Helmert coding, each level of the variable is compared to "later" levels of the variable.
The weights depend on the number of levels of the variable.
If there are $L$ levels then the first comparison is of the first level vs. the $(L-1)$ other levels. The weights are then $(L-1)/L$ for the first level and $-1/L$ for each of the other levels. In your case $L = 4$, so the weights are .75 and -.25 (3 times).
The next comparison involves only $L-1$ levels (the first level is no longer part of the comparisons), so now the weights are $(L-2)/(L-1)$ for the first level and $-1/(L-1)$ for the others (in your case, $2/3$ and $-1/3$). And so on.
Why are you using Helmert coding here? As this page notes, Helmert coding and its inverse, difference coding, really only make sense when the variable is ordinal.
Clearly, this coding system does not make much sense with our example
of race because it is a nominal variable. However, this system is
useful when the levels of the categorical variable are ordered in a
meaningful way. For example, if we had a categorical variable in
which work-related stress was coded as low, medium or high, then
comparing the means of the previous levels of the variable would make
more sense.
Personally, I find them hard to interpret, even in that case. But, you are comparing "White" to the average of the other three groups. Is that what you want?
33,844 | Can a confidence interval straddle the zero mark? [duplicate] | Can't a confidence interval be positioned both sides of the zero? Say, can't its range be [-12; +12]?
Certainly it can.
Did you miss the stated condition "Whenever an effect is significant"?
33,845 | Can a confidence interval straddle the zero mark? [duplicate] | Often the quantity you are interested in has zero as its null value. So, for example, if you are estimating differences between means, then the null value is zero - no difference. In such a case you would be interested in whether the confidence interval includes zero. In some cases, though, the null value is not zero but perhaps unity. For instance, if you are interested in the odds ratio, then the null value is unity (since it is a ratio, and when the quantities are the same their ratio is unity). In such cases you are interested in whether the interval includes unity - and in fact the ratio cannot be negative.
33,846 | Can a confidence interval straddle the zero mark? [duplicate] | The quote talks about testing whether a certain value is significantly different from zero at some level of significance.
If you had a confidence interval like the one you propose, then the test would not reject the null hypothesis that "the tested value is zero", because the interval [-12, 12] tells you that the true value might easily be zero.
33,847 | Can a confidence interval straddle the zero mark? [duplicate] | Allow me to help. Basically, the confidence interval of a parameter gives the endpoints of the range of values that the parameter can reasonably achieve. So, if 95% of the time a parameter is greater than -12 and less than 12, then it could also be zero. And, if it can be zero then it can be worthless. For example, if $A X$ can have $A=0$ then its contribution to a regression is not significant. If we had a 95% CI for $A$ of 12 to 24, then it is not likely to be worthless, as it is significantly non-zero. Mind you, not significantly different from zero is not necessarily an insignificant contribution, and if we had more data, it might become significantly different from zero. It is a fine point, perhaps, that not significant does not mean insignificant, and failing to obtain significance proves nothing special.
However, that does not mean that a non-significant result is meaningless. It has a meaningful context, and indeed one in which the confidence intervals contribute more than a probability alone would.
Suppose we have a 95% confidence interval for $A$ as above that extends only from -1 to 1. It is then 12 times narrower than the -12 to 12 CI we had before. This implies that a result of -5 or +5 would be significantly non-zero, so we now have greater certainty about where the values of $A$ are not located, and a narrower range in which our uncertain values of $A$ reside.
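To make the arithmetic concrete, here is a minimal sketch (my addition, in Python; the estimate and standard error are made-up numbers for illustration): a 95% interval that straddles zero corresponds to a result that is not significant at the 5% level.

```python
# Hypothetical point estimate and standard error (illustrative values)
est, se = 5.0, 4.0

# Normal-approximation 95% confidence interval
lo, hi = est - 1.96 * se, est + 1.96 * se  # roughly (-2.84, 12.84)

# The interval contains zero, so zero is a plausible value for the
# parameter and the effect is not significant at the 5% level
straddles_zero = lo <= 0.0 <= hi
print((lo, hi), straddles_zero)
```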
33,848 | What exactly does it mean to and why must one update prior? | In plain English, updating a prior in Bayesian inference means that you start with a guess about the probability of an event occurring (the prior probability), then you observe what happens (the likelihood), and depending on what happened you update your initial guess. Once updated, your prior probability is called the posterior probability.
Of course, now you can:
stop with your posterior probability;
use your posterior probability as a new prior, and update it to obtain a new posterior by observing more evidence (i.e. data).
Essentially, updating a prior means that you start with an (informed) guess and use evidence to update it. Recall that
$$ p(\theta | x) = \frac{p(x|\theta)p(\theta)}{p(x)},$$
where $p(\theta)$ is your prior, $p(x|\theta)$ is the likelihood (i.e. the evidence that you use to update the prior), and $p(\theta|x)$ is the posterior probability. Notice that the posterior probability is a probability given the evidence.
Example of coins:
You start with the guess that the probability of the coin being fair is $p = 0.1$. Then you toss the coin 10 times, and you obtain a posterior probability $p = 0.3$. At this point, you can decide to be satisfied with $p = 0.3$ or toss the coin again (say 90 more times): in this case, your prior will be $p = 0.3$ -- i.e. the posterior becomes the new prior -- and you will obtain a new posterior probability depending on the new evidence.
Suppose that after 1000 tosses your posterior probability is $p = 0.9$. At the beginning your prior was $p = 0.1$, so you were supposing that the coin was unfair. Now, based on the evidence of 1000 tosses, you see that the probability of the coin being fair is high.
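This sequential updating can be sketched in code. The example below is my addition: the two candidate coins and the toss strings are hypothetical choices, made so that the numbers are deterministic. We compare a "fair" coin against a "biased" one, and the posterior from the first batch of tosses is fed back in as the prior for the second batch.

```python
import numpy as np

# Two candidate models for the coin (the biased value 0.7 is hypothetical)
p_heads = {"fair": 0.5, "biased": 0.7}

def likelihood(p, tosses):
    """Probability of the observed toss string under heads-probability p."""
    return np.prod([p if t == "H" else 1.0 - p for t in tosses])

def update(prior, tosses):
    """Bayes' rule: posterior is proportional to likelihood times prior."""
    unnorm = {h: prior[h] * likelihood(p_heads[h], tosses) for h in prior}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

prior = {"fair": 0.1, "biased": 0.9}    # initial guess: probably unfair
post1 = update(prior, "HTHTTHHTTH")     # first batch: 10 tosses, 5 heads
post2 = update(post1, "THHTHTTHTH")     # posterior becomes the new prior
print(post1["fair"], post2["fair"])     # belief in fairness grows
```

With balanced toss batches, the probability assigned to "fair" rises from 0.1 to roughly 0.21 after the first batch and keeps rising as evidence accumulates, mirroring the narrative above.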
Notice that the fact that you can easily update a probability as you obtain new evidence is a strength of the Bayesian framework. The point here is not that the prior must be updated, but that you should use all the available evidence to update your guess about a certain probability.
33,849 | What exactly does it mean to and why must one update prior? | Because we want a model that incorporates the observed data, so that the model (probability distribution) fits the data and we can use it for stable predictions; the initial prior is just a hypothesis to start with.
The steps to update the prior distribution to get the posterior.
33,850 | Is a fat tail same as skew | The "heaviness" of the tail refers to how quickly the probability decays as you move away from the center of the distribution, while skewness deals with symmetry or lack thereof. For instance, the exponential distribution is skewed but considered to have a fairly light tail, while the Cauchy distribution is perfectly symmetric but heavy-tailed.
33,851 | Is a fat tail same as skew | As dsaxtron points out (+1), skewness refers to symmetry or asymmetry. Any symmetric distribution will have a skewness of zero - no matter how fat its tails. This is because of the third power in its definition, which allows deviations in both tails to cancel out.
Thus, there is no relationship between skewness and tail fatness.
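A quick numerical illustration of this point (my addition, in Python): the Laplace distribution has exactly zero skewness yet positive excess kurtosis, i.e. heavier tails than the normal, while the exponential is strongly skewed. The two properties vary independently.

```python
import numpy as np

def skew_exkurt(pdf, lo, hi, n=400_001):
    """Skewness and excess kurtosis of a density, by numerical
    integration on a truncated grid (truncation error is negligible
    for the ranges used below)."""
    x = np.linspace(lo, hi, n)
    w = pdf(x)
    w = w / w.sum()                   # normalize the discretized density
    m = (x * w).sum()
    var = ((x - m) ** 2 * w).sum()
    skew = ((x - m) ** 3 * w).sum() / var ** 1.5
    exkurt = ((x - m) ** 4 * w).sum() / var ** 2 - 3
    return skew, exkurt

# Exponential(1): asymmetric (skewness 2), excess kurtosis 6
s_exp, k_exp = skew_exkurt(lambda x: np.exp(-x), 0.0, 40.0)
# Laplace: zero skewness, yet excess kurtosis 3 (heavier-tailed than normal)
s_lap, k_lap = skew_exkurt(lambda x: 0.5 * np.exp(-np.abs(x)), -40.0, 40.0)
print(s_exp, k_exp)  # approximately 2 and 6
print(s_lap, k_lap)  # approximately 0 and 3
```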
However, and relatedly, I strongly recommend Westfall (2014), Kurtosis as Peakedness, 1905–2014. R.I.P. in The American Statistician, which extremely nicely debunks the common misconception (also found in the Wikipedia article) that kurtosis has anything to do with "peakedness". Instead, kurtosis measures the propensity to outliers, i.e., the fatness of tails of a distribution. This is because the kurtosis uses the fourth power of deviations from the mean, so positive and negative tails do not cancel out.
33,852 | Is a fat tail same as skew | Sorry I am late to this thread. There have been several points of view expressed in the comments that express confusion about outliers and tails.
Rex Kerr's comment that kurtosis is not fat-tailedness is misguided. His counterexample with no outliers (and therefore, as he claims, no fat tail) is $(-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1)$. I will convert those data to the empirical distribution $x = (-1,0,1)$, with $p(x) = (1/11, 9/11, 1/11)$ and calculate excess kurtosis $k = 2.5$.
His comment is that this example shows "large kurtosis despite not having any tail."
To shed some more light on this, let's simplify the example. Consider instead the Bernoulli distribution $x = (0,1)$, $p(x) = (1-p,p)$. There is even less in the tail of this distribution, yet kurtosis tends to infinity as $p$ tends to $0$. For example, imagine that this is a model for a belief that the moon landing was faked, with $p(x) = (.94, .06)$. I think we would all agree that the person who believes that the moon landing was faked is an "outlier." The excess kurtosis of this distribution bears that out, with $k = 11.7$.
The degree of "outlier-ness" can be characterized by $z$-score: Here the person who thinks the moon landing was faked has $z$-score $z= (1 - .06)/\sqrt{.06*.94} = 3.95$, quite a ways into the tail of the $z$-distribution.
If belief in the hoax were more rare (which apparently it is not, according to polls), such as $0.1\%$, then that person would have a $z$-score of $z= (1 - .001)/\sqrt{.001*.999} = 31.61$, which, all would agree, is an outlier: If these data were from a normal distribution, the likelihood of seeing an observation $31.61$ standard deviations from the mean or more would be so small as to be called impossible. Also, the excess kurtosis is now $k = 995$.
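These kurtosis figures are easy to verify numerically. A short sketch (my addition, in Python) computes the excess kurtosis $E[Z^4] - 3$ directly from the probability mass function:

```python
import numpy as np

def excess_kurtosis(x, p):
    """Excess kurtosis E[Z^4] - 3 of a discrete distribution."""
    x, p = np.asarray(x, float), np.asarray(p, float)
    mu = (x * p).sum()
    var = ((x - mu) ** 2 * p).sum()
    return ((x - mu) ** 4 * p).sum() / var ** 2 - 3

print(excess_kurtosis([-1, 0, 1], [1/11, 9/11, 1/11]))  # = 2.5
print(excess_kurtosis([0, 1], [0.94, 0.06]))            # about 11.7
print(excess_kurtosis([0, 1], [0.999, 0.001]))          # about 995
```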
So, despite the fact that the normal distribution has tails that extend to infinity, the Bernoulli distribution is arguably "heavier-tailed" for small $p$ in the sense that it can produce extreme observations greatly exceeding what the normal distribution is capable of.
Kurtosis is, by definition, the expected value of the $Z$ scores, each raised to the fourth power. When you have extreme $z$-scores (outliers), you have high kurtosis.
There are infinitely many measures of tail extremity. Kurtosis is a measure of tail extremity that focuses on the $z$-scores; thus, by this measure, a distribution with finite support can be heavier-tailed than one with infinite support.
This definition is perfectly logical, and quite applied. The reason we care about tails is because we care about outliers. The normal distribution simply does not produce outliers 31 standard deviations from the mean, by any practical way of thinking about it. The Bernoulli distribution, on the other hand, produces such values quite easily.
The focus on outliers is quite applied because statistical procedures of all kinds are affected by outliers. Take the variance estimate, for example: Its accuracy depends strongly on kurtosis, because the value of the estimate is strongly dependent upon whether or not outliers are in that particular sample. Power of means tests are also affected by outliers. So the interpretation of kurtosis as a measure of outliers, and not peak or center, is not only correct, it also lines up correctly with statistical applications.
Back to Rex's "counterexample": he could make it more extreme by letting $x = (-1,0,1)$, $p(x) = (.001, .998, .001)$. The excess kurtosis is now $k=497$. The reason is that the $+1$ and $-1$ responses are now extreme outliers, $22.4$ standard deviations from the mean. This distribution is heavier-tailed than the normal distribution in the sense that it produces occasional values $22.4$ standard deviations from the mean.
Also, while Rex's counterexample, and my enhanced version of it, suggest that higher kurtosis corresponds to a "peaked" distribution, there are easy examples of distributions with the same kurtosis that are not peaked. Take, for example, $x = (-1000, -2,-1,0,+1,+2, +1000)$, $p(x) = (.001, .25,.20, .098, .20, .25, .001)$. This distribution is "U"-shaped, not peaked, and there are occasional outliers. Its excess kurtosis is $k=496$, similar to my enhanced counterexample, and the most extreme values are similarly $22.3$ standard deviations from the mean.
In summary, kurtosis does measure the tail (potential outliers) of the distribution, because it is the expected value of $Z^4$. If you have some large $Z$-values, then you have large kurtosis.
I give three mathematical theorems (for which there obviously can be no counterexamples) in my TAS article to support the connection between kurtosis and tails. To my knowledge, there are no theorems connecting kurtosis to the shape of the peak, or even to the probability content in the $\mu \pm \sigma$ range. If anyone has such a theorem, I'd love to see it. | Is a fat tail same as skew | Sorry I am late to this thread. There have been several points of view expressed in the comments that express confusion about outliers and tails.
Rex Kerr's comment that kurtosis is not fat-tailedness | Is a fat tail same as skew
Sorry I am late to this thread. There have been several points of view expressed in the comments that express confusion about outliers and tails.
Rex Kerr's comment that kurtosis is not fat-tailedness is misguided. His counterexample with no outliers (and therefore, as he claims, no fat tail) is $(-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1)$. I will convert those data to the empirical distribution $x = (-1,0,1)$, with $p(x) = (1/11, 9/11, 1/11)$ and calculate excess kurtosis $k = 2.5$.
His comment is that this example shows "large kurtosis despite not having any tail."
To shed some more light on this, let's simplify the example. Consider instead the Bernoulli distribution $x = (0,1)$, $p(x) = (1-p,p)$. There is even less in the tail of this distribution, yet kurtosis tends to infinity as $p$ tends to $0$. For example, imagine that this is a model for a belief that the moon landing was faked, with $p(x) = (.94, .06)$. I think we would all agree that the person who believes that the moon landing was faked is an "outlier." The excess kurtosis of this distribution bears that out, with $k = 11.7$.
The degree of "outlier-ness" can be characterized by $z$-score: Here the person who thinks the moon landing was faked has $z$-score $z= (1 - .06)/\sqrt{.06*.94} = 3.95$, quite a ways into the tail of the $z$-distribution.
If belief in the hoax were more rare (which apparently it is not, according to polls), such as $0.1\%$, then that person would have a $z$-score of $z= (1 - .001)/\sqrt{.001*.999} = 31.61$, which, all would agree, is an outlier: If these data were from a normal distribution, the likelihood of seeing an observation $31.61$ standard deviations from the mean or more would be so small as to be called impossible. Also, the excess kurtosis is now $k = 995$.
So, despite the fact that the normal distribution has tails that extend to infinity, the Bernoulli distribution is arguably "heavier-tailed" for small $p$ in the sense that it can produce extreme observations greatly exceeding what the normal distribution is capable of.
Kurtosis is, by definition, the expected value of the $Z$ scores, each raised to the fourth power. When you have extreme $z$-scores (outliers), you have high kurtosis.
There are infinitely measures of tail extremity. Kurtosis is a measure of tail extremity that focuses on the $z$-scores, thus, by this measure, a distribution with finite support can be heavier-tailed than one with infinite support.
This definition is perfectly logical, and quite applied. The reason we care about tails is because we care about outliers. The normal distribution simply does not produce outliers 31 standard deviations from the mean, by any practical way of thinking about it. The Bernoulli distribution, on the other hand, produces such values quite easily.
The focus on outliers is quite applied because statistical procedures of all kinds are affected by outliers. Take the variance estimate, for example: Its accuracy depends strongly on kurtosis, because the value of the estimate is strongly dependent upon whether or not outliers are in that particular sample. The power of tests on means is also affected by outliers. So the interpretation of kurtosis as a measure of outliers, and not peak or center, is not only correct, it also lines up correctly with statistical applications.
Back to Rex's "counterexample": he could make it more extreme by letting $x = (-1,0,1)$, $p(x) = (.001, .998, .001)$. The excess kurtosis is now $k=497$. The reason is that the $+1$ and $-1$ responses are now extreme outliers, $22.4$ standard deviations from the mean. This distribution is heavier-tailed than the normal distribution in the sense that it produces occasional values $22.4$ standard deviations from the mean.
Also, while Rex's counterexample, and my enhanced version of it suggest that higher kurtosis corresponds to a "peaked" distribution, there are easy examples where the distribution is not peaked with the same kurtosis. Take, for example, $x = (-1000, -2,-1,0,+1,+2, +1000)$, $p(x) = (.001, .25,.20, .098, .20, .25, .001)$. This distribution is "U" shaped, not peaked, and there are occasional outliers. Its excess kurtosis is $k=496$, similar to my enhanced counterexample, and the most extreme values are similarly $22.3$ standard deviations from the mean.
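Both discrete examples can be verified with a few lines that compute $E[Z^4] - 3$ directly from the support and probabilities (a sketch; the function name is mine):

```python
def excess_kurtosis(xs, ps):
    """Excess kurtosis E[Z^4] - 3 of a discrete distribution with support xs and probabilities ps."""
    mu = sum(p * x for x, p in zip(xs, ps))
    var = sum(p * (x - mu) ** 2 for x, p in zip(xs, ps))
    m4 = sum(p * (x - mu) ** 4 for x, p in zip(xs, ps))
    return m4 / var ** 2 - 3.0

# Peaked three-point distribution: rare outliers 22.4 sd from the mean.
k_peaked = excess_kurtosis([-1, 0, 1], [0.001, 0.998, 0.001])
# U-shaped distribution with rare outliers 22.3 sd from the mean.
k_u = excess_kurtosis([-1000, -2, -1, 0, 1, 2, 1000],
                      [0.001, 0.25, 0.20, 0.098, 0.20, 0.25, 0.001])
print(k_peaked)  # ≈ 497
print(k_u)       # ≈ 496
```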
In summary, kurtosis does measure the tail (potential outliers) of the distribution, because it is the expected value of $Z^4$. If you have some large $Z$-values, then you have large kurtosis.
I give three mathematical theorems (for which there obviously can be no counterexamples) in my TAS article to support the connection between kurtosis and tails. To my knowledge, there are no theorems connecting kurtosis to the shape of the peak, or even to the probability content in the $\mu \pm \sigma$ range. If anyone has such a theorem, I'd love to see it.
33,853 | Classification in time series: SVMs, Neural Networks, Random Forests or non parametric models

These decisions IMHO can only be made in a sensible way with intimate knowledge about the problem and the data at hand (search terms: no free lunch theorem for pattern recognition/classification). So all we can tell you here are very general rules of thumb.
The more statistically independent cases you have for training, the more complex models you can afford. Very restrictive models (e.g. linear) are very often chosen because more complex models cannot be afforded with the given amount of data and less about really being convinced of having actually linear class boundaries.
See bias variance tradeoff and model complexity e.g. in The Elements of Statistical Learning
knowledge about the nature of your problem and data may also suggest sensible ways of feature generation.
If you don't have terribly many samples, but absolutely need nonlinear boundaries and therefore get unstable models, then ensemble models (like the random Forest) can help. You can aggregate not only decision trees but all other kinds of models as well.
There are rumours* that for the final quality of the model the choice of model often matters less than the experience the user has with the chosen type of model. I try to collect some evidence about this rumour in this question.
The conclusion would be to look for someone to consult who has experience with the classifiers you consider or, even better, with classification of your type of data (that would need a more detailed description than just saying it is time series).
Note: the first three can also be set up to output posterior probabilities.
*I don't know any scientific study that reports this, but have heard numerous people reporting this observation, and there is a number of descriptions of the differences between types of models that at the end conclude that the theoretical differences in practice hardly ever matter.
33,854 | Classification in time series: SVMs, Neural Networks, Random Forests or non parametric models

It is hard to make specific suggestions without knowing more information about the exact problem you are attempting to solve. However, I will make a few recommendations for general time series prediction.
First, it is important to note that independence of observations is often a poor assumption for time series due to serial correlation of observations. Additionally, assuming that the time series is stationary may or may not be a good assumption. These issues complicate the partitioning of your data into training, cross-validation, and test sets. If there is a significant degree of autocorrelation this will reduce your effective sample size and increase your chances of over fitting, especially for complex machine learning algorithms like multilayer neural networks that have many tunable weights. Also, autocorrelation affects how you split your data set. If you randomly sample from your entire data set to create cross-validation and testing sets, then you will likely introduce a look-ahead bias into your out-of-sample error estimates. This happens because the out-of-sample data sets have points that are temporally adjacent to in-sample data points. This violates the independence assumption if there is significant autocorrelation and will artificially reduce your out-of-sample error estimates. This problem often leads to splitting the data set at a particular time point so that every observation before the split is training and every observation after the split is cross-validation/testing. This alleviates some of the issues with autocorrelation, but adds other complications if the in-sample time series differs significantly from the out-of-sample time series (non-stationary time series).
Considering the issues above, I would start as simple as possible and work my way up in complexity as needed. I would start by temporally splitting my data set into an in-sample and out-of-sample set, making sure that the out-of-sample contained similar cycles and statistical properties as the in-sample set. Then I would approach the in-sample data set with a Random Forest. Not knowing the size of your data set, 20 input variables may be a conservative use of your degrees of freedom. However, due to the negative effects of autocorrelation on your effective sample size, I would apply a random forest to the data set to get an estimate of variable importance. Knowing the relative importance of your inputs might help you eliminate some of your weaker inputs and free up more degrees of freedom. This site is a good intro for using random forests for variable importance estimates, and this paper discusses key pitfalls of conducting variable ranking. Random forests also have the advantage that you can pull out individual decision trees and understand the classification process by following the branches of the tree. This process of examining several of the forest’s decision trees can be very insightful and lead to better choices for the final classification algorithm. Neural networks and SVMs are less easy to interpret than random forests. Random forests also require less pre-processing of data than other machine learning algorithms, making them an approachable first step.
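As a minimal, library-free sketch of the temporal split described above (the function name and the 70/30 fraction are illustrative, not from the original answer):

```python
def temporal_split(series, train_frac=0.7):
    """Split an ordered series at a single time point: everything before the
    cut is in-sample, everything after is out-of-sample, so only the one
    boundary observation neighbours training data, unlike a random split
    where every held-out point may sit next to an in-sample one."""
    cut = int(len(series) * train_frac)
    return series[:cut], series[cut:]

observations = list(range(100))   # stands in for 100 ordered time points
train, holdout = temporal_split(observations, train_frac=0.7)
print(len(train), len(holdout))   # 70 30
print(max(train) < min(holdout))  # True: time order is preserved across the split
```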
Using a random forest does have the drawback that it randomly splits data into out-of-bag (OOB) and in-bag samples for each decision tree. This random splitting will lead to bias in the OOB error estimate of the forest if your time series has significant autocorrelation. However, you can confine this bias to the in-sample data set by only running the random forest on it and preserving the out-of-sample data set for actual error estimation.
A random forest may provide you with enough power for your classification problem. However if you are looking for more improvement, I would proceed next by constructing a logistic regression using the variables deemed most important by the random forest. At this point, you have already used your in-sample data for variable selection and for insights by examining decision trees. However, you are probably safe splitting the in-sample data into training and cross-validation sets and then using learning curves to decide how complex you should make your classification algorithm. Start with a simple logistic regression; if you are still observing high bias via your learning curve, then you might consider moving to a multi-node, single layer neural network and repeating the process. Or you might consider using an SVM with an RBF kernel. This post touches on the differences between neural nets and SVMs. These choices will really be driven by the specifics of your problem and the availability of more data if you do encounter high variance in your final classification algorithm. Also, the complexity of your final classification algorithm will be governed by performance on cross-validation, but you should aim for the simplest solution that provides reasonable performance.
33,855 | Is the square root of the symmetric Kullback-Leibler divergence a metric?

No, the square root of the symmetrised KL divergence is not a metric. A counterexample is as follows:
Let $P$ be a coin that produces a head 10% of the time.
Let $Q$ be a coin that produces a head 20% of the time.
Let $R$ be a coin that produces a head 30% of the time.
Then $d(P, Q) + d(Q, R) = 0.284... + 0.232... < 0.519... = d(P, R)$.
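These numbers can be checked directly, assuming $d(P,Q) = \sqrt{D(P\Vert Q) + D(Q\Vert P)}$, the square root of the (unhalved) symmetrised KL divergence, which reproduces the quoted values (function names are mine):

```python
import math

def kl(p, q):
    """KL divergence D(P||Q) between Bernoulli(p) and Bernoulli(q)."""
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def d(p, q):
    """Square root of the symmetrised divergence D(P||Q) + D(Q||P)."""
    return math.sqrt(kl(p, q) + kl(q, p))

P, Q, R = 0.1, 0.2, 0.3
print(d(P, Q), d(Q, R), d(P, R))    # ≈ 0.2848, 0.2322, 0.5196
print(d(P, Q) + d(Q, R) < d(P, R))  # True: the triangle inequality fails
```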
However, for $P$ and $Q$ very close together, $D(P, Q)$ and $J(P, Q)$ and $S(P, Q)$ are essentially the same (they are proportional to one another $+ O((P-Q)^3)$) and their square root is a metric (to the same order). We can take this local metric and integrate it up over the whole space of probability distributions to obtain a global metric. The result is:
$$A(P, Q) = \cos^{-1}\left(\sum_x \sqrt{P(x)Q(x)} \right)$$
I worked this out myself, so I'm afraid I do not know what it is called. I will use A for Alistair until I find out. ;-)
By construction, the triangle inequality in this metric is tight. You can actually find a unique shortest path through the space of probability distributions from $P$ to $Q$ that has the right length. In that respect it is preferable to the otherwise similar Hellinger distance:
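The tightness is easy to see numerically for the three coins above, since under the map $p \mapsto (\sqrt{p}, \sqrt{1-p})$ all Bernoulli distributions lie on a single arc of the unit circle (a sketch; names are mine):

```python
import math

def arc_dist(P, Q):
    """A(P, Q) = arccos(sum_x sqrt(P(x) * Q(x))) for distributions given as lists."""
    return math.acos(sum(math.sqrt(p * q) for p, q in zip(P, Q)))

# The three coins from the counterexample, written out as (heads, tails) distributions.
P, Q, R = [0.1, 0.9], [0.2, 0.8], [0.3, 0.7]
lhs = arc_dist(P, Q) + arc_dist(Q, R)
rhs = arc_dist(P, R)
print(lhs, rhs)                # ≈ 0.2579 and 0.2579: equal up to rounding
print(math.isclose(lhs, rhs))  # True: the triangle inequality holds with equality here
```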
$$H(P, Q) = \sqrt{1 - \sum_x \sqrt{P(x)Q(x)} }$$
Update 2013-12-05: Apparently this is called the Bhattacharyya arc-cos distance.
33,856 | Is the square root of the symmetric Kullback-Leibler divergence a metric?

One case of theorem 2.2 in this paper says that if we define (for positive numbers rather than whole probability distributions) $S(p,q) = (p-q)\log(p/q)$ then $\sqrt S$ is a metric.
(I haven't looked at the paper closely enough to vouch for its correctness, but in any case you have no more reason to trust me than to trust its author :-).)
If so, then your symmetrized KL divergence is a metric on probability distributions, because of the following theorem: if you have metric spaces $(M_1,d_1)$, $(M_2,d_2)$, etc., then $(M_1 \times M_2 \times \cdots, \sqrt{d_1^2+d_2^2+\cdots})$ is also a metric space; see e.g. Wikipedia.
EDITED in the light of the now-accepted answer to add: So, clearly it is not true that $\sqrt S$ (as in my first paragraph above) is a metric. And indeed it isn't; specifically (taking the counterexample in that answer as inspiration) we have both $S(0.1,0.2) + S(0.2,0.3) < S(0.1,0.3)$ and $S(0.9,0.8) + S(0.8,0.7) < S(0.9,0.7)$. Unless I'm gravely misunderstanding the paper I linked to, that means its theorem 2.2 is incorrect. This theorem is concerned with a generalization of this $S$, taking the $S$ we're actually interested in here as a limit of something more tractable; it seems to be false for the more-tractable thing too, so the problem is there rather than in the passage to the limit.
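A quick numerical check of both claimed violations; the last line, checking $\sqrt S$ directly (which is what the theorem asserts is a metric), is my own addition:

```python
import math

def S(p, q):
    """S(p, q) = (p - q) * log(p / q) for positive numbers p, q."""
    return (p - q) * math.log(p / q)

print(S(0.1, 0.2) + S(0.2, 0.3) < S(0.1, 0.3))  # True: triangle inequality fails for S
print(S(0.9, 0.8) + S(0.8, 0.7) < S(0.9, 0.7))  # True: and on the other side too
# The violation survives taking square roots, so sqrt(S) is not a metric either:
print(math.sqrt(S(0.1, 0.2)) + math.sqrt(S(0.2, 0.3)) < math.sqrt(S(0.1, 0.3)))  # True
```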
33,857 | Is the square root of the symmetric Kullback-Leibler divergence a metric?

Here is wolfram's definition of metric: http://mathworld.wolfram.com/Metric.html
They say that the properties of metrics are:
non-negative
symmetry
distance identity (distance between a point and itself is zero)
triangle inequality
The KL divergence is not non-negative. It doesn't qualify. The absolute KL-divergence is non-negative. So I am going to pull a "fog of war" and "answer the question you wish were asked". I am going to evaluate whether the absolute value of KL divergence (or its positive root) comprise a metric.
1) Because it is absolute value, the non-negative is satisfied
2) Symmetry means that $g \left( x,y\right) =g \left( y,x\right)$.
The KL divergence is not symmetric in general. The univariate cases where it is symmetric are when $p \left( x\right)=q \left( x\right)$, when the PDFs under evaluation are equal in value when evaluated at the same point $x$. Absolute value of KL divergence is symmetric.
3) Identity (in a measurement sense) is satisfied. The natural log of one is zero. Neither square root nor absolute value change this.
4) Triangle inequality
In order to satisfy the requirement, the following must be true:
$ KL(a,b) + KL(b,c) \ge KL(a,c)$
or
$ abs(KL(a,b)) + abs(KL(b,c)) \ge abs(KL(a,c))$
You can see the general form of $ abs(log(x))$ where x is the ratio of likelihoods for your PDF's of interest. Are there any places where the triangle inequality is violated?
I'm not sure how to engage this right now and will come back later. At this point, without the absolute value, the KL or sqrt(KL) is broken as a metric.
EDIT:
So it is now "later".
I was using a simplification of KL as $ KL = \sum_{i=1}^{N} {p(x_i) ln \left ( \frac {p(x_i)} {q(x_i)}\right )} $ being treated as $ KL_2 = \sum_{i=1}^{N} { ln \left ( \frac {p(x_i)} {q(x_i)}\right )} $ because the linear scaling isn't going to impact the nature of the metric space. The $ a_i$ is going to be (for my distributions) continuous and smooth. It could be argued that Gaussian Mixture Models (GMM's) provide a sufficient basis to represent any distribution to arbitrary precision in an analogy to Fourier Series basis for time-series signal data, but such arguments are sample size constrained.
The same sort of argument can also be made for the symmetric KL divergence.
By inspection and graphical demonstration, consider the region in the figure to the left of $ x=1$. and imagine two cases: that "a" and "b" are equal and that they are not. If they are equal, and because of the concave nature of the curve the triangle inequality holds. If they are unequal then a triangle can be drawn between the points $ (a,f(a))$, $ (b, f(b))$, and $ (a+b,f(a+b))$. The longest segment of the triangle is such that $ f(min(a,b)) \ge f(a+b) $ and the triangle inequality holds.
Now to consider when $ a = b = 1$. We get $ f(a) + f(b) = 0 + 0$ while $ f(a+b) = f(2) \gt 0$ and the triangle inequality no longer holds. In the domain where the curve is concave down for any $ f(x | x_i \ge 1)$ there are always component values for which triangle inequality is broken. For $ KL_2$ the "radius of compatibility" for the metric space is 1.
If triangle inequality is "broken" for $ KL_2$ then is it broken for $ S(P,Q)$? I will continue to think on this.
33,858 | Correlation among repeated measures - I need an explanation

Correlation measures association between two random variables and a correlation matrix collects pairwise correlations.
For example, in dimension 3 we have
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 \\ X_2 \\ X_3
\end{array}
\right) = \left(
\begin{array}{cccc}
\textrm{Cor}(X_1, X_1) & \textrm{Cor}(X_1, X_2) & \textrm{Cor}(X_1, X_3) \\
\textrm{Cor}(X_2, X_1) & \textrm{Cor}(X_2, X_2) & \textrm{Cor}(X_2, X_3) \\
\textrm{Cor}(X_3, X_1) & \textrm{Cor}(X_3, X_2) & \textrm{Cor}(X_3, X_3)
\end{array}
\right).
$$
It is symmetric and there are $1$'s along the diagonal.
When measurements are taken several times on the same individual, we usually expect some positive association that can be quantified by correlation. In the mixed model methodology, in particular, it is typical to put some structure on a correlation matrix.
One possible structure would be
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 \\ X_2 \\ X_3
\end{array}
\right) = \left(
\begin{array}{cccc}
1 & \rho & \rho \\
& 1 & \rho \\
& & 1
\end{array}
\right),
$$
with $0 \leq \rho \leq 1$, that is, correlation is the same regardless of the lag between pairs of repeated
measures. That's the compound symmetry (CS) structure.
Another popular example when repeated measures are taken at equally-spaced time points is the AR(1) structure:
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 \\ X_2 \\ X_3
\end{array}
\right) = \left(
\begin{array}{cccc}
1 & \rho & \rho^2 \\
& 1 & \rho \\
& & 1
\end{array}
\right),
$$
that is, correlation decreases with distance in time.
In these two examples, only one parameter has to be estimated to fill in the three off-diagonal entries of the matrix. This generalises to higher dimensions.
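Neither structure requires code in the original answer, but both are easy to generate for any dimension. A sketch (Python and the illustrative $\rho = 0.5$ are my choices here):

```python
def cs_corr(dim, rho):
    """Compound symmetry (CS): 1 on the diagonal, rho everywhere else."""
    return [[1.0 if i == j else rho for j in range(dim)] for i in range(dim)]

def ar1_corr(dim, rho):
    """AR(1): correlation rho**|i - j| decays with the lag between measurements."""
    return [[rho ** abs(i - j) for j in range(dim)] for i in range(dim)]

for row in cs_corr(3, 0.5):
    print(row)
for row in ar1_corr(3, 0.5):
    print(row)   # first row is [1.0, 0.5, 0.25]
```

Note that in both cases the single parameter $\rho$ determines every off-diagonal entry, which is why estimation remains cheap as the number of repeated measures grows.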
More details, for example, in the doc for proc mixed, p3955. | Correlation among repeated measures - I need an explanation | Correlation measures association between two random variables and a correlation matrix collects pairwise correlations.
For example, in dimension 3 we have
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 | Correlation among repeated measures - I need an explanation
Correlation measures association between two random variables and a correlation matrix collects pairwise correlations.
For example, in dimension 3 we have
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 \\ X_2 \\ X_3
\end{array}
\right) = \left(
\begin{array}{cccc}
\textrm{Cor}(X_1, X_1) & \textrm{Cor}(X_1, X_2) & \textrm{Cor}(X_1, X_3) \\
\textrm{Cor}(X_2, X_1) & \textrm{Cor}(X_2, X_2) & \textrm{Cor}(X_2, X_3) \\
\textrm{Cor}(X_3, X_1) & \textrm{Cor}(X_3, X_2) & \textrm{Cor}(X_3, X_3)
\end{array}
\right).
$$
It is symmetric and there are $1$'s along the diagonal.
When measurements are taken several times on the same individual, we usually expect some positive association that can be quantified by correlation. In the mixed model methodology, in particular, it is typical to put some structure on a correlation matrix.
One possible structure would be
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 \\ X_2 \\ X_3
\end{array}
\right) = \left(
\begin{array}{cccc}
1 & \rho & \rho \\
& 1 & \rho \\
& & 1
\end{array}
\right),
$$
with $0 \leq \rho \leq 1$, that is, correlation is the same regardless of the lag between pairs of repeated
measures. That's the compound symmetry (CS) structure.
Another popular example when repeated measures are taken at equally-spaced time points is the AR(1) structure:
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 \\ X_2 \\ X_3
\end{array}
\right) = \left(
\begin{array}{cccc}
1 & \rho & \rho^2 \\
& 1 & \rho \\
& & 1
\end{array}
\right),
$$
that is, correlation decreases with distance in time.
In these two examples, only one parameter has to be estimated to fill in the three off-diagonal entries of the matrix. This generalises to higher dimensions.
More details, for example, in the doc for proc mixed, p3955. | Correlation among repeated measures - I need an explanation
Correlation measures association between two random variables and a correlation matrix collects pairwise correlations.
For example, in dimension 3 we have
$$
\textrm{Cor} \left(
\begin{array}{c}
X_1 |
33,859 | Correlation among repeated measures - I need an explanation | The help guide for G*Power 2 defines rho as "the population correlation between the individual levels of the repeated measures factor." Though there is no help currently available in G*Power 3 for the Repeated Measures ANOVA you are interested in, the options available under the "options button" suggest that the "Corr among rep measures" option is referring to rho. | Correlation among repeated measures - I need an explanation | The help guide for G*Power 2 defines rho as "the population correlation between the individual levels of the repeated measures factor." Though there is no help currently available in G*Power 3 for th | Correlation among repeated measures - I need an explanation
The help guide for G*Power 2 defines rho as "the population correlation between the individual levels of the repeated measures factor." Though there is no help currently available in G*Power 3 for the Repeated Measures ANOVA you are interested in, the options available under the "options button" suggest that the "Corr among rep measures" option is referring to rho. | Correlation among repeated measures - I need an explanation
The help guide for G*Power 2 defines rho as "the population correlation between the individual levels of the repeated measures factor." Though there is no help currently available in G*Power 3 for th |
33,860 | Correlation among repeated measures - I need an explanation | I think @ocram is on the right track here. His first "possible structure" is also called compound symmetry (see here: What is compound symmetry in plain English? for more information). This is the correlational structure that goes with what's called a repeated measures ANOVA, that is, a mixed effects model with a random intercept only (no random effects / slopes for any other variable). However, it appears that the correlations between the sets differ from each other, so this covariance structure is not appropriate. It is possible to determine power for situations like this, but it isn't straightforward. It appears that GPower doesn't have routines specified that can handle your situation. The only way I know of to deal with these more complicated situations is to simulate. However, the answer you get from GPower may be good enough for your purposes. | Correlation among repeated measures - I need an explanation | I think @ocram is on the right track here. His first "possible structure" is also called compound symmetry (see here: What is compound symmetry in plain English? for more information). This is the c | Correlation among repeated measures - I need an explanation
I think @ocram is on the right track here. His first "possible structure" is also called compound symmetry (see here: What is compound symmetry in plain English? for more information). This is the correlational structure that goes with what's called a repeated measures ANOVA, that is, a mixed effects model with a random intercept only (no random effects / slopes for any other variable). However, it appears that the correlations between the sets differ from each other, so this covariance structure is not appropriate. It is possible to determine power for situations like this, but it isn't straightforward. It appears that GPower doesn't have routines specified that can handle your situation. The only way I know of to deal with these more complicated situations is to simulate. However, the answer you get from GPower may be good enough for your purposes. | Correlation among repeated measures - I need an explanation
I think @ocram is on the right track here. His first "possible structure" is also called compound symmetry (see here: What is compound symmetry in plain English? for more information). This is the c |
33,861 | Correlation among repeated measures - I need an explanation | As @ocram and @gung rightly point out, if your correlations vary greatly, a different statistical procedure may be more appropriate. That said, for your case I would suggest two possible ways to estimate a required sample size or achieved power using G*Power:
Use the most conservative estimate. A null correlation among repeated measures will yield a sample size that is equivalent to that of a between-subjects design divided by the number of groups: A between-subjects comparison of your three groups ($f = .25$, $α = .05$, $1-β = .95$) would require a total sample size of $n = 252$. A within-subjects comparison with an assumed correlation among repeated measures of $r = 0$ requires a total sample size of $n = 84$. Since each measurement is repeated in all three groups, your effective sample size is $84\times3 = 252$. Thus, you could simply use 0 as an estimate for your correlation. However, that will most likely be an overly conservative estimate. Using the lowest correlation coefficient found in your data would give you a more adequate but still conservative estimate of the required sample size.
Use the mean correlation. A less conservative estimate of the correlation among repeated measures is the mean correlation found in your data. Note, however, correlation coefficients are not normally distributed. The formulas you suggest are, therefore, not appropriate to determine the mean correlation. You need to perform a Fisher-transformation before averaging and afterwards retransform the mean to yield the correct mean correlation. But, I think, if the correlations you enter into the mean vary greatly, this approach may lead to sample size estimates that are too liberal. Comparing the results to those resulting from the most conservative estimate may be helpful.
Since it appears you are using R to conduct your analysis here is a function that will do the calculations for you:
rep.m.cor <- function(x, measure, formula, type = "min") {
require("reshape2")
fisher.z <- function(r) {
return(0.5 * log((1+r)/(1-r)))
}
inv.fisher.z <- function(z) {
return((exp(2*z) - 1)/(exp(2*z) + 1))
}
melt.data <- melt(x, measure.vars = measure, na.rm = FALSE)
wide.data <- dcast(melt.data, formula = formula, mean, value.var = "value")
correlations <- cor(wide.data[, -1])
correlations <- correlations[upper.tri(correlations)]
if(type == "mean") {
m.correlations <- inv.fisher.z(mean(fisher.z(correlations)))
} else if(type == "min") {
m.correlations <- inv.fisher.z(min(fisher.z(correlations)))
} else {
stop("Type must be either 'min' or 'mean'.")
}
return(m.correlations)
}
x = A data.frame resulting from aggregation, for example aggregate(measure ~ subject * factor1 * factor2, data, mean).
measure = A string providing the name of the measure.
formula = A formula giving the factor for which the correlation should be calculated, for example subject ~ factor1. It is also possible to determine the correlation for the interaction of two or more factors (subject ~ factor1 + factor2; yes, it needs to be a "+"). Note that GPower can be used to perform power analyses for up to two repeated measures factors as long as one of them has only two levels. To do so, enter the larger number of factor levels into the field "Number of measurements" and multiply the effect size $f$ by $\sqrt{2}$ (2 corresponding to the number of levels of the other factor). If both factors have more than two levels, GPower will underestimate the required sample size!
type = A string naming the estimation procedure for the correlation among repeated measures. "min" corresponds to the above option 1, "mean" corresponds to the above option 2.
I hope this helps.
P.S.: If you understand German, there are detailed step-by-step instructions (including screenshots) in this book supplement. | Correlation among repeated measures - I need an explanation | As @ocram and @gung rightly point out, if your correlations vary greatly, a different statistical procedure may be more appropriate. That said, for your case I would suggest two possible ways to estim | Correlation among repeated measures - I need an explanation
As @ocram and @gung rightly point out, if your correlations vary greatly, a different statistical procedure may be more appropriate. That said, for your case I would suggest two possible ways to estimate a required sample size or achieved power using G*Power:
Use the most conservative estimate. A null correlation among repeated measures will yield a sample size that is equivalent to that of a between-subjects design divided by the number of groups: A between-subjects comparison of your three groups ($f = .25$, $α = .05$, $1-β = .95$) would require a total sample size of $n = 252$. A within-subjects comparison with an assumed correlation among repeated measures of $r = 0$ requires a total sample size of $n = 84$. Since each measurement is repeated in all three groups, your effective sample size is $84\times3 = 252$. Thus, you could simply use 0 as an estimate for your correlation. However, that will most likely be an overly conservative estimate. Using the lowest correlation coefficient found in your data would give you a more adequate but still conservative estimate of the required sample size.
Use the mean correlation. A less conservative estimate of the correlation among repeated measures is the mean correlation found in your data. Note, however, correlation coefficients are not normally distributed. The formulas you suggest are, therefore, not appropriate to determine the mean correlation. You need to perform a Fisher-transformation before averaging and afterwards retransform the mean to yield the correct mean correlation. But, I think, if the correlations you enter into the mean vary greatly, this approach may lead to sample size estimates that are too liberal. Comparing the results to those resulting from the most conservative estimate may be helpful.
Since it appears you are using R to conduct your analysis here is a function that will do the calculations for you:
rep.m.cor <- function(x, measure, formula, type = "min") {
require("reshape2")
fisher.z <- function(r) {
return(0.5 * log((1+r)/(1-r)))
}
inv.fisher.z <- function(z) {
return((exp(2*z) - 1)/(exp(2*z) + 1))
}
melt.data <- melt(x, measure.vars = measure, na.rm = FALSE)
wide.data <- dcast(melt.data, formula = formula, mean, value.var = "value")
correlations <- cor(wide.data[, -1])
correlations <- correlations[upper.tri(correlations)]
if(type == "mean") {
m.correlations <- inv.fisher.z(mean(fisher.z(correlations)))
} else if(type == "min") {
m.correlations <- inv.fisher.z(min(fisher.z(correlations)))
} else {
stop("Type must be either 'min' or 'mean'.")
}
return(m.correlations)
}
x = A data.frame resulting from aggregation, for example aggregate(measure ~ subject * factor1 * factor2, data, mean).
measure = A string providing the name of the measure.
formula = A formula giving the factor for which the correlation should be calculated, for example subject ~ factor1. It is also possible to determine the correlation for the interaction of two or more factors (subject ~ factor1 + factor2; yes, it needs to be a "+"). Note that GPower can be used to perform power analyses for up to two repeated measures factors as long as one of them has only two levels. To do so, enter the larger number of factor levels into the field "Number of measurements" and multiply the effect size $f$ by $\sqrt{2}$ (2 corresponding to the number of levels of the other factor). If both factors have more than two levels, GPower will underestimate the required sample size!
type = A string naming the estimation procedure for the correlation among repeated measures. "min" corresponds to the above option 1, "mean" corresponds to the above option 2.
I hope this helps.
P.S.: If you understand German, there are detailed step-by-step instructions (including screenshots) in this book supplement. | Correlation among repeated measures - I need an explanation
As @ocram and @gung rightly point out, if your correlations vary greatly, a different statistical procedure may be more appropriate. That said, for your case I would suggest two possible ways to estim |
33,862 | Is a Bayesian Classifier a good approach for text with numerical meta-data? | Sure you can use Naive Bayes. You just have to specify what form the conditional distribution will have.
I can think of a few options:
Binary distribution: Binarize your data using a threshold, and you revert to the problem that you were already solving.
Parametric distribution: If there is some reasonable parametric distribution, e.g. Gaussian, you can use that.
Non-parametric distribution: Decide on bins for the numerical data and use those to construct an empirical non-parametric distribution. | Is a Bayesian Classifier a good approach for text with numerical meta-data? | Sure you can use Naive Bayes. You just have to specify what form the conditional distribution will have.
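A minimal sketch of the parametric option (a Gaussian class-conditional for a single numeric feature). All data values, labels, and function names below are invented for illustration; they are not from the original answer:

```python
import math

def gaussian_pdf(x, mean, var):
    """Density of a Normal(mean, var) distribution at x."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Toy training data: (numeric feature, class label) -- entirely made up.
data = [(1.0, "ham"), (1.2, "ham"), (0.9, "ham"),
        (5.0, "scam"), (4.5, "scam"), (5.5, "scam")]

def fit(data):
    """Estimate per-class mean, variance, and prior for the single feature."""
    params = {}
    for label in {c for _, c in data}:
        xs = [x for x, c in data if c == label]
        mean = sum(xs) / len(xs)
        var = sum((x - mean) ** 2 for x in xs) / len(xs)
        params[label] = (mean, var, len(xs) / len(data))
    return params

def predict(params, x):
    """Pick the class maximising prior * Gaussian likelihood."""
    return max(params, key=lambda c: params[c][2] * gaussian_pdf(x, *params[c][:2]))

params = fit(data)
print(predict(params, 4.8))   # → scam (closest to the high-valued group)
```

In a real Naive Bayes classifier this Gaussian term would simply be multiplied into the product of the word-count likelihoods, since the model assumes conditional independence of features given the class.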
I can think of a few options:
Binary distribution: Binarize your data using a threshold, and y | Is a Bayesian Classifier a good approach for text with numerical meta-data?
Sure you can use Naive Bayes. You just have to specify what form the conditional distribution will have.
I can think of a few options:
Binary distribution: Binarize your data using a threshold, and you revert to the problem that you were already solving.
Parametric distribution: If there is some reasonable parametric distribution, e.g. Gaussian, you can use that.
Non-parametric distribution: Decide on bins for the numerical data and use those to construct an empirical non-parametric distribution. | Is a Bayesian Classifier a good approach for text with numerical meta-data?
Sure you can use Naive Bayes. You just have to specify what form the conditional distribution will have.
I can think of a few options:
Binary distribution: Binarize your data using a threshold, and y |
33,863 | Is a Bayesian Classifier a good approach for text with numerical meta-data? | Naive Bayes classifiers can accommodate numeric variables as well as discrete ones without too much problem. Essentially there are three approaches: (i) discretise the numeric values (ii) use a parametric model of each numeric attribute (e.g. Gaussian) or (iii) use a non-parametric (e.g. Parzen) density estimator for each numeric attribute.
see e.g. "Naive Bayes classifiers that perform well with continuous variables" by Remco Bouckaert | Is a Bayesian Classifier a good approach for text with numerical meta-data? | Naive Bayes classifiers can accommodate numeric variables as well as discrete ones without too much problem. Essentially there are three approaches: (i) discretise the numeric values (ii) use a param | Is a Bayesian Classifier a good approach for text with numerical meta-data?
Naive Bayes classifiers can accommodate numeric variables as well as discrete ones without too much problem. Essentially there are three approaches: (i) discretise the numeric values (ii) use a parametric model of each numeric attribute (e.g. Gaussian) or (iii) use a non-parametric (e.g. Parzen) density estimator for each numeric attribute.
see e.g. "Naive Bayes classifiers that perform well with continuous variables" by Remco Bouckaert | Is a Bayesian Classifier a good approach for text with numerical meta-data?
Naive Bayes classifiers can accommodate numeric variables as well as discrete ones without too much problem. Essentially there are three approaches: (i) discretise the numeric values (ii) use a param |
33,864 | Is a Bayesian Classifier a good approach for text with numerical meta-data? | Naive Bayes can certainly work with numeric attributes as well as discrete ones (modulo concerns about the appropriacy of the assumed distribution as mentioned in other answers). However, you should consider whether you really want to use Naive Bayes, as the non-discriminative methodology will break down more and more as you combine data from various sources, with potentially strong correlations.
If you want to retain a probabilistic interpretation, consider logistic regression, which is an exact analog of Naive Bayes with a discriminative rather than generative objective (see this paper, for example: Logistic Regression Vs Naive Bayes). You can find various implementations of it: I like Mallet, if you can use Java (accessible as a command-line tool or an API).
If a strict probabilistic interpretation isn't necessary, you can use an SVM. There are many implementations of this, but the de-facto standard (with a variant available in most languages) is LibSVM. | Is a Bayesian Classifier a good approach for text with numerical meta-data? | Naive Bayes can certainly work with numeric attributes as well as discrete ones (modulo concerns about the appropriacy of the assumed distribution as mentioned in other answers). However, you should c | Is a Bayesian Classifier a good approach for text with numerical meta-data?
Naive Bayes can certainly work with numeric attributes as well as discrete ones (modulo concerns about the appropriacy of the assumed distribution as mentioned in other answers). However, you should consider whether you really want to use Naive Bayes, as the non-discriminative methodology will break down more and more as you combine data from various sources, with potentially strong correlations.
If you want to retain a probabilistic interpretation, consider logistic regression, which is an exact analog of Naive Bayes with a discriminative rather than generative objective (see this paper, for example: Logistic Regression Vs Naive Bayes). You can find various implementations of it: I like Mallet, if you can use Java (accessible as a command-line tool or an API).
If a strict probabilistic interpretation isn't necessary, you can use an SVM. There are many implementations of this, but the de-facto standard (with a variant available in most languages) is LibSVM. | Is a Bayesian Classifier a good approach for text with numerical meta-data?
Naive Bayes can certainly work with numeric attributes as well as discrete ones (modulo concerns about the appropriacy of the assumed distribution as mentioned in other answers). However, you should c |
33,865 | Is a Bayesian Classifier a good approach for text with numerical meta-data? | You can use numerical values quite easily. In the term P(Feature|scam=Yes) you could put a Gaussian distribution or any other empirical distribution from training data (e.g. sort the data and create a function that returns the percentile of the given input numerical value). Here is a write-up describing that | Is a Bayesian Classifier a good approach for text with numerical meta-data? | You can use numerical values quite easily. In the term P(Feature|scam=Yes) you could put a gaussian distribution or any other empirical distribution from training data (for e.g. sort the data, create | Is a Bayesian Classifier a good approach for text with numerical meta-data?
You can use numerical values quite easily. In the term P(Feature|scam=Yes) you could put a Gaussian distribution or any other empirical distribution from training data (e.g. sort the data and create a function that returns the percentile of the given input numerical value). Here is a write-up describing that | Is a Bayesian Classifier a good approach for text with numerical meta-data?
You can use numerical values quite easily. In the term P(Feature|scam=Yes) you could put a gaussian distribution or any other empirical distribution from training data (for e.g. sort the data, create |
33,866 | What are the different types of averages? | One popular type of average that you have not mentioned is the trimmed mean (recommended by, for example, Wilcox, 2010), which I think of as a middle road between the mean and the median. You get the trimmed mean by first discarding $n$ % of the lower and upper part of your sample and then taking the mean of the resulting subset, where $n$ can be, for example, 10. The resulting average is generally more robust to outliers than the mean.
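As a concrete sketch (not part of the original answer; the language and sample values are my choices), trimming 10 % from each tail blunts the effect of a single outlier:

```python
def trimmed_mean(xs, prop=0.1):
    """Drop prop of the points from each tail, then average the rest."""
    xs = sorted(xs)
    k = int(len(xs) * prop)                  # points to cut from each end
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 1000]     # one large outlier
print(sum(data) / len(data))                 # plain mean: 104.5, pulled up by 1000
print(trimmed_mean(data, 0.1))               # trimmed mean: 5.5
```

With 0 % trimming this reduces to the ordinary mean, and trimming toward 50 % approaches the median, which is the sense in which the trimmed mean sits between the two.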
If your data looks normally distributed (or generally heap shaped) the mean is a good description of the general tendency of the data. If the data is skewed then often the median or a trimmed mean can be a better description of the general tendency.
References
Wilcox, R. R. (2010). Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy, Springer, 2nd Ed. | What are the different types of averages? | One popular type of average that you have not mentioned is the trimmed mean (recommended by, for example, Wilcox, 2010) which I think of as a middle road between the mean and the median. You get the t | What are the different types of averages?
One popular type of average that you have not mentioned is the trimmed mean (recommended by, for example, Wilcox, 2010), which I think of as a middle road between the mean and the median. You get the trimmed mean by first discarding $n$ % of the lower and upper part of your sample and then taking the mean of the resulting subset, where $n$ can be, for example, 10. The resulting average is generally more robust to outliers than the mean.
If your data looks normally distributed (or generally heap shaped) the mean is a good description of the general tendency of the data. If the data is skewed then often the median or a trimmed mean can be a better description of the general tendency.
References
Wilcox, R. R. (2010). Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy, Springer, 2nd Ed. | What are the different types of averages?
One popular type of average that you have not mentioned is the trimmed mean (recommended by, for example, Wilcox, 2010) which I think of as a middle road between the mean and the median. You get the t |
33,867 | What are the different types of averages? | For the geometrically minded, there are means based on monotonic transformations of data.
The geometric mean of a random variable is defined as $$\mbox{G.M.}(X) = \exp \left( \int_{\Omega_X} \log(x)df_x \right) .$$
This is excellent at handling measures of things that are known to grow exponentially, such as income, bacterial colonies, disease progression, etc. One of the reasons log transforms are so highly favored in biostatistics is their ability to estimate geometric means with regression models.
The harmonic mean of a random variable is defined as $$\mbox{H.M.}(X) = \left( \int_{\Omega_X} x^{-1}df_x \right) ^{-1}.$$
This is excellent at estimating averages of rates where you have incidents, tasks, or events in the numerator and measures of person-time in the denominator. For health planning or corporate mergers, you might be interested in staffing locum tenens doctors to travel between 3 acquired community hospitals to serve specific outbreaks of MRSA. Traveling between places, you need to jointly average time per task across the various hospitals and their protocols. A harmonic mean tells you that. | What are the different types of averages? | For the geometrically minded, there are means based on monotonic transformations of data.
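Sample analogues of both definitions are one-liners. A sketch (not from the original answer; the speed example is the classic equal-distance trip):

```python
import math

def geometric_mean(xs):
    """exp of the mean log: the sample analogue of G.M.(X)."""
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

def harmonic_mean(xs):
    """Reciprocal of the mean reciprocal: the sample analogue of H.M.(X)."""
    return len(xs) / sum(1.0 / x for x in xs)

# Average speed over two equal-distance legs driven at 30 and 60 km/h:
print(harmonic_mean([30, 60]))      # ≈ 40, not the arithmetic 45
print(geometric_mean([1, 100]))     # ≈ 10
```

The harmonic mean is the right average here because equal distances, not equal times, are spent at each speed; the geometric mean similarly averages multiplicative factors rather than additive increments.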
The geometric mean of a random variable is defined as $$\mbox{G.M.}(X) = \exp \left( \int_{\Omega_X} \log(x)df | What are the different types of averages?
For the geometrically minded, there are means based on monotonic transformations of data.
The geometric mean of a random variable is defined as $$\mbox{G.M.}(X) = \exp \left( \int_{\Omega_X} \log(x)df_x \right) .$$
This is excellent at handling measures of things that are known to grow exponentially, such as income, bacterial colonies, disease progression, etc. One of the reasons log transforms are so highly favored in biostatistics is their ability to estimate geometric means with regression models.
The harmonic mean of a random variable is defined as $$\mbox{H.M.}(X) = \left( \int_{\Omega_X} x^{-1}df_x \right) ^{-1}.$$
This is excellent at estimating averages of rates where you have incidents, tasks, or events in the numerator and measures of person-time in the denominator. For health planning or corporate mergers, you might be interested in staffing locum tenens doctors to travel between 3 acquired community hospitals to serve specific outbreaks of MRSA. Traveling between places, you need to jointly average time per task across the various hospitals and their protocols. A harmonic mean tells you that. | What are the different types of averages?
For the geometrically minded, there are means based on monotonic transformations of data.
The geometric mean of a random variable is defined as $$\mbox{G.M.}(X) = \exp \left( \int_{\Omega_X} \log(x)df |
33,868 | What are the different types of averages? | With respect to choosing amongst the three types of averages you list, it is generally considered that the mean is appropriate for continuous equal-interval data, the median is for ordinal data, and the mode is for nominal data. However, this scheme is quite limited. See which-mean-to-use-and-when for more sophisticated thoughts on the topic. | What are the different types of averages? | With respect to choosing amongst the three types of averages you list, it is generally considered that the mean is appropriate for continuous equal-interval data, the median is for ordinal data, and t | What are the different types of averages?
With respect to choosing amongst the three types of averages you list, it is generally considered that the mean is appropriate for continuous equal-interval data, the median is for ordinal data, and the mode is for nominal data. However, this scheme is quite limited. See which-mean-to-use-and-when for more sophisticated thoughts on the topic. | What are the different types of averages?
With respect to choosing amongst the three types of averages you list, it is generally considered that the mean is appropriate for continuous equal-interval data, the median is for ordinal data, and t |
33,869 | What are the different types of averages? | Some relevant literature across the spectrum:
Muliere, Pietro, and Giovanni Parmigiani. 1993. Utility and means in the 1930s. Statistical Science 8: 421–32.
…gives a wide-ranging review of about a century's worth of thought on the subject, starting in the 1920s with the axiomatic approach of Kolmogorov and Chisini's insights, through decision theory and other developments. A good and thorough academic review.
Many of the same insights are concisely available in:
de Carvalho, Michel. 2016. Mean, what do you mean? The American Statistician 70: 270.
For a well designed and presented, brief and accessible article the following is excellent (it would, for example, be ideal stimulation for talented students):
Falk, Ruma, Avital Lann, and Shmuel Zamir. 2005. Average speed bumps: Four perspectives on averaging speeds. Chance 18: 25–32.
And for those who would like the full & dense mathematical treatment - really only for mathematicians to be honest - this two-part review would be a good start:
Grabisch, Michel, Jean-Luc Marichal, Radko Mesiar, and Endre Pap. 2011a. Aggregation functions: Means. Information Sciences 181: 1–22.
Grabisch, Michel, Jean-Luc Marichal, Radko Mesiar, and Endre Pap. 2011b. Aggregation functions: Construction methods, conjunctive, disjunctive and mixed classes. Information Sciences 181: 23–43.
Muliere, Pietro, and Giovanni Parmigiani. 1993. Utility and means in the 1930s. Statistical Science 8: 421–32.
..gives a wide-ranging review of about a ce | What are the different types of averages?
Some relevant literature across the spectrum:
Muliere, Pietro, and Giovanni Parmigiani. 1993. Utility and means in the 1930s. Statistical Science 8: 421–32.
…gives a wide-ranging review of about a century's worth of thought on the subject, starting in the 1920s with the axiomatic approach of Kolmogorov and Chisini's insights, through decision theory and other developments. A good and thorough academic review.
Many of the same insights are concisely available in:
de Carvalho, Michel. 2016. Mean, what do you mean? The American Statistician 70: 270.
For a well designed and presented, brief and accessible article the following is excellent (it would, for example, be ideal stimulation for talented students):
Falk, Ruma, Avital Lann, and Shmuel Zamir. 2005. Average speed bumps: Four perspectives on averaging speeds. Chance 18: 25–32.
And for those who would like the full & dense mathematical treatment - really only for mathematicians to be honest - this two-part review would be a good start:
Grabisch, Michel, Jean-Luc Marichal, Radko Mesiar, and Endre Pap. 2011a. Aggregation functions: Means. Information Sciences 181: 1–22.
Grabisch, Michel, Jean-Luc Marichal, Radko Mesiar, and Endre Pap. 2011b. Aggregation functions: Construction
methods, conjunctive, disjunctive and mixed classes. Information Sciences 181: 23–43. | What are the different types of averages?
Some relevant literature across the spectrum:
Muliere, Pietro, and Giovanni Parmigiani. 1993. Utility and means in the 1930s. Statistical Science 8: 421–32.
..gives a wide-ranging review of about a ce |
33,870 | What are the odds of three people having consecutive birthdays? | For simplicity, ignore leap days and that the distribution of birthdays is not uniform.
There are $365$ sets of consecutive triples of days. We can index them by their first day.
There are $3! = 6$ ways the $3$ people can have a particular triple of distinct birthdays.
There are $365^3$ ways the people can have birthdays, which we are assuming are equally likely.
So, the chance that three random people have consecutive birthdays is $\frac {6 \times 365}{365^3} = \frac {6}{365^2} \approx 0.0045\% \approx 1/22,000.$
Of course, if you have $60$ friends, there are ${60 \choose 3} = 34,220$ ways to choose $3$ of them, and so the average number of triples with consecutive birthdays among your friends is about $1.5$, even if you disregard the chance that the real pattern was a superset such as "consecutive or equal" or "within 2 days of each other." If this is counterintuitive, look up the Birthday Problem.
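The arithmetic can be double-checked with exact fractions (a quick sketch, standard library only; the 60-friend figure is the answer's own example):

```python
import math
from fractions import Fraction

# Probability that 3 random people have consecutive birthdays
# (365-day year, uniform birthdays, ignoring leap days).
p = Fraction(6 * 365, 365**3)       # 6 orderings x 365 starting days
assert p == Fraction(6, 365**2)     # simplifies as in the answer

# Expected number of consecutive-birthday triples among 60 friends.
triples = math.comb(60, 3)          # 34,220 ways to choose 3 of them
expected = triples * p
print(float(p), float(expected))    # ~4.5e-05 and ~1.54
```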
33,871 | For chi-square on any 2 by X contingency table, should no more than 20% of the cells be less than 5? | I am familiar with an earlier version of the book but I have not seen the specific discussion that you refer to. So I am not exactly sure what they are getting at but I think I have a pretty good idea. The chi square test for contingency tables is only asymptotically valid. So it requires a large sample size for the null distribution to be approximately correct so that the test would be valid and the p-value would be approximately right.
For an RxC table the problem gets worse when the row and column dimensions get large. Many years ago William Cochran had a rule of thumb suggesting that the chi square approximation would be good if all the expected cell counts were >5. I believe that the authors and others may have been doing additional research to see if they could come up with a less stringent rule. Apparently their rule is that you should have less than 20% of the cells with an expected count of 5 or less. The expected count is the number you would expect to have in the cell if the null hypothesis were true. If many cells had expected counts less than 5 this would indicate that there are a lot of sparse cells and the test would not be valid.
The additional requirement that all cells have at least 1 count in them is another requirement related to the sparseness of the cells. So to be clear let's take an example. Suppose R is 5 and C is 10. Then you have 50 cells. Every one of those 50 cells should have at least one case in it. Since 20% is 10 cells you would compute the expected count for each cell based on the formula that assumes independence between columns. The authors are saying that they would only recommend use of the chi square approximation if there are no empty cells and the computed expected count is less than 5 in no more than 10 of the 50 cells.
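The expected-count bookkeeping behind the rule can be sketched directly. The 2x5 table below is made-up illustrative data (not from the answer); the sketch computes each cell's expected count under independence and applies the no-empty-cells / at-most-20%-small-cells rule:

```python
# Hypothetical 2x5 table of observed counts (illustrative numbers only).
observed = [[8, 2, 12, 4, 30],
            [6, 3, 9, 1, 25]]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
grand = sum(row_totals)

# Expected count under independence: row total * column total / grand total.
expected = [[rt * ct / grand for ct in col_totals] for rt in row_totals]

cells = [e for row in expected for e in row]
frac_small = sum(e < 5 for e in cells) / len(cells)

# Rule of thumb discussed above: no empty observed cells, and no more
# than 20% of cells with an expected count below 5.
rule_ok = min(min(r) for r in observed) >= 1 and frac_small <= 0.20
print(f"{frac_small:.0%} small cells; rule satisfied: {rule_ok}")
```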
33,872 | For chi-square on any 2 by X contingency table, should no more than 20% of the cells be less than 5? | +1 to both @MichaelChernick and @juba. I have heard of that rule, I believe Agresti also mentions it in his books on categorical data analysis. Note however, that both that rule, and Cochran's original rule (i.e., that all cells have expected values >5), apply to expected counts, not the observed counts. This is very slippery, and it looks a little bit like you are slipping between the two from your first to your second paragraph.
The best resource I know of for these issues is:
Campbell Ian, 2007, Chi-squared and Fisher-Irwin tests of two-by-two tables with small sample recommendations, Statistics in Medicine, 26, 3661 - 3675
I list a lot of related information about chi-squared and related tests here: Contingency tables: what tests to do and when?
On the other hand, with small contingency tables with few actual counts there's a legitimate question of whether we ought to always just be using Fisher's exact test these days. (Note that this is what @juba is referring to, as you can see at the end of the quote.) There's a really good discussion on CV here (albeit one that mostly argues against using Fisher's test): Given the power of computers these days, is there ever a reason not to do a chi-squared test rather than Fisher's exact test?
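Since the discussion turns on Fisher's exact test: for a 2x2 table the test is small enough to sketch from scratch (in R it is simply fisher.test). This is an illustrative standalone version, not any library's implementation, and the example table is arbitrary:

```python
from math import comb

def fisher_exact_two_sided(table):
    """Two-sided Fisher exact p-value for a 2x2 table: sum the
    hypergeometric probabilities of all tables (with the same margins)
    that are no more likely than the observed one."""
    (a, b), (c, d) = table
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d

    def prob(x):  # P(top-left cell = x | fixed margins)
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Arbitrary example: strong association, small counts.
print(fisher_exact_two_sided([[1, 9], [11, 3]]))  # ~0.00276
```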
33,873 | For chi-square on any 2 by X contingency table, should no more than 20% of the cells be less than 5? | Just for the record, there is an option to the chisq.test function in R that allows one to simulate the p-value by randomly generating a given number of independent tables instead of deriving it from the chi-squared distribution:
chisq.test(x, y, simulate.p.value=TRUE, B=2000)
From the function help page:
If ‘simulate.p.value’ is ‘FALSE’, the p-value is computed from the
asymptotic chi-squared distribution of the test statistic;
continuity correction is only used in the 2-by-2 case (if
‘correct’ is ‘TRUE’, the default). Otherwise the p-value is
computed for a Monte Carlo test (Hope, 1968) with ‘B’ replicates.
In the contingency table case simulation is done by random
sampling from the set of all contingency tables with given
marginals, and works only if the marginals are strictly positive.
(A C translation of the algorithm of Patefield (1981) is used.)
Continuity correction is never used, and the statistic is quoted
without it. Note that this is not the usual sampling situation
assumed for the chi-squared test but rather that for Fisher's
exact test.
That could be a way to estimate a p-value without worrying about expected counts.
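The Monte Carlo idea behind simulate.p.value can be sketched outside R as well. The sketch below permutes one variable's labels (which also conditions on both sets of marginal counts) rather than using Patefield's algorithm as chisq.test does; the data and seed are arbitrary:

```python
import random
from collections import Counter

def chi2_stat(x, y):
    """Pearson chi-squared statistic for two categorical vectors."""
    n = len(x)
    rows, cols = Counter(x), Counter(y)
    obs = Counter(zip(x, y))
    return sum((obs[r, c] - rows[r] * cols[c] / n) ** 2 / (rows[r] * cols[c] / n)
               for r in rows for c in cols)

def mc_pvalue(x, y, B=2000, seed=0):
    """Monte Carlo p-value: shuffle y to break any association while
    keeping both variables' marginal counts fixed."""
    rng = random.Random(seed)
    observed = chi2_stat(x, y)
    y = list(y)  # work on a copy
    hits = 0
    for _ in range(B):
        rng.shuffle(y)
        if chi2_stat(x, y) >= observed:
            hits += 1
    return (hits + 1) / (B + 1)  # add-one rule keeps the p-value > 0

x = ["a"] * 20 + ["b"] * 20
y = ["u"] * 20 + ["v"] * 20      # perfectly associated with x
print(mc_pvalue(x, y, B=500))    # very small p-value
```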
33,874 | Slope of a line given multiple points [closed] | First, note that your link links to a worked example that will probably help.
To implement the equation in Excel:
make a new column labeled "XY" in E; enter =C2*D2 into E2 and fill it down so that column E holds each X*Y product
enter the number of rows in cell "G2" (this will be N; the slope formula below references it as G2)
label column F "X^2"
enter =C2^2 into F2 to calculate X^2, highlight F2:FN and hit ctrl+D to fill this equation down
enter the equation =(G2*sum(E:E) - sum(C:C)*sum(D:D))/(G2*sum(F:F) - sum(C:C)^2) into an empty cell. This will be your slope
enter the equation =slope(D:D, C:C) into another empty cell, this should match your calculation.
Finished result is shown in an example google doc that can be downloaded in Excel format here.
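The spreadsheet steps implement the standard least-squares slope formula; the same computation as a short Python sketch (illustrative data):

```python
def slope(xs, ys):
    """Least-squares slope m = (N*Sxy - Sx*Sy) / (N*Sxx - Sx^2),
    the same formula the spreadsheet cells above compute."""
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)

print(slope([1, 2, 3, 4], [2, 4, 6, 8]))  # exactly collinear data: slope 2.0
```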
33,875 | Slope of a line given multiple points [closed] | A solution with R and the example data posted by @David and instructions on accessing data from google spreadsheets from the Revolutions blog
require(RCurl)
mycsv <- getURL("https://docs.google.com/spreadsheet/pub?key=0Ai_PDCcY5g2JdGNabGs0R0IyVzhrUFIxOVRoTXMzUUE&single=true&gid=0&range=C1%3AD11&output=csv")
mydata <- read.csv(textConnection(mycsv))
x <- mydata$X
y <- mydata$Y
n <- nrow(mydata)
xy <- x*y
m <- (n*sum(xy)-sum(x)*sum(y)) / (n*sum(x^2)-sum(x)^2)
m
Or, you could use R's built-in function
lm(y~x)
33,876 | Slope of a line given multiple points [closed] | Excel already contains a function called SLOPE. See this official help site for reference and an example.
33,877 | Slope of a line given multiple points [closed] | With your X values in column A, and Y values in column B (no column headers):
=( (COUNT(A:A)*(SUMPRODUCT(A:A,B:B)) - (SUM(A:A)*SUM(B:B))) )/
( (COUNT(A:A)*SUMPRODUCT(A:A,A:A)) - (SUM(A:A)^2) )
If you want column headers, replace all A:A and B:B entries with the proper location of your values.
I figured this formula out so I could use the slope function in PowerPivot, which does not have a SLOPE formula.
33,878 | What kind of distribution is $f_X(x) = 2 \lambda \pi x e^{-\lambda \pi x ^2}$? | It is the square root of an exponential distribution with rate $\pi\lambda$ (equivalently, a Rayleigh distribution with $\sigma^2 = \frac{1}{2\pi\lambda}$). This means that if $Y\sim\exp(\pi\lambda)$, then $\sqrt{Y}\sim f_X$.
Since your estimate is a maximum likelihood estimate it should be asymptotically normal. This follows immediately from the properties of maximum likelihood estimates. In this particular case:
$$\sqrt{n}(\hat\lambda-\lambda)\to N(0,\lambda^2)$$
since
$$E\frac{\partial^2}{\partial \lambda^2}\log f_X(X)=-\frac{1}{\lambda^2}.$$
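A quick simulation supports the square-root-of-exponential claim: for this density $E[X] = \frac{1}{2\sqrt{\lambda}}$, so with $\lambda = 1$ the sample mean of $\sqrt{Y}$, $Y\sim\exp(\pi\lambda)$, should be close to $0.5$. A sketch with an arbitrary seed:

```python
import math
import random

# Draw Y ~ Exp(rate = pi * lam) and take square roots; the sample mean
# of X = sqrt(Y) should approach 1 / (2 * sqrt(lam)).
rng = random.Random(7)
lam = 1.0
xs = [math.sqrt(rng.expovariate(math.pi * lam)) for _ in range(100_000)]
mean_x = sum(xs) / len(xs)
print(round(mean_x, 3))  # close to 0.5
```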
33,879 | What kind of distribution is $f_X(x) = 2 \lambda \pi x e^{-\lambda \pi x ^2}$? | Why do you care about asymptotics when the exact answer is just as simple (and exact)? I am assuming that you want asymptotic normality so that you can use the $\mathrm{Est}\pm z_{\alpha}\mathrm{StdErr}$ type of confidence interval
If you make the probability transformation $Y_{i}=X_{i}^{2}$ then you have an exponential sampling distribution (as @mpiktas has mentioned):
$$\newcommand{\Gamma}{\mathrm{Gamma}}
\newcommand{\MLE}{\mathrm{MLE}}
\newcommand{\Pr}{\mathrm{Pr}}
f_{Y_{i}}(y_{i})=f_{X_{i}}(\sqrt{y_{i}})|\frac{\partial\sqrt{y_{i}}}{\partial y_{i}}|=2 \lambda \pi \sqrt{y_{i}} \exp(-\lambda \pi \sqrt{y_{i}} ^2)\frac{1}{2\sqrt{y_{i}}}=\lambda\pi\exp(-\lambda\pi y_{i})$$
So the joint log-likelihood in terms of $D\equiv\{y_{1},\dots,y_{N}\}$ becomes:
$$\log[f(D|\lambda)]=N\log(\pi)+N\log(\lambda)-\lambda\pi\sum_{i=1}^{N}y_{i}$$
Now the only way the data enters the analysis is through the total $T_{N}=\sum_{i=1}^{N}y_{i}$ (and the sample size $N$). Now it is an elementary sampling theory calculation to show that $T_{N}\sim \Gamma(N,\pi\lambda)$, and further that $\pi N^{-1}T_{N}\sim \Gamma(N,N\lambda)$. We can further make this a "pivotal" quantity by taking $\lambda$ out of the equations (via the same way that I just put $N$ into them). And we have:
$$\lambda\pi N^{-1}T_{N}=\frac{\lambda}{\hat{\lambda}_{\MLE}}\sim \Gamma(N,N)$$
Note that we now have a distribution which involves the MLE and whose sampling distribution is independent of the parameter $\lambda$. Now your MLE is equal to $\frac{1}{\pi N^{-1}T_{N}}$, and so writing quantities $L_{\alpha}$ and $U_{\alpha}$ such that the following holds:
$$\Pr(L_{\alpha} < G < U_{\alpha})=1-\alpha\;\;\;\;\;\;\;G\sim \Gamma(N,N)$$
And we then have:
$$\Pr(L_{\alpha} < \frac{\lambda}{\hat{\lambda}_{\MLE}} < U_{\alpha})=\Pr(L_{\alpha}\hat{\lambda}_{\MLE} < \lambda < U_{\alpha}\hat{\lambda}_{\MLE})=1-\alpha$$
And you have an exact $1-\alpha$ confidence interval for $\lambda$.
NOTE: The Gamma distribution I am using is the "precision" style, so that a $\Gamma(N,N)$ density looks like:
$$f_{\Gamma(N,N)}(g)=\frac{N^{N}}{\Gamma(N)}g^{N-1}\exp(-Ng)$$
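The pivotal result is easy to sanity-check by simulation: $\lambda/\hat{\lambda} = \lambda\pi N^{-1}T_{N}$ should have the Gamma$(N,N)$ moments (mean $1$, variance $1/N$) whatever $\lambda$ is. A sketch with arbitrary $\lambda$, $N$, and seed:

```python
import math
import random

rng = random.Random(42)
lam, N, reps = 2.5, 50, 20_000

# Each replicate: draw N observations Y_i ~ Exp(rate = pi * lam) and
# form the pivot lambda * pi * mean(Y) = lambda / lambda_hat.
pivots = []
for _ in range(reps):
    total = sum(rng.expovariate(math.pi * lam) for _ in range(N))
    pivots.append(lam * math.pi * total / N)

mean = sum(pivots) / reps
var = sum((p - mean) ** 2 for p in pivots) / reps
print(round(mean, 3), round(var, 4))  # near 1 and 1/N = 0.02
```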
33,880 | Is there a way to compute diversity in a population? | How about the Shannon index?
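Concretely, the Shannon index is $H' = -\sum_k p_k \ln p_k$ over the type proportions $p_k$; a minimal sketch (the example counts are illustrative):

```python
from math import log

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_k * ln(p_k)) over the K types.
    Higher values mean a more even spread across more types."""
    total = sum(counts)
    props = [c / total for c in counts if c > 0]
    return -sum(p * log(p) for p in props)

print(round(shannon_index([20, 20, 20, 20, 20]), 3))  # ln(5) ~ 1.609, maximally even
print(round(shannon_index([99, 1]), 3))               # ~ 0.056, one dominant type
```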
33,881 | Is there a way to compute diversity in a population? | This paper by Massey and Denton 1988 is a fairly prolific overview of commonly used indices in Sociology/Demography. It would also be useful for some other key terms used for searching articles. Frequently in Sociology the indices are labelled with names such as "heterogeneity" and "segregation" as well as "diversity".
Part of the reason no absolute right answer exists to your question is that people frequently only use epistemic logic to reason why one index is a preferred measurement. Infrequently are those arguments so strong that one should entirely discount other suggested measures. The work of Massey and Denton is useful to highlight what many of these indices theoretically measure and when they differ to a substantively noticeable extent (in large cities in the US).
33,882 | Is there a way to compute diversity in a population? | Tree diversity analysis book will get you up to speed with common diversity indices, along with some useful packages in R and their usage. While the book talks about trees, it can be used with marine fauna (which I did for my thesis) or even people.
33,883 | Is there a way to compute diversity in a population? | A diversity index such as Simpson's Diversity Index may be helpful:
$$ S = \sum_{k=1}^{K} \left(\frac{n_k}{N}\right)^2 $$
where there are $N$ units and $K$ types in your population with $n_k$ units of each type ($k=1,2,\dots,K$).
It is essentially the probability that two randomly selected samples (with replacement) will be of the same type.
From your examples, the values for Simpson's Diversity Index will be as follows:
City A: $S_A = (\frac{20}{100})^2+(\frac{20}{100})^2+(\frac{20}{100})^2+(\frac{20}{100})^2+(\frac{20}{100})^2 = 1/5 = 0.200.$
City B: $S_B = (\frac{99}{100})^2+\sum_{i=1}^{100}(\frac{0.01}{100})^2 \approx 0.980.$
City C: $S_C = (\frac{40}{100})^2+\sum_{i=1}^{10}(\frac{6}{100})^2 = 0.196.$
You may have noticed that the more diverse the population, the lower Simpson's index is. Therefore, to create a positive relationship, sometimes it is presented as $1-S$ or $\frac{1}{S}$.
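The index above lends itself to a quick sketch. Below is my own illustrative Python version (not part of the original answer), working from type proportions; the inputs mirror the three city examples above.

```python
def simpson_index(proportions):
    """Simpson's index: probability that two draws (with replacement) match."""
    assert abs(sum(proportions) - 1.0) < 1e-9, "proportions must sum to 1"
    return sum(p * p for p in proportions)

city_a = [20 / 100] * 5                    # five types with 20 units each
city_b = [99 / 100] + [0.01 / 100] * 100   # one dominant type
city_c = [40 / 100] + [6 / 100] * 10       # one type of 40, ten types of 6

print(round(simpson_index(city_a), 3))  # 0.2
print(round(simpson_index(city_c), 3))  # 0.196
```

Running it reproduces the values worked out by hand: 0.200, ~0.980, and 0.196.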
33,884 | Is there a way to compute diversity in a population? | You may be interested in this paper: "A new axiomatic approach to diversity" from Chris Dowden.
33,885 | Does rpart use multivariate splits by default? | Rpart only provides univariate splits. I believe, based upon your question, that you are not entirely familiar with the difference between a univariate partitioning method and a multivariate partitioning method. I have done my best to explain this below, as well as provide some references for further research and to suggest some R packages to implement these methods.
Rpart is a tree-based classifier that uses recursive partitioning. With partitioning methods you must define the points within your data at which a split is to be made. The rpart algorithm in R does this by finding the variable and the point which best splits (and thus reduces) the RSS. Because the splits only happen along one variable at a time, these are univariate splits. A multivariate split is typically defined as a simultaneous partitioning along multiple axes (hence multivariate). With rpart's univariate splits, by contrast, the first node might split along Age > 35, the second node might split along Income > 25,000, and the third node might split along Cities west of the Mississippi. The second and third nodes are split on smaller subsets of the overall data, so in the second node the income criterion best splits the RSS only for those people who have an age over 35; it does not apply to observations not found in this node, and the same applies for the Cities criterion. One could continue doing this until there is a node for each observation in your dataset (rpart uses a minimum bucket size in addition to a minimum node size criterion and a cp parameter, which is the minimum amount by which the R-squared value must increase in order to continue fitting).
A multivariate method, such as the Patient Rule Induction Method (the prim package in R), would simultaneously split by selecting, for example, all observations where Income is greater than 22,000, Age > 32, and Cities west of Atlanta. The reason the fit might be different is that the calculation of the fit is multivariate instead of univariate: the fit of these three criteria is calculated based upon the simultaneous fit of the three variables on all observations meeting these criteria, rather than by iteratively partitioning based upon univariate splits (as with rpart).
There are varying beliefs regarding the effectiveness of univariate versus multivariate partitioning methods. Generally, what I have seen in practice is that most people prefer univariate partitioning (such as rpart) for explanatory purposes (it is only used in prediction when dealing with a problem where the structure is very well defined and the variation among the variables is fairly constant, which is why these methods are often used in medicine). Univariate tree models are typically combined with ensemble learners when used for prediction (e.g., a random forest). People who do use multivariate partitioning or clustering (which is very closely related to multivariate partitioning) often do so for complex problems that univariate methods fit very poorly, and do so mainly for prediction, or to group observations into categories.
I highly recommend Julian Faraway's book Extending the Linear Model with R. Chapter 13 is dedicated entirely to the use of trees (all univariate). If you're interested further in multivariate methods, Elements of Statistical Learning by Hastie et al. provides an excellent overview of many multivariate methods, including PRIM (although Friedman at Stanford has his original article on the method posted on his website), as well as clustering methods.
As for R packages implementing these methods, I believe you're already using the rpart package, and I've mentioned the prim package above. There are various built-in clustering routines, and I am quite fond of the party package mentioned by another person in this thread, because of its implementation of conditional inference in the tree-building process. The optpart package lets you perform multivariate partitioning, and the mvpart package (also mentioned by someone else) lets you perform multivariate rpart trees; however, I personally prefer using partDSA, which lets you combine nodes further down in your tree to help prevent partitioning of similar observations, if I feel rpart and party are not adequate for my modeling purposes.
Note: In my example of an rpart tree in paragraph 2, I describe how partitioning works with node numbers. If one were to draw out this tree, the partitioning would proceed to the left if the rule for the split was true; however, in R I believe the split actually proceeds to the right if the rule is true.
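To make the "find the variable and point that best reduces the RSS" step concrete, here is a small illustrative sketch in Python (my own, and deliberately minimal — rpart's real algorithm also handles surrogate splits, cp-based pruning, and more). It scans a single variable for the threshold that minimizes the combined RSS of the two resulting nodes.

```python
def best_split(x, y):
    """Find the threshold on a single variable x that minimizes
    the summed residual sum of squares (RSS) of the two child nodes."""
    def rss(vals):
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    best_t, best_rss = None, float("inf")
    xs = sorted(set(x))
    for lo, hi in zip(xs, xs[1:]):
        t = (lo + hi) / 2  # candidate cut halfway between adjacent values
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        total = rss(left) + rss(right)
        if total < best_rss:
            best_t, best_rss = t, total
    return best_t, best_rss

# y jumps at x = 5, so the best cut lands between 3 and 10
print(best_split([1, 2, 3, 10, 11, 12], [0, 0, 0, 5, 5, 5]))  # (6.5, 0.0)
```

A univariate tree builder applies this search to every variable at every node and keeps the single best (variable, threshold) pair, then recurses on the two children.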
33,886 | Does rpart use multivariate splits by default? | As far as I know, it doesn't; but I have not used it for a while. If I understand you well, you might want to look at the package mvpart instead.
33,887 | Does rpart use multivariate splits by default? | Your terminology is confusing. Do you mean splits using more than one variable, or a tree that allows for a multivariate (as opposed to a univariate) response? I presume the latter.
F. Tusell has pointed you to the mvpart package, which adds a multivariate criterion for node impurity that is evaluated for all possible splits at each stage of tree building.
An alternative is the party package, whose function ctree() can handle multivariate responses.
33,888 | Does rpart use multivariate splits by default? | Multivariate splits as defined in the CART book aren't implemented in rpart. The CART software package from Salford Systems has this feature, but AFAIK it uses a proprietary algorithm licensed from Breiman, Friedman et al.
33,889 | To what extent does the quality of data play in the accuracy of a model? | It’s a garbage in, garbage out scenario. What machine learning models do is learn to recognize patterns in the data and act on those patterns at prediction time. If you have garbage data, the model will make garbage predictions no matter how sophisticated your machine learning model is. This is what Andrew Ng means by data-centric AI, when he says that our main concern should be the quality of the data, rather than the models. If you know that the quality of the data is low, you should spend most of the time getting better data, as working on improving the model is an unlikely cure.
As others noticed in the comments, the above statement may be too strong. Indeed, our usual assumption is that the data is noisy, and most models would be able to overcome some degree of noise, mislabeled samples, etc. We even have specialized models like the errors-in-variables model. Still, if there are known issues with data quality, the usually more efficient approach would be to gather better data (or improve it by re-labeling, etc.) rather than hoping that the model would be able to overcome the issues by itself.
33,890 | To what extent does the quality of data play in the accuracy of a model? | An extreme example might be determining the name of a dog’s owner based on a photo of the dog’s tongue. You’re missing critical information from the veterinary records that associate the dog with a human. With such information, you might be able to get the right answer every time.
It can be the case that you simply lack the information to make accurate predictions.
Consider an outcome that is totally determined by two feature variables (so this outcome is entirely predictable, in some sense), which are independent of each other. If you only have measurements for one of those variables, you’ll never reliably make accurate predictions. Since the two features are independent, you cannot even wrangle information about one out of the other. This would be the low signal-to-noise ratio that you mention.
If your features are related, perhaps you can wrestle with observed features to glean insight about what the unobserved feature would have been had it been measured, perhaps at the risk of overfitting.
However, if you feed a model garbage data (tongue picture), it should be expected to output garbage predictions (inability to predict owner’s name).
33,891 | To what extent does the quality of data play in the accuracy of a model? | the performance metrics of the models I use will always approach a ceiling point due to the limit of the nature of the variables that is being used. Hence the question I pose: whether one should attempt to perform feature selection, model selection, hyper parameter optimisation IF there exist this constraint?
Optimization might still be necessary. Even when the data does not allow a model to measure values accurately, it will still be the case that some models are better than others.
More interesting is the question of what the nature of the constraint is. Why do you have this constraint, and is it truly some limit, or do you not have enough data, or are your models not detailed enough?
For example: some variables are just very hard to predict. When I flip a coin or roll a die, I might be able to predict the average outcome, but for a given single instance it is extremely hard to predict the outcome. This is when nature, whose basic laws are deterministic, appears random to us.
33,892 | Why probit regression is less interpretable than logistic regression? | Logistic regression is a model for probabilities of binary events. Another concept that is closely related to probabilities is odds, i.e. ratios of probabilities. If the probability of observing a binary event is $p$, the odds of observing it are $\tfrac{p}{1-p}$. It's a fairly simple and commonly understood concept. Logistic regression predicts log-odds. They are "simpler" to interpret because odds are already related to probabilities of binary events, while normal quantiles do not directly translate to them in a meaningful way. If the predicted quantile is $Q(p)$, how "likely" is this to happen? To answer the question, you need to translate the value to probability, while in the case of odds it just re-phrases the question to probability relative to the probability of the opposite event.
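The probability/odds/log-odds round trip the answer describes can be sketched in a few lines of Python (my own illustration, not part of the original answer); the probit link would instead require the normal quantile function, which has no similarly direct reading in terms of odds.

```python
import math

def log_odds(p):
    """Log-odds (logit) of a probability: what logistic regression predicts."""
    return math.log(p / (1 - p))

def inv_logit(z):
    """Back from log-odds to probability."""
    return 1 / (1 + math.exp(-z))

p = 0.75            # probability of the event
odds = p / (1 - p)  # 3: the event is 3 times as likely as its opposite
z = log_odds(p)     # log(3), the scale on which the model is linear
print(round(odds, 6), round(inv_logit(z), 6))
```

A coefficient in the model shifts $z$ additively, which multiplies the odds by a fixed factor — that multiplicative reading is what makes the logit scale easy to talk about.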
33,893 | Why probit regression is less interpretable than logistic regression? | Here's another way to think about this:
Let $\theta = \beta_0 + \sum_{i=1}^n \beta_i X_i$ be your linear predictor. If $X_j$ increases by one unit, the linear predictor increases by $\beta_j$. What can we say about the corresponding change in probability, $|p(\theta+\beta_j)-p(\theta)|$?
In a logistic regression model, the probability is given by $p(\theta)=e^\theta/(1+e^\theta)$, the inverse of the logit (log-odds) function. It is straightforward to show that the derivative of this function is maximised at $\theta=0$, and that $p'(0)=1/4$. We deduce that $|p(\theta+ \beta_j)-p(\theta)| < \frac{|\beta_j|}{4}$. In other words, the maximum effect a unit change in the $j$th variable can have on the probability is at most its coefficient divided by 4.
Can you work out the equivalent for the probit model?
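One way to answer that closing question: the steepest slope of the probit model's probability curve $\Phi(\theta)$ is the standard normal density at zero, $\varphi(0)=1/\sqrt{2\pi}\approx 0.399$, so the probit analogue of "divide by 4" is roughly "divide by 2.5". A small numerical sketch of both maxima (my own, using a finite-difference check):

```python
import math

def inv_logit(t):
    """Logistic probability curve e^t / (1 + e^t)."""
    return 1 / (1 + math.exp(-t))

def slope(f, t, h=1e-6):
    """Central finite-difference derivative of f at t."""
    return (f(t + h) - f(t - h)) / (2 * h)

# logistic: maximum slope is p'(0) = 1/4
print(slope(inv_logit, 0.0))           # ~0.25

# probit: maximum slope of Phi is the normal pdf at 0
phi0 = 1 / math.sqrt(2 * math.pi)
print(phi0)                            # ~0.3989
```

So for a probit fit, a unit change in $X_j$ moves the probability by at most about $0.4\,|\beta_j|$.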
33,894 | Is it possible to have 1e-15 p-value when difference is about 1 SD? | The $p = 3.46 \times 10^{-15}$ is not about the difference between groups but about the time × group interaction. The difference between groups has $p = 0.0012$, which is in the text in the section Bumetanide improves ASD symptoms, and this p-value is correct given the numbers they provided. You can check it by plugging them into some t-test calculator.
The time × group interaction p-value is calculated using a random effect model and a permutation test, so it is not possible to just plug the numbers from the paper into a calculator and see if they match. Given the supplementary figure 1, which shows the data this test is testing, I wouldn't be surprised if the p-value is extremely small: you can see that the symptom score for pretty much every control stays the same, while the symptom score for every treatment case goes down by a constant amount.
Although I think that the p-value is possible, I do not think that clinical data can show such a very regular pattern as shown in the supplementary figure, however, I know nothing about the field and barely even skimmed the paper.
edit
the figure in question (supplementary figure 1, showing the per-participant symptom scores described above) is not reproduced here.
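The "plug them into a t-test calculator" check can itself be sketched in a few lines of Python. The numbers below are hypothetical placeholders (the paper's actual group means and SDs are not reproduced here); with roughly 40 participants per group, a normal approximation to the two-sample t statistic is adequate.

```python
from statistics import NormalDist

def two_sample_z(m1, s1, n1, m2, s2, n2):
    """Two-sample test from summary statistics (normal approximation)."""
    se = (s1**2 / n1 + s2**2 / n2) ** 0.5  # standard error of the difference
    z = (m1 - m2) / se
    p = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return z, p

# hypothetical summary stats: a 1-SD difference in means, 40 per group
z, p = two_sample_z(1.0, 1.0, 40, 0.0, 1.0, 40)
print(round(z, 2))  # 4.47
```

With these placeholder inputs the statistic is about 4.47, i.e. a p-value on the order of $10^{-5}$ — small, but nowhere near $10^{-15}$, which is the point of the answer above.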
33,895 | Is it possible to have 1e-15 p-value when difference is about 1 SD? | Remember the definition of the p value:
This is the probability, under the null hypothesis, of sampling a test statistic at least as extreme as that which was observed
The null hypothesis is not stated. In such cases it is typically one of "no difference".
If this null hypothesis is true, we can still observe a difference in observed means of 1 SD. This probability is small for small sample sizes... smaller for larger sample sizes... smaller yet for really large sample sizes.
Bottom line: for a given effect size (your difference in means of about 1 SD would correspond to an effect size of $d=1.0$, which is sometimes indeed called "large"), we can make the p value arbitrarily small by increasing the sample size.
Whether a difference of 1 SD is clinically significant is another matter entirely.
EDIT: you add that there are about 40 participants in each group. We can do a rough sanity check of the p value by simulation.
Specifically, we simulate 40 "participants" in each group, with normally distributed outcomes. Per the definition of the p value, we assume that the null hypothesis is true, so we use the same mean $0$ for both groups. For simplicity, we also use the same variance $1$.
We then take means within each simulated group, and record whether the mean of the first group exceeds the mean of the second group by at least $1$ SD (for a one-sided test).
We repeat this simulation many times, and check how often we have a positive result.
Below is R code repeating the simulation $1,000,000$ times.
n_sim <- 1e6
hits <- replicate(n_sim, mean(rnorm(40)) - mean(rnorm(40)) > 1)
sum(hits)
When I run this, I get $3$ hits, which corresponds to $p=3\times 10^{-6}$. It's not the exact same number as the p value reported in that paper, but it's in the same ballpark of "exceedingly tiny". (Note that there is little "real" difference between p values this tiny.) In particular, it's not like our simulation yielded $p=0.2$, which would indeed indicate that something is amiss.
So even without digging deeply into potential issues of that paper, the p value is not completely unrealistic.
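The simulated tail probability can also be cross-checked analytically (here in Python for convenience; the assumptions are the same as in the simulation: 40 participants per group, unit variance, one-sided):

```python
import math

# Under H0, the difference of two independent means of n = 40
# standard-normal observations is N(0, 1/40 + 1/40).
n = 40
se_diff = math.sqrt(1 / n + 1 / n)       # standard error of the difference, ~0.224

# One-sided probability that the difference in means exceeds 1 SD under H0
z = 1 / se_diff                          # ~4.47
p = 0.5 * math.erfc(z / math.sqrt(2))    # = 1 - Phi(z)
print(p)                                 # ~3.9e-6, in line with the simulated ~3e-6
```

This agrees with the simulation to within Monte Carlo error, so the reported p value is plausible in order of magnitude.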
33,896 | Is it possible to have 1e-15 p-value when difference is about 1 SD? | The p-value is determined by the standard error (SEM) rather than the standard deviation (SD), so a large enough sample size can lead to a small p-value even where the effect size is small relative to SD.
Another way (that I initially neglected) for a low p-value when the effect is small relative to the SDs is to have done a within-subject test (e.g. paired t-test). In that case the variance of the effect can be small relative to the overall variance of the data when there is a strong within subject before/after correlation.
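The within-subject effect can be demonstrated with a small simulation (in Python for illustration; the sample size, effect size, and correlation structure below are arbitrary choices, not taken from the question): with strong before/after correlation, a shift of only ~0.2 SD of the overall data yields a far smaller p value from a paired test than from an independent-samples test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 40
subject = rng.normal(0.0, 1.0, n)                  # big between-subject spread
before = subject + rng.normal(0.0, 0.1, n)
after = subject + 0.2 + rng.normal(0.0, 0.1, n)    # true shift of only ~0.2 SD

p_paired = stats.ttest_rel(after, before).pvalue   # within-subject (paired) test
p_indep = stats.ttest_ind(after, before).pvalue    # ignores the pairing

print(p_paired, p_indep)  # the paired p-value is orders of magnitude smaller
```

The paired test works on the differences, whose variance is tiny relative to the overall variance of the data when the correlation is strong.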
You might be correct in your assertion that a difference of about 1 SD is functionally (or biologically, or scientifically) trivial ("insignificant"), but the statistics never tell you directly about that kind of significance as it depends on things that are not in the data.
33,897 | Differences between approaches to exponential regression | One of the differences is the likelihoods for each model. In case readers can't remember, the likelihood encapsulates assumptions about the conditional distribution of the data. In the case of COVID-19, this would be the distribution of infections (or reported new cases, or deaths, etc) on the given day. Whatever we want the outcome to be, let's call it $y$. Thus, the conditional distribution (e.g. the number of new cases today) would be $y\vert t$ (think of this as $y$ conditioned on $t$).
In the case of taking the log and then performing lm, this would mean that $\log(y)\vert t \sim \mathcal{N}(\mu(t), \sigma^2) $, or equivalently that $y$ is lognormal given $t$. The reason we do linear regression on $\log(y)$ is that on the log scale the conditional mean is independent of the variance, whereas the mean of the lognormal is also a function of the variance. Pro: we know how to do linear regression. Con: this approach makes the linear regression assumptions on the log scale, which can always be assessed but might be hard to justify theoretically. Another con is that people often do not realize that predicting on the log scale and then taking the exponential actually biases predictions downward by a factor of $\exp(\sigma^2/2)$. So when you make predictions from a lognormal model, you need to account for this.
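The retransformation bias is easy to see with a tiny simulation (Python, illustrative; the log-scale mean and SD are arbitrary): naively exponentiating the log-scale mean targets the median of the lognormal, not its mean, and the $\exp(\sigma^2/2)$ factor fixes that.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0                        # log-scale mean and SD
y = np.exp(rng.normal(mu, sigma, 200_000))  # lognormal sample

naive = np.exp(mu)                          # plain back-transform: 1.0
corrected = np.exp(mu + sigma**2 / 2)       # ~1.649
empirical = y.mean()

print(naive, corrected, empirical)  # the corrected value matches the sample mean
```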
So far as I understand, nls assumes a Gaussian likelihood as well, so in this model $ y \vert t \sim \mathcal{N}(\exp(\beta_0 + \beta_1 t), \sigma^2)$. Except now, we let the conditional mean of the outcome be non-linear. This can be a pain because the confidence intervals are not bounded below by 0, so your model might estimate a negative count of infections; obviously, that can't happen. When the count of infections (or whatever) is large, a Gaussian can be justifiable. But when things are just starting, this probably isn't the best likelihood. Furthermore, if you fit your data using nls, you'll see that it fits later data very well but not early data. That is because misfitting later data incurs larger loss, and the goal of nls is to minimize this loss.
The approach with glm frees us up a little and allows us to control the conditional distribution as well as the form of the conditional mean through the link function. In this model, $y \vert t \sim \text{Gamma}(\mu(t), \phi)$ with $\mu(t) = g^{-1}(\beta_0 + \beta_1 t)$. We call $g$ the link, and for the case of a log link $\mu(t) = \exp(\beta_0 + \beta_1 t)$. Pro: these models are much more expressive, but I think the power comes from the ability to perform inference with a likelihood which is not normal. This lifts a lot of the restrictions, for example symmetric confidence intervals. The con is that you need a little more theory to understand what is going on.
33,898 | Differences between approaches to exponential regression | A known difference between fitting an exponential curve with a nonlinear fitting or with a linearized fitting is the difference in the relevance of the error/residuals of different points.
You can notice this in the plot below.
In that plot you can see that
the linearized fit (the broken line) is fitting more precisely the points with small values (see the plot on the right where the broken line is closer to the values in the beginning).
the nonlinear fit is closer to the points with high values.
# `days` and `US` are defined in the data block further below
modnls <- nls(US ~ a*exp(b*days), start=list(a=100, b=0.3))
modlm <- lm(log(US) ~ days)
par(mfrow=c(1,2))  # show the linear-scale and log-scale panels side by side
plot(days,US, ylim = c(1,15000))
lines(days,predict(modnls))
lines(days,exp(predict(modlm)), lty=2)
title("linear scale", cex.main=1)
legend(0,15000,c("lm","nls"),lty=c(2,1))
plot(days,US, log = "y", ylim = c(100,15000))
lines(days,predict(modnls))
lines(days,exp(predict(modlm)), lty=2)
title("log scale", cex.main=1)
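The same comparison can be reproduced in Python (illustrative; scipy's `curve_fit` plays the role of nls here). By construction, each fit attains the smaller squared error on its own scale, which is exactly why the broken line hugs the early points and the nls curve hugs the late ones.

```python
import numpy as np
from scipy.optimize import curve_fit

days = np.arange(14)
US = np.array([262, 402, 518, 583, 959, 1281, 1663, 2179,
               2727, 3499, 4632, 6421, 7783, 13677], dtype=float)

# nonlinear least squares on the original scale (the nls analogue)
def expf(t, a, b):
    return a * np.exp(b * t)

(a_nls, b_nls), _ = curve_fit(expf, days, US, p0=(100.0, 0.3))

# ordinary least squares on the log scale (the lm(log(US) ~ days) analogue)
b_lm, log_a_lm = np.polyfit(days, np.log(US), 1)

pred_nls = expf(days, a_nls, b_nls)
pred_lm = np.exp(log_a_lm + b_lm * days)

# each fit wins on its own scale, by construction
sse_raw = lambda p: float(np.sum((US - p) ** 2))
sse_log = lambda p: float(np.sum((np.log(US) - np.log(p)) ** 2))
print(sse_raw(pred_nls) < sse_raw(pred_lm))   # True
print(sse_log(pred_lm) < sse_log(pred_nls))   # True
```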
Modeling the random noise correctly is not always the main issue in practice
In practice the problem is not so often what sort of model to use for the random noise (whether it should be some sort of glm or not).
The problem is much more that the exponential model (the deterministic part) is not correct, and the choice of fitting a linearized model or not is a choice of how much weight to give the first points versus the last points. The linearized model fits the small values very well, and the non-linear model fits the high values better.
You can see the incorrectness of the exponential model when we plot the ratio of increase.
When we plot the ratio of the increase, for the world variable, as function of time, then you can see that it is a non-constant variable (and for this period it appears to be increasing). You can make the same plot for the US but it is very noisy, that is because the numbers are still small and differentiating a noisy curve makes the noise:signal ratio larger.
(also note that the error terms will be incremental and if you really wish to do it right then you should use some arima type of model for the error, or use some other way to make the error terms correlated)
I still don't get why lm with log gives me completely different coefficients. How do I convert between the two?
The glm and nls models both treat the errors as
$$y−y_{model}∼N(0,\sigma^2)$$
The linearized model models the errors as
$$log(y)−log(y_{model})∼N(0,\sigma^2)$$
but when you take the logarithm of values then you change the relative size. The difference between 1000.1 and 1000 is 0.1, and so is the difference between 1.1 and 1. But on a log scale these are no longer the same difference.
This is actually how glm does the fitting. It uses a linear model, but with transformed weights for the errors (and it iterates this a few times). See the following two computations, which return the same result:
last_14 <- list(days <- 0:13,
World <- c(101784,105821,109795, 113561,118592,125865,128343,145193,156094,167446,181527,197142,214910,242708),
US <- c(262,402,518,583,959,1281,1663,2179,2727,3499,4632,6421,7783,13677))
days <- last_14[[1]]
US<- last_14[[3]]
World <- last_14[[2]]
Y <- log(US)
X <- cbind(rep(1,14),days)
coef <- lm.fit(x=X, y=Y)$coefficients
yp <- exp(X %*% coef)
for (i in 1:100) {
# iterate with updated weights
w <- as.numeric(yp^2)
# working response (linearized y-values)
Y <- log(US) + (US-yp)/yp
# solve weighted linear equation
coef <- solve(crossprod(X,w*X), crossprod(X,w*Y))
# If I use lm.wfit then for some reason I get something different than the direct matrix solution
# lm.wfit(x=X, y=Y, w=w)$coefficients
yp <- exp(X %*% coef)
}
coef
# > coef
# [,1]
# 5.2028935
# days 0.3267964
glm(US ~days,
family = gaussian(link = "log"),
control = list(epsilon = 10^-20, maxit = 100))
# > glm(US ~days,
# + family = gaussian(link = "log"),
# + control = list(epsilon = 10^-20, maxit = 100))
#
# Call: glm(formula = US ~ days, family = gaussian(link = "log"), control = list(epsilon = 10^-20,
# maxit = 100))
#
# Coefficients:
# (Intercept) days
# 5.2029 0.3268
#
# Degrees of Freedom: 13 Total (i.e. Null); 12 Residual
# Null Deviance: 185900000
# Residual Deviance: 3533000 AIC: 219.9
33,899 | Differences between approaches to exponential regression | For a comparison of exponential models fitted in competing ways see:
Best Fit for Exponential Data
This shows a comparison in a case where exponential change was chosen in advance, as appropriate to the question (exponential increase in seal numbers after the 1972 Marine Mammal Protection Act). The comparison shows the expected difference between log(y) and y as response variables, as described above.
Best Fit for Exponential Data
33,900 | How can the F-test reject the null hypothesis while the KS test does not? | Significance testing consists of defining a rejection region, and rejecting if the data is in that region. The size of the region is its $\alpha$ value. If two different regions are different shapes, then even if one is smaller than the other, there can be places that are inside the smaller one but not in the larger one.
Dave’s answer explains that KS tests many different attributes, such as mean, variance, and multimodality. Suppose we restrict our attention to just mean and variance. We can then represent the sample on a two-dimensional plot, with, say, difference in mean on the horizontal axis and difference in variance on the vertical:
The $F$-test’s rejection region (blue) consists of two horizontal strips in this space: if the difference in variance is too positive or too negative, it rejects the null. The KS test’s rejection region (green) is (with some simplification) a ring: anything too far from the origin in any direction will be rejected. We can (again, with some simplification) consider each to have a “radius”, and anything outside that radius results in the null being rejected. But for the $F$-test, only the vertical distance from the $x$-axis is considered, while the distance from the origin is considered for the KS test.
If both have the same $\alpha$, then since the KS test looks at both dimensions, its radius has to be larger. So if your sample has a small difference in mean, and a difference in variance that is slightly more than the $F$-test’s “radius”, then it will be within the KS radius.
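A small simulation illustrates the shape difference (Python, illustrative; the sample size, variance ratio, and simulation count are arbitrary): when only the variances differ, the $F$-test's specialized region rejects far more often than the KS test's at the same nominal $\alpha$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, n_sim, alpha = 50, 500, 0.05
f_reject = ks_reject = 0

for _ in range(n_sim):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(0.0, np.sqrt(1.5), n)   # same mean, 1.5x the variance
    # two-sided F-test on the variance ratio
    f = np.var(y, ddof=1) / np.var(x, ddof=1)
    p_f = 2 * min(stats.f.cdf(f, n - 1, n - 1), stats.f.sf(f, n - 1, n - 1))
    f_reject += p_f < alpha
    ks_reject += stats.ks_2samp(x, y).pvalue < alpha

print(f_reject, ks_reject)  # the F-test rejects far more often than KS here
```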