45,501
Why is the first postulate of the Poisson process that $\lambda dt$ is the probability of exactly one event in $[t,t+dt]$?
In fact, Leibniz's notation for infinitesimal increments can be confusing.
One has to be careful here to keep all terms of the same order: $e^{-\lambda dt}$ must be approximated to first order in $dt$ (not zeroth order, i.e. without any terms in $dt$). That is, $e^{-\lambda dt} \approx 1 - \lambda dt$ plus terms that are at least quadratic in $dt$ and thus go to zero faster than $dt$ itself.
Then one has:
$$
\lambda dt \cdot e^{-\lambda dt} \approx \lambda dt \cdot (1 - \lambda dt)
$$
and then for $dt \rightarrow 0$ one can ignore the non-leading terms ($dt^2$) and is left with $\lambda dt$.
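A quick numerical sketch (Python used here just for the check, with a made-up rate $\lambda$) shows the relative error of the approximation shrinking linearly in $dt$, exactly as the quadratic remainder predicts:

```python
import math

lam = 2.0  # hypothetical rate (events per unit time)

# Compare the exact one-event probability (lam*dt)*exp(-lam*dt)
# with its first-order approximation lam*dt as dt shrinks.
for dt in (0.1, 0.01, 0.001):
    exact = lam * dt * math.exp(-lam * dt)
    approx = lam * dt
    rel_err = abs(exact - approx) / approx  # equals 1 - exp(-lam*dt)
    print(f"dt={dt}: exact={exact:.6g}  approx={approx:.6g}  rel. error={rel_err:.2e}")
```

Each tenfold reduction of $dt$ reduces the relative error roughly tenfold, confirming that the discarded terms are $O(dt^2)$.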
45,502
Why is the first postulate of the Poisson process that $\lambda dt$ is the probability of exactly one event in $[t,t+dt]$?
The previous two answers are, I think, coming at the problem "backwards", though they are both correct: they do not start with the postulate and end with the conclusion. If we start from the postulate (neglecting the probability of two or more events in $[t,t+dt]$, which is of higher order in $dt$), then we have:
$$Pr(\text{No event in } [t,t+dt])=1-Pr(\text{1 event in } [t,t+dt])=1-\lambda dt$$
If we define the function $h(t)$ as follows:
$$Pr(\text{No event in } [0,t])=h(t)$$
$$Pr(\text{No event in } [0,t+dt])=h(t+dt)$$
Additionally, we can use the independence of increments (another postulate of the Poisson process), and we have:
$$h(t+dt)=h(t)[1-\lambda dt]\implies\frac{h(t+dt)-h(t)}{dt}=-\lambda h(t)$$
Taking the limit as $dt\to 0$ we have $h'(t)=-\lambda h(t)$ which implies $h(t)=K\exp(-\lambda t)$. We can resolve the proportionality constant by noting that $h(0)=1$ - i.e. it is certain to see no events in $[0,0]$. This gives $K=1$. This derivation can be found here (page 4) along with how to extend it to the probability for any number of events (basically by multiplying the zero count probability by $\lambda^n$ where $n$ is the number of events).
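This limiting step can also be checked numerically (illustrative values for $\lambda$ and $t$; Python here only for convenience): iterating $h(t+dt)=h(t)(1-\lambda dt)$ with a small step indeed reproduces $e^{-\lambda t}$.

```python
import math

lam, T, steps = 1.5, 2.0, 100_000  # hypothetical rate, horizon, step count
dt = T / steps

# Iterate h(t + dt) = h(t) * (1 - lam*dt), starting from h(0) = 1
h = 1.0
for _ in range(steps):
    h *= 1 - lam * dt

print(h, math.exp(-lam * T))  # the two values agree closely for small dt
```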
45,503
Why is the first postulate of the Poisson process that $\lambda dt$ is the probability of exactly one event in $[t,t+dt]$?
Here's an alternative (but basically equivalent) derivation to @Andre Holzner's:
For a Poisson process $N(t)$ with rate $\lambda$,
$Pr(N(t+\tau) - N(t) = 1) = (\tau\lambda)\exp(-\tau\lambda) = Pr(N(\tau) = 1) $
which has Taylor expansion around $\tau=0$
$\tau\lambda - \tau^2\lambda^2 + O(\tau^3)$
and this is approximately $\tau\lambda$ for small $\tau$. You're correct that the actual limit is zero, as one typically assumes $Pr(N(0)=0)=1$ in developing the Poisson process.
45,504
How to simulate data based on a linear mixed model fit object in R?
Note: the simulated data produced by simulate.lme does not match elements of the original data structure or model fit (e.g. variance, effect size, ...), nor does it create data de novo for experimental-design testing.
require(nlme)
?nlme::simulate.lme
fit <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1)
orthSim <- simulate.lme(fit, nsim = 1)
This produces a simulated fitting (with a possible alternative model).
This is thanks to an answer by @Momo on one of my questions:
Is there a general method for simulating data from a formula or analysis available?
If you require the simulated data, you will need to create a new function from the simulate.lme function.
simulate.lme.data<-edit(simulate.lme)
add the following line right before the last bracket
return(base2)
You can then create as much data as you want:
orthSimdata <- simulate.lme.data(fit, nsim = 1)
Note this is from my (possibly mis-)interpretation of the un-commented code in simulate.lme.
Though this is useful, it seems to do little more than add Gaussian noise to your existing data.
This cannot be used to directly simulate data de novo. I currently create the starting data by adding the numeric values of the factor levels of my experimental-design data frame (e.g. response=as.numeric(factor1)+as.numeric(factor2)+as.numeric(factor1)*as.numeric(factor1)+rnorm(sd=2)...).
45,505
How to simulate data based on a linear mixed model fit object in R?
Here is one approach that takes all the values from fm2. You could add more arguments to the function to allow you to change values in the simulations.
library(nlme)
fm2 <- lme(distance ~ age + Sex, data = Orthodont, random = ~ 1)
simfun <- function(n) {
  # n is the number of subjects; total rows will be 4*n
  # sig.b0 is the st. dev. of the random intercepts
  # (it might be easier to just copy it from the output)
  sig.b0 <- exp(unlist(fm2$modelStruct$reStruct)) * fm2$sigma
  b0 <- rnorm(n, 0, sig.b0)
  sex <- rbinom(n, 1, 0.5)  # assign sex at random
  fe <- fixef(fm2)
  my.df <- data.frame(Subject = rep(1:n, each = 4),
                      int = fe[1] + rep(b0, each = 4),
                      Sex = rep(sex, each = 4),
                      age = rep(c(8, 10, 12, 14), n))
  my.df$distance <- my.df$int + fe[2] * my.df$age +
    fe[3] * my.df$Sex + rnorm(n * 4, 0, fm2$sigma)
  my.df$int <- NULL
  my.df$Sex <- factor(my.df$Sex, levels = 0:1,
                      labels = c('Male', 'Female'))
  my.df
}
Orthodont2 <- simfun(100)
45,506
How to simulate data based on a linear mixed model fit object in R?
I would probably just sample randomly with replacement from the Subjects in your data until I had the right sample size. This is the bootstrap method. It is simpler than identifying the multivariate distribution of the variables and then sampling from it. Also the bootstrap does not make additional assumptions about the multivariate structure of the data.
First, set the number of participants in your big simulated study:
nits=300
Get the unique participants in the small study:
sub=unique(Orthodont$Subject)
Sample the unique participants randomly with replacement:
subs=sample(sub,nits,rep=T)
Make an empty data frame:
df=Orthodont[-(1:dim(Orthodont)[1]),]
Loop through the sample and bind the rows together:
for (i in 1:nits) {
  df=rbind(df,Orthodont[which(Orthodont$Subject==subs[i]),])
}
This last for loop is slow; there is probably a better way of writing it.
Now you can run your regression on the bigger dataset and watch your confidence intervals get smaller.
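The subject-level (cluster) bootstrap described above is language-agnostic; here is a minimal Python sketch of the same idea on made-up data standing in for Orthodont:

```python
import random

random.seed(1)

# Made-up long-format data: 5 subjects, 4 measurements each
data = [{"Subject": s, "distance": 20 + s + m}
        for s in range(1, 6) for m in range(4)]

subjects = sorted({row["Subject"] for row in data})
nits = 300  # number of subjects in the big simulated study

# Resample whole subjects with replacement, giving each draw a fresh id
boot = []
for new_id, s in enumerate(random.choices(subjects, k=nits), start=1):
    for row in data:
        if row["Subject"] == s:
            boot.append({**row, "Subject": new_id})

print(len(boot))  # 4 rows per drawn subject -> 1200 rows
```

The fresh id matters: reusing the original subject labels would merge repeated draws of the same subject into one apparent cluster and distort the grouping structure of the bootstrap sample.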
45,507
Measure of association for 2x3 contingency table
Linear or monotonic trend tests--$M^2$ association measure, WMW test cited by @GaBorgulya, or the Cochran-Armitage trend test--can also be used, and they are well explained in Agresti (CDA, 2002, §3.4.6, p. 90).
The latter is actually equivalent to a score test for testing $H_0:\; \beta = 0$ in a logistic regression model, but it can be computed from the $M^2$ statistic, defined as $(n-1)r^2$ ($\sim\chi^2(1)$ for large samples), where $r$ is the sample correlation coefficient between the two variables (the ordinal measure being recoded as numerical scores), by replacing $n-1$ with $n$ (ibid., p. 182). It is easy to compute in any statistical software, but you can also use the coin package in R (I provided an example of its use for a related question).
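For illustration, here is the $M^2=(n-1)r^2$ computation written out step by step (in Python, on a made-up 2x3 table with rows scored 0/1 and ordered columns scored 1, 2, 3):

```python
import math

# Hypothetical 2x3 table of counts
n = [[10, 20, 30],
     [25, 20, 15]]

# Expand to paired scores: x = row score (0/1), y = column score (1..3)
pairs = [(i, j + 1)
         for i, row in enumerate(n)
         for j, count in enumerate(row)
         for _ in range(count)]
N = len(pairs)

mx = sum(x for x, _ in pairs) / N
my = sum(y for _, y in pairs) / N
sxy = sum((x - mx) * (y - my) for x, y in pairs)
sxx = sum((x - mx) ** 2 for x, _ in pairs)
syy = sum((y - my) ** 2 for _, y in pairs)
r = sxy / math.sqrt(sxx * syy)

M2 = (N - 1) * r ** 2  # compare against a chi-square(1) distribution
print(r, M2)
```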
Sidenote
If you are using R, you will find useful resources in either Laura Thompson's R (and S-PLUS) Manual to Accompany Agresti’s Categorical Data Analysis (2002), which shows how to replicate Agresti's results with R, or the gnm package (and its companion packages, vcd and vcdExtra), which allows you to fit row-column association models (see its vignette, Generalized nonlinear models in R: An overview of the gnm package).
45,508
Measure of association for 2x3 contingency table
On a 2x3 contingency table where the three-level factor is ordered you may use rank correlation (Spearman or Kendall) to assess association between the two variables.
You may also think about the data as an ordered variable observed in two groups. A corresponding significance test could be the Mann-Whitney test (with many ties). This has an associated measure of association, the WMW odds, related to Agresti’s generalized odds ratio.
Confidence intervals can be calculated both for rank correlation coefficients and for the WMW odds. I find odds more intuitive; otherwise, I believe both kinds of measures are appropriate.
45,509
Measure of association for 2x3 contingency table
One way to incorporate the ordering of the column factor into your analysis is to use the cumulative frequencies instead of the cell frequencies. So in your table you have:
$$f_{ij}=\frac{n_{ij}}{n_{\bullet\bullet}}\;\;\;\; i=1,2\;\;j=1,2,3$$
where a "$\bullet$" indicates a sum over that index. So I suggest modeling instead:
$$g_{ij}=\sum_{k=1}^{j}f_{ik}$$
Now you basically have a simple null hypothesis of no association: that the index $i$ doesn't matter. So you have:
$$E(g_{ij}|H_{0})=\sum_{k=1}^{j}\frac{n_{\bullet k}}{n_{\bullet\bullet}}$$
And then use the good old "entropy" test statistic:
$$T(H_{0})=n_{\bullet\bullet}\sum_{i,j}g_{ij}\log\left(\frac{g_{ij}}{E(g_{ij}|H_{0})}\right)$$
Plugging in the numbers gives:
$$T(H_{0})=\sum_{i,j}\left(\sum_{k=1}^{j}n_{ik}\right)\log\left(\frac{\sum_{k=1}^{j}n_{ik}}{\sum_{k=1}^{j}n_{\bullet k}}\right)$$
You reject if this number is too big; it should be interpreted as a "log-odds" ratio, which will help with choosing cut-offs.
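A direct transcription of the last formula (in Python, on a made-up 2x3 table) looks like this; the inner sums are the cumulative row counts and cumulative column totals:

```python
import math

# Hypothetical 2x3 table of counts: rows = groups, columns = ordered levels
n = [[10, 20, 30],
     [25, 20, 15]]

cols = len(n[0])
col_tot = [sum(row[j] for row in n) for j in range(cols)]  # n_{.k}

# T(H0) = sum_{i,j} (cum. row count) * log(cum. row count / cum. column total)
T = 0.0
for row in n:
    cum_row = 0
    for j in range(cols):
        cum_row += row[j]
        cum_col = sum(col_tot[:j + 1])
        T += cum_row * math.log(cum_row / cum_col)

print(T)
```

Note that, as written, each cumulative row count is below the corresponding cumulative column total, so every log term is negative; in practice one would compare the statistic across tables rather than read its raw sign.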
45,510
Measure of association for 2x3 contingency table
You could use the Jonckheere-Terpstra test. In SAS, you can get this in PROC FREQ with the /JT option on the TABLES statement. I didn't see a function for it in R, but there may be one out there.
45,511
References for using networks to display correlations?
Do you know the qgraph project (and the related R package)? It aims at providing various displays for psychometric models, especially those relying on correlations. I discovered this approach for displaying correlation measures when I was reading a very nice and revolutionary article on diagnostic medicine by Denny Borsboom and coll.: Comorbidity: A network perspective, BBS (2010) 33: 137-193.
An oversimplified summary of their network approach to comorbidity is that it is “hypothesized to arise from direct relations between symptoms of multiple disorders”, contrary to the more classical view where it is the comorbid disorders themselves that cause their associated symptoms to correlate (as reflected in a latent variable model, like factor or item response models, where a given symptom would allow one to measure a particular disorder). In fact, symptoms are part of a disorder, but they don’t measure it (this is a mereological relationship). Their Figure 5 describes such a "comorbidity network" and is particularly interesting because it embeds the frequency of symptoms and the magnitude of their bivariate associations in the same picture. They were using Cytoscape at that time, but the qgraph project has now reached a mature state.
Here are some examples from the on-line R help; basically, these are (1) an association graph with circular or (2) spring layout, (3) a concentration graph with spring layout, and (4) a factorial graph with spring layout (but see help(qgraph.panel)):
(See also help(qgraph.pca) for nice circular displays of an observed correlation matrix for the NEO-FFI, which is a 60-item personality inventory.)
45,512
References for using networks to display correlations?
Surprisingly, as a search of Google Images indicates, such graphs do not appear to be in common use to study or explain multiple correlations. That's a pity, because I'm sure much of this theory can be reduced to simple operations on graphs.
Nevertheless, this graphical method of displaying correlations (or their mathematical equivalents, cosines of angles) has been in use for a long time (at least 75 years) in the form of the Coxeter-Dynkin diagram.
For instance, the A3 diagram 0--0--0 represents three variables X, Y, and Z where X and Z (the outer nodes) are uncorrelated and the correlations between X and Y and Z and Y are both -0.5. In the usual applications of these diagrams, certain special "correlations" (angles) are important, so a special method of labeling the edges with the correlations is used, but this functions no differently than using other forms of labeling such as colors.
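As a quick sanity check that this correlation pattern is realizable (treating correlations as cosines of angles between unit vectors), one can construct three vectors with exactly the A3 relations; Python is used here only for the arithmetic:

```python
import math

# Three unit vectors realizing the A3 pattern:
# corr(X,Y) = corr(Z,Y) = -0.5 (angle 120 deg), corr(X,Z) = 0 (angle 90 deg)
X = (1.0, 0.0, 0.0)
Y = (-0.5, math.sqrt(3) / 2, 0.0)
Z = (0.0, -1 / math.sqrt(3), math.sqrt(2 / 3))

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

print(dot(X, Y), dot(Y, Z), dot(X, Z))  # -0.5, -0.5, 0.0 (up to rounding)
```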
When you use a "distance" metric monotonically related to correlation, then any 2D MDS calculation can be (and usually is) thought of as embedding this graph in the plane so that Euclidean distances reflect the correlations. This illustrates the intimate connection between correlation-based clustering methods and graphs of correlation. As another example in this vein, a dendrogram, when it is derived from a correlation-based similarity matrix, is another network-based way of displaying correlations. (However, it uses vertical position in an essential way to display similarity, and so is not a purely network-based method.)
45,513
Poisson distribution and statistical significance
There are two points to make:
It is not the specific value of 130 that is unusual, but that it is much larger than 100. If you had got more than 130 hits, that would have been even more surprising. So we usually look at P(X>=130), not just P(X=130). By your logic even 100 hits would be unusual, because dpois(100,100)=0.04. So a more correct calculation is to look at ppois(129, 100, lower=F)=0.00228. This is still small, but not as extreme as your value. And this does not even take into account that an unusually low number of hits might also surprise you. We often multiply the probability of exceeding the observed count by 2 to account for this.
If you keep checking your hits every day, sooner or later even rare events will occur. For example P(X>=130) happens to be close to 1/365, so such an event would be expected to occur once a year.
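The same tail calculation can be done in Python (a sketch using scipy.stats, mirroring the R ppois call above):

```python
from scipy.stats import poisson

# P(X >= 130) under a Poisson(100) model: survival function at 129
p_upper = poisson.sf(129, mu=100)
print(round(p_upper, 5))  # 0.00228, matching ppois(129, 100, lower=FALSE)

# A two-sided version doubles this, to also account for surprisingly low counts
print(round(2 * p_upper, 5))  # 0.00456
```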
|
Poisson distribution and statistical significance
|
There are two points to make:
It is not the specific value of 130 that is unusual, but that it is much larger than 100. If you got more than 130 hits, that would have been even more surprising. So we
|
Poisson distribution and statistical significance
There are two points to make:
It is not the specific value of 130 that is unusual, but that it is much larger than 100. If you got more than 130 hits, that would have been even more surprising. So we usually look at the P(X>=130), not just P(X=130). By your logic even 100 hits would be unusual, because dpois(100,100)=0.04. So a more correct calculation is to look at ppois(129, 100, lower=F)=0.00228. This is still small, but not as extreme as your value. And this does not even take into account, that an unusually low number of hits might also surprise you. We often multiply the probability of exceeding the observed count by 2 to account for this.
If you keep checking your hits every day, sooner or later even rare events will occur. For example P(X>=130) happens to be close to 1/365, so such an event would be expected to occur once a year.
|
Poisson distribution and statistical significance
There are two points to make:
It is not the specific value of 130 that is unusual, but that it is much larger than 100. If you got more than 130 hits, that would have been even more surprising. So we
|
45,514
|
Poisson distribution and statistical significance
|
First, note that dpois(130, 100) will give you the probability of exactly 130 hits if you are assuming that the true rate is 100. That probability is indeed very low. However, in the usual hypothesis testing framework, what we calculate is the probability of the observed outcome or an even more extreme outcome. You can obtain this for the Poisson distribution with:
> ppois(129, lambda=100, lower.tail=FALSE)
[1] 0.002282093
So, there is a ~.2% probability of observing the 130 hits or even more hits if you are assuming a true rate of 100. By convention, if this value is below .025 (which it is), we would consider this finding "statistically significant" at $\alpha = .05$ (two-sided). What this means is that you are willing to take a 5% risk that your decision (calling the deviation statistically significant and rejecting the hypothesis that the true rate is 100 for that observation) is wrong. That is, if the true rate is indeed 100 for that day, then in 2.5% of the cases, the observed rate will in fact be 120 or larger (qpois(.975, lambda=100)) and in 2.5% of the cases, the observed rate will be 81 or lower (qpois(.025, lambda=100)). So, if you are using $\alpha = .05$, then in 5% of the cases, your decision will be false.
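The qpois cut-offs quoted above have direct scipy counterparts (a Python sketch):

```python
from scipy.stats import poisson

# Probability of 130 or more hits under Poisson(100)
print(poisson.sf(129, mu=100))  # ~0.00228

# Two-sided 2.5% cut-offs, the counterparts of qpois(.975, 100) and qpois(.025, 100)
upper = poisson.ppf(0.975, mu=100)
lower = poisson.ppf(0.025, mu=100)
print(upper, lower)
```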
|
Poisson distribution and statistical significance
|
First, note that dpois(130, 100) will give you the probability of exactly 130 hits if you are assuming that the true rate is 100. That probability is indeed very low. However, in the usual hypothesis
|
Poisson distribution and statistical significance
First, note that dpois(130, 100) will give you the probability of exactly 130 hits if you are assuming that the true rate is 100. That probability is indeed very low. However, in the usual hypothesis testing framework, what we calculate is the probability of the observed outcome or an even more extreme outcome. You can obtain this for the Poisson distribution with:
> ppois(129, lambda=100, lower.tail=FALSE)
[1] 0.002282093
So, there is a ~.2% probability of observing the 130 hits or even more hits if you are assuming a true rate of 100. By convention, if this value is below .025 (which it is), we would consider this finding "statistically significant" at $\alpha = .05$ (two-sided). What this means is that you are willing to take a 5% risk that your decision (calling the deviation statistically significant and rejecting the hypothesis that the true rate is 100 for that observation) is wrong. That is, if the true rate is indeed 100 for that day, then in 2.5% of the cases, the observed rate will in fact be 120 or larger (qpois(.975, lambda=100)) and in 2.5% of the cases, the observed rate will be 81 or lower (qpois(.025, lambda=100)). So, if you are using $\alpha = .05$, then in 5% of the cases, your decision will be false.
|
Poisson distribution and statistical significance
First, note that dpois(130, 100) will give you the probability of exactly 130 hits if you are assuming that the true rate is 100. That probability is indeed very low. However, in the usual hypothesis
|
45,515
|
Effect of missing data and outliers on least square estimation
|
I'm not sure about the "missing data", but I can give an answer on "outliers"
This is basically due to the "unbounded" influence that a single observation can have in least squares (or at least in conventional least squares). A very, very simple example of least squares should show this. Suppose you only estimate an intercept $\mu$ using data $Y_i \ (i=1,\dots,n)$. The least-squares criterion is
$$\sum_{i=1}^{n} (Y_i-\mu)^2$$
which is minimised by choosing $\hat{\mu}=n^{-1}\sum_{i=1}^{n} Y_i=\overline{Y}$. Now suppose I add one extra observation to the sample, equal to $X$; how will the estimate change? The new best estimate using $n+1$ observations is just
$$\hat{\mu}_{+1}=\frac{X+\sum_{i=1}^{n} Y_i}{n+1}$$
Rearranging terms gives
$$\hat{\mu}_{+1}=\hat{\mu}+\frac{1}{n+1}(X-\hat{\mu})$$
Now for a given sample $\hat{\mu}$ and $n$ are fixed. So I can essentially "choose" $X$ to get any new average that I want!
Using the same argument, you can show that deleting the $j$th observation has a similar effect:
$$\hat{\mu}_{-j}=\hat{\mu}+\frac{-1}{n-1}(Y_{j}-\hat{\mu})$$
And similarly (a bit tediously), you can show that removing $M$ observations gives:
$$\hat{\mu}_{-M}=\hat{\mu}+\frac{-M}{n-M}(\overline{Y}_{M}-\hat{\mu})$$
Where $\overline{Y}_{M}$ is the average of the observations that you removed.
The same kind of thing happens in general least squares, the estimate "chases" the outliers. If you are worried about this, then "least absolute deviations" may be a better way to go (but this can be less efficient if you don't have any outliers).
Influence functions are a good way to study this stuff (outliers and robustness). For example, you can get an approximate change in the variance $s^2=n^{-1}\sum_{i=1}^{n}(Y_i-\overline{Y})^2$ as:
$$s^2_{-j} = s^2 +\frac{-1}{n-1}((Y_j-\overline{Y})^2-s^2) + O(n^{-2})$$
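These update formulas are easy to check numerically; here is a small sketch (Python with numpy and arbitrary simulated data):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=20)
n, mu_hat = len(y), y.mean()

# Adding one extreme observation X: new mean is mu_hat + (X - mu_hat)/(n + 1),
# so by choosing X we can drag the estimate anywhere (unbounded influence)
X = 50.0
mu_plus = np.append(y, X).mean()
assert np.isclose(mu_plus, mu_hat + (X - mu_hat) / (n + 1))

# Deleting observation j: new mean is mu_hat - (y_j - mu_hat)/(n - 1)
j = 3
mu_minus = np.delete(y, j).mean()
assert np.isclose(mu_minus, mu_hat - (y[j] - mu_hat) / (n - 1))

print(mu_hat, mu_plus)  # the single point X dominates the new estimate
```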
|
Effect of missing data and outliers on least square estimation
|
I'm not sure about the "missing data", but I can give an answer on "outliers"
This is basically due to the "unbounded" influence that a single observation can have in least squares (or at least in con
|
Effect of missing data and outliers on least square estimation
I'm not sure about the "missing data", but I can give an answer on "outliers"
This is basically due to the "unbounded" influence that a single observation can have in least squares (or at least in conventional least squares). A very, very simple example of least squares should show this. Suppose you only estimate an intercept $\mu$ using data $Y_i \ (i=1,\dots,n)$. The least-squares criterion is
$$\sum_{i=1}^{n} (Y_i-\mu)^2$$
which is minimised by choosing $\hat{\mu}=n^{-1}\sum_{i=1}^{n} Y_i=\overline{Y}$. Now suppose I add one extra observation to the sample, equal to $X$; how will the estimate change? The new best estimate using $n+1$ observations is just
$$\hat{\mu}_{+1}=\frac{X+\sum_{i=1}^{n} Y_i}{n+1}$$
Rearranging terms gives
$$\hat{\mu}_{+1}=\hat{\mu}+\frac{1}{n+1}(X-\hat{\mu})$$
Now for a given sample $\hat{\mu}$ and $n$ are fixed. So I can essentially "choose" $X$ to get any new average that I want!
Using the same argument, you can show that deleting the $j$th observation has a similar effect:
$$\hat{\mu}_{-j}=\hat{\mu}+\frac{-1}{n-1}(Y_{j}-\hat{\mu})$$
And similarly (a bit tediously), you can show that removing $M$ observations gives:
$$\hat{\mu}_{-M}=\hat{\mu}+\frac{-M}{n-M}(\overline{Y}_{M}-\hat{\mu})$$
Where $\overline{Y}_{M}$ is the average of the observations that you removed.
The same kind of thing happens in general least squares, the estimate "chases" the outliers. If you are worried about this, then "least absolute deviations" may be a better way to go (but this can be less efficient if you don't have any outliers).
Influence functions are a good way to study this stuff (outliers and robustness). For example, you can get an approximate change in the variance $s^2=n^{-1}\sum_{i=1}^{n}(Y_i-\overline{Y})^2$ as:
$$s^2_{-j} = s^2 +\frac{-1}{n-1}((Y_j-\overline{Y})^2-s^2) + O(n^{-2})$$
|
Effect of missing data and outliers on least square estimation
I'm not sure about the "missing data", but I can give an answer on "outliers"
This is basically due to the "unbounded" influence that a single observation can have in least squares (or at least in con
|
45,516
|
Effect of missing data and outliers on least square estimation
|
If you're using R, try the following example.
library(tcltk)
demo(tkcanvas)
Move the dots around to create all of the outliers you want. The regression will keep up with you.
|
Effect of missing data and outliers on least square estimation
|
If you're using R, try the following example.
library(tcltk)
demo(tkcanvas)
Move the dots around to create all of the outliers you want. The regression will keep up with you.
|
Effect of missing data and outliers on least square estimation
If you're using R, try the following example.
library(tcltk)
demo(tkcanvas)
Move the dots around to create all of the outliers you want. The regression will keep up with you.
|
Effect of missing data and outliers on least square estimation
If you're using R, try the following example.
library(tcltk)
demo(tkcanvas)
Move the dots around to create all of the outliers you want. The regression will keep up with you.
|
45,517
|
Effect of missing data and outliers on least square estimation
|
A graphical example of the effect of outliers that requires no software and can be read in 2 minutes is the Wikipedia article on Anscombe's quartet
|
Effect of missing data and outliers on least square estimation
|
A graphical example of the effect of outliers that requires no software and can be read in 2 minutes is the Wikipedia article on Anscombe's quartet
|
Effect of missing data and outliers on least square estimation
A graphical example of the effect of outliers that requires no software and can be read in 2 minutes is the Wikipedia article on Anscombe's quartet
|
Effect of missing data and outliers on least square estimation
A graphical example of the effect of outliers that requires no software and can be read in 2 minutes is the Wikipedia article on Anscombe's quartet
|
45,518
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation?
|
This might be a case of locally uncorrelated, but globally correlated variables. The variance in each group might be limited because of group homogeneity, so there is no evidence for a relationship within each group. But globally, with the full variance, the relationship can be strong. A schematic illustration of the joint distribution within three groups, and the resulting global joint distribution:
Edit: Your question also seems to be whether the global correlation is still "real", even if the theoretical correlation within each group is 0. Random variables are defined on a probability space $(\Omega, P)$ where $\Omega$ is the set of all outcomes (think of different observable persons in your case), and $P$ is a probability measure. If your natural population $\Omega$ includes members from all groups, then: yes, the variables are "really" correlated. Otherwise, if the members of different groups do not form a natural common $\Omega$, but each belong to separate populations, then: no, the variables are uncorrelated.
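A simulation sketch of this situation (Python with numpy; the group means and spreads are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Three homogeneous groups: within each, x and y are independent,
# but the group means lie on a line, inducing a strong global correlation
means = [(0.0, 0.0), (3.0, 3.0), (6.0, 6.0)]
xs, ys = [], []
for mx, my in means:
    xs.append(rng.normal(mx, 0.5, size=200))
    ys.append(rng.normal(my, 0.5, size=200))

within = [np.corrcoef(x, y)[0, 1] for x, y in zip(xs, ys)]
overall = np.corrcoef(np.concatenate(xs), np.concatenate(ys))[0, 1]

print([round(r, 2) for r in within])  # each near 0
print(round(overall, 2))              # close to 1
```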
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation
|
This might be a case of locally uncorrelated, but globally correlated variables. The variance in each group might be limited because of group homogeneity, therefore there is no evidence for a relation
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation?
This might be a case of locally uncorrelated, but globally correlated variables. The variance in each group might be limited because of group homogeneity, so there is no evidence for a relationship within each group. But globally, with the full variance, the relationship can be strong. A schematic illustration of the joint distribution within three groups, and the resulting global joint distribution:
Edit: Your question also seems to be whether the global correlation is still "real", even if the theoretical correlation within each group is 0. Random variables are defined on a probability space $(\Omega, P)$ where $\Omega$ is the set of all outcomes (think of different observable persons in your case), and $P$ is a probability measure. If your natural population $\Omega$ includes members from all groups, then: yes, the variables are "really" correlated. Otherwise, if the members of different groups do not form a natural common $\Omega$, but each belong to separate populations, then: no, the variables are uncorrelated.
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation
This might be a case of locally uncorrelated, but globally correlated variables. The variance in each group might be limited because of group homogeneity, therefore there is no evidence for a relation
|
45,519
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation?
|
Therefore, it is important to evaluate whether the homogeneity of the groups is due to a low number of observations, or whether these groups really are quite homogeneous and distinct. In the first case, we could still establish the presence of a high correlation even when it has not been observed for each group separately.
But what would happen in the second case? If, even using a large amount of data, we could not observe a correlation within each group, could we say that this correlation exists?
Perhaps the value of one of these measures will be only useful for predicting membership in one of the two groups, but not to predict the value of the other measure.
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation
|
Therefore, it is important to evaluate whether the homogeneity of groups is due to low number of data, or actually these groups are quite homogeneous and different. In the first case, we could ensure
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation?
Therefore, it is important to evaluate whether the homogeneity of the groups is due to a low number of observations, or whether these groups really are quite homogeneous and distinct. In the first case, we could still establish the presence of a high correlation even when it has not been observed for each group separately.
But what would happen in the second case? If, even using a large amount of data, we could not observe a correlation within each group, could we say that this correlation exists?
Perhaps the value of one of these measures will be only useful for predicting membership in one of the two groups, but not to predict the value of the other measure.
|
If correlation between two variables is affected by a factor, how should I evaluate this correlation
Therefore, it is important to evaluate whether the homogeneity of groups is due to low number of data, or actually these groups are quite homogeneous and different. In the first case, we could ensure
|
45,520
|
Quantile-Quantile Plot with Unknown Distribution?
|
There are a variety of different possibilities. For example, a chi-square distribution with degrees of freedom in the range of 30-40 would give rise to such a qq-plot. In R:
x <- rchisq(10000, df=35)
qqnorm(x)
qqline(x)
looks like this:
A mixture of two normals with different means doesn't apply though.
x <- c(rnorm(10000/2, mean=0), rnorm(10000/2, mean=2))
qqnorm(x)
qqline(x)
looks like this:
Note how the points cross the line, which is a different pattern than the one you observe.
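A numeric counterpart of the two pictures (a Python sketch with scipy.stats): the right-skewed chi-square stretches the upper tail on one side only, while the equal-weight mixture stays roughly symmetric.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
x_chisq = rng.chisquare(df=35, size=10_000)
x_mix = np.concatenate([rng.normal(0, 1, 5_000), rng.normal(2, 1, 5_000)])

# Right skew pulls the upper quantiles out further than the lower ones
def tail_asymmetry(x):
    q = np.quantile(x, [0.025, 0.5, 0.975])
    return (q[2] - q[1]) - (q[1] - q[0])

print(tail_asymmetry(x_chisq) > 0)   # True: stretched upper tail
print(abs(stats.skew(x_mix)) < 0.3)  # True: the symmetric mixture has little skew
```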
|
Quantile-Quantile Plot with Unknown Distribution?
|
There are a variety of different possibilities. For example, a chi-square distribution with degrees of freedom in the range of 30-40 would give rise to such a qq-plot. In R:
x <- rchisq(10000, df=35)
|
Quantile-Quantile Plot with Unknown Distribution?
There are a variety of different possibilities. For example, a chi-square distribution with degrees of freedom in the range of 30-40 would give rise to such a qq-plot. In R:
x <- rchisq(10000, df=35)
qqnorm(x)
qqline(x)
looks like this:
A mixture of two normals with different means doesn't apply though.
x <- c(rnorm(10000/2, mean=0), rnorm(10000/2, mean=2))
qqnorm(x)
qqline(x)
looks like this:
Note how the points cross the line, which is a different pattern than the one you observe.
|
Quantile-Quantile Plot with Unknown Distribution?
There are a variety of different possibilities. For example, a chi-square distribution with degrees of freedom in the range of 30-40 would give rise to such a qq-plot. In R:
x <- rchisq(10000, df=35)
|
45,521
|
Quantile-Quantile Plot with Unknown Distribution?
|
Your dataset clearly is not normal. (With this much data, any goodness of fit test will tell you that.) But you can read much more than that from the normal probability plot:
The generally smooth curvature does not hint at a mixture structure.
The upper tail is too stretched out (values too high compared to the reference distribution).
The lower tail is too compressed (values also too high).
This suggests that a mild Box-Cox transformation will produce nearly-normal, or at least symmetric, data. To find it, consider some key values on this plot: the median, found above the x-value of 0, is about 0.90; +2 standard deviations is about 0.99; and -2 standard deviations is about 0.825. The nonlinearity is apparent from the simple calculations 0.99 - 0.90 = 0.09 whereas 0.90 - 0.825 = 0.075: the rise from the median to the upper tail is greater than the rise from the lower tail to the median. We can equalize the slopes by trying out some simple re-expressions of these three values only. For example, taking the reciprocals of the three key data values (Box-Cox power of -1) gives
1/0.825 = 1.21
1/0.90 = 1.11; 1.21 - 1.11 = 0.10 (new slope is 0.050 per SD)
1/0.99 = 1.01; 1.11 - 1.01 = 0.10 (0.050 per SD)
Because the slopes of the re-expressed values are now equal, we know the plot of reciprocals of the data will be approximately linear between -2 and +2 SDs. As a check, let's pick more points further out into the tails and see what the reciprocal does to them. I estimate that the value in the plot at -3 SD from the mean is around 0.79 and the value +3 SD from the mean is 1.05. The two slopes in question equal 0.053 and 0.052 per SD: close enough to each other and to the slopes found between -2 and +2 SD.
My estimates--based on the plot as shown on a monitor--are crude, so you will want to repeat these (simple, quick) calculations with the actual data. Nevertheless, there is considerable evidence that your data when suitably re-expressed with a simple transformation will be close to normally distributed.
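The slope arithmetic above can be reproduced directly (a short Python check of the same numbers):

```python
# Key values read off the normal probability plot (at -2, 0, +2 SD)
lo, med, hi = 0.825, 0.90, 0.99

# Raw slopes per 2 SD are unequal: the upper half rises faster
print(round(hi - med, 3), round(med - lo, 3))  # 0.09 vs 0.075

# After the reciprocal (Box-Cox power -1), the two slopes match
r_lo, r_med, r_hi = 1 / lo, 1 / med, 1 / hi
print(round(r_lo - r_med, 3), round(r_med - r_hi, 3))  # both ~0.10
```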
|
Quantile-Quantile Plot with Unknown Distribution?
|
Your dataset clearly is not normal. (With this much data, any goodness of fit test will tell you that.) But you can read much more than that from the normal probability plot:
The generally smooth c
|
Quantile-Quantile Plot with Unknown Distribution?
Your dataset clearly is not normal. (With this much data, any goodness of fit test will tell you that.) But you can read much more than that from the normal probability plot:
The generally smooth curvature does not hint at a mixture structure.
The upper tail is too stretched out (values too high compared to the reference distribution).
The lower tail is too compressed (values also too high).
This suggests that a mild Box-Cox transformation will produce nearly-normal, or at least symmetric, data. To find it, consider some key values on this plot: the median, found above the x-value of 0, is about 0.90; +2 standard deviations is about 0.99; and -2 standard deviations is about 0.825. The nonlinearity is apparent from the simple calculations 0.99 - 0.90 = 0.09 whereas 0.90 - 0.825 = 0.075: the rise from the median to the upper tail is greater than the rise from the lower tail to the median. We can equalize the slopes by trying out some simple re-expressions of these three values only. For example, taking the reciprocals of the three key data values (Box-Cox power of -1) gives
1/0.825 = 1.21
1/0.90 = 1.11; 1.21 - 1.11 = 0.10 (new slope is 0.050 per SD)
1/0.99 = 1.01; 1.11 - 1.01 = 0.10 (0.050 per SD)
Because the slopes of the re-expressed values are now equal, we know the plot of reciprocals of the data will be approximately linear between -2 and +2 SDs. As a check, let's pick more points further out into the tails and see what the reciprocal does to them. I estimate that the value in the plot at -3 SD from the mean is around 0.79 and the value +3 SD from the mean is 1.05. The two slopes in question equal 0.053 and 0.052 per SD: close enough to each other and to the slopes found between -2 and +2 SD.
My estimates--based on the plot as shown on a monitor--are crude, so you will want to repeat these (simple, quick) calculations with the actual data. Nevertheless, there is considerable evidence that your data when suitably re-expressed with a simple transformation will be close to normally distributed.
|
Quantile-Quantile Plot with Unknown Distribution?
Your dataset clearly is not normal. (With this much data, any goodness of fit test will tell you that.) But you can read much more than that from the normal probability plot:
The generally smooth c
|
45,522
|
Quantile-Quantile Plot with Unknown Distribution?
|
You may want to take a look at the Anderson-Darling test for normality which empirically tests whether or not your data comes from a given distribution. @chl recommends looking at the scipy toolkit, specifically anderson() in morestats.py for an implementation.
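In current scipy the function is exposed as scipy.stats.anderson; a minimal sketch with clearly non-normal data:

```python
import numpy as np
from scipy.stats import anderson

rng = np.random.default_rng(5)

# Strongly skewed data: the A-D statistic far exceeds the 5% critical value
result = anderson(rng.chisquare(df=3, size=1000), dist='norm')
print(result.statistic > result.critical_values[2])  # True: normality rejected
```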
|
Quantile-Quantile Plot with Unknown Distribution?
|
You may want to take a look at the Anderson-Darling test for normality which empirically tests whether or not your data comes from a given distribution. @chl recommends looking at the scipy toolkit,
|
Quantile-Quantile Plot with Unknown Distribution?
You may want to take a look at the Anderson-Darling test for normality which empirically tests whether or not your data comes from a given distribution. @chl recommends looking at the scipy toolkit, specifically anderson() in morestats.py for an implementation.
|
Quantile-Quantile Plot with Unknown Distribution?
You may want to take a look at the Anderson-Darling test for normality which empirically tests whether or not your data comes from a given distribution. @chl recommends looking at the scipy toolkit,
|
45,523
|
Method to reliably determine abnormal statistical values
|
Your use of the stddev indicates you look at every variable separately. If you look at them together, you might have more chance. An outlier in one dimension can be a coincidence; an outlier in more dimensions is more surely an anomaly. I don't know much about games, but I reckon that you could find extra variables like traveled distance in the game and so on.
You can use outlier theory for detecting anomalies. A very naive way of looking for outliers is using the Mahalanobis distance. This is a measure that takes into account the spread of your data, and calculates the relative distance from the center. It will be less sensitive to outliers in one statistic, but can be seen as a way of finding gamers where the combination of the statistics is odd.
A similar approach is building a model and taking a look at the error terms. This does essentially the same: it looks for gamers that don't fit the general pattern. It's a technique that's also used in financial services to find fraud cases. The model can go from a basic linear model to more complex models. If you apply your algorithm to the error terms of the model fitted without player i, you essentially calculate something similar to Cook's distance of a certain player. In combination with the DFFITS measure and the leverage, it is often used for detection of outliers and/or influential points in regression.
You could as well use supervised classification: You train an algorithm with genuine gamers and known cheaters. There's a multitude of techniques available there, starting from neural networks and classification trees to support vector machines and random forests.
Genetic algorithms are used more and more as well, as they can improve as time goes by. If you check the assumed cheaters, you could, much like a spam filter, correct the wrongly classified gamers. The algorithm will continuously learn to better predict when a gamer is a cheater.
As mbq mentioned, without example data it's impossible to give you an algorithm; I don't even know on which measurements one can work. But this should give you some ideas about the multitude of available methods, from very naive to pretty complex. Much can be learned from fraud detection if you'd like to google around a bit further.
A starter could be this article by Wheeler and Aitken. Another interesting overview of possible techniques is found in this article by Kou et al. (alternative link to publication)
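A sketch of the Mahalanobis idea (Python with numpy; the two per-player statistics and their covariance are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-player statistics (accuracy, score), correlated in normal play
normal_play = rng.multivariate_normal(
    mean=[0.30, 50.0],
    cov=[[0.01, 0.8],
         [0.8, 100.0]],
    size=500)

# A cheater whose stats are unremarkable one at a time (each ~1.5 SD),
# but whose combination breaks the usual accuracy/score pattern
cheater = np.array([0.45, 35.0])

mu = normal_play.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_play, rowvar=False))

def mahalanobis(x):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

typical = np.mean([mahalanobis(p) for p in normal_play])
print(mahalanobis(cheater), typical)  # the cheater's distance is several times larger
```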
|
Method to reliably determine abnormal statistical values
|
Your use of the stddev indicates you look at every variable separately. If you look at them together, you might have more chance. An outlier in one dimension can be a coincidence; an outlier in more dim
|
Method to reliably determine abnormal statistical values
Your use of the stddev indicates you look at every variable separately. If you look at them together, you might have more chance. An outlier in one dimension can be a coincidence; an outlier in more dimensions is more surely an anomaly. I don't know much about games, but I reckon that you could find extra variables like traveled distance in the game and so on.
You can use outlier theory for detecting anomalies. A very naive way of looking for outliers is using the Mahalanobis distance. This is a measure that takes into account the spread of your data, and calculates the relative distance from the center. It will be less sensitive to outliers in one statistic, but can be seen as a way of finding gamers where the combination of the statistics is odd.
A similar approach is building a model and taking a look at the error terms. This does essentially the same: it looks for gamers that don't fit the general pattern. It's a technique that's also used in financial services to find fraud cases. The model can go from a basic linear model to more complex models. If you apply your algorithm to the error terms of the model fitted without player i, you essentially calculate something similar to Cook's distance of a certain player. In combination with the DFFITS measure and the leverage, it is often used for detection of outliers and/or influential points in regression.
You could as well use supervised classification: You train an algorithm with genuine gamers and known cheaters. There's a multitude of techniques available there, starting from neural networks and classification trees to support vector machines and random forests.
Genetic algorithms are used more and more as well, as they can improve as time goes by. If you check the assumed cheaters, you could, much like a spam filter, correct the wrongly classified gamers. The algorithm will continuously learn to better predict when a gamer is a cheater.
As mbq mentioned, without example data it's impossible to give you an algorithm; I don't even know on which measurements one can work. But this should give you some ideas about the multitude of available methods, from very naive to pretty complex. Much can be learned from fraud detection if you'd like to google around a bit further.
A starter could be this article by Wheeler and Aitken. Another interesting overview of possible techniques is found in this article by Kou et al. (alternative link to publication)
|
Method to reliably determine abnormal statistical values
Your use of the stddev indicates you look at every variable separately. If you look at them together, you might have more chance. An outlier in one dimension can be a coincidence; an outlier in more dim
|
45,524
|
Method to reliably determine abnormal statistical values
|
I will repost the answer I gave on math.stackexchange:
Your question needs some more information:
How is their score generated (what kind of game is it)? What should your non-cheating data look like? How do people cheat? How will their score be different (in a statistical sense) when they are not cheating? Do you know roughly the proportion that are cheating? Or is that something you also want to find out?
I would also look at outlier detection algorithms: wikipedia looks useful on this topic (link). Using a Q-Q plot on your data may also be useful if your non-cheating data should be approximately Normally distributed; points that are significantly above the line might be cheaters.
|
Method to reliably determine abnormal statistical values
|
I will repost the answer I gave on math.stackexchange:
Your question needs some more information:
How is their score generated (what kind of game is it)? What should your non-cheating data look like?
|
Method to reliably determine abnormal statistical values
I will repost the answer I gave on math.stackexchange:
Your question needs some more information:
How is their score generated (what kind of game is it)? What should your non-cheating data look like? How do people cheat? How will their score be different (in a statistical sense) when they are not cheating? Do you know roughly the proportion that are cheating? Or is that something you also want to find out?
I would also look at outlier detection algorithms: wikipedia looks useful on this topic (link). Using a Q-Q plot on your data may also be useful if your non-cheating data should be approximately Normally distributed; points that are significantly above the line might be cheaters.
|
Method to reliably determine abnormal statistical values
I will repost the answer I gave on math.stackexchange:
Your question needs some more information:
How is their score generated (what kind of game is it)? What should your non-cheating data look like?
|
45,525
|
Repeatability and measurement error from and between observers
|
What you describe is a reliability study where each subject is going to be assessed by the same three raters on two occasions. Analysis can be done separately on the two outcomes (length and weight, though I assume they will be highly correlated and you're not interested in how this correlation is reflected in raters' assessments). Estimating measurement reliability can be done in two ways:
The original approach (as described in Fleiss, 1987) relies on the analysis of variance components through an ANOVA table, where we assume no subject by rater interaction (the corresponding SS is constrained to 0) -- of course, you won't look at $p$-values, but at the MSs corresponding to relevant effects;
A mixed-effects model allows one to derive variance estimates, considering time as a fixed effect and subject and/or rater as random effect(s) (the latter distinction depends on whether you consider your three observers as sampled from a pool of potential raters or not -- if the rater effect is small, the two analyses will yield much the same estimate for outcome reliability).
In both cases, you will be able to derive a single intraclass correlation coefficient, which is a measure of reliability of the assessments (under the Generalizability Theory, we would call them generalizability coefficients), which would answer your second question. The first question deals with a potential effect of time (considered as a fixed effect), which I discussed here, Reliability in Elicitation Exercise. More details can be found in Dunn (1989) or Brennan (2001).
I have an R example script on Github which illustrates both approaches. I think it would not be too difficult to incorporate rater effects in the model.
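For readers who want the arithmetic rather than the script, the single-rater ICC(2,1) can be computed directly from the two-way ANOVA mean squares. A minimal Python sketch (the ratings matrix and function name are invented for illustration, not taken from the script above):

```python
# Sketch of ICC(2,1) from a two-way ANOVA decomposition (subjects x raters,
# one observation per cell). Toy data; all names are illustrative.

def icc_2_1(ratings):
    n = len(ratings)          # subjects (rows)
    k = len(ratings[0])       # raters (columns)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Mean squares: rows (subjects), columns (raters), residual error
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)
    sse = sum((ratings[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))

    # Two-way random-effects, absolute-agreement, single-rater ICC
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Three raters assess five subjects; raters 2 and 3 shift slightly upward.
scores = [[9.0, 9.5, 9.2],
          [7.0, 7.4, 7.1],
          [5.0, 5.6, 5.2],
          [8.0, 8.3, 8.1],
          [6.0, 6.5, 6.3]]
icc = icc_2_1(scores)
```

With perfectly agreeing raters the rater and residual mean squares vanish and the coefficient is exactly 1; consistent rater shifts, as in the toy data, pull it slightly below that.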
References
Fleiss, J.L. (1987). The design and analysis of clinical experiments. New York: Wiley.
Dunn, G. (1989). Design and analysis of reliability studies. Oxford
Brennan, R.L. (2001). Generalizability Theory. Springer
|
45,526
|
Repeatability and measurement error from and between observers
|
You need to repeat the same process separately for length and weight, as these are completely separate outcomes with different units and methods of measurement.
I'd start, as so often, by plotting some exploratory graphs. In this case a set of Bland–Altman (difference vs. average) plots, one for each observer. If the plots for each observer look similar, I'd do a combined plot too. I'd look for any patterns in these plots, e.g. does the variability in the difference stay reasonably constant with the mean? (If not, I might consider some variance-stabilizing transformation.) For each observer I'd then calculate the mean difference between early and late readings, to quantify whether there's a systematic difference, and the standard deviation of the difference as a way of quantifying how much each observer's measurements vary between late and early readings. I might then conduct a formal statistical test for the equality of the variances of the differences, such as the Brown–Forsythe test. If there's no strong evidence that the variances differ substantially between observers, I'd move on to ANOVA as I see has just been described by chl.
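The per-observer summaries above are a one-liner each. A hedged Python sketch (the paired readings are made up, not the poster's data):

```python
from statistics import mean, stdev

def bland_altman(early, late):
    """Mean difference (bias) and approximate 95% limits of agreement."""
    diffs = [l - e for e, l in zip(early, late)]
    bias = mean(diffs)
    sd = stdev(diffs)                     # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical early/late readings by one observer on six subjects
early = [10.1, 12.3, 9.8, 11.5, 10.9, 13.0]
late  = [10.4, 12.1, 10.0, 11.9, 11.0, 13.3]
bias, lo, hi = bland_altman(early, late)
```

These are exactly the numbers a Bland–Altman plot displays as horizontal reference lines (bias and limits of agreement).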
|
45,527
|
Data visualisation- summarise 190 means and response rates
|
I find a heatmap to be one of the most effective ways of summarizing large amounts of multi-dimensional data in a confined space. The LearnR blog has a nice example of creating one in ggplot2.
|
45,528
|
Data visualisation- summarise 190 means and response rates
|
To give you a few more things to look at:
Principal components - look at some previous answers about PC. In particular, this answer may be helpful.
Cluster analysis. This page gives quite a nice overview of doing it in R.
I would recommend trying as many things as possible and see what comes out. Once you have your data in R in a reasonable format, it shouldn't take too long to try these things.
|
45,529
|
Data visualisation- summarise 190 means and response rates
|
I would suggest you check out either box-plots (if you have an intro text to R, box plots always seem to be one of the first plots they use), or you can plot the means of each group on the Y axis and use the X-axis to represent each of your 190 work areas (and then maybe put error bars representing a confidence interval for the estimate of the mean).
You can plot each of the likert scales next to each other, and use a different color to represent the means, and as long as you choose distinct colors and the same order for your likert scales across work areas people will be able to distinguish them.
But I personally would only plot the scales next to each other if they are expected to have some sort of relationship with each other (if scale A is high I might expect scale B to be low). If they are not you could panel the charts on top of each other (check out the lattice package in R, and here is what I think is a good example with sample R code), and so you only need to label one X-axis (this also allows you to use different Y-axis scales if the scales are not easily plotted on all the same Y-levels, although by your description this doesn't seem to be the case). You could also include response rate as one of the panels (maybe represented as a bar).
What is difficult with 190 different groups is that you will have trouble distinguishing individual work groups unless you highlight specific ones, but any chart with all of the groups will be excellent for examining overall trends (and maybe spotting outliers). Also, if your work groups have no logical ordering or higher-order groupings, their ordering on the axis will be arbitrary. You could order according to values on one of the scales (or according to response rate).
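The ordering idea is easy to prototype before committing to a full plot. A Python sketch with invented groups (the normal-approximation CI used here is rough for Likert means at small n):

```python
from math import sqrt
from statistics import mean, stdev

def group_summaries(groups):
    """Per-group mean and approximate 95% CI, sorted by mean (descending)."""
    out = []
    for name, scores in groups.items():
        m = mean(scores)
        half = 1.96 * stdev(scores) / sqrt(len(scores))  # normal approximation
        out.append((name, m, m - half, m + half))
    out.sort(key=lambda row: row[1], reverse=True)       # order by group mean
    return out

# Three hypothetical work areas with 5-point Likert responses
groups = {"Area A": [4, 5, 3, 4, 4],
          "Area B": [2, 3, 2, 3, 2],
          "Area C": [5, 4, 5, 5, 4]}
summary = group_summaries(groups)
```

The sorted tuples map directly onto a means-with-error-bars plot: group names on the x-axis in sorted order, means as points, CI bounds as bars.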
Also, I am personally learning R at the moment, and I would highly suggest you check out the Use R! series by Springer. The book A Beginner's Guide to R is one of the best intro texts I have encountered, and the series has books on the ggplot2 and lattice packages that would likely help you.
Finally, if you post some examples of plots and the code used to make them, some of the more R-savvy crowd on the forum will likely be able to give you suggestions. When you do finish, come back and post your results!
HTH and good luck.
|
45,530
|
Series expansion of a density function
|
You can also use Edgeworth series, if your random variable has a finite mean and variance, which expands the CDF of your random variable in terms of the Gaussian CDF. At first glance it's not quite as tidy conceptually as using a mixture model, but the derivation is quite pretty and it gives you a closed form with very fast decay in the tail terms.
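To make the expansion concrete, here is a sketch of the one-term Edgeworth correction for a standardized sum of exponentials, where the exact CDF (a gamma with integer shape) is available in closed form for comparison. The choice of n and of the evaluation point is arbitrary:

```python
from math import erf, exp, factorial, pi, sqrt

def phi(x):   # standard normal pdf
    return exp(-x * x / 2) / sqrt(2 * pi)

def Phi(x):   # standard normal cdf
    return 0.5 * (1 + erf(x / sqrt(2)))

def edgeworth_cdf(z, skew):
    """One-term Edgeworth expansion: Phi(z) - phi(z) * (skew/6) * (z^2 - 1)."""
    return Phi(z) - phi(z) * (skew / 6) * (z * z - 1)

# Sum of n iid Exp(1) is Gamma(n, 1): mean n, variance n, skewness 2/sqrt(n)
n = 20

def gamma_cdf(x):
    """Exact CDF of Gamma(n, 1) for integer n, via the Erlang formula."""
    return 1 - exp(-x) * sum(x ** k / factorial(k) for k in range(n))

z = 0.5                      # evaluation point in standardized units
x = n + z * sqrt(n)          # back on the original scale
exact = gamma_cdf(x)
plain = Phi(z)
corrected = edgeworth_cdf(z, 2 / sqrt(n))
```

At z = 0.5 the corrected value lands much closer to the exact gamma CDF than the plain normal approximation does, which is the "very fast decay in the tail terms" at work.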
|
45,531
|
Series expansion of a density function
|
The histogram density estimator estimates the density with a sum of piecewise-constant functions (uniform densities).
KDE uses a sum of smooth functions (the Gaussian kernel is one example); as long as they are positive, the sum can be turned into a density by normalization.
In statistics, "mixture" refers to a convex combination of densities.
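The "sum of smooth functions" description is literal: a Gaussian KDE is just an equal-weight mixture of normals centered at the observations, normalized to integrate to one. A minimal Python sketch (bandwidth fixed by hand, purely illustrative):

```python
from math import exp, pi, sqrt

def gaussian_kde(data, bandwidth):
    """Return a density: equal-weight mixture of N(x_i, bandwidth^2)."""
    n = len(data)
    def density(x):
        return sum(exp(-((x - xi) / bandwidth) ** 2 / 2)
                   for xi in data) / (n * bandwidth * sqrt(2 * pi))
    return density

# Four hypothetical observations; bandwidth chosen by eye
f = gaussian_kde([-1.0, 0.0, 0.2, 1.1], bandwidth=0.5)
```

Replacing the Gaussian kernel with a uniform kernel recovers (up to bin placement) the histogram estimator described above.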
|
45,532
|
Series expansion of a density function
|
You can do this with mixture modeling. There are a number of R packages on CRAN for doing this. Search for "mixture" at http://cran.r-project.org/web/packages/
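If you want to see what such a package does under the hood, the basic EM iteration for a two-component 1-D Gaussian mixture fits in a short script. This Python sketch is illustrative only and has none of the safeguards (multiple starts, degenerate-variance handling) a CRAN package would provide:

```python
import random
from math import exp, pi, sqrt

def normal_pdf(x, mu, sd):
    return exp(-((x - mu) / sd) ** 2 / 2) / (sd * sqrt(2 * pi))

def em_two_gaussians(data, mu, sd=(1.0, 1.0), w=0.5, iters=50):
    """Plain EM for a two-component mixture; returns (weight, mu1, mu2)."""
    (mu1, mu2), (sd1, sd2) = mu, sd
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        r = []
        for x in data:
            a = w * normal_pdf(x, mu1, sd1)
            b = (1 - w) * normal_pdf(x, mu2, sd2)
            r.append(a / (a + b))
        # M-step: reweighted means, SDs and mixing weight
        s1, s2 = sum(r), len(data) - sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / s2
        sd1 = sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / s1)
        sd2 = sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / s2)
        w = s1 / len(data)
    return w, mu1, mu2

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(10, 1) for _ in range(200)])
w, m1, m2 = em_two_gaussians(data, mu=(1.0, 9.0))
```

With well-separated components, as here, the iteration recovers the component means and the roughly equal mixing weight.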
|
45,533
|
Suggested R packages for frontier estimation or segmentation of hyperspectral images
|
I am afraid there is none; during my little adventure with such data we just converted it to data-frame form, added some extra attributes made from neighborhoods of pixels, and used standard methods. Still, the packages ripa and hyperSpec might be useful.
As for other software, I've got the impression that most of the sensible applications are commercial.
|
45,534
|
Suggested R packages for frontier estimation or segmentation of hyperspectral images
|
Not an R package, but D. A. Landgrebe from Purdue (author of Signal Theory Methods in Multispectral Remote Sensing) has sponsored the MultiSpec freeware. It's a rather clunky GUI but gets the job done for most of the common hyperspectral algorithms.
|
45,535
|
Suggested R packages for frontier estimation or segmentation of hyperspectral images
|
The best place to look for free/open-source capabilities of this nature is GRASS GIS. The image processing manual is here. Because this is constantly undergoing development, it would be worthwhile posting an inquiry on one of the GRASS user lists (found through links on the home page here).
|
45,536
|
Suggested R packages for frontier estimation or segmentation of hyperspectral images
|
This is a very late response, so this may no longer be of interest, but I am working on putting together an R library with various hyperspectral image processing capabilities. At the moment my focus has been on endmember detection and unmixing. If this is still something which is of interest please let me know. My hope is to publish a beta version to CRAN or R-Forge in the near future but I would be happy to send out the code itself.
Best,
Dan
|
45,537
|
Survival analysis with only censored event times?
|
It's not that you don't have events, it's just that you don't have exact times for the events. You do, however, have a lower and an upper limit to the time to each event. That's what's called "interval censored" data in general. In your situation, there's only 1 observation time per individual, so you have "current status" data as @Ben said in a comment. As you note, you have left-censored event times for cases with events (lower limit of 0) and right-censored event times for cases without events (upper limit of +Infinity).
Some types of survival analysis with such data are relatively straightforward. Let's take the reproducible data set provided in the answer from @AdamO (+1) and reformat it. Specify "L" and "R" as the left and right limits of the interval. It turns out that setting the lower (left) limit for left-censored event times to -Inf instead of 0 helps with some functions.
set.seed(123)
n <- 100
x <- rexp(n, 1/50)
t <- 10 * (1 + 1:n %/% 10.1) ## assigned sacrifice timepoint
dissectData <- data.frame(dissectTime=t,event=x<t)
dissectData[,"L"] <- -Inf
dissectData[,"R"] <- Inf
dissectData[dissectData$event==FALSE,"L"] <- dissectData[dissectData$event==FALSE,"dissectTime"]
dissectData[dissectData$event==TRUE,"R"] <- dissectData[dissectData$event==TRUE,"dissectTime"]
That puts data into a form used by the "interval2" type of Surv object in the R survival package. That allows for simple descriptive survfit() processing (for 1 or more groups) and for parametric survival modeling. For example:
library(survival)
plot(survfit(Surv(L, R, type="interval2") ~ 1, data = dissectData), bty="n",xlab="Time",ylab="Fraction Surviving")
curve(exp(-x/50),from=0,to=100,add=TRUE,col="red")
shows the estimated survival curve and its 95% confidence intervals (in black) along with the original continuous function used to generate the data sample (in red).
You can fit a parametric survival model this way, also. For example:
survreg(Surv(L, R, type="interval2") ~ 1, data = dissectData)
fits the default Weibull model to the data. With a more complicated data set including treatment groups and other outcome-associated covariates, you can specify (functions of) those as predictors instead of the simple ~1 intercept-only predictor used here for a single group. There are several other choices for survival distributions available, too.
You can't, however, fit a semi-parametric model like a Cox survival model via the survival package. For that you need specialized tools like those in the R icenReg package. That package works directly with "interval2" data. It also provides for Bayesian models like those recommended by @Björn in another answer (+1).
Your small set of known, fixed time points does recommend a binomial regression approach, but you didn't provide enough details to know if your model correctly takes the left censoring into account. A simple binomial model of fractions of animals with tumors over time is a model of tumor prevalence. That's OK for some purposes, and it can form the basis for tests of treatment effects with covariate adjustment. Prevalence data, however, leads to problems in interpretation in terms of tumor onset if the observed prevalence decreases at a later time period.
The answer from @AdamO provides a good way to deal with that problem, in a way that forces the cumulative hazard to be non-decreasing. Tutz and Schmid discuss ways to handle interval censoring in a binomial regression context in Section 3.7, "Subject-Specific Interval Censoring," of their Modeling Discrete Time-to-Event Data book.
|
45,538
|
Survival analysis with only censored event times?
|
This is a case of right censoring and (if there had been events) interval censoring. I.e. when a fish is event-free when dissected, the event time is right censored (assuming all fish would eventually have the event). If, upon dissection, a fish has the event, the event time is interval censored to lie between time 0 and the time of dissection.
So far, so easy. One tricky bit is when you don't have any events at all. E.g. simple maximum likelihood estimation based on asymptotics will break down here. However, firstly there are exact methods for some situations (e.g. if you assume the hazard rate to be constant over time) and secondly (more flexible and probably more useful) there's Bayesian survival analysis. Going Bayesian here with informative (based on the best available prior information/what you elicit from experts) or weakly informative (wider than your prior assumptions suggest) prior distributions is very attractive.
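For the constant-hazard case, the current-status likelihood is one-dimensional and can be maximized directly: an event-free fish at time t contributes e^(-λt), a fish with the event contributes 1 - e^(-λt). A Python sketch with simulated data (the rate 1/50 and the sacrifice grid are invented for illustration):

```python
import random
from math import exp, log

def loglik(lam, times, events):
    """Current-status log-likelihood under a constant hazard lam."""
    ll = 0.0
    for t, d in zip(times, events):
        ll += log(1 - exp(-lam * t)) if d else -lam * t
    return ll

def mle_rate(times, events, lo=1e-6, hi=1.0, iters=100):
    """Ternary search; valid because each log-likelihood term is concave."""
    for _ in range(iters):
        a, b = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if loglik(a, times, events) < loglik(b, times, events):
            lo = a
        else:
            hi = b
    return (lo + hi) / 2

random.seed(42)
true_rate, n = 1 / 50, 2000
times = [10 * (1 + i % 10) for i in range(n)]   # sacrifice at 10, 20, ..., 100
events = [random.expovariate(true_rate) < t for t in times]
lam_hat = mle_rate(times, events)
```

If all fish were event-free the maximum would sit at the lower search boundary, which is exactly the breakdown Björn describes and where the Bayesian route pays off.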
|
45,539
|
Survival analysis with only censored event times?
|
Since the observation times are non-random, logistic regression can be used to estimate the event rate per each 24h interval. You can even just use proportions tests to estimate CIs, unless there are stratification features you want to implement, like species, weight, etc.
For fish sacrificed at time 1, denote the probability of event as $p_1$. At time 2, the event occurrence has probability $p_1 + p_2$. And so on. A standard survival analysis does not apply in this case because fish who were sacrificed at time 2 were not known to be event free at time 1, and so it would be inappropriate to include them in the denominator of "risk set" as non-events for the time 1 stratum as would be typical of a Cox model. This considerably simplifies the model, although knowing the event status for surviving fish at each time point would inform different models that could be considerably more powerful.
You can use these probabilities to report the cumulative incidence of the event. See R implementation to make things crystal clear.
set.seed(123)
n <- 100
x <- rexp(n, 1/50)
t <- 10*(1 + 1:n %/% 10.1)  ## assigned sacrifice timepoint
p <- tapply(x < t, t, mean)
plot(unique(t), p, xlim=c(0, 100), ylim=c(0, 1), type='b')
segments(
unique(t),
p-sqrt(p*(1-p)/10),
unique(t),
p+sqrt(p*(1-p)/10)
)
curve(pexp(x, 1/50), add=T, lty=2)
From this basic approach there are a number of interesting and more sophisticated things to consider.
Is there a single-pass modeling procedure that constrains the empirical cumulative incidence to be strictly increasing? Consider maximum likelihood constraining $p_1 < (p_1+p_2) < \ldots $.
Alternately, can a fitting procedure be used for the time-based event incidence to produce a stepwise increasing curve?
maximum likelihood approach
A convenient way to constrain the probability so that $p_2, p_3, \ldots >0 $ while keeping $p_1 + p_2 + \ldots < 1$ is to use the log odds. The result is somewhat better than the above approach.
## parameterize the log odds difference, constrain non-index LO to be positive, i.e. increasing probability
lodiff.to.p <- function(lodiff ) plogis(cumsum(c(lodiff[1], pmax(0, lodiff)[-1])))
negloglik <- function(lodiff) ## evaluate the joint likelihood
-sum(dbinom(x=tapply(x<t, t, sum), size=table(t), prob=lodiff.to.p(lodiff), log=T))
mle <- nlm(negloglik, p=c(-3, rep(0.01, 9))) ## lucky guess
plot(c(0,unique(t)), c(0,lodiff.to.p(mle$estimate)), col='red', type='b', xlab='Time', ylab='Cumulative incidence')
curve(pexp(x, 1/50), add=T, lty=2)
curve fitting
We might use the empirical probability estimates to fit a monotonic increasing curve via least-squares. This approach is similar to the above, just swap binomial likelihood with normal.
negloglik <- function(mudiff) {
mu <- pmin(1, cumsum(pmax(0, mudiff)))
-sum(dnorm(x=p, mean = mu, sd=1, log=T))
}
mle <- nlm(negloglik, p=rep(0.1, 10))
mu <- pmin(1, cumsum(pmax(0, mle$estimate)))
plot(p, xlim=c(0, 10), ylim=c(0, 1), xlab='Time', ylab='Proportion with event')
lines(0:10, c(0,mu))
|
45,540
|
Alternative to Friedman Test in R
|
The tests you cited are not appropriate due to the presence of repeated measures. The common way to deal with repeated measures is via mixed-effects linear models.
I'm considering here the most general model, borrowing from one of your earlier posts.
> leach_lme <- lme(fixed = cl_conc ~ soil_type*treatment*days,
+ random =~1|core_id, data = leach2,
+ method = "ML")
> anova(leach_lme)
numDF denDF F-value p-value
(Intercept) 1 80 677.4590 <.0001
soil_type 3 32 6.1510 0.0020
treatment 3 32 109.4603 <.0001
days 1 80 17.3933 0.0001
soil_type:treatment 9 32 2.2588 0.0436
soil_type:days 3 80 3.1330 0.0301
treatment:days 3 80 1.0310 0.3834
soil_type:treatment:days 9 80 3.9676 0.0003
As you can see, the three-way interaction is significant, and so is the two-way interaction soil_type:days, etc. Linear mixed-effects is a parametric model, so as usual in the context of a linear model, one needs to check that the residuals are well-behaved.
As per request in the comments, here is a quick residual check.
plot(leach_lme)
The message here is that residuals may be heteroscedastic. Now let's log-transform the response
leach_lme2 <- lme(fixed = log(cl_conc) ~ soil_type*treatment*days,
random =~1|core_id, data = leach2,
method = "ML")
plot(leach_lme2)
Apart from a single observation which appears to be far from the bulk of the data, the residuals look fine to me, i.e. homoscedastic. The QQ-plot as well (not shown here, but you can plot it yourself via qqnorm(leach_lme2)) doesn't seem that bad.
P.S. In this answer, I treated days as a numerical variable. To treat it as a factor, as you seem to be interested in (thanks Sal Mangiacifo for pointing it out), use
leach2$days <- factor(leach2$days)
and redo the analyses. The output will be slightly different compared to the one shown above; there will be two additional parameters to be estimated.
|
45,541
|
Why is Neyman-Pearson lemma a lemma or is it a theorem?
|
As Thomas Lumley asserted, Neyman and Pearson in $\rm [I]$ didn't mention a lemma. They frequently used the words principle and basis while deducing the critical regions in various cases.
When was the first time it was marked as a lemma?
$\bullet$ Wilks in his book did outline the theory but again refrained from calling it a lemma.
$\bullet$ Cramér in his book never mentioned any lemma but explained the "basic idea of Neyman-Pearson theory".
$\bullet$ Lehmann termed it while "formaliz[ing] in the following theorem, the fundamental lemma of Neyman and Pearson".
$\bullet$ Kendall & Stuart did use the term while writing "the examples we have given so far of the use of the Neyman-Pearson Lemma ..." and the "lemma due to Neyman and Pearson ..."
$\bullet$ In $\rm [VI],$ the authors detailed a Lemma, namely the Generalized NP Lemma we are now acquainted with.
Again, I cannot ascertain with absolute certainty whether this was the first time the word was introduced, but as of now, it seems to be.
References:
$\rm [I]$ Neyman, J., & Pearson, E. S. ($1933$). On the Problem of the Most Efficient Tests of Statistical Hypotheses. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, $231(694-706), ~289–337.$ doi:10.1098/rsta.1933.0009
$\rm [II]$ Mathematical Statistics, S. S. Wilks, Princeton
University Press, $1943,$ sec. $7.3,$ p. $152.$
$\rm [III]$ Mathematical Methods of Statistics, Harald Cramér, Princeton
University Press, $1946,$ sec. $35.1,$ p. $527.$
$\rm [IV]$ Testing Statistical Hypotheses, E. L. Lehmann, John Wiley & Sons, $1959,$ sec. $3.2, $ p. $64.$
$\rm [V]$ The Advanced Theory of Statistics: Inference and Relationship, Maurice G. Kendall, Alan Stuart, Hafner Publishing Company, $1961,$ sec. $22.10,$ p. $166.$
$\rm [VI]$ Statistical Research Memoirs: Volume $1,$ University College, London, Department of Statistics, $1936,$ p. $11.$
|
45,542
|
Why is Neyman-Pearson lemma a lemma or is it a theorem?
|
What people often describe as the Neyman-Pearson lemma is a result proven by the lemma but not the lemma itself. The description of the lemma on Cross Validated, for instance, is:
A theorem stating that likelihood ratio test is the most powerful test of point null hypothesis against point alternative hypothesis
However, the lemma is a more abstract and technical underlying result.
The region $\omega$ maximises $\int_{w \in \omega} g(w) dw$ subject to the constraints $\int_{w \in \omega} f_i(w) dw = c_i$, if and only if it exists and if for some constants $k_i$ we have that $g > \sum k_i f_i$ everywhere inside the region and $g < \sum k_i f_i$ everywhere outside the region.
(You can see it as a sort of equivalent to Lagrange multipliers)
This Lemma is not only applied to the case of likelihood ratio tests but also to find optimal (most powerful) critical regions of other types.
The 1933 version
Probably the first occurrence is in 1933. A proof for the lemma is given but it is not yet explicitly named as a lemma or theorem.
Neyman, Jerzy, and Egon Sharpe Pearson. "IX. On the problem of the most efficient tests of statistical hypotheses." Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 231.694-706 (1933): 289-337.
https://doi.org/10.1098/rsta.1933.0009
The 1935 version
An early reference to the lemma occurs in 1935. It is in French and doesn't yet speak of a lemma, but it speaks of a general result (résultat général). It actually references not the 1933 article but the forthcoming 1936 article (with the remark 'in press').
Neyman, Jerzy. "Sur la vérification des hypothèses statistiques composées." Bulletin de la Société Mathématique de France 63 (1935): 246-266.
Ce problème peut être résolu en appliquant le résultat général que voici. ("This problem can be solved by applying the following general result.")
The 1936 version
In 1936 there is reference to the proposition that explicitly calls it a lemma
London (England). et al. Statistical Research Memoirs. 1936.
https://books.google.ch/books?id=0alju-yale4C&pg=PA213
The 1939 version
In 1939 Wald speaks about a general principle in a theory. Here it is no longer about a lemma used in a theorem. Possibly this could be the start where the meaning of the lemma started to switch to the statement about the optimal power for likelihood ratio tests?
Wald, Abraham. "Contributions to the theory of statistical estimation and testing hypotheses." The Annals of Mathematical Statistics 10.4 (1939): 299-326.
In the Neyman-Pearson theory two types of hypotheses are considered. Let $\theta =\theta_1$ be the hypothesis to be tested, where $\theta_1$ denotes a certain point of the parameter space. Denote this hypothesis by $H_1$ and the hypothesis $\theta \neq \theta_1$ by $\bar{H}$. The type I error is that which is made by rejecting $H_1$ when it is true. The type II error is that which is made by accepting $H_1$ when it is false. The fundamental principle in the Neyman-Pearson theory can be formulated as follows: Among all critical regions (regions of rejection of $H_1$, i.e. regions of acceptance of $\bar{H}$) for which the probability of type I error is equal to a given constant $\alpha$, we have to choose that region for which the probability of type II error is a minimum.
|
45,543
|
Compute median of continuous distribution using integrate() in R
|
CAUTION!
As was pointed out and explained by whuber in the comments, the code below does not check whether it is fed a density that integrates to one (or to some other finite value which we could use to renormalize). It is therefore useful to call ff(1)+0.5 (with 1 replaced by the upper end of the support for a given density) as a sanity check!
E.g. a previous version of the question had an exponent of 2 rather than 1/2, which has antiderivative
$$
\frac{1}{2(1-x)}+C
$$
For such a function, the integral over $[0,1)$ diverges, so it is not a proper density and the median is not defined: there cannot be a value with 0.5 of the probability mass to the left and to the right of it.
One could play around with upperbound ever closer to one to illustrate why...:
upperbound <- .99
x <- seq(0, upperbound, .00001)
ff <- function(x) 1/(2*(1-x)^2)
plot(x, ff(x), type="l")
The following code works for the density of the OP (which, despite the "Beta(0,0.5)" label, is in fact the Beta(1, 0.5) density) and, with suitable modification of the support, also for other proper densities.
ff1 <- function(x) 1/(2*(1-x)^(1/2))
ff <- function(m) integrate(ff1, 0, m)$value-0.5
uniroot(ff, c(0, 1))
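For this particular density the median is also available in closed form, which gives a handy check on the numeric root: the antiderivative of ff1 is $1-\sqrt{1-x}$, so the CDF is $F(x)=1-\sqrt{1-x}$ and $F(m)=0.5$ yields $m=3/4$.

```r
ff1 <- function(x) 1/(2*(1-x)^(1/2))
ff <- function(m) integrate(ff1, 0, m)$value - 0.5
m_num <- uniroot(ff, c(0, 1))$root
m_num            # numeric root, ~0.75
1 - (1 - 0.5)^2  # closed form: F(x) = 1 - sqrt(1-x) inverts to x = 1 - (1-F)^2
```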
|
45,544
|
Normalization of conditional probabilities
|
Not necessarily. Here is a counter-example: consider the vectors
$$
p = \begin{bmatrix}
0.5 \\
0.5
\end{bmatrix}
$$
and
$$
\pi = \begin{bmatrix}
0.5 \\
0.5
\end{bmatrix}
$$
Then the matrix
$$
W = \begin{bmatrix}
1 & 0 \\
1 & 0
\end{bmatrix}
$$
verifies $Wp = \pi$, while the elements in each of its columns do not sum to 1.
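A quick numeric check of this counter-example:

```r
p <- c(0.5, 0.5)
W <- matrix(c(1, 0,
              1, 0), nrow = 2, byrow = TRUE)
as.vector(W %*% p)  # (0.5, 0.5), i.e. equal to pi
colSums(W)          # columns sum to 2 and 0, not to 1
```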
|
45,545
|
Normalization of conditional probabilities
|
The answer by @CamilleGontier has pushed me in the right direction: what is implicit in the question (but was not explicitly stated in the OP) is that it should work for an arbitrary vector $p_i$ (as long as it satisfies conditions $p_i>0$ and $\sum_i p_i=1$.) We can then consider a set of linearly independent vectors which have only one non-zero component:
$$
p_i^{(i_0)}=\delta_{i,i_0}$$
Then for each such vector we have corresponding
$$
\pi_\alpha^{(i_0)}=W_{\alpha i_0},$$
and the normalization condition for $\pi_\alpha$ automatically translates into the condition for matrix $W$ :
$$\sum_\alpha\pi_\alpha^{(i_0)} = \sum_\alpha W_{\alpha i_0}=1
$$
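A small numeric sketch of the argument (with a hypothetical column-normalized $3\times3$ matrix): multiplying $W$ by the delta vectors picks out its columns, and a column-stochastic $W$ then maps any probability vector to a properly normalized one.

```r
set.seed(1)
W <- matrix(runif(9), 3, 3)
W <- sweep(W, 2, colSums(W), "/")  # normalize each column to sum to 1

## Feeding in the basis (delta) vectors picks out the columns of W:
for (i0 in 1:3) {
  delta <- replace(numeric(3), i0, 1)
  stopifnot(isTRUE(all.equal(as.vector(W %*% delta), W[, i0])))
}

## Column sums of 1 then preserve normalization for any probability vector:
p <- c(0.2, 0.5, 0.3)
sum(W %*% p)  # 1
```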
|
45,546
|
Structural Causal Models with cycles
|
Most of the current causal literature restricts itself to acyclic SCMs, but there has recently been a lot of research advancing the theory of cyclic causal systems. Although one of the first algorithms for cyclic causal discovery, Richardson's CCD, was already published in 1996, it is only in recent years that the amount of work published on cyclic SCMs has increased considerably, with seminal papers by e.g. Hyttinen et al. and Forre and Mooij.
And there are plenty of cyclic causal systems in the real world that need proper description. Just think of any systems that contain feedback loops. Think e.g. of demand-supply-price models, think of how infection rates in two neighboring regions affect each other, or think of equilibrating controller systems.
Often, the underlying strict physical cause-effect structure is too complex and happens too fast for measurements to have a chance to resolve it. The result is data that is usually aggregated over larger time intervals in which lots of causation is happening that is acyclic on the micro scale but cyclic on the measured (macro) scale.
Note, however, that cyclic SCMs need a different theory compared to the standard acyclic SCMs. E.g., the central notion of d-separation has to be replaced by $\sigma$-separation. See Bongers et al. for a treatment of the foundations of cyclic SCMs.
For your question about the infinite feedback loops, have a look at the end of section 2.2. of Hyttinen et al. who treat the linear case. Intuitively, connections between two nodes can be described via trek-rules, like the famous path rules by Wright, and if you have loops, you get infinitely many paths (running any number of times through the loops) the sum over which must converge.
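To make the convergence point concrete, here is a hedged numerical sketch with a hypothetical two-node linear cycle $x = 0.5\,y + e_x$, $y = 0.4\,x + e_y$ (loop gain $0.5 \cdot 0.4 = 0.2$, modulus below 1): the equilibrium solution $(I-B)^{-1}e$ coincides with the geometric path sum $\sum_k B^k e$, i.e. the sum over paths running any number of times through the loop.

```r
## Two-node linear cyclic SCM: x <- 0.5*y + e_x, y <- 0.4*x + e_y.
B <- matrix(c(0,   0.5,
              0.4, 0), nrow = 2, byrow = TRUE)
e <- c(1, 2)

## Closed-form equilibrium: v = B v + e  =>  v = (I - B)^{-1} e
v_closed <- solve(diag(2) - B) %*% e

## Same equilibrium as a (truncated) infinite sum over paths through the loop
v_series <- e
term <- e
for (k in 1:50) {
  term <- B %*% term          # contributions of all directed paths of length k
  v_series <- v_series + term
}
max(abs(v_closed - v_series))  # essentially zero: the path sum converges
```

If the loop gain had modulus $\ge 1$, the series would diverge and no equilibrium solution would exist, which is exactly the convergence condition alluded to above.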
|
Structural Causal Models with cycles
|
Most of the current causal literature restricts itself to acyclic SCMs, but there has recently been a lot of research advancing the theory of cyclic causal systems. Although one of the first algorithm
|
Structural Causal Models with cycles
Most of the current causal literature restricts itself to acyclic SCMs, but there has recently been a lot of research advancing the theory of cyclic causal systems. Although one of the first algorithms for cyclic causation, the CCD by Richardson, was already published in 1996, it was only in recent years that the amount of work published on cyclic SCMs has increased considerably, with seminal papers by e.g. Hyttinen et al. and Forre and Mooij.
And there are plenty of cyclic causal systems in the real world that need proper description. Just think of any systems that contain feedback loops. Think e.g. of demand-supply-price models, think of how infection rates in two neighboring regions affect each other, or think of equilibrating controller systems.
Often, the underlying strict physical cause-effect structure is too complex and happens too fast for measurements to have a chance to resolve it. The result is data that is usually aggregated over larger time intervals in which lots of causation is happening that is acyclic on the micro scale but cyclic on the measured (macro) scale.
Note, however, that cyclic SCMs need a different theory compared to the standard acyclic SCMs. E.g., a central notion also in causality is d-separation, which has to be replaced by $\sigma$-separation. See Bongers et al. for a treatment of the foundations of cyclic SCMs.
For your question about the infinite feedback loops, have a look at the end of section 2.2. of Hyttinen et al. who treat the linear case. Intuitively, connections between two nodes can be described via trek-rules, like the famous path rules by Wright, and if you have loops, you get infinitely many paths (running any number of times through the loops) the sum over which must converge.
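For intuition on that convergence, here is a hedged numeric sketch (toy coefficients of my own choosing): in a linear SCM $x = Bx + u$ with spectral radius of $B$ below 1, the equilibrium solution $(I-B)^{-1}u$ equals the sum over all paths $\sum_k B^k u$, a convergent geometric series.

```python
import numpy as np

# Hypothetical 2-variable linear SCM with a feedback loop:
# x1 = 0.5*x2 + u1,  x2 = 0.4*x1 + u2   (loop gain 0.5*0.4 = 0.2 < 1)
B = np.array([[0.0, 0.5],
              [0.4, 0.0]])
u = np.array([1.0, 2.0])

# Equilibrium solution of x = B x + u
x_direct = np.linalg.solve(np.eye(2) - B, u)

# Same solution as the sum over all paths, running any number of times
# through the loop: sum_k B^k u
x_series = np.zeros(2)
term = u.copy()
for _ in range(200):
    x_series += term
    term = B @ term

print(np.allclose(x_direct, x_series))  # → True
```

With loop gain above 1 the series would diverge and no equilibrium distribution would exist, which is the intuition behind the convergence requirement.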
45,547
|
Structural Causal Models with cycles
|
Well, the fundamental rule of causality is that causes must precede effects - that is a strict inequality in time. So it is not permissible to have $X_i(t)=f_i(X_j(t),\dots,U_i(t)),$ but then turn around and have $X_j(t)=f_j(X_i(t),\dots,U_j(t)).$ But you can have feedback show up in subsequent moments in time: $X(t)=f(Y(t)),$ and then $Y(t+1)=g(X(t)).$ This is non-trivial to deal with in the DAG setting, but one way to begin to handle it is to treat $Y(t)$ and $Y(t+1)$ as separate nodes in the causal graph.
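A minimal sketch of that unrolling, with hypothetical structural equations $f$ and $g$: treating each $X(t)$ and $Y(t)$ as its own node makes the unrolled graph acyclic, and with loop gain below 1 the feedback settles to a fixed point.

```python
def f(y):  # hypothetical structural equation for X(t) given Y(t)
    return 0.8 * y + 1.0

def g(x):  # hypothetical structural equation for Y(t+1) given X(t)
    return 0.5 * x

def simulate(T, y0=0.0):
    """Unrolled trajectories: each X(t), Y(t) is a separate (acyclic) node."""
    xs, ys = [], [y0]
    for t in range(T):
        xs.append(f(ys[t]))   # X(t) = f(Y(t))
        ys.append(g(xs[t]))   # Y(t+1) = g(X(t))
    return xs, ys[:T]

xs, ys = simulate(50)
# Loop gain 0.8 * 0.5 = 0.4 < 1, so X settles at x* = 0.4*x* + 1 = 1/0.6
print(round(xs[-1], 6))  # → 1.666667
```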
|
45,548
|
ALS vs SGD in parallelization
|
SGD cannot be parallelised for a single model in vanilla form because it is a single-update sequential algorithm by construction. However, SGD-based parallelisation is possible by running multiple streams of batches to build multiple models and then combining these models, i.e., model averaging. For neural networks, simple periodic averaging works; see Parallel training of DNNs with Natural Gradient and Parameter Averaging. For collaborative filtering, a similar approach could be implemented by introducing an averaging procedure for the factor matrices, which gives some convergence guarantees.
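A minimal single-process sketch of the periodic-averaging idea (the per-worker loops would run in parallel in practice; the toy least-squares problem, learning rate, and schedule are all made-up choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy noiseless least-squares problem shared by all workers
n, d, workers = 400, 5, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

shards_X = np.array_split(X, workers)  # each worker gets its own data shard
shards_y = np.array_split(y, workers)

w = [np.zeros(d) for _ in range(workers)]   # one model per worker
lr = 0.05

for rnd in range(50):                 # communication rounds
    for k in range(workers):          # would run in parallel in practice
        for i in range(len(shards_y[k])):   # local SGD pass over the shard
            xi, yi = shards_X[k][i], shards_y[k][i]
            grad = (xi @ w[k] - yi) * xi
            w[k] = w[k] - lr * grad
    avg = sum(w) / workers            # periodic model averaging
    w = [avg.copy() for _ in range(workers)]

print(np.linalg.norm(avg - w_true) < 1e-3)  # → True
```

Each worker alone would converge here since the shards are consistent; the averaging step is what keeps the models from drifting apart on noisy or heterogeneous shards.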
|
45,549
|
ALS vs SGD in parallelization
|
Note that the first update is the standard linear least squares estimation equation, more traditionally written as $(X^T X)^{-1} X^T y =X^\dagger y$, whereas your SGD formulation comes down to solving this system one row at a time. Hence you get the same issue in parallelizing SGD as with a standard least squares problem.
The basic issue is that your examples may interact, and this lowers the efficiency of parallel updates. For instance, two updates could cancel out when applied in parallel, but not cancel out when applied in sequence.
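A made-up one-dimensional illustration of that interaction: two gradient updates computed from the same point (a parallel step) land somewhere different than the same two updates applied one after another.

```python
# Two 1-d least-squares examples with the same input but conflicting targets
x1, y1 = 1.0, 2.0
x2, y2 = 1.0, 0.0
lr, w = 0.5, 0.0

g1 = (x1 * w - y1) * x1          # gradients at the shared starting point
g2 = (x2 * w - y2) * x2
w_parallel = w - lr * (g1 + g2)  # apply both at once

w_seq = w - lr * (x1 * w - y1) * x1          # apply the first update...
w_seq = w_seq - lr * (x2 * w_seq - y2) * x2  # ...then the second, recomputed

print(w_parallel, w_seq)  # → 1.0 0.5
```

The sequential pass sees the effect of the first update before taking the second step, so the two strategies disagree whenever the examples are correlated.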
We can look at the theory of linear estimation to figure out the limits of parallelism; see this paper, and my notes on it here.
If you plot per-step improvement as a function of batch size, it may look something like the plot below: at some point the additional improvement in loss is almost unchanged by increasing the number of parallel updates. Here you see that using 1 SGD step with batch size 200 gives almost the same improvement as 7 SGD steps with batch size 1.
To summarize: with proper tuning of $\gamma$, you'll be able to apply up to $k$ updates in parallel, where $k$ is the "critical batch size." The critical batch size is the point at which your "parallel update" strategy starts exhibiting diminishing returns. In the plot above, the critical batch size is about 10, since that's the point where you cross the y=0.5x line. Basically it's the point where the gain from each new example added in parallel is less than 0.5 of the gain of processing this example serially.
To estimate critical batch size for your problem, you could assume your $p$s and $q$ are normally distributed with second moment matrix $\Sigma$. Hessian of corresponding optimization problem is $H=\Sigma$.
Critical batch size is determined by one of the two effective ranks, $r$ and $R$, from this paper:
The first rank gives "worst-case" critical batch size, if you "get unlucky" with the location of the optimum relative to current position, while the second is applicable when all directions towards the optimum are equally likely.
To apply this to your problem, to establish the limit of SGD parallelism, you would evaluate $r(P^T P)$, and $r(Q^T Q)$. This would give you a lower estimate on the largest batch size you could use for your parallel updates on $q$ and on $p$. Evaluating $R(P^T P)$ and $R(Q^T Q)$ would give you an upper estimate on batch size.
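As a sketch of how such an estimate might look in code — assuming the common conventions $r(H)=\operatorname{tr}(H)/\lambda_{\max}(H)$ and $R(H)=\operatorname{tr}(H)^2/\operatorname{tr}(H^2)$; check the linked paper for the exact definitions it uses:

```python
import numpy as np

def effective_ranks(H):
    """r(H) = tr(H)/lambda_max(H) and R(H) = tr(H)^2/tr(H^2) (assumed forms)."""
    eig = np.linalg.eigvalsh(H)          # H is symmetric PSD here
    r = eig.sum() / eig.max()
    R = eig.sum() ** 2 / (eig ** 2).sum()
    return r, R

rng = np.random.default_rng(1)
P = rng.normal(size=(1000, 20))           # hypothetical factor matrix
r, R = effective_ranks(P.T @ P / len(P))  # second moment matrix of the rows
print(r, R)  # lower / upper estimates for the usable parallel batch size
```

Note $r(H) \le R(H)$ always holds, consistent with their roles as worst-case and average-case estimates.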
|
45,550
|
Is there a non-parametric form of a 3-way ANOVA?
|
ANOVA, even a 3-way ANOVA, is a special case of linear regression.
For one-way ANOVA, the typical "nonparametric" flavor is the Kruskal-Wallis test, so it seems like you would want some kind of 3-way Kruskal-Wallis test.
Much as ANOVA is a special case of linear regression, the Kruskal-Wallis test is a special case of proportional odds ordinal logistic regression.
Consequently, there is a sense in which the nonparametric flavor of 3-way ANOVA is a proportional odds ordinal logistic regression model on the variables, their two-way interactions, and their three-way interactions, much as the parametric flavor of 3-way ANOVA would be linear regression on the variables, their two-way interactions, and their 3-way interactions.
This is the first I've heard of the Scheirer-Ray-Hare test, but its Wikipedia article makes it sound like it can handle any number of factors, not just two, so perhaps your inability to include three factors is a software issue. Additionally, the Wikipedia article makes it sound like the Scheirer-Ray-Hare test is another special case of the proportional odds ordinal logistic regression.
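To make the "special case" relationship concrete, the Kruskal-Wallis statistic is essentially a one-way ANOVA computed on ranks. A hedged sketch (made-up data, no tie correction):

```python
import numpy as np

def kruskal_wallis_H(*groups):
    """H statistic: between-group variation of mean ranks (assumes no ties)."""
    data = np.concatenate(groups)
    ranks = data.argsort().argsort() + 1.0   # ranks 1..N over the pooled data
    N = len(data)
    H, start = 0.0, 0
    for g in groups:
        r = ranks[start:start + len(g)]
        H += len(g) * (r.mean() - (N + 1) / 2) ** 2  # ANOVA-style sum on ranks
        start += len(g)
    return 12.0 / (N * (N + 1)) * H

H = kruskal_wallis_H([2.9, 3.0, 2.5], [3.8, 2.7, 4.0], [2.8, 3.4, 3.7])
print(H)
```

The proportional odds model generalizes this rank-based comparison to multiple factors and their interactions.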
|
45,551
|
Is there a non-parametric form of a 3-way ANOVA?
|
The exchange in comments now makes this clearer. The OP has three species of shark, two levels of maturity, and two sexes of shark. This forms a $3\times2\times2$ design. There will be 2 degrees of freedom for species, 1 for sex, and 1 for maturity. There will be 2 for species by sex, 2 for species by maturity, and 1 for sex by maturity. By calculation, or by subtraction from the overall total, we can see that this leaves 2 for the three-way interaction. So R is correct in printing out just two terms for the three-way interaction. It chooses male mature leopard sharks and male mature pyjama sharks, as the comment suggests.
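The bookkeeping generalizes: each term's degrees of freedom is the product of (levels − 1) over the factors it involves. A quick sketch of the arithmetic (Python used purely as a calculator):

```python
from itertools import combinations
from math import prod

# Degrees of freedom for every term in the 3x2x2 factorial described above
levels = {"species": 3, "sex": 2, "maturity": 2}

df = {}
for k in range(1, len(levels) + 1):
    for term in combinations(levels, k):
        df[term] = prod(levels[f] - 1 for f in term)

# species: 2, sex: 1, maturity: 1, species:sex: 2, species:maturity: 2,
# sex:maturity: 1, species:sex:maturity: 2 -- summing to 3*2*2 - 1 = 11
print(df[("species", "sex", "maturity")])  # → 2
```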
|
45,552
|
Is there a non-parametric form of a 3-way ANOVA?
|
You can do an ANOVA by permutation (non-parametric) with the aovp function from the lmPerm package. I suggest you use perm = "Exact" to have a more robust test.
For a following post hoc test you can also use pairwise.perm.t.test from the RVaideMemoire package (it allows you to do a correction, and to define the number of permutations you wish).
This is what I have used so far and it worked quite well...
Cheers
|
45,553
|
Is there a non-parametric form of a 3-way ANOVA?
|
Just to add to the other answers, a relatively flexible method for non-parametric multi-way anova is aligned ranks transformation anova (ART anova).
At least in the implementation in R, it can handle mixed effects and has methods for post-hoc analysis.
It has its limitations, so it's important to read up on the background and documentation.
|
45,554
|
Ridge or multiple linear regression following PCA?
|
85 predictor dimensions with only 150 samples is likely to lead to overfitting in clinical data, even though you now have p < n. You typically need 10-20 cases per predictor to avoid overfitting.
Ridge regression can be thought of as a continuous version of principal components regression (PCR). Ridge weights the principal components continuously rather than the all-or-none PC selection in PCR. In that sense, ridge is a superior solution to your problem over an unpenalized PCR, as the coefficient penalization imposed by ridge will minimize the overfitting that would occur if you just went along with 85 PCs and 150 samples. Performing ridge with all of your initial predictors will give you advantages of PCR without the overfitting disadvantage in this case.
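The "continuous version of PCR" view can be made concrete via the SVD: ridge shrinks the coefficient along the principal direction with singular value $d_i$ by the factor $d_i^2/(d_i^2+\lambda)$, while PCR applies a factor of exactly 1 or 0. A sketch on random data (arbitrary $\lambda$):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 10))   # toy data: 150 samples, 10 predictors
y = rng.normal(size=150)
lam = 10.0

U, d, Vt = np.linalg.svd(X, full_matrices=False)
shrink = d**2 / (d**2 + lam)     # continuous PC weights, strictly in (0, 1)

# Ridge solution expressed through the shrunken principal directions
beta_ridge_svd = Vt.T @ (shrink * (U.T @ y) / d)

# Standard ridge normal equations give the same answer
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(10), X.T @ y)
print(np.allclose(beta_ridge_svd, beta_ridge))  # → True
```

Low-variance directions (small $d_i$) are shrunk hardest, which is what damps the overfitting that unpenalized PCR would allow.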
There's a practical issue in external validation and other future uses in this scenario, as you will need data in those future data sets on all of the predictors used in building the model. That's true whether you use PCR or ridge. If you can't assure that, you might be better off with a different penalization method like elastic net or LASSO so that you only need to have a smaller number of predictor values available for later application.
A better solution overall might be to apply knowledge of the subject matter for a rational choice of candidate predictors. See Frank Harrell's course notes and book for guidance. If your predictors are things like expression values for thousands of genes that might not be possible, but in that case you should at least make sure to include known clinically relevant predictors, perhaps not penalized, in your model.
|
45,555
|
Frequentist inference with a null hypothesis that reflects theory a good-enough belt around it
|
I agree that the null hypothesis of equivalence is, in many cases, a rather useless hypothesis. In such cases, a superiority hypothesis informed by theory/other empirical results may be preferred. However, I don't see the need for a new procedure here. I'd suggest you 1) set a superiority hypothesis and 2) use a t-test to decide whether or not to reject this hypothesis.
So your hypothesis would be as follows:
$$
H_0: \mu_t - \mu_c \le \delta \\
H_1: \mu_t - \mu_c > \delta
$$
And then your test as follows:
$$ \frac{\hat \mu_t - \hat \mu_c - \delta} {s \sqrt{\frac{1}{n_t} + \frac{1}{n_c}}} > t_{\alpha,\, n_t + n_c - 2} $$
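A minimal sketch of the computation with made-up numbers (the critical value $t_{\alpha,\,n_t+n_c-2}$ would come from a t-table, e.g. scipy.stats.t.ppf):

```python
import math

def superiority_statistic(t_sample, c_sample, delta):
    """One-sided superiority t statistic with a pooled variance estimate."""
    nt, nc = len(t_sample), len(c_sample)
    mt = sum(t_sample) / nt
    mc = sum(c_sample) / nc
    ss_t = sum((x - mt) ** 2 for x in t_sample)
    ss_c = sum((x - mc) ** 2 for x in c_sample)
    s = math.sqrt((ss_t + ss_c) / (nt + nc - 2))   # pooled standard deviation
    return (mt - mc - delta) / (s * math.sqrt(1 / nt + 1 / nc))

t_stat = superiority_statistic([12.1, 13.0, 12.6, 12.9],
                               [10.0, 10.4, 9.8, 10.2], delta=1.5)
print(round(t_stat, 3))  # → 4.379
```

Rejecting requires the statistic to exceed the critical value, i.e. the observed difference must beat $\delta$ by enough relative to its standard error.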
|
45,556
|
Frequentist inference with a null hypothesis that reflects theory a good-enough belt around it
|
As far as I can see, your main question can be addressed by the answer of num_39 or probably also by a confidence interval (maybe one-sided).
I will address some other issues raised in the question. I think that it is very important to distinguish between the formal concept of a significance test (and p-value) and the way it is interpreted (often misinterpreted). There is a tendency in some current literature criticising significance tests to blame misinterpretations on the concept itself, but in my view the concept itself can be used in a valid and unproblematic way, whereas what needs to be criticised is its widespread misinterpretation and misuse. This is to some extent caused by its own success, because at some point many journal editors, reviewers etc. made it explicitly or implicitly mandatory for publications to come up with significant results, which created a very unhealthy incentive for trying to tease significance out of anything. I grant the opponents of significance testing that it is legitimate to wonder whether some problems are to some extent intrinsic to the significance test concept itself; however, I am rather a member of the "don't throw the baby out with the bathwater" faction.
Ultimately the idea of significance testing is very old and quite intuitive. It basically says that data provide evidence against a probability model if the data are very unlikely under that model. We need to have in mind here that many probability distributions provide nonzero probability to everything that can possibly happen, so one can't learn enough about whether a probability model is appropriate or not by only rejecting it if something that is impossible under the model happens. Furthermore, things are complicated by the fact that continuous probability distributions will give probability zero to any precise result, so that in principle one could declare any data as "very unlikely", which isn't helpful either. This means that in order to find evidence against a model, one needs to specify, pre-data, a set of events that has a probability deemed too small, and basically state that the model doesn't predict this set to occur, and if it occurs, this constitutes evidence against the model (the role of the alternative hypothesis is that having an alternative in mind helps to choose the set in such a way that, if indeed the model is wrong in a certain way suspected a priori, the "rejecting set" is likely to occur, i.e., we have a good chance to reject a false model in case our suspected alternative is true). In my view this is essentially the most direct way to make a statement about whether data are compatible with the model.
Some comments:
A test can formally be defined as binary, i.e., either "rejecting" or "accepting" the null hypothesis (the latter is a terrible term as there are always many models compatible with any data, and therefore we cannot have evidence in favour of any specific null hypothesis being true). However, it should be clear that this oversimplifies the situation, as for defining a binary decision rule we need to choose a probability threshold, and exact probability thresholds are artificial. Is 0.07 so small a probability that it should be taken as a reason to "reject" the model? 0.04? 0.015? 0.0099? There is no objective answer to this, and in fact it doesn't need to be decided unless different actions are involved depending on whether the outcome is one or the other. p-values are meant to give "continuous" information rather than a binary decision rule, and everybody who understands p-values knows that 0.04 and 0.06 are in fact more similar to each other than either of these is to 0.2 or 0.001, even though somebody may put a threshold for action at 0.05. It needs to be understood also that if binary decisions are to be made, thresholds are required, and if we can't have objectively justified ones, whatever we do will come with a smell of arbitrariness. (Note that in some literature multiple thresholds are used for talking about "weak/modest/strong/very strong evidence" - this gives more information than "reject/don't reject" but less than the continuous p-value.) The question states "significance testing offers a false sense of certainty (true/false)"; however, anyone who understands the nature of the decision problem should know that certainty is not provided because (a) any binary decision has to depend on an at least to some extent arbitrary threshold and (b) we are rejecting or not rejecting abstract formal models, and reality will be different from the model in some ways anyway.
Personally I don't think there is any such thing as a "true" model in reality. Probability models are defined in the world of mathematics, which is essentially different from the reality for which it is interpreted. Models are tools for thinking, and no test can say anything about the "truth" of a model. This implies in particular that the null hypotheses should not be believed to be "true" regardless of whether the data reject it or not, and the same holds for the alternative (and any probability model used in other approaches outside the significance test paradigm). Testing the null hypothesis does not mean that we are testing whether it is literally true, but rather whether data are incompatible with it, for which reason we may drop it, not as a "belief" (because I wouldn't believe it in the first place), but even as a tool, a means to understand and interpret reality. This means that the following is mistaken as an objection against significance tests: "However, the nil hypothesis is known a priori to be false: things are never exactly equal and there is always an effect." Even though I agree with the null hypothesis never being true (which applies not only to the zero effect hypothesis but also to any precisely specified effect), this is not what the test is meant to find out. For example, if an astrologist claims that marriages between certain zodiac signs are more likely to fail than others, it may well be that observed data are consistent with total randomness (not sure whether Cohen would claim that there for sure is an effect in this situation, in fact I can imagine reasons for it such as people asking an astrologist for advice who says "get divorced"), and it is a legitimate interest whether they are or not. (If you don't believe in astrology, it may still be of interest whether astrology talk has this kind of influence on society. Before having seen data, I don't have a strong expectation either way, so it is an interesting question what the outcome is.) Ultimately, if somebody claims that there is a certain kind of effect, this claim is for sure weakened by realising that data are compatible with randomness or a nil effect. This of course is totally perverted by a culture that gives scientists an incentive to do something that seems to "prove" whatever claim they may have, so that something will be done in order to achieve significance that a neutral tester of the claim wouldn't achieve with high probability (and also, as stated before, rejecting a null hypothesis does not imply that any specific alternative is true).
"scientists are conditioned to test point null hypotheses of no difference" - there is nothing in the formal concept of significance tests that requires this, even though, as stated above, in some situations it can make sense. The issue mentioned above, namely that models are essentially different from reality, therefore never literally (and precisely) true, holds as well for other point null hypotheses by the way, such as the one of a "meaningful minimum distance" in the response by num_39. This doesn't make tests useless, as long as before collecting the data it is really of interest whether data will show evidence against the H0 or not (which is of course different from the situation that a researcher is determined to find such evidence whatever it takes). Note by the way that the objection applies less strongly to one-sided tests; part of the H0 can then be not only a nil effect but also all effects that go in the other direction than what is expected, which in practice happens from time to time.
It is true that a test outcome in itself doesn't say anything about effect sizes, and that effect sizes are usually relevant. It is of course a misconception of a significance test to think that the test decides whether an effect should be taken as substantively meaningful. Some people seem to think that this should be a reason to not run significance tests at all, or to replace them, e.g., by confidence intervals or Bayesian analysis. In my view it'd be so much easier to acknowledge that different methods are for different kinds of questions, sometimes effect sizes are the major focus of interest but sometimes the compatibility of data with a null model, and sometimes both (or other things such as prediction quality). Whether tests (and/or other methods) should be used or not depends on the question of interest, and for sure if you run a test, it doesn't mean you are not allowed to compute a confidence interval on top of it, or even a Bayesian analysis!
Whether point null hypotheses should be used to reflect theory obviously depends on whether the theory allows such precise specifications.
All of this doesn't seem to be that mysterious to me. Much discussion on significance tests in my view treats them as some kind of black magic that is expected to deliver all kinds of miracles, and is then condemned for not doing so. I don't think they are very problematic if they are used for what they can do, and not used for what they can't do.
|
45,557
|
How to prove this inequality?
|
It's convenient to define $U$ once and for all to be a uniform variable on the interval $[-1,1]$ and simply multiply it by $t$ to produce the $U$ used in the question.
Two useful, easily proven, but not widely known facts about random variables $X$ in general are
No matter what the distribution function (CDF) $F_X$ of $X$ might be, the random variable $X + tU$ is absolutely continuous with a density function $$f_{X+tU}(y) = \frac{1}{2t}\left(F_X(y+t) - F_X(y-t)\right).$$ There are many ways to prove this, as explained at https://stats.stackexchange.com/a/43075/919 (which concerns sums of the closely related uniform variable supported on $[0,1]$).
The expectation of any non-negative random variable $X$ with CDF $F_X$ equals $$E[X] = \int_0^\infty \left[1 - F_X(x)\right]\,\mathrm{d} x.$$ This is repeatedly demonstrated in many threads here on CV; one is Expectation of a function of a random variable from CDF. They all amount to performing an integration by parts.
With these facts in mind, evaluate the probability in the question as
$$\begin{aligned}
\Pr(X + tU \gt t) &= \int_t^\infty f_{X+tU}(y)\,\mathrm{d}y \\
&= \frac{1}{2t}\int_t^\infty \left(F_X(y+t) - F_X(y-t)\right)\,\mathrm{d}y\\
&= \frac{1}{2t}\int_t^\infty \left( \left[1 - F_X(y-t)\right] - \left[1 - F_X(y+t)\right]\right)\,\mathrm{d}y\\
&= \frac{1}{2t}\left(\int_0^\infty \left[1 - F_X(x)\right]\,\mathrm{d}x - \int_{2t}^\infty \left[1 - F_X(x)\right]\,\mathrm{d}x\right)\\
&= \frac{1}{2t}\left(E[X] - \int_{2t}^\infty \left[1 - F_X(x)\right]\,\mathrm{d}x\right)\\
&\le \frac{1}{2t} E[X].
\end{aligned}$$
The justifications of these steps are (1) definition of density, (2) fact $(1)$ above, (3) algebra, (4) linearity of integration followed by changes of variables, (5) fact $(2)$ above, and (6) since $1-F_X(x)$ is a probability for all $x,$ it is never negative, whence its integral is non-negative.
Finally, $\Pr(X + tU \ge t) = \Pr(X + tU \gt t)$ because (as previously noted) the variable $X + tU$ is absolutely continuous, QED.
One thing I like about this way of proceeding is the insight it provides into the tightness of the inequality: the amount by which the probability (on the left hand side) falls short of $E[X]/(2t)$ (on the right hand side) is proportional to the integral of $1-F_X$ from $2t$ on up. Thus, for instance, when $X$ is bounded and $2t$ exceeds this bound, that integral is zero and the inequality becomes an equality.
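Both the inequality and the tightness remark can be checked by simulation. A hedged sketch (the exponential and uniform choices for $X$ are illustrative, not part of the problem):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10**6

# Monte Carlo check of P(X + tU > t) <= E[X]/(2t); here X ~ Exponential(1)
X = rng.exponential(1.0, n)
for t in (0.5, 1.0, 3.0):
    p = np.mean(X + t * rng.uniform(-1, 1, n) > t)
    assert p <= 1.0 / (2 * t) + 3e-3  # E[X] = 1; small Monte Carlo slack

# Tightness: for bounded X with 2t beyond the bound, equality holds
Xb = rng.uniform(0, 1, n)  # bounded by 1, E[Xb] = 1/2
t = 0.75                   # 2t = 1.5 exceeds the bound
lhs = np.mean(Xb + t * rng.uniform(-1, 1, n) > t)
assert abs(lhs - 0.5 / (2 * t)) < 3e-3  # probability = E[Xb]/(2t) = 1/3
```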
|
45,558
|
How to prove this inequality?
|
Yes, your comment is on the right track. This looks like it's going to be Markov's Inequality:
$$P(Z\geq z)\leq \frac{E[Z]}{z}$$
for non-negative $Z$. As you have noted, $X+U+t$ is non-negative, so it's a candidate for $Z$, and $X+U+t\geq 2t$ when $X+U\geq t$. So, consider the mean of $Z$.
|
45,559
|
Why is GARCH offering no predictive value?
|
First of all, your results look a bit strange. I would advise you to check your code. Nevertheless, I will describe a method that you can use to obtain one-step-ahead forecasts for the conditional variance using a GARCH(1,1)-model.
Method
Assume that you observe a time series $(r_t)_{t=1}^T$ of log-returns and you want to estimate a simple GARCH(1,1) model.
\begin{align}
r_t&=\sigma_t u_t, \quad u_t \sim \mathcal N(0,1) \\
\sigma_t^2&=\alpha_0+\alpha_1r_{t-1}^2+\beta_1 \sigma_{t-1}^2
\end{align}
First, estimate the model on the first $N$ observations where $N <T$ and denote the ML estimate as $\hat{\boldsymbol{\theta}}^{j=1}=(\hat{\alpha}_0^{j=1},\hat{\alpha}_1^{j=1},\hat{\beta}_1^{j=1})^\top$.
Then calculate the time series $(\sigma_t^2)_{t=1}^N$ as follows:
choose an initial estimate for $\sigma_1^2$, for instance $\sigma_1^2=\frac{1}{N}\sum_{t=1}^Nr_t^2$.
$\sigma_2^2=\hat{\alpha}_0^{j=1}+\hat{\alpha}_1^{j=1}r_1^2+\hat{\beta}_1^{j=1}\sigma_1^2$
$\vdots$
$\sigma_N^2=\hat{\alpha}_0^{j=1}+\hat{\alpha}_1^{j=1}r_{N-1}^2+\hat{\beta}_1^{j=1}\sigma_{N-1}^2$
Now, you can predict the conditional variance for $t=N+1$ as
$$
\hat{\sigma}_{N+1}^2=E(\sigma_{N+1}^2\vert \mathcal F_{N})=\hat{\alpha}_0^{j=1}+\hat{\alpha}_1^{j=1}r_{N}^2+\hat{\beta}_1^{j=1}\sigma_{N}^2
$$
which is the MSE-optimal prediction.
If you want to use a rolling window, re-estimate the model on $(r_t)_{t=2}^{N+1}$ and obtain $\hat{\boldsymbol{\theta}}^{j=2}=(\hat{\alpha}_0^{j=2},\hat{\alpha}_1^{j=2},\hat{\beta}_1^{j=2})^\top$.
You can calculate $(\sigma_t^2)_{t=2}^{N+1}$ as described above.
Then, predict
$$
\hat{\sigma}_{N+2}^2=E(\sigma_{N+2}^2\vert \mathcal F_{N+1})=\hat{\alpha}_0^{j=2}+\hat{\alpha}_1^{j=2}r_{N+1}^2+\hat{\beta}_1^{j=2}\sigma_{N+1}^2
$$
You repeat this process until no observations are left. As a result, you have a time series $(\hat{\sigma}_t^2)_{t={N+1}}^T$ which are the predictions of $\sigma_t^2$ using a rolling window.
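The recursion and the one-step-ahead prediction can be sketched as follows. This is a minimal numpy illustration: the parameter values are hypothetical stand-ins for the ML estimates $\hat{\boldsymbol{\theta}}^{j}$, and the re-estimation at each window is omitted.

```python
import numpy as np

def garch11_one_step(r, alpha0, alpha1, beta1):
    """In-sample sigma_t^2 recursion plus the one-step-ahead forecast."""
    n = len(r)
    sigma2 = np.empty(n)
    sigma2[0] = np.mean(r**2)  # initial estimate for sigma_1^2
    for t in range(1, n):
        sigma2[t] = alpha0 + alpha1 * r[t-1]**2 + beta1 * sigma2[t-1]
    # prediction for t = n+1: the MSE-optimal forecast given F_n
    return alpha0 + alpha1 * r[-1]**2 + beta1 * sigma2[-1]

# Simulate a GARCH(1,1) path with hypothetical "true" parameters
rng = np.random.default_rng(0)
a0, a1, b1 = 0.05, 0.08, 0.90
r, s2 = np.empty(2000), a0 / (1 - a1 - b1)  # start at unconditional variance
for t in range(2000):
    r[t] = np.sqrt(s2) * rng.standard_normal()
    s2 = a0 + a1 * r[t]**2 + b1 * s2

# Rolling window of length N: slide the window, recompute the recursion,
# and predict one step ahead each time (parameters held fixed here)
N = 1000
forecasts = np.array([garch11_one_step(r[j:j+N], a0, a1, b1)
                      for j in range(500)])
assert forecasts.min() > 0  # conditional variances are positive
```

In practice, an estimation routine (e.g. maximum likelihood) would replace the fixed `a0, a1, b1` at every window `j`.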
Evaluation of volatility forecasts
There has been a lively discussion in the literature about whether GARCH models are able to provide precise volatility forecasts or not. It turned out that it was not the models that gave bad results; rather, many people used "wrong" proxies for volatility. (Reference: Torben G. Andersen and Tim Bollerslev (1998): "Answering the Skeptics: Yes, Standard Volatility Models do Provide Accurate Forecasts", in International Economic Review, Vol. 39, No. 4).
In sum, one of the major problems when evaluating volatility forecasts is that volatility is unobservable and you need to use some form of proxy. Assuming that the specified model is correct, an unbiased estimator of the "true" volatility $\sigma_t^2$ is given by the squared returns $r_t^2$ because:
$$
E(r_t^2 \vert \mathcal F_{t-1})=E(\sigma_t^2u_t^2 \vert \mathcal F_{t-1})=\sigma_t^2E(u_t^2)=\sigma_t^2
$$
Thus, you could plot $r_t^2$ and $\hat{\sigma}_t^2$ to assess whether the results make sense to some extent. Usually, a simple GARCH(1,1)-model does a moderate job in predicting $\sigma_{t+1}^2$. Exceptions prove the rule, but if the results are completely different, it is likely that there is an error in the code.
However, note that $r_t^2$ is a noisy proxy for $\sigma_t^2$. Usually, you get much better results, if you don't use $r_t^2$ as a proxy for $\sigma_t^2$ but realized volatility estimators like
$$
RV_{t,n}=\sum_{i=1}^n(\ln(P_{t,i})-\ln(P_{t,i-1}))^2.
$$
So, it is possible that your code is correct but for your time series, $r_t^2$ is a really bad proxy for the unobservable volatility and you may get completely different results if you use RV. However, to do that, you need to have access to intraday data and getting the data is quite a challenge if you don't have access to Bloomberg or other data providers.
|
45,560
|
Why is GARCH offering no predictive value?
|
Your observation is correct. GARCH is an autoregressive model and its $h$-step-ahead predictions tend to lag $h$ steps behind, as is the case with most autoregressive models.
We often model time series processes as being hit by a new zero-mean stochastic shock every period. A special case that illustrates the lagging predictions best is an AR(1) with a zero intercept and a unit slope (in other words, a random walk):
$$
y_t=c+\varphi_1 y_{t-1}+\varepsilon_t
$$
where $c=0$ and $\varphi_1=1$. An optimal (under square loss) $h$-step-ahead point forecast is $\hat y_{t+h|t}=y_t$, i.e. the last observed value. Thus even if we were able to estimate $c$ and $\varphi_1$ with perfect precision, our optimal (!) forecast would seem to lag by $h$ steps.
Similar logic applies in the more general case of $c\neq 0$ and $\varphi_1\neq 1$, though the argument for the general case is more nuanced. GARCH, being an autoregressive model, suffers from the same problem. (The fact that GARCH is autoregressive in terms of conditional variance rather than conditional mean does not change the essence. See this answer for more detail.) But recall that this need not be a sign of forecast suboptimality, as even optimal forecasts may be characterized by it. This applies to GARCH to a large extent; in typical applications of GARCH models, conditional variance is often found to be quite close to a random walk.
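The random-walk special case can be illustrated with a short simulation: the optimal $h$-step-ahead forecast is the last observed value, so the forecast path looks like the series shifted back by $h$ steps, yet using anything older only increases the error.

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(10_000))  # random walk: c = 0, phi_1 = 1
h = 5

# Optimal (square-loss) h-step-ahead forecast is the last observed value,
# i.e. the forecast series is the data shifted back by h steps.
err_opt = y[2*h:] - y[h:-h]     # forecast with the most recent value
err_stale = y[2*h:] - y[:-2*h]  # forecast with a value h steps older

# The "lagging" forecast is nonetheless the better one: using older
# information roughly doubles the mean squared error (about h vs. 2h here).
assert np.mean(err_opt**2) < np.mean(err_stale**2)
```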
|
45,561
|
How can I better understand this covariance equation?
|
Let's write this in matrix form:
$$\overline{\mathbf{x}} =
\left[\matrix{\bar{x}_1\\\bar{x}_2\\\vdots\\\bar{x}_d}\right]
=
\frac{1}{N}\sum_{i=1}^N\mathbf{x}_i
=
\frac{1}{N}\sum_{i=1}^N\left[\matrix{x_{i,1}\\x_{i,2}\\\vdots\\x_{i,d}}\right]
$$
So the (biased) estimate of the covariance matrix is a square matrix, like below:
$$S=\frac1N\sum_{i=1}^N(\mathbf{x}_i-\overline{\mathbf{x}})
(\mathbf{x}_i-\overline{\mathbf{x}})^T
=
\frac1N\sum_{i=1}^N
\left(\left[\matrix{x_{i,1}\\x_{i,2}\\\vdots\\x_{i,d}}\right]-\left[\matrix{\bar{x}_1\\\bar{x}_2\\\vdots\\\bar{x}_d}\right]\right)
\left(\left[\matrix{x_{i,1}\\x_{i,2}\\\vdots\\x_{i,d}}\right]-\left[\matrix{\bar{x}_1\\\bar{x}_2\\\vdots\\\bar{x}_d}\right]\right)^T
=\\
\frac1N\sum_{i=1}^N
\left[\matrix{x_{i,1}-\bar{x}_1\\x_{i,2}-\bar{x}_2\\\vdots\\x_{i,d}-\bar{x}_d}\right]
\left[\matrix{x_{i,1}-\bar{x}_1&x_{i,2}-\bar{x}_2&\cdots&x_{i,d}-\bar{x}_d}\right]=\\
\frac1N\sum_{i=1}^N
\left[\matrix{(x_{i,1}-\bar{x}_1)^2 & (x_{i,1}-\bar{x}_1)(x_{i,2}-\bar{x}_2) & \cdots & (x_{i,1}-\bar{x}_1)(x_{i,d}-\bar{x}_d) \\ (x_{i,1}-\bar{x}_1)(x_{i,2}-\bar{x}_2) & (x_{i,2}-\bar{x}_2)^2 & \cdots & (x_{i,2}-\bar{x}_2)(x_{i,d}-\bar{x}_d) \\ \vdots & \vdots & \ddots & \vdots \\ (x_{i,1}-\bar{x}_1)(x_{i,d}-\bar{x}_d) & (x_{i,2}-\bar{x}_2)(x_{i,d}-\bar{x}_d) & \cdots & (x_{i,d}-\bar{x}_d)^2}\right]
$$
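The sum-of-outer-products formula can be verified directly against numpy's built-in (biased) covariance estimate; a short sketch with random data:

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 200, 4
x = rng.normal(size=(N, d))  # N observations of a d-dimensional vector

xbar = x.mean(axis=0)
# Sum of outer products (x_i - xbar)(x_i - xbar)^T, divided by N
S = sum(np.outer(xi - xbar, xi - xbar) for xi in x) / N

assert S.shape == (d, d)
assert np.allclose(S, S.T)                                 # symmetric
assert np.allclose(S, np.cov(x, rowvar=False, bias=True))  # matches numpy
```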
|
45,562
|
Equivalent ways of parametrizing Gamma distribution
|
This distribution $f(x, \alpha) = \frac{x^{\alpha - 1} e^{-x}}{\Gamma(\alpha)}$ is the distribution with a fixed scale parameter $1/\beta = \theta = 1$.
The article states further on
The probability density above is defined in the “standardized” form. To shift and/or scale the distribution use the loc and scale parameters. Specifically, gamma.pdf(x, a, loc, scale) is identically equivalent to gamma.pdf(y, a) / scale with y = (x - loc) / scale
So, in the end, they put the second parameter back by the use of the scale parameter.
if I substitute $x = \beta y$ into the first equation, it seems there would be a factor $\beta$ missing compared to the 2nd equation.
If you transform the variable $x = \beta y$ you are sort of squeezing or stretching the density function. When you do this then you need to correct the height as well in order that the pdf integrates to a total area of 1.
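A small pure-Python check of the scale relation (a sketch I added; `gamma_pdf` is my own helper, not the SciPy function, but it mirrors the `gamma.pdf(x, a, loc, scale) = gamma.pdf(y, a) / scale` identity quoted above with `loc = 0`):

```python
import math

def gamma_pdf(x, alpha, scale=1.0):
    """Gamma density with shape alpha and scale theta = 1/beta."""
    y = x / scale
    return y ** (alpha - 1) * math.exp(-y) / (math.gamma(alpha) * scale)

a, theta, x = 2.5, 3.0, 4.2
# The scale parameter "puts the second parameter back":
lhs = gamma_pdf(x, a, scale=theta)
rhs = gamma_pdf(x / theta, a) / theta   # standardized pdf at y = x/theta, divided by scale
print(abs(lhs - rhs) < 1e-12)           # True
```

The division by `scale` is exactly the height correction described above: squeezing the support by `theta` must be compensated so the density still integrates to 1.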
|
45,563
|
Equivalent ways of parametrizing Gamma distribution
|
A change of variables in the density requires more than substitution. In particular you need to multiply by the absolute value of the derivative of the inverse function. This would be more obvious if you considered the cumulative distribution function and then differentiated to get the density: the extra multiplicative factor would come through the chain rule.
If $g(x)$ is strictly increasing, you could consider $$f_Y(y)=\tfrac{d}{dy} F_Y(y) = \tfrac{d}{dy} F_X\big(g^{-1}(y)\big) = f_X\big(g^{-1}(y)\big) \tfrac{d}{dy} \big(g^{-1}(y)\big)$$ and something similar if $g(x)$ were strictly decreasing. Combining these two results, a typical statement is that if you have $f_X(x)$ and want to consider the density of $Y=g(X)$, then $$f_Y(y) =f_X\big(g^{-1}(y)\big) \left| \tfrac{d}{dy} \big(g^{-1}(y)\big) \right|$$ though it gets more complicated when $g(x)$ is not a bijection.
In your example $g(x)=\frac x\beta$, so $g^{-1}(y)=\beta y$ and thus you need to multiply by $\left| \tfrac{d}{dy} \big(g^{-1}(y)\big) \right| = \beta$.
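A numerical sanity check of the Jacobian factor (my own addition, with illustrative parameter values): without multiplying by $\beta$, the "transformed density" would integrate to $1/\beta$ rather than 1.

```python
import math

alpha, beta = 2.0, 3.0

def f_x(x):
    # Gamma(alpha) density with scale 1
    return x ** (alpha - 1) * math.exp(-x) / math.gamma(alpha)

# Y = g(X) = X / beta, so g^{-1}(y) = beta * y and |d/dy g^{-1}(y)| = beta
def f_y(y):
    return f_x(beta * y) * beta

# Riemann sum: with the Jacobian factor the transformed density integrates to ~1
dy = 1e-4
total = sum(f_y(i * dy) * dy for i in range(1, 200_000))
print(round(total, 3))  # ~1.0
```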
|
45,564
|
Zero-inflated Gaussian for weights below zero recorded as 0?
|
I think the model is more appropriately a left-censored Gaussian, since the process you describe is about discarding information below some value (in this case, the location is known to be 0, which is simpler than the case of an unknown censoring value). In other words, there's some real quantity which can (hypothetically) be measured, but that quantity is not recorded. We need to use a modeling tool that reflects that there is some true, non-censored value, but that this value is not available to us.
One resource I happen to have on my bookshelf is Gelman et al., Bayesian Data Analysis (3rd edition). Censoring and truncation models are discussed starting on page 224. The authors write
Suppose an object is weighed 100 times on an electronic scale with a known measurement distribution $\mathcal{N}(\theta,1^2)$, where $\theta$ is the true weight of the object....
[T]he scale has an upper limit of 200 kg for reports: all values above 200kg are reported as "too heavy." The complete data are still $\mathcal{N}(\theta,1^2)$, but the observed data are censored; if we observe "too heavy," we know that it corresponds to a weighing with a reading above 200.
This is very similar to the problem stated by the OP, with the exception that it's censored above 200 instead of below 0, and that each item is weighed repeatedly with some instrument error.
One R package that seems relevant is censReg.
Arne Henningsen. "Estimating Censored Regression Models in R using the censReg Package"
We demonstrate how censored regression models (including standard Tobit models) can be estimated in R using the add-on package censReg. This package provides not only the usual maximum likelihood (ML) procedure for cross-sectional data but also the random-effects maximum likelihood procedure for panel data using Gauss-Hermite quadrature.
I haven't used it, so I can't vouch for its quality or utility in this problem. There are probably lots of other options. The approach taken in Bayesian Data Analysis is to just code up your own model, either using the base library, or using stan. This has the greatest degree of flexibility, at the cost of having to do the coding yourself.
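To make the "code up your own model" route concrete, here is a minimal sketch of the left-censored Gaussian (Tobit-style) log-likelihood in plain Python. This is my own illustration, not code from BDA or censReg, and the simulated weights are invented:

```python
import math, random

def norm_logpdf(z):
    return -0.5 * z * z - 0.5 * math.log(2 * math.pi)

def norm_logcdf(z):
    return math.log(0.5 * (1 + math.erf(z / math.sqrt(2))))

def censored_loglik(mu, sigma, ys, limit=0.0):
    """Log-likelihood for a Gaussian left-censored at `limit`:
    a recorded value of `limit` only tells us the true value was <= limit."""
    ll = 0.0
    for y in ys:
        if y <= limit:
            ll += norm_logcdf((limit - mu) / sigma)            # P(Y* <= limit)
        else:
            ll += norm_logpdf((y - mu) / sigma) - math.log(sigma)
    return ll

# Simulated weights, censored at 0 as in the question
random.seed(0)
true_vals = [random.gauss(0.5, 1.0) for _ in range(2000)]
obs = [max(y, 0.0) for y in true_vals]
# The likelihood prefers the true mean over a badly shifted one
print(censored_loglik(0.5, 1.0, obs) > censored_loglik(2.0, 1.0, obs))  # True
```

The key design point is that censored observations contribute a CDF term, not a density term: that is what "reflects that there is some true, non-censored value" unavailable to us.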
|
45,565
|
What does comparing mean rank mean?
|
When comparing two independent samples, you want to rank all the data together.
Revising your example:
Sample A
value rank
20 7.5
20 7.5
20 7.5
20 7.5
25 10
and Sample B
value rank
1 1
2 2
3 3
4 4
5 5
What is going on?
Sample B's value of 1 is the lowest ordered value from both samples, so it gets a rank of 1. Similarly for Sample B's values of 2–5. The mean rank for Sample B is therefore $\frac{1+2+3+4+5}{5}=3$.
Sample A's values of 20, 20, 20, and 20 occupy the 6th, 7th, 8th, and 9th ranks together, so they each get the average rank of $\frac{6+7+8+9}{4\text{ rank positions}}=7.5$. Finally, Sample A's value of 25 is the largest value from both samples so it gets the highest rank, 10. The mean rank for Sample A is therefore $\frac{7.5+7.5+7.5+7.5+10}{5}=8$.
Bonus: To be super explicit: No. Two independent samples of the same $\boldsymbol{N}$ will not necessarily have the same mean ranks.
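The pooled tie-averaged ranking above can be sketched in a few lines (my own helper, purely illustrative):

```python
def pooled_mean_ranks(*samples):
    """Mean of 1-based ranks over the pooled data; ties get the average rank."""
    pooled = sorted(v for s in samples for v in s)
    def rank(v):
        lo = pooled.index(v) + 1              # first position of v
        hi = lo + pooled.count(v) - 1         # last position of v
        return (lo + hi) / 2                  # average over tied positions
    return [sum(rank(v) for v in s) / len(s) for s in samples]

A = [20, 20, 20, 20, 25]
B = [1, 2, 3, 4, 5]
print(pooled_mean_ranks(A, B))  # [8.0, 3.0]
```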
|
45,566
|
Cyclicality in causal relationships
|
Because causes must precede effects, acyclic is preferred. Ultimately, there can be no true cycles: if event $A$ causes event $B,$ then $A$ must precede $B.$ The time $t_a$ at which $A$ occurs must be smaller than the time $t_b$ at which $B$ occurs, for time flowing in the usual direction. If $t_a<t_b,$ it is impossible for $t_b<t_a,$ and hence impossible for $B$ to precede $A.$ Therefore, $B$ cannot cause $A.$
That said, there are definitely feedback loops, both in nature and in engineering. Let's say you have variables $A(t)$ and $B(t),$ and you know that $A(t)\to B(t+1).$ But then $B$ feeds back into $A$ at a later time, so you might have $B(t+1)\to A(t+2).$ If you think of variables at different times as different variables, you can model the feedback loop without using a cyclic graph. You lose something in that model, of course: the relationship between $A(t)$ and $A(t+2).$ You might mitigate that somewhat if you include the direct arrow $A(t)\to A(t+2).$
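The time-unrolling trick can be checked mechanically: once each variable-at-a-time becomes its own node, a topological sort succeeds, confirming the feedback loop has become acyclic. (This is my own illustration; the node and edge names are invented.)

```python
from collections import deque

# Unroll the feedback loop into time-indexed nodes: A(0) -> B(1) -> A(2),
# plus the direct arrow A(0) -> A(2) mentioned above
edges = {
    ("A", 0): [("B", 1), ("A", 2)],
    ("B", 1): [("A", 2)],
    ("A", 2): [],
}

# Kahn's algorithm: a complete topological order exists iff the graph is acyclic
indegree = {n: 0 for n in edges}
for targets in edges.values():
    for t in targets:
        indegree[t] += 1
queue = deque(n for n, d in indegree.items() if d == 0)
order = []
while queue:
    n = queue.popleft()
    order.append(n)
    for t in edges[n]:
        indegree[t] -= 1
        if indegree[t] == 0:
            queue.append(t)
print(len(order) == len(edges))  # True: the unrolled graph is acyclic
```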
|
45,567
|
How to decide the best form of BMI used in cox regression, categorical or continuous?
|
BMI might be associated continuously with outcome but not necessarily linearly. The best way to test that is to fit BMI as a continuous predictor flexibly, for example with restricted cubic splines as in the rms package in R. If you use the tools in that package, then you can use its anova() function to test the significance of the continuous fit overall and of the non-linear terms in particular.
There is almost never anything to be gained by categorizing a continuous variable. If someone insists that you do it anyway, compare the Akaike Information Criterion (AIC) of the models fit continuously and with categorization. I suspect that the fit will be better with a flexibly fit continuous variable.
One question to consider is whether BMI, itself a derived variable, is useful. It's quite possible that fitting both its components, height and weight, would work better.
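A toy version of the AIC comparison (entirely my own simulation, with a quadratic term standing in for a restricted cubic spline and invented parameter values; the answer's R/rms workflow is the real recommendation):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
bmi = rng.uniform(18, 40, n)
# Purely illustrative U-shaped relationship (not real data)
y = 0.05 * (bmi - 27) ** 2 + rng.normal(0, 1, n)

def aic_ols(X, y):
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    k = X.shape[1] + 1                      # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

# Flexible continuous fit (quadratic here as a stand-in for a spline)
aic_cont = aic_ols(np.column_stack([bmi, bmi ** 2]), y)
# Categorized fit with three conventional BMI bins
cats = np.digitize(bmi, [25, 30])
aic_cat = aic_ols(np.column_stack([cats == 1, cats == 2]).astype(float), y)
print(aic_cont < aic_cat)  # True: the continuous fit is preferred
```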
|
45,568
|
Simple constant-width prediction interval for a regression model
|
As described in your question, you should expect the prediction intervals in production to have the desired coverage.
During training you come up with a fitted model $f$, which need not be equal to, or even close to, the true data generating function.
Using the test set you get a sample $e_1,\dots ,e_{n_\text{test}}$. By Glivenko-Cantelli as $n_\text{test}\rightarrow \infty$ the empirical CDF of $e_1,\dots ,e_{n_\text{test}}$ should converge to the true CDF of $e_i$.
Because of the above result, using the empirical CDF you can come up with an interval such that for a new independent set of observations in production, for each $e_i^*=y_i^*-f(x_i^*)$, you will have $$P\left(e_i^*\in[e_{low},e_{high}]\right)\approx p \Rightarrow P\left(y_i^*\in[e_{low}+f(x_i^*),e_{high}+f(x_i^*)]\right)\approx p.$$
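A small simulation of this recipe (my own sketch; the model, data-generating process, and sample sizes are invented): fit nothing, just take empirical error quantiles on a held-out set and check coverage on fresh data.

```python
import random

random.seed(0)
f = lambda x: 2 * x + 1                 # some fixed model; it need not be the truth

def draw(n):
    """i.i.d. pairs with true relationship y = 2x + 1.5 + noise."""
    xs = [random.gauss(0, 1) for _ in range(n)]
    ys = [2 * x + 1.5 + random.gauss(0, 1) for x in xs]
    return xs, ys

# Empirical error CDF on the test set gives the interval [e_low, e_high]
xs, ys = draw(5000)
errs = sorted(y - f(x) for x, y in zip(xs, ys))
e_low, e_high = errs[int(0.05 * len(errs))], errs[int(0.95 * len(errs))]

# Coverage on fresh "production" data is close to the nominal 90%
xp, yp = draw(5000)
cover = sum(e_low <= y - f(x) <= e_high for x, y in zip(xp, yp)) / len(xp)
print(round(cover, 2))
```

Note the interval has constant width: it is centered on $f(x)$ but its endpoints do not depend on $x$.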
|
45,569
|
Simple constant-width prediction interval for a regression model
|
I think the fundamental thing to realize here is that if your test set is really a test set (used in no way for training) and your test data are really i.i.d., and test and production really have the same distribution, then $f$ can be considered as any other function determined independently of your data. The fact that $f$ was originally constructed using a training set is interesting biographical information about the function, but is irrelevant to the problem.
In particular, if $\{(x_i, y_i)\}$ are i.i.d. and drawn from the same distribution as production, then $\{e_i := f(x_i) - y_i\}$ are i.i.d. and from the same distribution as production. This is where we use the property that the test set was not used for training. If $f(x_i)$ were somehow dependent on $x_j$, $j \neq i$, as would be the case if the test set were used for training or model selection, then $\{e_i\}$ would not be mutually independent. And if $y_i$ were used to determine the value $f(x_i)$, again as would be the case if the test set were used for training, the distribution of $y - f(x)$ in the test set would be different than in production.
The empirical distribution function of $e$ converges to its true distribution function as the size of your test set grows, by the Glivenko-Cantelli theorem, and so your prediction intervals are correct.
It is worth pointing out explicitly that the convergence has nothing to do with any properties of the training set or the regression model, although maybe the speed of convergence does. The training set itself could be non-i.i.d., or even drawn from a different distribution than the test set and production. The only "condition" for the Glivenko-Cantelli theorem is that the sample used to construct the empirical distribution function is drawn independently from a single distribution.
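The Glivenko-Cantelli convergence is easy to see numerically (my own illustration, using a standard normal for concreteness): the sup-distance between the empirical CDF and the true CDF shrinks as the sample grows.

```python
import math, random

random.seed(1)
Phi = lambda x: 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF

def ecdf_sup_dist(n):
    """Kolmogorov distance between the empirical CDF of n i.i.d. N(0,1)
    draws and the true CDF; Glivenko-Cantelli says this -> 0 a.s."""
    xs = sorted(random.gauss(0, 1) for _ in range(n))
    return max(
        max(abs((i + 1) / n - Phi(x)), abs(i / n - Phi(x)))
        for i, x in enumerate(xs)
    )

small_n, large_n = ecdf_sup_dist(100), ecdf_sup_dist(10_000)
print(large_n < small_n)  # almost surely True: the distance shrinks with n
```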
|
45,570
|
Is there a generalized concept of noncentrality of a distribution?
|
It's hard to understand how to answer this question.
For any given hypothesis and any given test statistic, the distribution under an alternative hypothesis is considered a "non-central" version of the distribution of the same statistic under the null.
In some lucky cases, the test-statistic under the alternative hypothesis has a distribution which shares a parametric family with the distribution of the test statistic under the null. The Z-test is quite contrived in that regard.
Non-central chi-square, non-central F, and non-central T are in such widespread use that they are cited in much of the literature and software, and there are a few useful analytical results. If the non-central distribution is lucky enough to be available in closed form, we usually expect that the "central" counterpart belongs in the family, just like how a t-distribution is a non-central t with non-centrality parameter set to 0.
However, beyond this lies a whole cadre of distributions that are not described in the literature. Either they're too specially tooled to be of any generalizable (or didactic) use, or they aren't even available and have to be estimated numerically, i.e. simulated. In my experience, any remotely non-routine power calculation relies on simulation to identify the distribution underlying the test statistic. To the best of my knowledge, test statistics for hypotheses about fixed or random effects in mixed models, mediators in linear models, or treatment assignment in adaptive randomized tests are highly irregular when the null is false, and extensive simulation studies are as close as we can come to getting an understanding of the operating characteristics of the test.
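As a concrete instance of "simulate the distribution under the alternative" (my own sketch, with invented parameter values): the one-sample t statistic when the true mean is nonzero follows a noncentral t, and simulation recovers its location without any closed form.

```python
import random
import statistics as st

random.seed(2)
n, mu = 25, 0.5          # true mean under the alternative; H0: mu = 0

def one_sample_t():
    xs = [random.gauss(mu, 1) for _ in range(n)]
    return st.mean(xs) / (st.stdev(xs) / n ** 0.5)

# Under H1 the statistic is noncentral t with ncp = sqrt(n) * mu = 2.5;
# the simulated distribution is centered near that value, not at 0
ts = [one_sample_t() for _ in range(4000)]
print(round(st.mean(ts), 1))
```

The same Monte Carlo strategy extends to the irregular cases mentioned above, where no named noncentral family is available.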
|
45,571
|
Is there a generalized concept of noncentrality of a distribution?
|
I think a simple way to think about noncentral distributions is to consider how they're built from the normal distribution, e.g., a noncentral t variable is $\frac{Z+\mu}{\sqrt{V/\nu}}$, where $Z$ is standard normal and $V\sim\chi_\nu^2$. When the noncentrality parameter $\mu=0$, we have the standard normal in the numerator, and the distribution becomes the usual [central] Student t. Other noncentral distributions are constructed similarly. So, when your Gaussian variable has nonzero mean, that's when noncentrality occurs in these distributions.
Note, that the "central" version of the Student t distribution is $\frac{Z}{\sqrt{V/\nu}}$, which came up when analyzing the properties of estimated parameters of regression. The coefficients tend to be from the normal distribution with unknown variance, hence the formulation of the Student t from the normal variable in the numerator and the square root of $\chi^2$ variable in the denominator.
Skewed variants of these distributions have no explicit connection to the normal distribution; they are a generalization in a different direction, so to speak.
Naturally, the only logical noncentral extension of the standard normal variable is the Gaussian variable with nonzero mean. However, this is such a trivial case that nobody would call that distribution a "noncentral normal" variable, though you could if you wished.
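The construction can be simulated directly (my own sketch; the degrees of freedom and noncentrality value are arbitrary): draw $Z+\mu$ over $\sqrt{V/\nu}$ and observe the shifted location.

```python
import random

random.seed(3)
nu, mu = 10, 2.0   # degrees of freedom and noncentrality parameter

def noncentral_t():
    """(Z + mu) / sqrt(V / nu) with Z standard normal, V ~ chi-square(nu)."""
    z = random.gauss(0, 1)
    v = sum(random.gauss(0, 1) ** 2 for _ in range(nu))  # chi-square via sum of squares
    return (z + mu) / (v / nu) ** 0.5

draws = [noncentral_t() for _ in range(20_000)]
avg = sum(draws) / len(draws)
# With mu = 0 this would average ~0 (the central Student t);
# with mu = 2 the distribution is shifted and right-skewed
print(avg > 1.5)  # True
```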
|
45,572
|
Is there a generalized concept of noncentrality of a distribution?
|
I agree with Aksakal and AdamO, the non-central varieties are a result of investigating the power of a test. The test itself assumes a particular null hypothesis for the purposes of argument and inference using ex-post sampling probability as evidence. Power explores the ex-ante sampling probability of the test when in fact an alternative hypothesis is true. The non-centrality parameter is related to the true alternative hypothesis. For instance, think of calculating power when testing a binomial proportion by referencing a binomial CDF. The critical value is compared to a binomial sampling distribution under the alternative, and the shape of this distribution is different from the null sampling distribution. It is not a simple shift.
For, say, a Wald test we assume the standard error is known and not a function of the null hypothesis so the non-central distribution under the alternative is just a shifted normal distribution. The reason this is a simple shift is because the nuisance parameters are not profiled and are considered known. An interesting thing to note in this simple example is that the power function is the same as the p-value function. When using more complicated tests that profile nuisance parameters before treating them as known (e.g. score, LR) the p-value function works incredibly well at approximating power, meaning we can avoid non-central distributions altogether. When calculating a p-value this profiling works to account for having estimated the nuisance parameters even though they are assumed known. Here is a paper of mine that discusses approximating a power function using a p-value function.
Johnson, G. S. (2021). Decision Making in Drug Development via Inference on Power. Researchgate.net.
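As a concrete version of the binomial example (the numbers below are my own, not from the paper): the critical value comes from the null CDF, but power is read off the alternative's sampling distribution, which differs in shape, not just in location.

```r
# One-sided exact binomial test of H0: p = 0.5, n = 100, alpha = 0.05.
n <- 100; p0 <- 0.5; p1 <- 0.6; alpha <- 0.05
k <- qbinom(1 - alpha, n, p0)       # reject when X > k; P(X > k | p0) <= alpha
alpha_actual <- 1 - pbinom(k, n, p0)
power <- 1 - pbinom(k, n, p1)       # rejection probability under the alternative
# The alternative sampling distribution also has a different spread,
# so it is not a simple shift of the null distribution:
c(sd_null = sqrt(n * p0 * (1 - p0)), sd_alt = sqrt(n * p1 * (1 - p1)))
```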
|
45,573
|
Is there a generalized concept of noncentrality of a distribution?
|
The intuitive way to grasp noncentral distributions is through their central counterparts. There are several noncentral distributions like noncentral chi-squared, noncentral F, noncentral T, noncentral beta, noncentral negative hypergeometric, noncentral Wishart, and so on. All of them can be expressed as infinite mixtures of the corresponding central distribution. The weights of the mixture are usually Poisson probabilities (as in the first four), but could also be negative binomial weights (as in noncentral negative hypergeometric). A good starting point for you is R. Chattamvelli (1995), "A note on the noncentral beta distribution function", The American Statistician, vol 49, number 3.
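The Poisson-mixture representation is easy to verify numerically in R. For the noncentral chi-squared with noncentrality $\lambda$, the weights are Poisson($\lambda/2$) probabilities and the components are central chi-squareds with df, df + 2, df + 4, ... (the specific values below are mine):

```r
# Noncentral chi-squared CDF as a Poisson(lambda/2) mixture of central
# chi-squared CDFs with degrees of freedom df, df + 2, df + 4, ...
df <- 4; lambda <- 3; x <- 6
k <- 0:200   # truncate the infinite sum; the Poisson tail is negligible here
mixture <- sum(dpois(k, lambda / 2) * pchisq(x, df + 2 * k))
builtin <- pchisq(x, df, ncp = lambda)
c(mixture = mixture, builtin = builtin)   # agree to numerical precision
```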
Hope this helps
|
45,574
|
Why doesn't this work as a backdoor?
|
Your assumption is that conditioning on a variable (i.e., $X_4$) blocks all paths through that variable, but that is not so. Conditioning on a variable opens a path between the antecedents of the variable. $X_1$ and $X_2$ are d-connected after conditioning on $X_4$. $X_4$ is a collider of $X_1$ and $X_2$.
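A small R simulation of this structure (my own toy numbers): $X_1$ and $X_2$ are independent causes, $X_4$ is their common effect. Marginally they are uncorrelated, but adjusting for the collider $X_4$ induces a (here negative) association between them.

```r
# Conditioning on a collider opens a path between its causes.
set.seed(1)
n <- 2e4
x1 <- rnorm(n)
x2 <- rnorm(n)               # independent of x1
x4 <- x1 + x2 + rnorm(n)     # common effect (collider)
cor(x1, x2)                  # approximately 0: marginally independent
# "Conditioning" on x4 by regression adjustment: the residual correlation
# (partial correlation of x1 and x2 given x4) is clearly nonzero.
cor(resid(lm(x1 ~ x4)), resid(lm(x2 ~ x4)))
```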
|
45,575
|
Plot profile likelihood
|
I'll use $\mu_i = \eta_1 - 2\theta\eta_2x_i + \eta_2 x_i^2$ for convenience. If we're thinking of $\mu_i$ as a function of $\theta$, so only $\eta_1$ and $\eta_2$ are parameters, then we can write this as
$$
\mu_i = \eta_1 + \eta_2(-2\theta x_i + x_i^2) = \eta_1 + \eta_2 z_i
$$
for $z_i = -2\theta x_i + x_i^2$. This is just a simple linear regression now so
$$
\begin{aligned}
&\hat\eta_1 = \bar y - \hat\eta_2 \bar z \\&
\hat\eta_2 = \frac{\sum_i (z_i -\bar z)(y_i - \bar y)}{\sum_i (z_i - \bar z)^2}
\end{aligned}
$$
so all together the profiled log-likelihood is
$$
\ell_p(\theta) = \ell(\hat\eta_1(\theta), \theta, \hat\eta_2(\theta)) \\
= -\frac n2 \log 2\pi\sigma^2 - \frac 1{2\sigma^2}\sum_{i=1}^n (y_i - \hat \eta_1(\theta) - \hat \eta_2(\theta)\cdot(- 2\theta x_i + x_i^2))^2.
$$
Here's an example in R:
set.seed(132)
theta <- 1.23; eta1 <- -.55; eta2 <- .761
sigma <- .234
n <- 500
x <- rnorm(n, -.5)
y <- eta1 - 2 * theta * eta2 * x + eta2 * x^2 + rnorm(n, 0, sigma)
profloglik <- function(theta, sigma, x, y) {
z <- -2 * theta * x + x^2 # creating the new feature in terms of theta
mod <- lm(y ~ z) # using `lm` to do the simple linear regression
sum(dnorm(y, fitted(mod), sigma, log=TRUE)) # log likelihood
}
theta_seq <- seq(-10, 10, length=500)
liks <- sapply(theta_seq, profloglik, sigma=sigma, x=x, y=y)
plot(liks ~ theta_seq, type="l", lwd=2,
main=bquote("Profiled log-likelihood for" ~ theta),
ylab="profiled log lik", xlab=bquote(theta))
|
45,576
|
Plot profile likelihood
|
This is an appendix to @jld's answer (+1), which assumes that the error variance $\sigma^2$ is known.
Alternatively, we can treat $\sigma^2$ as another parameter to maximize while profiling the log-likelihood for $\theta$. This is straightforward to do in a linear regression:
$$
\begin{aligned}
\widehat{\sigma}_\mu^2 = \frac{1}{n}\sum_i(y_i - \mu_i)^2
\end{aligned}
$$
The updated profile log-likelihood plot illustrates how eliminating $\sigma^2$ by maximizing it instead of fixing it to a specific value concentrates the inference on the parameter of interest $\theta$. The vertical red line is at the true value $\theta = 1.23$.
Following a suggestion by @kjetilbhalvorsen, I tried to overlay the two graphs on the same plot. This is hard to do when plotting log-likelihoods: notice how different the y-axis limits are between @jld's graph and mine. So instead I plot the profile likelihood, scaled so that the upper limit on the y-axis is 1: $L_P(\theta) / \max L_P(\theta) = L_P(\theta) / L_P(\widehat{\theta}_{MLE})$. I also limit the x-axis to the range of $\theta$ where the profile likelihood is most regular (i.e. most like a quadratic function). Outside of that range $L_P(\theta)$ is negligible.
For fun, I add the profile likelihood at two other fixed values for the error standard deviation: 1.2$\sigma$ and 0.8$\sigma$. Both values are "wrong" and lead to worse inference for $\theta$ than when we estimate $\widehat{\sigma}$: with 1.2$\sigma$ we underestimate how much we learn about $\theta$ from the data and with 0.8$\sigma$ we ignore (unknown) variability. In this example the difference among the four choices for the error variance are small. However, it still illustrates that in general — unless we know the true value of a parameter or have a very accurate estimate of it — we are better off eliminating the nuisance parameter by maximizing it rather than plugging in a wrong value.
I also calculate likelihood intervals c = 15% as described in the book "In All Likelihood" by Yudi Pawitan. See Section 2.6, Likelihood-based intervals. These confirm numerically what we observe in the profile likelihood plot.
confints
#> c lower upper
#> sigma.hat 0.15 1.059856 1.309477
#> sigma.true 0.15 1.066958 1.300096
#> sigma.true*1.2 0.15 1.046815 1.327167
#> sigma.true*0.8 0.15 1.087611 1.273799
Updated R code. It's mostly the same as @jld's original code, with the addition of maximizing the error variance $\sigma^2$ and computing likelihood intervals.
set.seed(132)
theta <- 1.23
eta1 <- -.55
eta2 <- .761
sigma <- .234
# Use a small sample.
# Otherwise the MLE of sigma is a very good estimate to the true sigma.
n <- 75
x <- rnorm(n, -.5)
y <- eta1 - 2 * theta * eta2 * x + eta2 * x^2 + rnorm(n, 0, sigma)
profloglik <- function(theta, x, y, sigma = NULL) {
z <- -2 * theta * x + x^2 # creating the new feature in terms of theta
mod <- lm(y ~ z) # using `lm` to do the simple linear regression
mu <- fitted(mod)
if (is.null(sigma)) {
# Maximum likelihood estimate of the error variance given the mean(s)
s2 <- mean((y - mu)^2)
sigma <- sqrt(s2)
}
sum(dnorm(y, fitted(mod), sd = sigma, log = TRUE)) # log likelihood
}
theta_seq <- seq(-10, 10, length = 500)
logliks <- sapply(theta_seq, profloglik, x = x, y = y, sigma = NULL)
plot(
logliks ~ theta_seq,
type = "l", lwd = 2,
main = bquote("Profile log-likelihood for" ~ theta),
xlab = bquote(theta),
ylab = bquote(log ~ L[p](theta))
)
abline(v = theta, lwd = 2, col = "#DF536B")
# Compute likelihood intervals for a scalar theta at the given c levels.
# This implementation is based on the program `li.r` for computing likelihood
# intervals which accompanies the book "In All Likelihood" by Yudi Pawitan.
# https://www.meb.ki.se/sites/yudpaw/book/
confint_like <- function(theta, like, c = 0.15) {
theta.mle <- mean(theta[like == max(like)])
theta.below <- theta[theta < theta.mle]
if (length(theta.below) < 2) {
lower <- min(theta)
} else {
like.below <- like[theta < theta.mle]
lower <- approx(like.below, theta.below, xout = c)$y
}
theta.above <- theta[theta > theta.mle]
if (length(theta.above) < 2) {
upper <- max(theta)
} else {
like.above <- like[theta > theta.mle]
upper <- approx(like.above, theta.above, xout = c)$y
}
data.frame(c, lower, upper)
}
theta_seq <- seq(0.9, 1.5, length = 500)
logliks0 <- sapply(theta_seq, profloglik, x = x, y = y, sigma = NULL) # Use the MLE.
logliks1 <- sapply(theta_seq, profloglik, x = x, y = y, sigma = sigma)
logliks2 <- sapply(theta_seq, profloglik, x = x, y = y, sigma = sigma * 1.2)
logliks3 <- sapply(theta_seq, profloglik, x = x, y = y, sigma = sigma * 0.8)
liks0 <- exp(logliks0 - max(logliks0))
liks1 <- exp(logliks1 - max(logliks1))
liks2 <- exp(logliks2 - max(logliks2))
liks3 <- exp(logliks3 - max(logliks3))
confints <- rbind(
confint_like(theta_seq, liks0),
confint_like(theta_seq, liks1),
confint_like(theta_seq, liks2),
confint_like(theta_seq, liks3)
)
row.names(confints) <- c("sigma.hat", "sigma.true", "sigma.true*1.2", "sigma.true*0.8")
confints
plot(
theta_seq, liks0,
type = "l", lwd = 2,
main = bquote("Profile likelihood for" ~ theta),
xlab = bquote(theta),
ylab = bquote(L[p](theta))
)
lines(theta_seq, liks1, lwd = 2, col = "#CD0BBC")
lines(theta_seq, liks2, lwd = 2, col = "#2297E6")
lines(theta_seq, liks3, lwd = 2, col = "#28E2E5")
legend(
"topright",
legend = c(
bquote(widehat(sigma)),
bquote(sigma[true]),
bquote(sigma[true] %*% 1.2),
bquote(sigma[true] %*% 0.8)
),
col = c("black", "#CD0BBC", "#2297E6", "#28E2E5"), lty = 1
)
|
45,577
|
How should I proceed when the minimum sample size in an experiment is not reached?
|
Elaborating a bit on Jeremy's answer, let's think for a minute about what a power analysis is. The purpose is to determine how many participants one would need to "detect" an effect of a specific size. So in discussing the results of your experiment vis-à-vis the sample size you originally designed for, and what the pandemic (unforeseen circumstances) led to, you should do so in the context of the estimated effect size of your current experiment. Is the effect size about what was expected a priori? Bigger? Smaller? Would a larger sample size potentially have changed the results of the hypothesis test(s) of this particular effect? That kind of discussion is what is called for, since you're being honest about why you ended up with the N that you did. Then you're contextualizing the resulting hypothesis tests through the lens of effect sizes.
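One way to make that discussion concrete (with hypothetical sample sizes of my own): report the effect size the study was powered to detect at the planned N alongside the smallest effect detectable at the achieved N, e.g. via power.t.test for a two-group comparison.

```r
# Smallest standardized effect (Cohen's d, since sd = 1 by default) detectable
# with 80% power at the planned vs. achieved per-group sample sizes.
planned_n  <- 64   # hypothetical planned N per group
achieved_n <- 50   # hypothetical achieved N per group
d_planned  <- power.t.test(n = planned_n,  power = 0.80)$delta
d_achieved <- power.t.test(n = achieved_n, power = 0.80)$delta
c(planned = d_planned, achieved = d_achieved)  # smaller N detects only larger effects
```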
|
45,578
|
How should I proceed when the minimum sample size in an experiment is not reached?
|
I would just explain what happened. You powered for N, and you got N*. It's not the first time this has happened (and won't be the last).
Post hoc power would not be especially useful (as you have realized).
|
45,579
|
Getting understand HAC estimators
|
In a linear model, we have $\hat\beta = (X^TX)^{-1}X^TY$.
A basic property of variances and matrices is that
$$\mathrm{var}[A^TY] = A^T\mathrm{var}[Y]A$$
So
$$\mathrm{var}[\hat\beta] = (X^TX)^{-1}X^T \mathrm{var}[Y] X(X^TX)^{-1}$$
It's usual when considering HAC estimators to break this into three pieces, two of which are the same, hence the name "sandwich"
$$\mathrm{var}[\hat\beta] = I^{-1} H I^{-1}$$
I'm not going to do quite that. Instead, I'm going to write
$$\hat\beta-\beta = (X^TX)^{-1}X^T (Y-X\beta)$$
which works because $(X^TX)^{-1}(X^TX)\beta=\beta$
and note that
$$\hat\beta-\beta = \sum_{i=1}^n h_i(\beta)$$
where
$$h_i(\beta)=(X^TX)^{-1} x_i(y_i-x_i\beta).$$
These are the influence functions. They have mean zero and each one almost depends on only one $y_i$. I say 'almost' because they all depend on $(X^TX)^{-1}$, but that is an average of $n$ observations and so is effectively constant for large $n$. If we were doing asymptotics, we'd replace it by its limiting value.
By basically the definition of covariance
$$\mathrm{var}\left[\sum_i h_i(\beta)\right] = \sum_{i,j} \mathrm{cov}[h_i(\beta),h_j(\beta)]$$
We know that $E[h_i(\beta)h_j(\beta)]=\mathrm{cov}[h_i(\beta),h_j(\beta)]$ (because they have zero mean), and we might hope to estimate it by $h_i(\hat\beta)h_j(\hat\beta)$.
For any individual $(i,j)$ that's a terrible measurement, but it is approximately unbiased (it would be exactly unbiased if we evaluated it at the true $\beta$, but if we knew the true $\beta$ we wouldn't be doing any of this). Since it's (approximately) unbiased, we have a reasonable hope that the law of large numbers will turn the sum of these things into a good estimate of the sum of the variances.
Sadly, it doesn't. For a start, since we know $\sum_i h_i(\hat\beta)=0$ by construction, $\sum_{i,j} h_i(\hat\beta)h_j(\hat\beta)=0$.
However, we can rescue the estimator with a bias-variance tradeoff on the covariance terms. Suppose we assume that $i$ indexes time and that observations well-separated in time are very nearly independent. That's reasonable: an ARIMA model has exponentially decaying correlations. We could then estimate $\mathrm{cov}[h_i(\beta),h_j(\beta)]$ by 0 if $|i-j|$ is large enough, and use $h_i(\hat\beta)h_j(\hat\beta)$ when $|i-j|$ is small.
This does work.
It also works for spatial data and for various sparse correlation models. The proofs get a bit detailed, especially if you want nearly optimal conditions, because there is uniform convergence to be proved. The general form of the result, though, is fairly straightforward.
Write ${\cal N}$ (neighbours) for the set of $(i,j)$ such that $\mathrm{cov}[h_i(\beta),h_j(\beta)]$ is not small. If
1. $|{\cal N}|$ is much smaller than $n^2$ (e.g. $O_p(n^{2-\delta})$), and
2. the sum of the true $\mathrm{cov}[h_i(\beta),h_j(\beta)]$ over pairs not in ${\cal N}$ is small (goes to zero),
then the HAC estimator is pretty good (is consistent). You can improve things a bit by not taking a binary yes/no decision but instead taking $w_{ij} h_i(\hat\beta)h_j(\hat\beta)$ for some $0<w_{ij}<1$. Most of the HAC estimators do this.
That was all for the linear model, but (apart from a bit of smoothness and moment assumptions) the only property of the linear model we used was that each $h_i$ depends (approximately) on only one observation, and that the $h_i$ add up to $\hat\beta-\beta$. If you weaken the latter to "add up to $\hat\beta-\beta$ plus an error of smaller order", that's a definition of an influence function, and you can find influence functions for generalised linear models, the Cox model, and many other parametric regression models, and then use the same approach to get HC and HAC variance estimators.
The sandwich package also has some sophisticated improvements that give you slightly better performance (and noticeably better in small samples), but this is the basic idea.
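A hand-rolled illustration of the idea in R (a sketch, not the sandwich implementation; the AR(1) design, the lag window L, and the Bartlett-type weights $w_{ij}$ are my own choices):

```r
# Manual HAC variance for a linear model via the influence functions
# h_i = (X'X)^{-1} x_i (y_i - x_i beta), with weights w_ij that fade to 0
# as |i - j| grows (Bartlett-type window of width L).
set.seed(1)
n <- 200
x <- rnorm(n)
e <- as.numeric(arima.sim(list(ar = 0.6), n))  # autocorrelated errors
y <- 1 + 2 * x + e
X <- cbind(1, x)
beta_hat <- solve(crossprod(X), crossprod(X, y))
r <- as.vector(y - X %*% beta_hat)
h <- solve(crossprod(X)) %*% t(X * r)          # column i is h_i(beta_hat)
L <- 10
w <- outer(1:n, 1:n, function(i, j) pmax(0, 1 - abs(i - j) / L))
V_hac <- h %*% w %*% t(h)                      # sum over i, j of w_ij h_i h_j'
sqrt(diag(V_hac))                              # HAC standard errors
```

With $w_{ij} = 1$ only when $i = j$ this collapses to the usual HC ("sandwich") estimator; the fading weights are what add the "AC" part.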
|
Getting understand HAC estimators
|
In a linear model, we have $\hat\beta = (X^TX)^{-1}X^TY$.
A basic property of variances and matrices is that
$$\mathrm{var}[A^TY] = A^T\mathrm{var}[Y]A$$
So
$$\mathrm{var}[\hat\beta] = (X^TX)^{-1}X^T
|
Getting understand HAC estimators
In a linear model, we have $\hat\beta = (X^TX)^{-1}X^TY$.
A basic property of variances and matrices is that
$$\mathrm{var}[A^TY] = A^T\mathrm{var}[Y]A$$
So
$$\mathrm{var}[\hat\beta] = (X^TX)^{-1}X^T \mathrm{var}[Y] X(X^TX)^{-1}$$
It's usual when considering HAC estimators to break this into three pieces, two of which are the same, hence the name "sandwich"
$$\mathrm{var}[\hat\beta] = I^{-1} H I^{-1}$$
I'm not going to do quite that. Instead, I'm going to write
$$\hat\beta-\beta = (X^TX)^{-1}X^T (Y-X\beta)$$
which works because $(X^TX)^{-1}(X^TX)\beta=\beta$
and note that
$$\hat\beta-\beta = \sum_{i=1}^n h_i(\beta)$$
where
$$h_i(\beta)=(X^TX)^{-1} x_i(y_i-x_i\beta).$$
These are the influence functions. They have mean zero and each one almost depends on only one $y_i$. I say 'almost' because they all depend on $(X^TX)^{-1}$, but that is an average of $n$ observations and so is effectively constant for large $n$. If we were doing asymptotics, we'd replace it by its limiting value.
By basically the definition of covariance
$$\mathrm{var}\left[\sum_i h_i(\beta)\right] = \sum_{i,j} \mathrm{cov}[h_i(\beta),h_j(\beta)]$$
We know that $E[h_i(\beta)h_j(\beta)=\mathrm{cov}[h_i(\beta),h_j(\beta)]$ (because they have zero mean), and we might hope to estimate it by $h_i(\hat\beta)h_j(\hat\beta)$.
For any individual $(i,j)$ that's a terrible measurement, but it is approximately unbiased (it would be exactly unbiased if we evaluated it at the true $\beta$, but if we knew the true $\beta$ we wouldn't be doing any of this). Since it's (approximately) unbiased, we have a reasonable hope that the law of large numbers will turn the sum of these things into a good estimate of the sum of the variances.
Sadly, it doesn't. For a start, since we know $\sum_i h_i(\hat\beta)=0$ by construction, $\sum_{i,j} h_i(\hat\beta)h_j(\hat\beta)=0$.
However, we can rescue the estimator with a bias:variance tradeoff on the covariance terms. Suppose we assume that $i$ indexes time and that observations well-separately in time are very nearly independent. That's reasonable: an ARIMA model has exponentially decaying correlations. We could then estimate $\mathrm{cov}[h_i(\beta),h_j(\beta)]$ by 0 if $|i-j|$ is large enough, and use $h_i(\hat\beta)h_j(\hat\beta)$ when $|i-j|$ is small.
This does work.
It also works for spatial data and for various sparse correlation models. The proofs get a bit detailed, especially if you want nearly optimal conditions, because there is uniform convergence to be proved. The general form of the result, though, is fairly straightforward
Write ${\cal N}$ (neighbours) for the set of $(i,j)$ such that $\mathrm{cov}[h_i(\beta),h_j(\beta)]$ is not small. If
$|{\cal N}|$ is much smaller than $n^2$, (eg $O_p(n^{2-\delta})$)
The sum of the true $\mathrm{cov}[h_i(\beta),h_j(\beta)]$ over pairs not in ${\cal N}$ is small (goes to zero)
then the HAC estimator is pretty good (is consistent). You can improve things a bit by not taking a binary yes/no decision but instead taking $w_{ij} h_i(\hat\beta)h_j(\hat\beta)$ for some $0<w_{ij}<1$. Most of the HAC estimators do this.
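To make the weighted double sum concrete, here is a minimal sketch (in Python rather than R, with made-up data and a Bartlett/Newey-West style weight $w_{ij}=\max(0,\,1-|i-j|/(L+1))$) of a HAC variance estimate for the simplest case, the sample mean, whose influence functions are $h_i=(x_i-\bar x)/n$:

```python
def bartlett_weight(i, j, L):
    """Bartlett kernel: down-weights covariances at larger lags, zero beyond lag L."""
    return max(0.0, 1.0 - abs(i - j) / (L + 1))

def hac_var_of_mean(x, L):
    """HAC variance estimate for the sample mean, as the weighted double sum
    sum_ij w_ij * h_i * h_j over influence functions h_i = (x_i - xbar)/n."""
    n = len(x)
    xbar = sum(x) / n
    h = [(xi - xbar) / n for xi in x]
    return sum(bartlett_weight(i, j, L) * h[i] * h[j]
               for i in range(n) for j in range(n))

x = [1.0, 2.0, 2.5, 1.5, 3.0, 2.0, 1.0, 2.5]
# With L = 0 only the diagonal survives, recovering the usual
# independence-based estimate sum_i h_i^2:
print(hac_var_of_mean(x, 0))
print(hac_var_of_mean(x, 2))  # lets nearby covariances contribute
```

With $L=0$ this collapses to the ordinary heteroscedasticity-consistent (HC) estimate; larger $L$ trades bias for variance exactly as described above.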
That was all for the linear model, but (apart from a bit of smoothness and some moment assumptions) the only property of the linear model we used was that each $h_i$ depends (approximately) on only one observation, and that the $h_i$ add up to $\hat\beta-\beta$. If you weaken the latter part to "add up to $\hat\beta-\beta$ plus an error of smaller order", that's a definition of an influence function, and you can find influence functions for generalised linear models, the Cox model, and many other parametric regression models, and just use the same approach to get HC and HAC variance estimators.
The sandwich package also has some sophisticated improvements that give you slightly better performance (and noticeably better in small samples), but this is the basic idea.
|
45,580
|
Getting understand HAC estimators
|
The ideas behind the HAC estimators implemented in the sandwich package are explained in an accompanying paper that is also listed in the references in ?vcovHAC:
Zeileis A (2004).
"Econometric Computing with HC and HAC Covariance Matrix Estimators."
Journal of Statistical Software, 11(10), 1-17.
doi:10.18637/jss.v011.i10
The paper also provides further links to the relevant literature and explains what you can do with the estimated variance-covariance matrix in R. Typically, you plug it into functions that allow you to test the coefficients of your model based on Wald-type tests, e.g., coeftest(), coefci(), and waldtest() from the lmtest package or linearHypothesis(), Anova(), or deltaMethod() from the car package.
|
45,581
|
Is a sample i.i.d or is a collection of random variables i.i.d.?
|
From Wikipedia, two Random Variables (RVs) (remark: you can generalize this to any number of RVs) are independent and identically distributed (i.i.d.) if their Cumulative Distribution Function (CDF) is the same for any element of the domain $I$ and if their joint CDF factorizes in the product of the marginal CDFs. This means that:
$${\begin{aligned}&F_{X}(x)=F_{Y}(x)\,&\forall x\in I\\&F_{X,Y}(x,y)=F_{X}(x)\cdot F_{Y}(y)\,&\forall x,y\in I\end{aligned}}$$
(Note that this also implies that their pdfs are the same almost everywhere, i.e. on the whole domain except for sets of measure zero; but this is a technical condition, so don't worry about it.)
Realizations of an RV are usually referred to as samples, i.e., roughly speaking, their outcomes. The assumption that samples generated by an RV are i.i.d. simply refers to the fact that the underlying RVs, whose realizations you observe in the samples, are i.i.d.
So replying to your questions:
they are essentially the same thing.
You can regard it as repeated measurements of one random variable since the CDFs of two i.i.d. RVs are the same.
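A quick numerical illustration of the factorisation property (a Python sketch; the standard normal distribution, sample size, and evaluation point are arbitrary choices): for i.i.d. pairs $(X,Y)$, the empirical joint CDF at a point should be close to the product of the empirical marginals.

```python
import random

random.seed(42)
n = 200_000
pairs = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]

a = b = 0.0
joint = sum(1 for x, y in pairs if x <= a and y <= b) / n   # estimates F_{X,Y}(a,b)
marg_x = sum(1 for x, _ in pairs if x <= a) / n             # estimates F_X(a)
marg_y = sum(1 for _, y in pairs if y <= b) / n             # estimates F_Y(b)

print(joint, marg_x * marg_y)  # both close to 0.25 for standard normals
```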
|
45,582
|
How to fit this linear regression with constraints?
|
The model is overparametrised: you don't need $\beta_1$, which can be set to anything convenient, like 1.
One thing I thought of was to fit iteratively. Start out with some guess at $w$ and $\beta_2$. Then compute $Z=(\sum_i \hat{w}_iX_i)^2$ and fit the linear model
Y~ X1+X2+...+X_k + Z
The coefficients of the $X$s are the new $\hat{w}_i$, and the coefficient of $Z$ is $\hat \beta_2$. And then recompute Z, iterate and hope it converges. Sadly, it doesn't.
But if $k$ isn't too large, it's easy to just compute the residual sum of squares as a function of the parameters and run it through a general purpose optimiser. In R I'd use minqa::newuoa, but there are lots of alternatives.
> X<-matrix(rnorm(50*100),ncol=5)
> w<-1:5
> Y<- (X%*%w)+2*(X%*%w)^2+rnorm(100)
>
>
> rss<-function(theta){
+ beta2<-theta[1]
+ w<-theta[-1]
+ mu<- (X%*%w)+beta2*(X%*%w)^2
+ sum((Y-mu)^2)
+ }
>
> minqa::newuoa(par=rep(1,6), rss)
parameter estimates: 1.99478699135839, 1.00032043499982, 2.00140284432351, 3.00312315850919, 4.00284240744153, 5.00537517104468
objective: 1047.51402563294
number of function evaluations: 1689
Then use the bootstrap to get standard error estimates.
With $k=50$ it doesn't work (without tuning -- I'm sure it would work if the optimiser defaults were changed or the starting values were better)
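The same approach ports to other languages. Here is a Python sketch of the residual-sum-of-squares objective (simulated data; $k$, the coefficients, and the noise level below are made up, and $\beta_1$ is fixed at 1 as suggested above); it just checks that the true parameters score a much lower RSS than a naive guess, which is what any general-purpose optimiser would exploit:

```python
import random

random.seed(1)
k, n = 3, 500
w_true, beta2_true = [1.0, 2.0, 3.0], 2.0

X = [[random.gauss(0, 1) for _ in range(k)] for _ in range(n)]

def linpred(row, w):
    """Linear combination sum_i w_i * x_i for one observation."""
    return sum(wi * xi for wi, xi in zip(w, row))

Y = [linpred(row, w_true) + beta2_true * linpred(row, w_true) ** 2
     + random.gauss(0, 1) for row in X]

def rss(theta):
    """theta = (beta2, w_1, ..., w_k); beta1 is fixed at 1 (model is overparametrised)."""
    beta2, w = theta[0], theta[1:]
    return sum((y - (linpred(row, w) + beta2 * linpred(row, w) ** 2)) ** 2
               for row, y in zip(X, Y))

print(rss([beta2_true] + w_true))  # roughly n * sigma^2 = 500 at the truth
print(rss([1.0, 1.0, 1.0, 1.0]))   # much larger at a wrong guess
```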
|
45,583
|
How to fit this linear regression with constraints?
|
If you write out the expression, you get a polynomial in terms of $X_1,X_2,..,X_k$, including their interactions, where the new "coefficients" are all functions of the $\beta$s, the $w$s, and twos. For $k=2$, you get a polynomial that has 5 coefficients (or 6 including the intercept) with 4 unknowns:
$$ \begin{align*} Y &= \beta_0+(\beta_1w_1)X_1+(\beta_1w_2)X_2+(\beta_2w_1^2)X_1^2 + (\beta_2 w_2^2)X_2^2+(2\beta_2 w_1w_2)X_1X_2 +\varepsilon \\ &= \alpha_0+\alpha_1X_1+\alpha_2X_2+\alpha_3X_1^2 + \alpha_4X_2^2+\alpha_5X_1X_2 +\varepsilon \end{align*} $$
If you fit this regression, you will get the new $\alpha$ coefficients, which gives you a system of non-linear equations:
$$ \begin{align*} \alpha_0 &= \beta_0 \\ \alpha_1 &= \beta_1w_1 \\ \alpha_2 &= \beta_1w_2 \\ \alpha_3 & =\beta_2w_1^2\\ \alpha_4 &= \beta_2 w_2^2 \\ \alpha_5 &= 2\beta_2 w_1w_2 \end{align*} $$
In principle, that system of equations should be solvable numerically, at least sometimes. It should remain solvable for larger $k$, since you don't face the curse of dimensionality: each new variable adds only one parameter but multiple new equations that help pin it down.
Here's a toy $k=2$ simulation example using Stata where I ignore the intercept equation since it is trivial:
. clear
. set obs 1000
number of observations (_N) was 0, now 1,000
. set seed 10011979
. gen b0 = 1
. gen b1 = 2
. gen b2 = 3
. gen w1 = 4
. gen w2 = 5
. gen x1 = rnormal(0,1)
. gen x2 = rnormal(10,2)
. gen eps = rnormal()
. gen y = b0 + b1*(w1*x1 + w2*x2) + b2*(w1*x1 + w2*x2)^2 + eps
. reg y (c.x1 c.x2)##(c.x1 c.x2)
Source | SS df MS Number of obs = 1,000
-------------+---------------------------------- F(5, 994) > 99999.00
Model | 1.1237e+10 5 2.2475e+09 Prob > F = 0.0000
Residual | 1052.11816 994 1.05846897 R-squared = 1.0000
-------------+---------------------------------- Adj R-squared = 1.0000
Total | 1.1237e+10 999 11248523.6 Root MSE = 1.0288
------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 8.082131 .1573906 51.35 0.000 7.773275 8.390987
x2 | 9.852645 .110114 89.48 0.000 9.636562 10.06873
|
c.x1#c.x1 | 47.9813 .0233895 2051.40 0.000 47.9354 48.0272
|
c.x1#c.x2 | 119.9907 .0153233 7830.59 0.000 119.9606 120.0208
|
c.x2#c.x2 | 75.00664 .0053927 1.4e+04 0.000 74.99605 75.01722
|
_cons | 1.77947 .5532575 3.22 0.001 .693783 2.865156
------------------------------------------------------------------------------
.
. clear mata
. mata:
------------------------------------------------- mata (type end to exit) -----------------------------------------------------------------------------------------------------------------------------------------------
: void mysolver(todo, p, lnf, S, H)
> {
> b1 = p[1]
> b2 = p[2]
> w1 = p[3]
> w2 = p[4]
> lnf = (b1*w1 - 8.082131)^2\
> (b1*w2 - 9.852645)^2\
> (b2*w1^2 - 47.9813)^2\
> (b2*w2^2 - 75.00664)^2\
> (2*b2*w1*w2 - 119.9907)^2
> }
note: argument todo unused
note: argument S unused
note: argument H unused
:
: S = optimize_init()
: optimize_init_evaluator(S, &mysolver())
: optimize_init_evaluatortype(S, "v0")
: optimize_init_params(S, (1,1,1,1))
: optimize_init_which(S, "min" )
: optimize_init_tracelevel(S,"none")
: optimize_init_conv_ptol(S, 1e-16)
: optimize_init_conv_vtol(S, 1e-16)
: p = optimize(S)
: p
1 2 3 4
+---------------------------------------------------------+
1 | 2.1561597 3.521534782 3.691630188 4.614939185 |
+---------------------------------------------------------+
: end
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
The solution is not very good (unless you squint and round to the nearest integer), since $p = (2,3,4,5)$ in the simulation. The deeper problem is that the system is only identified up to a scaling: replacing $(\beta_1, \beta_2, w_1, w_2)$ with $(\beta_1/c,\ \beta_2/c^2,\ cw_1,\ cw_2)$ leaves every left-hand side unchanged, so the optimizer can stop anywhere along this curve (the reported solution corresponds to $c \approx 0.92$). This is the overparametrisation at work: one of $\beta_1$, $\beta_2$, or the scale of $w$ has to be pinned down by a normalisation. Even the intercept is somewhat off, with $b_0 = 1.77947 \ne 1$, though its standard error of 0.55 means it is not significantly different from 1.
Code:
cls
clear
set obs 1000
set seed 10011979
gen b0 = 1
gen b1 = 2
gen b2 = 3
gen w1 = 4
gen w2 = 5
gen x1 = rnormal(0,1)
gen x2 = rnormal(10,2)
gen eps = rnormal()
gen y = b0 + b1*(w1*x1 + w2*x2) + b2*(w1*x1 + w2*x2)^2 + eps
reg y (c.x1 c.x2)##(c.x1 c.x2)
clear mata
mata:
void mysolver(todo, p, lnf, S, H)
{
b1 = p[1]
b2 = p[2]
w1 = p[3]
w2 = p[4]
lnf = (b1*w1 - 8.082131)^2\
(b1*w2 - 9.852645)^2\
(b2*w1^2 - 47.9813)^2\
(b2*w2^2 - 75.00664)^2\
(2*b2*w1*w2 - 119.9907)^2
}
S = optimize_init()
optimize_init_evaluator(S, &mysolver())
optimize_init_evaluatortype(S, "v0")
optimize_init_params(S, (1,1,1,1))
optimize_init_which(S, "min" )
optimize_init_tracelevel(S,"none")
optimize_init_conv_ptol(S, 1e-16)
optimize_init_conv_vtol(S, 1e-16)
p = optimize(S)
p
end
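As a cross-check (a hypothetical Python snippet, not part of the Stata session above), the five left-hand sides are invariant under the rescaling $(\beta_1,\beta_2,w_1,w_2)\to(\beta_1/c,\,\beta_2/c^2,\,cw_1,\,cw_2)$, so the system has a whole curve of solutions rather than a unique one, which is why the solver need not land on $(2,3,4,5)$:

```python
def lhs(b1, b2, w1, w2):
    """Left-hand sides of the five estimating equations."""
    return (b1 * w1, b1 * w2, b2 * w1 ** 2, b2 * w2 ** 2, 2 * b2 * w1 * w2)

# The simulation's true values give (8, 10, 48, 75, 120):
base = lhs(2.0, 3.0, 4.0, 5.0)

# Any rescaled parameter vector reproduces exactly the same five values:
for c in (0.5, 2.0, 0.92):
    scaled = lhs(2.0 / c, 3.0 / c ** 2, 4.0 * c, 5.0 * c)
    print(c, all(abs(a - b) < 1e-9 for a, b in zip(base, scaled)))
```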
|
45,584
|
Dirichlet distribution vs Multinomial distribution?
|
Multinomial distribution is a discrete, multivariate distribution for $k$ variables $x_1,x_2,\dots,x_k$ where each $x_i \in \{0,1,\dots,n\}$ and $\sum_{i=1}^k x_i = n$. Dirichlet distribution is a continuous, multivariate distribution for $k$ variables $x_1,x_2,\dots,x_k$ where each $x_i \in (0,1)$ and $\sum_{i=1}^k x_i = 1$. In the first case, the support of the distribution is limited to a finite number of values; in the second case, the support is the infinite set of points in the unit simplex (each $x_i$ in the unit interval, the components summing to 1).
Does Dirichlet distribution serves the same purpose as a multinomial
distribution?
No. Multinomial is a distribution for counts, while Dirichlet is usually used as a distribution over probabilities.
What are the advantages/disadvantages of using Dirichlet over
multinomial distributions?
They are different things, and as you can learn from the Can a Multinomial(1/n, ..., 1/n) be characterized as a discretized Dirichlet(1, .., 1)? thread, they behave differently in higher dimensions. You would almost never use them interchangeably.
The exception is that in some cases, you might want to use a continuous distribution to approximate the discrete distribution, e.g. as you can approximate binomial (for large $n$), or Poisson distribution (for large $\lambda$) with Gaussian.
What makes the Dirichlet distribution different from a multinomial
distribution?
They are continuous vs discrete distributions.
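A small sketch makes the contrast concrete (Python, standard library only; the parameters below are arbitrary, and the Dirichlet draw uses the standard construction of normalising independent Gamma variables): multinomial draws are integer counts summing to $n$, Dirichlet draws are positive reals summing to 1.

```python
import random

random.seed(0)
k, n = 3, 20

# Multinomial(n, p): classify n trials into k categories by inverting the CDF of p.
p = [0.2, 0.3, 0.5]
counts = [0] * k
for _ in range(n):
    u, acc = random.random(), 0.0
    for i, pi in enumerate(p):
        acc += pi
        if u <= acc:
            counts[i] += 1
            break
print(counts, sum(counts))   # integer counts, summing to 20

# Dirichlet(alpha): normalise independent Gamma(alpha_i, 1) draws.
alpha = [1.0, 2.0, 3.0]
g = [random.gammavariate(a, 1.0) for a in alpha]
probs = [gi / sum(g) for gi in g]
print(probs, sum(probs))     # positive reals in (0,1), summing to 1
```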
|
45,585
|
Dirichlet distribution vs Multinomial distribution?
|
A first difference is that the multinomial distribution $\mathcal{M}(N, \mathbf{p})$ is discrete (it generalises the binomial distribution) whereas the Dirichlet distribution is continuous (it generalises the Beta distribution).
But if you were to let $N$ go to infinity in order to get an approximately continuous outcome, then the marginal distributions of the components of a multinomial random variable would become Gaussian, which has a different shape from the Dirichlet distribution.
Dirichlet is commonly used as a prior on a probability vector, since it is the conjugate prior of the multinomial distribution.
|
45,586
|
Can we get Moment Generating Function(MGF) from data?
|
Can we define an MGF from data?
The MGF of a random variable $X$ is defined to be
$$M(t) = \mathbf E\left[e^{tX}\right],$$
so given observed data $x_1,\ldots, x_n$ we can certainly define the empirical MGF to be
$$M(t; \underline x) = \frac1n \left( e^{tx_1} + \cdots + e^{tx_n}\right).$$
Is it useful?
The use of this empirical MGF is likely limited - in part due to it not admitting a simple closed formula, but also because many of the features that make the MGF useful for studying probability distributions, will not be relevant for the empirical MGF when we have small/moderate sample sizes.
I've set out a summary of some of the key reasons for studying MGFs of probability distributions at the end.
Is it less useful than the PDF?
Theoretically - no (in most cases).
Both the PDF and the MGF uniquely determine a probability distribution - so neither contains any information that the other does not. Which is more useful depends on what you want to do with the distribution.
For sampling, the PDF will be more useful; for calculating the mean, variance, and higher moments, the MGF may make things significantly easier.
It is worth noting, however, that not all distributions admit an MGF, for example the Cauchy distribution.
Probability vs Statistics
Finally - it's worth noting that the value of the MGF is arguably higher to a probabilist than a statistician - where I'm informally using the convention that probabilists study abstract/theoretical distributions, whilst statisticians study data (and sometimes fit it to theoretical distributions).
Many of the properties I summarise below are more useful in this theoretical framework - for instance 4) the convergence property is key to proving the Central Limit Theorem.
Key Properties of the MGF
1) The key feature of an MGF is that its power series expansion is in terms of the distribution's moments:
$$ M(t) = 1 + t \mathbf E[X] + \frac{t^2}2 \mathbf E[X^2] + \frac{t^3}{3!} \mathbf E[X^3] + \cdots $$
For some distributions, evaluating this power series will be significantly easier than trying to compute these expectations directly through integration.
For instance if $X \sim N(0,1)$ then $M(t) = \exp(\frac12t^2)$, from which the standard Taylor expansion gives
$$M(t) = 1 + \left(\frac{t^2}{2}\right) + \frac12 \left(\frac{t^2}{2}\right)^2 + \frac{1}{3!} \left(\frac{t^2}{2}\right)^3 \cdots $$
we easily see that all odd moments of the distribution are 0, and also get a formula for all even moments:
$$ \mathbf E[X^{2n}] = \frac{(2n)!}{2^n n!},$$
(the right-hand side equals $(2n-1)!!$, the double factorial of $2n-1$, i.e. the product of the odd numbers up to $2n-1$).
2) The MGF of the sum of two independent variables, is the product of their respective MGFs:
$$M_{X+Y}(t) = M_X(t)M_Y(t),$$
again this makes calculations easier.
3) The radius of convergence of the MGF can be used to deduce asymptotic properties of the moments of the distribution, via the Cauchy-Hadamard theorem.
4) The MGF (when it exists) uniquely determines a probability distribution. Moreover given a sequence of distributions, if their MGFs converge pointwise then this is equivalent to convergence in distribution.
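The even-moment formula from 1) is easy to sanity-check numerically (a Python sketch, just verifying the algebra): $(2n)!/(2^n n!)$ equals the product of the odd numbers $1\cdot 3\cdots(2n-1)$, i.e. the double factorial $(2n-1)!!$.

```python
from math import factorial

def even_moment(n):
    """E[X^(2n)] for X ~ N(0,1), read off from the MGF's power series."""
    return factorial(2 * n) // (2 ** n * factorial(n))

def double_factorial_odd(n):
    """(2n-1)!! = 1 * 3 * 5 * ... * (2n-1)."""
    out = 1
    for m in range(1, 2 * n, 2):
        out *= m
    return out

for n in range(1, 6):
    print(n, even_moment(n), double_factorial_odd(n))
# E[X^2]=1, E[X^4]=3, E[X^6]=15, E[X^8]=105, E[X^10]=945
```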
|
45,587
|
Can we get Moment Generating Function(MGF) from data?
|
In parametric problems (i.e., where you have a specified distribution family indexed by a finite number of parameters), both the true density and MGF are functions of the parameters (assuming the latter exists). Both objects summarise the distribution and contain the same information, so neither is less useful in a strict sense (though the MGF is more difficult to interpret intuitively than the density). Estimation of either the density or the MGF can be done by estimating the unknown parameters, and substituting these into the required parametric function. For example, if we have IID normal data $x_1,...,x_n$ with sample mean $\bar{x}$ and sample variance $s^2$, we could estimate the MGF as:
$$\hat{m}_X(t) = \exp \Big( \bar{x} t + \frac{s^2}{2} \cdot t^2 \Big).$$
Alternatively, we can use non-parametric methods to estimate the MGF in cases where we do not want to assume a particular distributional family. The simplest estimator is the empirical MGF, which is:
$$\hat{m}_\mathbf{x}(t) = \frac{1}{n} \sum_{i=1}^n \exp (t x_i).$$
For IID data, if the MGF exists in a neighbourhood of $t \in \mathbb{R}$, then the law of large numbers ensures that $\hat{m}_\mathbf{x}(t) \rightarrow m_X(t)$. (Both the weak and the strong law hold, so the convergence is in fact almost sure.)
Generally speaking, the empirical MGF is a more robust estimator, but it is less powerful than the parametric estimators in cases where you have correctly specified the distributional family. This is just one aspect of the more general statistical phenomenon that assuming a distributional family makes your estimators more powerful, at the expense of robustness to distributions outside that family. You can read more about these various estimators, and their performance, in Gbur and Collins (1989).
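To make the two estimators concrete, here is a minimal Python sketch (variable names are my own) comparing the parametric plug-in estimator and the empirical MGF on simulated normal data, against the true MGF $\exp(\mu t + \tfrac{1}{2}\sigma^2 t^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)
t = 0.3

# Parametric plug-in: substitute sample mean/variance into the normal MGF
m_param = np.exp(x.mean() * t + 0.5 * x.var(ddof=1) * t ** 2)

# Empirical MGF: sample average of exp(t * x_i)
m_emp = np.exp(t * x).mean()

m_true = np.exp(1.0 * t + 0.5 * 2.0 ** 2 * t ** 2)
print(m_param, m_emp, m_true)  # all close to exp(0.48) ≈ 1.616
```

With this sample size both estimators land close to the truth; with misspecified parametric assumptions only the empirical version would remain consistent.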
|
45,588
|
Can we get Moment Generating Function(MGF) from data?
|
Just some additions to the excellent answer by @owen88. Some examples of empirical mgf's (emgf) (and comments on better ways to estimate them) can be found in answers here: How does saddlepoint approximation work?. One use is to approximate the bootstrap distribution, thereby making possible bootstrap without simulation! Related to the mgf is the probability generating function, see What is the difference between moment generating function and probability generating function?. And there is some literature about the use of the empirical moment generating function, for example this paper or this one. It is also possible to use empirical generating functions directly in inference, for example this paper.
|
45,589
|
Will each unique input to an Autoencoder produce a unique coding?
|
In some setups, not only can distinct inputs share a code, they need to. An idealized Denoising Autoencoder with a weak decoder would map any input+noise, as well as the input alone, to the same eventual latent code - its encoder would be just a lossless compression of the noiseless data, plus noise filters.
For a negative case, in a pathological scenario the latent encoding could collapse into a single vector, producing a single underfitted reconstruction with a local minimum of reconstruction cost.
That's just classical AEs. A VAE should produce overlapping codes, if you consider the code to be the sample rather than the distribution parameter, being N-dimensional bubbles in a compact and (approximately) continuous latent space.
|
45,590
|
Will each unique input to an Autoencoder produce a unique coding?
|
There is more information going into the bottleneck than coming out of it, so some inputs have to produce the same codes.
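The pigeonhole argument behind this can be made concrete with a toy sketch (plain Python, not an actual network): any encoder from a larger discrete input space into a smaller code space must send some distinct inputs to the same code.

```python
from itertools import product

# "Encoder": any function from 6-bit inputs to 3-bit codes
def encode(bits):
    return bits[:3]  # a deliberately lossy bottleneck

inputs = list(product([0, 1], repeat=6))  # 64 possible inputs
codes = {encode(b) for b in inputs}       # at most 8 distinct codes
assert len(codes) < len(inputs)           # collisions are unavoidable
```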
|
45,591
|
Will each unique input to an Autoencoder produce a unique coding?
|
Trivially, if your bottleneck/representation layer uses ReLU activations and all of the inputs to that layer are less than 0, the encoding will be all 0s. So to produce such encodings, you'd just need to have two inputs that have the property that they get mapped "to the left side" of all the bottleneck layer ReLUs.
Or you have an auto-encoder with weights that are all 0 (maybe because you set the $L^2$ regularization too high and the model collapsed), this model assigns all inputs to the same code.
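A minimal numpy sketch of the first scenario (weights and inputs invented for illustration): two different inputs whose bottleneck pre-activations are all negative receive the identical all-zero code.

```python
import numpy as np

relu = lambda z: np.maximum(z, 0)

# Hypothetical bottleneck layer: weights chosen so that these inputs
# land on the left side of every ReLU
W = np.ones((3, 4))  # 4 input features -> 3 code units
x1 = np.array([-1.0, -2.0, -0.5, -3.0])
x2 = np.array([-4.0, -0.1, -1.5, -2.0])

c1, c2 = relu(W @ x1), relu(W @ x2)
print(c1, c2)  # both [0. 0. 0.]
```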
|
45,592
|
Statistical test with violation of independence assumption
|
I have many questions about the proposed approach, mostly because it is not clearly described. It's unlikely this question can be edited to give us much insight into the setting and rationale for this approach. Nonetheless, a few points can be addressed.
Sums of variances obtained by randomly partitioning a dataset 30-fold cannot be called "robustness". A random partition shouldn't be referred to as a "dataset".
Calculating the variance of partitions of an independent sample is not interesting because
$$ \text{var}(\sum_{i=1}^n X_i) = \sum_{i=1}^n \text{var}( X_i)$$
when the $X_i$ are mutually independent.
However, if there is a lack of independence, the expression generalizes to
$$ \text{var}(\sum_{i=1}^n X_i) = \sum_{i=1}^n \text{var}( X_i) + \sum_{i \ne j}\text{cov}(X_i, X_j)$$
By partitioning the dataset, the working covariance matrix forces off-diagonal blocks to be 0 valued, which may produce interesting if imprecise differences from the total sample variance. Of course, comparing to other random partitions will give the same-ish answer each time, with little other than random variation contributing to differences, and no valid inference about covariance structure.
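The role of the covariance term can be checked numerically (a Python sketch; the AR(1) process is just one convenient dependent example): for IID data the variance of the sum matches the sum of the variances, while positive autocorrelation inflates it.

```python
import numpy as np

rng = np.random.default_rng(42)
reps, n, rho = 20_000, 50, 0.7

# Independent case: var of the sum is the sum of the variances (= n)
iid = rng.normal(size=(reps, n))
v_iid = iid.sum(axis=1).var()
print(v_iid)  # ≈ 50

# Dependent case (stationary AR(1)): covariances inflate var of the sum
ar = np.empty((reps, n))
ar[:, 0] = rng.normal(size=reps)
for t in range(1, n):
    ar[:, t] = rho * ar[:, t - 1] + np.sqrt(1 - rho ** 2) * rng.normal(size=reps)
v_ar = ar.sum(axis=1).var()
print(v_ar)  # well above 50: sum of variances plus the covariance term
```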
Even if you made this total-sample vs. partitioned sample comparison, you would have a very imprecise test. You are ignoring a whole family of regression models for correlated data that provide estimates of covariance directly in the form of autoregression or exchangeable correlation structures. Variogram models are also of interest. So as far as tests to detect autodependence, while this approach is close to something that can be used, it is not worth reinventing the wheel when numerous approaches with proven statistical properties are already out there.
It is a pitfall of "robust" statistics to presume you must test to determine the presence (or lack) of a particular assumption. The point of robust statistics is to provide an omnibus approach that sidesteps the complicated business of testing assumptions (risking false positives or false negatives) and of recommending final analyses based on interim/penultimate results.
In particular, if the desire is for a test of regression parameter differences that is robust to undetected dependence within sample observations, the generalized estimating equation (GEE) provides consistent and unbiased inference with sandwich variance estimation.
|
45,593
|
Statistical test with violation of independence assumption
|
If I understand you correctly, you are asking about testing the significance of the difference between two regressions on the same data set. This is not a standard significance testing problem. A standard significance testing problem states a null hypothesis ($H_0$) about how the data were generated, and uses the distribution, under this $H_0$, of a test statistic measuring in some sense the difference between the data and the $H_0$, to see whether the data are "too far away" from the $H_0$, in which case one would say that there is evidence that the data were in fact not generated under the $H_0$.
However, it is not clear what your $H_0$ is supposed to be. You seemingly have in mind that "the two regressions are equal", but as they are computed in different ways, their results will (almost) always be different on the same data, so in fact we know that they are not the same, we don't need to test this. "The regressions are the same" is not a proper $H_0$ about the generation of the data.
The only sensible $H_0$ I can imagine here is the model underlying one of the regressions (maybe the Gaussian model underlying the least squares regression), and then one can imagine using the difference between the two regressions for testing this, with the idea in mind that if the model holds they should be rather similar, whereas if the model doesn't hold, they can be much further from each other. This depends on how exactly the model is violated though. A significant result would then not mean that there is evidence that the "two regressions are in fact different", but rather that there is evidence against the Gaussian regression model. Not sure whether this is what you actually want to know, but it may well be (though there's the sensible answer of AdamO who doubts that this is a good question to ask when using robust methods).
Even then this is a nonstandard test and I'm not aware of any standard software that does this. There may be specialist literature flying around exploring such a test (I haven't put time into searching), but surely it will depend on what precisely your regressions are, there's no general answer without knowing that. Chances are something can be done using bootstrap (maybe parametric bootstrap or Monte Carlo), but I'd not expect that anybody here could tell you straight away how to solve this (and as said before, I'm not even sure whether this is what you really want); it would be something of a project.
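To sketch what such a test might look like (my own illustration, not a standard procedure): fit the Gaussian regression, simulate new responses from the fitted model as a parametric bootstrap under that $H_0$, and compare the observed discrepancy between the least-squares slope and a robust slope (here a hand-rolled Theil-Sen, the median of pairwise slopes) to its simulated distribution.

```python
import numpy as np

rng = np.random.default_rng(0)

def theil_sen_slope(x, y):
    # median of all pairwise slopes: a simple robust slope estimator
    i, j = np.triu_indices(x.size, k=1)
    return np.median((y[j] - y[i]) / (x[j] - x[i]))

def slope_gap(x, y):
    ols = np.polyfit(x, y, 1)[0]  # least-squares slope
    return abs(ols - theil_sen_slope(x, y))

# Toy "observed" data (invented for illustration)
x = rng.uniform(0, 10, 60)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 60)

slope, intercept = np.polyfit(x, y, 1)
sigma = np.std(y - (intercept + slope * x), ddof=2)  # residual scale
obs = slope_gap(x, y)

# Parametric bootstrap: resimulate responses under the fitted Gaussian model
sims = [slope_gap(x, intercept + slope * x + rng.normal(0, sigma, x.size))
        for _ in range(500)]
p_value = float(np.mean(np.array(sims) >= obs))
print(p_value)  # fraction of simulated gaps at least as large as observed
```

A small p-value here would be evidence against the Gaussian regression model, in line with the interpretation above; the details (choice of robust fit, discrepancy measure) would need to match your actual regressions.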
|
45,594
|
What justifies adjusting for proxy variables in the DAG causal inference framework?
|
Since Interest_in_Acme is unobservable, the average causal effect of Loyalty Club Membership on Spend is unidentifiable. However, there is an important exception to that rule: if Interest_in_Acme is perfectly correlated with Spend_in_Prev_Year (in the simulation below, a correlation parameter of $1.0$ or $0.0$, i.e. the two variables contain the same information), then Spend_in_Prev_Year can be adjusted for instead and used to identify the average causal effect.
In the much more likely scenario of Interest_in_Acme being somewhat correlated with Spend_in_Prev_Year, a somewhat biased estimate of the average causal effect can be obtained. The more that the two are correlated, the less biased the estimate adjusted for Spend_in_Prev_Year.
A simple simulation study
To demonstrate the concept, below is a simple simulation study (Python 3.5+ code). Let $L$ be Interest_in_Acme, $L^*$ be Spend_in_Prev_Year, $A$ be Loyalty Club Membership, $Y(a)$ be the potential Spend under treatment plan $a$, and $Y$ be the observed spending. For simplicity, my simulation uses binary variables. To reduce variability due to sample size, I set $n=1,000,000$. As the estimator of the average causal effect, I used the standardized mean difference (i.e. the g-formula, do-calculus adjustment, etc.)
import numpy as np
import pandas as pd
# Simulation parameters
n = 1000000
correlation = 1.0
np.random.seed(20191223)
# Simulating data set
df = pd.DataFrame()
df['L'] = np.random.binomial(n=1, p=0.25, size=n)
df['L*'] = np.random.binomial(n=1, p=correlation*df['L'] + (1-correlation)*(1-df['L']), size=n)
df['A'] = np.random.binomial(1, p=(0.25 + 0.5*df['L']), size=n)
df['Ya0'] = np.random.binomial(1, p=(0.75 - 0.5*df['L']), size=n)
df['Ya1'] = np.random.binomial(1, p=(0.75 - 0.5*df['L'] - 0.1*1 -0.1*1*df['L']), size=n)
df['Y'] = (1-df['A'])*df['Ya0'] + df['A']*df['Ya1']
# True average causal effect
print("True Average Causal Effect:", np.mean(df['Ya1'] - df['Ya0']))
# Standardized Mean Estimator
l1 = np.mean(df['L*'])
l0 = 1 - l1
r1_l0 = np.mean(df.loc[(df['A']==1) & (df['L*']==0)]['Y'])
r1_l1 = np.mean(df.loc[(df['A']==1) & (df['L*']==1)]['Y'])
r0_l0 = np.mean(df.loc[(df['A']==0) & (df['L*']==0)]['Y'])
r0_l1 = np.mean(df.loc[(df['A']==0) & (df['L*']==1)]['Y'])
rd_stdmean = (r1_l0*l0 + r1_l1*l1) - (r0_l0*l0 + r0_l1*l1)
print('Standardized Mean Risk Difference:', rd_stdmean)
Below are the results for various values of the correlation parameter (you can also run this code and change the correlation parameter to see the effect of the various changes; note that $r=0.50$ corresponds to no correlation):
True Average Causal Effect: -0.124
$r=1.0$: -0.123
$r=0.99$: -0.136
$r=0.50$: -0.347
$r=0.05$: -0.180
Summary
As a justification for adjusting for the proxy, you may believe that Interest_in_Acme and Spend_in_Prev_Year are highly correlated, in which case your estimate will be close to the true average causal effect even though the effect is not fully identified. As a final note, this problem becomes more complicated for continuous variables, since the functional forms of the variables may differ.
|
45,595
|
What justifies adjusting for proxy variables in the DAG causal inference framework?
|
Exact point identification is not possible here, but adjusting for Spend_in_Prev_Year does partially block the backdoor path, so that would be the rationale for it. As general advice, you should adjust for the proxy in the absence of the true confounder (there are exceptions, the proxy could be opening other backdoor paths for instance, but that's not the case in your example).
Now I should add, since you know you didn't fully block the backdoor path, you should perform a sensitivity analysis---we know by construction that your estimate is biased, so we want to judge how biased it could be.
For instance, if you are using a linear model, you can perform a fairly general, yet simple, sensitivity analysis by simply comparing how much more variation the true variable could explain of your treatment and your outcome, as compared with the proxy variable you have measured (see Cinelli and Hazlett 2020 - ungated version). If you think the proxy does a good job, and the true variable can't be much stronger than the proxy, then it is likely your estimate is not much biased.
I will show here an example in R using the package sensemakr. Suppose you measured the confounder $X^*$ instead of $X$, and you obtained the following estimates,
set.seed(10)
n <- 1e4
x <- rnorm(n)
xs <- x + rnorm(n)
d <- rbinom(n, 1, plogis(x))
y <- d + x + rnorm(n)
model <- lm(y ~ d + xs)
model
#>
#> Call:
#> lm(formula = y ~ d + xs)
#>
#> Coefficients:
#> (Intercept) d xs
#> -0.2411 1.4882 0.4537
Now you wonder whether the whole estimate of $1.48$ could be due to bias, because you didn't control for the "true" $X$.
Here is a sensitivity plot showing how much stronger the true $X$ would need to be, both in its association with the treatment $D$ and in its association with the outcome $Y$, to fully explain away the observed association (as compared to the proxy, and above what the proxy already explains). As you can see in the example, the true variable would need to be 3 times as strong as the proxy to fully explain away your estimate. If you think that's unlikely, and that the true variable could (additionally) explain only as much or twice as much as what has already been explained by the proxy, then you can claim the true effect is not less than 0.54 (in our case we know it is 1).
library(sensemakr)
#> See details in:
#> Carlos Cinelli and Chad Hazlett (2020). Making Sense of Sensitivity: Extending Omitted Variable Bias. Journal of the Royal Statistical Society Series B.
sense <- sensemakr(model = model, treatment = "d",
benchmark_covariates = "xs",
kd = 1:3)
plot(sense)
|
45,596
|
When to use Cohen's d and when t-test?
|
Cohen's d seeks to tell you how big the standardized difference is between the two distributions. It's very popular in areas like psychology where I think there are no obvious units you can use to describe the difference. In medical stats, I could say (for example) that your HbA1c levels were on average 5mg different in the two groups, and wouldn't need to use Cohen's d.
The t-test is an attempt to tell you whether you have enough evidence to reject the idea that the difference is zero. However, a non-zero difference could be, in practical terms, completely irrelevant. Also, don't forget you have to make technical assumptions when using the t-test; e.g. by default you assume the two groups have the same variance.
There are arguments that it is more useful to compare confidence or credible intervals estimated from the two samples.
There's an interesting article here: https://bmcresnotes.biomedcentral.com/articles/10.1186/s13104-015-1020-4
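To make the contrast concrete, here is a small simulation sketch (the sample size and the true 0.05-SD difference are arbitrary choices of mine, not from the article): with enough data, the t-test rejects decisively even though the standardized difference is negligible.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Two large samples whose true means differ by only 0.05 SD
a = rng.normal(0.00, 1.0, 100_000)
b = rng.normal(0.05, 1.0, 100_000)

t, p = stats.ttest_ind(a, b)

# Cohen's d: mean difference over the pooled standard deviation
pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
d = (b.mean() - a.mean()) / pooled_sd

print(f"p = {p:.2e}, d = {d:.3f}")  # p is tiny, yet d is trivially small
```

The t-test emphatically rejects the zero-difference null, while Cohen's d correctly reports a difference that is practically negligible.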
|
45,597
|
When to use Cohen's d and when t-test?
|
The t-test is in a complementary relation with Cohen's $d$ (and with equivalence tests that use Cohen's $d$).
The t-test gives a p-value, which you compare with your acceptable Type I error rate: one can reject the null hypothesis if the p-value is small enough, but one cannot claim that the null hypothesis is true on the basis of the p-value alone, without risking a Type II error.
To assess the risk of a Type II error one has to perform a power calculation, i.e. compute the probability of rejecting the null hypothesis when the alternative is true. However, when testing $H_0 : \mu = 0$ against $H_1 : \mu \neq 0$, a direct power calculation is impossible, because the alternative is composite and does not fix a single effect size.
One common solution to this problem is to specify the smallest effect size of interest (this is where Cohen's $d$ comes in) and to show that the actual effect is smaller than that minimal size. This is known as "equivalence testing", typically done via TOST (two one-sided tests). Here is a useful reference: https://www.ncbi.nlm.nih.gov/pubmed/28736600
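As a rough sketch of how TOST works (the tost_ind helper, the ±0.1 equivalence margin, and the simulated data are all my own illustrative choices, not from the reference):

```python
import numpy as np
from scipy import stats

def tost_ind(a, b, low, high):
    """Two one-sided t-tests (TOST) for equivalence of two means.

    low/high are the equivalence bounds on the raw mean difference;
    equivalence is claimed when the returned p-value is below alpha.
    """
    n1, n2 = len(a), len(b)
    df = n1 + n2 - 2
    # Pooled standard error, as in the classic equal-variance t-test
    sp2 = ((n1 - 1) * a.var(ddof=1) + (n2 - 1) * b.var(ddof=1)) / df
    se = np.sqrt(sp2 * (1 / n1 + 1 / n2))
    diff = a.mean() - b.mean()
    p_lower = stats.t.sf((diff - low) / se, df)    # H1: diff > low
    p_upper = stats.t.cdf((diff - high) / se, df)  # H1: diff < high
    return diff, max(p_lower, p_upper)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, 5_000)
b = rng.normal(0.0, 1.0, 5_000)
diff, p = tost_ind(a, b, low=-0.1, high=0.1)
print(f"diff = {diff:.3f}, TOST p = {p:.2e}")  # a small p claims equivalence within the margin
```

Both one-sided tests must reject for equivalence to be claimed, which is why the larger of the two p-values is reported.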
There are alternative approaches, e.g., based on the use of Bayes factors. But these would take us too far from the core of your question.
|
45,598
|
Error while performing multiclass classification using Gridsearch CV
|
Accuracy might look tempting, but it is not a good metric in general. In multiclass classification, each class has its own f1 score, precision, recall, etc. You need to decide how to average them over the classes, which is what the error is actually saying. The options are binary (the default), micro, macro, weighted and samples. The binary option needs positive and negative classes, so it doesn't work in multiclass problems.
To reiterate the sklearn documentation linked above, the micro option calculates TP, FP, etc. globally, while macro calculates them separately for each class and averages the per-class scores. weighted is the version of the macro average that weights each class by its support, which accounts for class imbalance.
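The difference between the averaging options is easy to see on a tiny made-up imbalanced example:

```python
from sklearn.metrics import f1_score

y_true = [0, 0, 0, 0, 0, 0, 1, 1, 2]   # imbalanced three-class labels
y_pred = [0, 0, 0, 0, 0, 1, 1, 1, 0]

# micro pools TP/FP/FN globally (equals accuracy for single-label
# multiclass); macro is the unweighted mean of per-class f1 scores;
# weighted weights each class's f1 by its support
print(f1_score(y_true, y_pred, average="micro"))     # 7/9 ≈ 0.778
print(f1_score(y_true, y_pred, average="macro"))     # ≈ 0.544
print(f1_score(y_true, y_pred, average="weighted"))  # ≈ 0.733
```

The rare class 2, which is never predicted correctly, pulls the macro average well below the micro average.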
And, this parameter needs to be passed into the scorer function, e.g.:
scorer = sklearn.metrics.make_scorer(sklearn.metrics.f1_score, average = 'weighted')
gs_svc = GridSearchCV(estimator=svc_clf,param_grid=param_grid,scoring=scorer,cv=5)
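Note that for common metrics sklearn also accepts a string shorthand, so the same search can be written without make_scorer; here is a self-contained sketch using the iris data as a stand-in for the original X_train/y_train:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # a three-class problem

param_grid = {"C": [0.1, 1, 10]}
# "f1_weighted" is equivalent to make_scorer(f1_score, average="weighted")
gs_svc = GridSearchCV(SVC(), param_grid, scoring="f1_weighted", cv=5)
gs_svc.fit(X, y)
print(gs_svc.best_params_, round(gs_svc.best_score_, 3))
```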
|
45,599
|
Error while performing multiclass classification using Gridsearch CV
|
In addition to gunes' excellent answer, you may also use several scoring functions:
from sklearn.metrics import (make_scorer, accuracy_score, precision_score,
                             recall_score, f1_score)
from sklearn.model_selection import GridSearchCV

scoring = {'accuracy': make_scorer(accuracy_score),
           'precision': make_scorer(precision_score, average = 'macro'),
           'recall': make_scorer(recall_score, average = 'macro'),
           'f1_macro': make_scorer(f1_score, average = 'macro'),
           'f1_weighted': make_scorer(f1_score, average = 'weighted')}
# With multiple metrics, refit must name the one used to select the best model
gs_svc = GridSearchCV(estimator=svc_clf, param_grid=param_grid,
                      scoring=scoring, refit='f1_macro', cv=5)
gs_svc.fit(X_train, y_train)
|
45,600
|
Could someone please translate this code into some mathematical notation? [closed]
|
Although this question relies heavily on Python, the answer does appear to benefit from some statistical reasoning.
This function creates "training" and "test" datasets of points $(x_i,y_i)$ for a regression model
$$y_i = w_0 x_i + w_1 x_i^2 + \varepsilon_i \sigma$$
where $\varepsilon_i$ are independent variables with standard Normal distributions. The values of the parameters $w_0,$ $w_1,$ and $\sigma$ are hard-coded into the function. The values of the $x_i$ in each dataset are equally spaced from $0$ to $20$ (although the test set does not include $20$). The test set is hard-coded (as $(0.0, 0.1, 0.2, \ldots, 19.9)$) while the training set size is provided by the caller in the argument n.
The model can also be compactly written by stating that the observations $y_i$ are realizations of independent random variables $Y_i$ having Normal$(w_0x_i+w_1x_i^2, \sigma^2)$ distributions; this frequently is abbreviated as $$Y_i\ {\sim}_{\operatorname{iid}}\ \mathcal{N}(w_0x_i + w_1 x_i^2, \sigma^2).$$
That answers the question about what distribution is involved.
This is the setting for ordinary least squares regression of a response $y$ against "features" or "explanatory variables" $x$ and $x^2.$ Thus, it posits that
the points $(x_i,y_i)$ deviate from the parabola $y=w_0x + w_1 x^2$ by means of independent random variations in the $y$ coordinates.
That answers the question about what the squared term is doing.
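A reconstruction of the kind of function being described might look like the following; the function name and the parameter values w0, w1, sigma are hypothetical stand-ins for the hard-coded ones in the original code:

```python
import numpy as np

def make_datasets(n, w0=1.0, w1=0.1, sigma=2.0, rng=None):
    """Generate training and test sets from y = w0*x + w1*x**2 + eps*sigma,
    with eps ~ N(0, 1).  w0, w1, sigma stand in for the hard-coded values."""
    rng = np.random.default_rng() if rng is None else rng
    x_train = np.linspace(0.0, 20.0, n)        # n equally spaced points on [0, 20]
    y_train = w0 * x_train + w1 * x_train**2 + sigma * rng.standard_normal(n)
    x_test = np.arange(0.0, 20.0, 0.1)         # hard-coded grid 0.0, 0.1, ..., 19.9
    y_test = w0 * x_test + w1 * x_test**2 + sigma * rng.standard_normal(x_test.size)
    return (x_train, y_train), (x_test, y_test)

(train_x, train_y), (test_x, test_y) = make_datasets(50, rng=np.random.default_rng(0))
print(train_x.size, test_x.size)  # 50 200
```

Note that, as described, the training x values include the endpoint 20 while the test grid stops at 19.9.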
|