Mixed effects model or mixed design ANOVA in R
So I've done a lot of reading and chatting to people and I have a solution. My experimental design is a split-plot design, which is quite different from a nested or hierarchical design; I was originally confusing the terms. As Robert correctly states in his answer, what is needed is a mixed effects model. Thus:

Fixed effects: Year, Treatment1, Treatment2
Random effects: Year, Block, Treatment1

The model is specified thus:

mod <- lmer(Richness ~ Treatment1*Treatment2*Year + (1|Block/Treatment1) + (1|Year), data=dat, family=poisson)

(in current versions of lme4, a model with a non-normal family like this is fitted with glmer() rather than lmer()). The random effects are the terms specified in the brackets. Since none of these grouping factors are continuous (the effect of Year doesn't necessarily increase each year in a linear fashion, so I have classed it as categorical), they are specified as (1|grouping factor), where the 1 represents a random intercept. If we instead had a continuous covariate whose effect varied between groups (obviously hypothetical here!), we would specify a random slope, e.g. +(Covariate|Block)+(1|Year). The model can then be simplified as appropriate.

Several things to note:

1) When specified as a random effect, Year is listed separately from Block and Treatment1, since it doesn't have an intuitive "level" at which to be nested between them (Year isn't any different at any plot size of the experiment: for every block, plot and subplot, Year is the same).

2) Treatment2 does not need to be specified as a random effect since it represents the highest level of replication in the experiment and therefore will not be pseudoreplicated.

3) In mixed effects models it is possible to specify an error distribution if the errors are not normal. I have specified poisson here, since my response data are counts; this improved the distribution of the model residuals.
Mixed effects model or mixed design ANOVA in R
My suggestion is to use a mixed effects model with:

Treatment 1 and Treatment 2 as factors (fixed effects)
Year as an interval-scaled covariate
Block as a random factor

Q: Should year be treated as a random effect?
A: It depends on what you expect based on your theory. I suppose the number of species might increase over time, and this increase might be linear; then you should use year as an interval-scaled covariate. However, if there is reason to assume that, say, in year one there is little increase, in year two a great increase, and again little increase in year three, you might consider using year as a categorical fixed factor.

Q: How should 'block' be treated?
A: My understanding is that environmental conditions differ between blocks, but not between plots in the same block. Then I would suggest adding block as a random factor.

Does that help in answering your questions?
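The numeric-vs-categorical distinction for Year can be seen directly from the design matrix R builds. A toy sketch (the Year values are hypothetical): coded as a number, Year contributes one linear-trend column; coded as a factor, it contributes a dummy column per extra level.

```r
d <- data.frame(Year = rep(2011:2013, each = 2))

mm_num <- model.matrix(~ Year, d)          # intercept + one slope column
mm_fac <- model.matrix(~ factor(Year), d)  # intercept + one dummy per extra level

ncol(mm_num)  # 2
ncol(mm_fac)  # 3
```

With Year numeric the model estimates a single common trend; with Year as a factor each year gets its own mean, which is what you want if the change is not linear.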
Bootstrap size and probability of drawing distinct observations
EDIT: After seeing your code for the simulation, I think you did not consider the fact that a certain number can also occur more than once within one sample. After accounting for that, we get the same results. Here is your modified code:

fx02 <- function(ll, n, k){
  a1 <- matrix(0, n, 2)
  samp1 <- sample(1:n, k, replace=TRUE)
  samp2 <- sample(1:n, k, replace=TRUE)
  a1[sort(unique(samp1)), 1] <- as.numeric(table(samp1))
  a1[sort(unique(samp2)), 2] <- as.numeric(table(samp2))
  sum(rowSums(a1)==1)/n
}

ss <- (1:60)*5  # The grid of values of k for which we'll compute the probability.
a4 <- matrix(NA, length(ss), 2)
for(i in 1:length(ss)){
  a2 <- ss[i]
  a3 <- c(lapply(1:1000, fx02, n=100, k=a2), recursive=TRUE)
  a4[i,] <- c(a2, mean(a3))
}
plot(a4, xlab="k", ylab="frequency of distinct draw", pch=16, las=1)
abline(h=0.3697296, v=50)

max(a4[,2])
[1] 0.36971

Original answer

For experiment 2 (with replacement), I think the probability that a certain number is drawn exactly once is: $$ P_{\text{once}}=2k(n-1)^{2k-1}(1/n)^{2k}=2k(n-1)^{2k-1}n^{-2k} $$ This can be checked by a simple example where $n=3$ and $k=2$: the probability of drawing a certain number exactly once is in this case $2(4/9)^{2}\approx0.395$. The maximum of the formula above occurs where the derivative of $\log P_{\text{once}}$ with respect to $k$ vanishes, i.e. $\frac{1}{k}+2\log\frac{n-1}{n}=0$, giving: $$ k_{\text{max}}=\left\lfloor-\frac{1}{2(\log(n-1)-\log n)}\right\rceil $$ where $k_{\text{max}}$ is rounded to the nearest integer. This is roughly $n/2$ for larger $n$, as already mentioned in the comments by @user603. For $n=100$, $k_{\text{max}}\approx 49.75$, so 50. The maximum probability for $n=100$ and $k_{\text{max}}=50$ would then be around $0.3697$ (as already worked out in the comments by @user603).
I set up a simulation to check this result in R:

prob.once <- vector()
draw.once <- function(n, k, sim=10000, repl=TRUE){
  for (i in 1:sim) {
    samp1 <- sample(1:n, size=k, replace=repl)
    samp2 <- sample(1:n, size=k, replace=repl)
    if ((is.element(1, samp1) & !is.element(1, samp2) &
         !is.element(1, samp1[duplicated(samp1)])) |
        (!is.element(1, samp1) & is.element(1, samp2) &
         !is.element(1, samp2[duplicated(samp2)]))){
      prob.once[i] <- 1
    } else {
      prob.once[i] <- 0
    }
  }
  mean(prob.once)
}

krepl <- 1:300
probs.repl <- sapply(krepl, FUN=draw.once, n=100, sim=20000, repl=TRUE)
plot(probs.repl~krepl, pch=16, type="p", lwd=2, las=1,
     ylab="Probability", xlab="k", col="steelblue")
abline(h=0.3697296)
abline(v=50)

The simulated result seems to confirm the above considerations.
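The closed-form expressions can also be checked numerically in a few lines of base R. Note that the maximiser requires $\log(n-1)-\log n$, i.e. the log of $(n-1)/n$, so that $k_{\text{max}}$ comes out positive:

```r
# Probability that a fixed number appears exactly once across two samples of
# size k drawn with replacement from 1:n (2k independent draws in total).
p_once <- function(n, k) 2 * k * (n - 1)^(2 * k - 1) / n^(2 * k)

# Location of the maximum, rounded to the nearest integer.
k_max <- function(n) round(-1 / (2 * (log(n - 1) - log(n))))

p_once(3, 2)             # 32/81 ~ 0.395, the worked example above
k_max(100)               # 50
p_once(100, k_max(100))  # ~ 0.3697
```

All three values agree with the hand calculations and with the simulation.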
Pattern recognition techniques in spatial or spatio-temporal data?
I don't have a cookbook answer but here are some initial thoughts. I think your idea about the Frobenius norm is not unreasonable and can serve as a first safe bet indeed. You could use quite a few different metrics for matrix distances, but I will propose two based on your data's nature:

Given that what you are looking at in each climatic map is the realization of a 2-D Gaussian process in space, it might be interesting to go ahead and estimate for each map its hyper-parameters $\theta_{MAP}$. You can then treat $\theta_{MAP}$ as containing information about the underlying dynamics of your process, and comparing the vectors $\theta$ will give an idea of the similarity between any two maps. (You could even cluster them after that.) For starters, a "standard" covariance function given by the sum of a squared exponential and a Gaussian noise term should do just fine. It would probably be interesting to think about how you would "zero-centre" your maps. You might need to look up kriging a bit more carefully (understand the difference between simple and ordinary kriging, for example, and you'll immediately see what I mean by "zero-centring" your maps; it will depend on whether you see your maps as coming from the same stationary process or not).

You treat all map instances as being samples from the same forward model, compute the eigen-maps on them, and then compare the differences you see in the projection scores generated by the eigen-maps. The easiest reference for this is... Eigenfaces. Really, no joke: just read the article, and each time it reads "face", read "climatic map". Everything is there. Don't be put off by the PCA step; your covariance matrix will be $N \times N$, where $N$ is your sample size, not your map size.

Kriging: if you are working in spatial statistics it is of paramount importance to understand it. Everything else is practically done in extension of, or in parallel to, this main technique. Understand what a variogram shows and how to read one.
Gaussian process regression literature might also be helpful for a first read; GPR essentially is simple kriging, and usually the texts describing GPR are less technical. For actual references on the matter I will defer to the instructions given by Peter Diggle about this: Cressie (1991) remains a standard reference for spatial statistical models and methods. Possibly more accessible accounts (...) are: the introductory chapters of Rue and Held (2005) on discrete spatial variation, Diggle and Ribeiro (2007) on geostatistics, and Diggle (2003) on point processes. Waller and Gotway (2004) cover all three sub-areas at an introductory level, with a focus on public health applications. Gelfand et al. (2010) is an edited compilation that covers both spatial and spatio-temporal models and methods. For a machine learning perspective on Gaussian processes I definitely refer you to Gaussian Processes for Machine Learning by Rasmussen and Williams. Personally I have used the Diggle & Ribeiro and the Rasmussen & Williams books a lot. Cressie has a lot of nice papers on the subject. I don't know your level of mathematical expertise, but it's a fun subject and I think you can gain traction relatively easily; when all is said and done, you just interpolate between points. Good luck. Ah, when it comes to software, I think going to CRAN's Task Views on Temporal and SpatioTemporal data is the best starting step.
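To make the variogram remark concrete, here is a minimal base-R sketch (toy one-dimensional data; all names hypothetical) of an empirical semivariogram: half the mean squared difference between observations, binned by their separation distance.

```r
set.seed(1)
x <- seq(0, 10, length.out = 60)    # toy 1-D "map" coordinates
z <- sin(x) + rnorm(60, sd = 0.2)   # toy spatial process values

d  <- as.matrix(dist(x))            # pairwise separation distances
sv <- 0.5 * outer(z, z, "-")^2      # semivariance of each pair of points

up   <- upper.tri(d)                # each pair counted once
bins <- cut(d[up], breaks = seq(0, 5, by = 0.5))
gamma_hat <- tapply(sv[up], bins, mean)   # empirical semivariogram per lag bin
round(gamma_hat, 3)
```

Plotting gamma_hat against the bin midpoints shows the behaviour a variogram encodes: small semivariance at short lags (nearby points are similar), rising with distance. Packages such as gstat or geoR do this properly, with anisotropy, cutoffs and model fitting.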
When and how to use weights for sequence analysis in social science?
I assume that you are using sampling weights to correct for representativity bias. Please note that some "data providers" require you to use the weights in your publications. In my opinion, you should always use weights for descriptive analysis in order to get unbiased results; I think there is more consensus for this kind of analysis. Descriptive analysis includes cluster analysis, sequence visualization, and the computation of transition rates (and hence of substitution costs based on them), for instance. For weighted cluster analysis, you can have a look at the WeightedCluster library and manual. Regarding the weights to use, I would recommend longitudinal weights, since the sequences are defined for the whole period, but it depends on the exact weight definition. For a more general answer, you need to answer the following questions: What sample do I have (at what time, and so on)? To which population do I want to generalize? In some panels, longitudinal weights use the sample defined by wave t and generalize it to the population at wave one. This is what you want if you want to follow the evolution of the population from wave one.
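As a concrete illustration of weighted descriptive statistics, here is a minimal base-R sketch (all sequences and weights hypothetical, not from the question) of weighted transition rates between states, the quantity mentioned above as input for substitution costs. TraMineR's seqtrate() does this for real data; the sketch only shows the arithmetic.

```r
# Rows = individuals, columns = time points; w = sampling weights.
seqs <- matrix(c("A", "A", "B",
                 "A", "B", "B",
                 "B", "B", "A"), nrow = 3, byrow = TRUE)
w <- c(2, 1, 0.5)                      # illustrative longitudinal weights

states <- c("A", "B")
trans  <- matrix(0, 2, 2, dimnames = list(from = states, to = states))
for (i in seq_len(nrow(seqs)))
  for (t in seq_len(ncol(seqs) - 1))
    trans[seqs[i, t], seqs[i, t + 1]] <-
      trans[seqs[i, t], seqs[i, t + 1]] + w[i]

rates <- trans / rowSums(trans)        # weighted transition rate matrix
rates["A", "B"]                        # 0.6
```

Each observed transition contributes its individual's weight rather than a count of one, so over- or under-sampled groups no longer distort the rates.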
Implementation of the cross validation
I realize this is a somewhat older question now, but perhaps this can shed some light. There are several things wrong with your code:

The code as it stands is NOT reproducible.
At no point is h defined, which is the exact parameter you wish to tune.
You aren't iterating over different bandwidths. If you are using cross-validation to compare between different models, you need different models (i.e. different parameters).

Modifying your code and utilizing the cars dataset, here is a reproducible example of leave-one-out cross-validation on bandwidth, whereby you pass a 'grid' of different bandwidths to tune the model.

cv <- function(data, used.function, bandwidth.grid) {
  n <- nrow(data)
  mse <- matrix(, nrow=length(bandwidth.grid), ncol=2)
  for (b in 1:length(bandwidth.grid)){
    cv.value <- rep(0, n-1)
    for (i in 1:(n-1)){
      new.data <- data[-i,]
      funcargs <- list(reg.x=new.data[,1], reg.y=new.data[,2], x=data[i,1],
                       h=bandwidth.grid[b])
      cv.value[i] <- do.call(used.function, funcargs)
    }
    mse[b,] <- c(bandwidth.grid[b], 1/n*sum((new.data[,2]-cv.value)^2))  ## MSE
  }
  colnames(mse) <- c("bandwidth", "mse")
  return(mse)
}

### kernel estimator using Nadaraya-Watson:
fcn1 <- function(reg.x, reg.y, x, h){
  return(ksmooth(reg.x, reg.y, x.point=x, kernel="normal", bandwidth=h)$y)
}

attach(cars)

### CV-score for kernel estimator:
cv(cbind(speed, dist), fcn1, seq(10))

> cv(cbind(speed, dist), fcn1, seq(10))
      bandwidth      mse
 [1,]         1 261.9555
 [2,]         2 223.3542
 [3,]         3 217.8303
 [4,]         4 214.0923
 [5,]         5 211.1874
 [6,]         6 211.4104
 [7,]         7 214.6941
 [8,]         8 220.1501
 [9,]         9 227.2262
[10,]        10 235.5479

Here we can see that a bandwidth of 5 would be best.
Given that one can sample $X \sim f(x)$, is there an easy way to sample $Y \sim k \cdot f(g(y))$ (such as $k \cdot f(e^y)$)?
If the inverse of $g(y)$, i.e. $g^{-1}(x)$, is relatively linear for the most probable values drawn from $f(x)$, then the following method should have a reasonable acceptance rate. I'm assuming $g(y)$ is strictly monotonic, so that it has an inverse. This is Metropolis-Hastings: we make a proposal, and then accept or reject appropriately. Make a draw from your sampler, $x \sim f(x)$, and feed that into the inverse to get a proposal $y_{proposal} = g^{-1}(x)$. The probability of making that proposal is $$\propto \frac{f(x)}{ \left| \frac{d}{dx} g^{-1}(x) \right| } $$ That expression uses the derivative of the inverse to account for the fact that it will overrepresent areas where $g^{-1}$ is relatively constant. We must calculate the proposal probability for both the proposed value, $y_{proposal}$, and the current value, $y_{current}$. We can also use the definitions $x_{proposal} = g(y_{proposal})$ and $x_{current} = g(y_{current})$. $$ Acceptance~probability = \operatorname{min} \left(1, \frac{ f(g(y_{proposal})) }{ f(g(y_{current})) } \frac{ \frac{f(x_{current})}{ \left| \frac{d}{dx} g^{-1}(x_{current}) \right| } }{ \frac{f(x_{proposal})}{ \left| \frac{d}{dx} g^{-1}(x_{proposal}) \right| } } \right) $$ Don't worry about the absolute value $|\cdot|$ in that equation: $g(y)$ is either increasing everywhere or decreasing everywhere, and therefore the absolute values will cancel out. We can do a lot of cancelling here; in particular, remember that $x = g(y)$.
$$ Acceptance~probability = \operatorname{min} \left(1, \frac{ f(x_{proposal}) }{ f(x_{current}) } \frac{ \frac{f(x_{current})}{ \left| \frac{d}{dx} g^{-1}(x_{current}) \right| } }{ \frac{f(x_{proposal})}{ \left| \frac{d}{dx} g^{-1}(x_{proposal}) \right| } } \right) $$ $$ Acceptance~probability = \operatorname{min} \left(1, \frac{ { \frac{d}{dx} g^{-1}(x_{proposal}) } }{ { \frac{d}{dx} g^{-1}(x_{current}) } } \right) $$ Finally, I think you can rearrange a little more, and make use of the fact that the derivative of the inverse is the reciprocal of the derivative of the original function: $$ Acceptance~probability = \operatorname{min} \left(1, \frac{ { \frac{d}{dy} g(y_{current}) } }{ { \frac{d}{dy} g(y_{proposal}) } } \right) $$
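As a sanity check, the scheme can be sketched in a few lines of R on a toy instance (my own assumed setup, not from the question): take $f$ to be the Exp(1) density and $g(y)=y^2$ on $y>0$, so the target is proportional to $f(g(y)) = e^{-y^2}$, a half-normal, and the final acceptance ratio reduces to $g'(y_{current})/g'(y_{proposal}) = y_{current}/y_{proposal}$.

```r
set.seed(42)
n_iter <- 20000
y <- numeric(n_iter)   # chain targeting density proportional to exp(-y^2), y > 0
y[1] <- 1
for (t in 2:n_iter) {
  x      <- rexp(1)                    # independent draw from f
  y_prop <- sqrt(x)                    # proposal y = g^{-1}(x)
  accept <- min(1, y[t - 1] / y_prop)  # g'(y_current) / g'(y_proposal)
  y[t]   <- if (runif(1) < accept) y_prop else y[t - 1]
}
mean(y)   # should be near the half-normal mean 1/sqrt(pi) ~ 0.564
```

The chain's mean settles near the analytic value, which is consistent with the acceptance formula derived above.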
Does every log-linear model have a perfectly equivalent logistic regression?
The answer is 'no'. The loglinear model is more general than the logistic regression model. See Fienberg, 1980, Analysis of Cross-Classified Categorical Data, section 6.2 on how to specify a loglinear model so that it corresponds to logistic regression. Actually the reverse is true: If all variables are categorical, then every logistic regression model corresponds to some loglinear model.
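The correspondence in the categorical case can be illustrated numerically with a hypothetical 2x2 table (my own toy data, not from Fienberg): in a saturated Poisson loglinear model, the interaction between predictor and response equals the predictor's coefficient in the logistic model, both being the log odds ratio.

```r
# Hypothetical 2x2 table of counts: predictor x, binary response y.
tab <- data.frame(x = c(0, 0, 1, 1),
                  y = c(0, 1, 0, 1),
                  n = c(30, 10, 15, 25))

loglin <- glm(n ~ x * y, family = poisson, data = tab)   # saturated loglinear
logit  <- glm(y ~ x, family = binomial, weights = n, data = tab)

coef(loglin)["x:y"]   # log odds ratio: log((30*25)/(10*15)) = log(5)
coef(logit)["x"]      # the same value
```

The loglinear model also estimates the marginal distribution of x (the terms not involving y), which the logistic model conditions away; that extra structure is what makes the loglinear family strictly more general.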
37,209
regression with non-independent data
I assume what you have in mind is score as the response and some player attributes as the predictors, e.g. finding out whether blondes score higher. Why not perform the regression with the game as your sample unit? A game of N points must distribute those points between A and B, so you can take player A's score in each game as a binomial response and then include both players' attributes as predictors.
37,210
regression with non-independent data
You have a system of simultaneous equations to deal with, which should have been talked about in your econometrics class (you are an economist, right?). You will be estimating the system using 2SLS or 3SLS methods, provided that you have decent exogenous variables that affect only one of the outcomes, e.g., demographics such as the color of their hair, per Corone's suggestion. You would need to impose symmetry restrictions, so that both equations have the same coefficients. You can also try approaching this as an analysis-of-dyadic-data problem, where the dyads are, of course, the pairs of interacting players. The existing literature on dyadic data tends to come from psychologists, who do not care about endogeneity the way economists do, so you may need to take their suggestions with a grain of salt. Modeling dyadic data in a multilevel way, i.e., with a random effect, is a popular approach. If you have, say, 15 people, and each person played with, say, 6 other people, then you have additional problems with lack of independence across your data set, and a multilevel/random-effect model seems even more appropriate.
37,211
regression with non-independent data
Model the actions of the players instead of the payoffs. That is, predict the probability that a player chooses to cooperate in a particular round as a function of previous rounds (if the game is repeated in your setting) and your covariates. I think this makes more causal sense, as the players actually select the actions, influenced by whatever, and the payoffs are just a deterministic function of the actions. Furthermore, this makes the output variables binary, which simplifies the analysis, as you do not have to think about the potentially difficult dependence between total payoffs. I guess it is also probably fine to treat the strategies selected by each player as conditionally independent given the covariates and history, which makes the analysis just simple prediction of a binary variable. On the other hand, one could argue that unobserved variables might lead to dependence. Ángel Sánchez has applied logistic regression to modeling the probability of cooperating in the Prisoner's Dilemma. Their setting is probably somewhat different, as it involves multiple players in a network, but you should still take a look to see if their approach can be modified to your setting.
37,212
Evaluate $\lim_{n \to \infty} \sum_{j=0}^{n}{{j+n-1} \choose j}\frac{1}{2^{j+n}}$
Another way is to use the combinatorial identity $\displaystyle \sum_{j=0}^m \binom{m+j}{j} 2^{m-j}= 4^m$. (See end of post for a proof.) Dividing by $4^m$, setting $m = n-1$, and extracting the last term, the OP's sum becomes $$ \binom{2n-1}{n} \frac{1}{4^n} + \frac{1}{2} \sum_{j=0}^{n-1}{{j+n-1} \choose j}\frac{1}{2^{j+n-1}} = \binom{2n}{n} \frac{1}{2 \cdot 4^n} + \frac{1}{2}.$$ Since the central binomial coefficient has asymptotic $\binom{2n}{n} \sim \frac{4^n}{\sqrt{\pi n}}$, the last expression approaches $\frac{1}{2}$ as $n \to \infty$. Combinatorial proof that $\displaystyle \sum_{j=0}^m \binom{m+j}{j} 2^{m-j}= 4^m$ (borrowed from an answer of mine on math.SE): Suppose you flip coins until you obtain either $m+1$ heads or $m+1$ tails. After either heads or tails "wins" you keep flipping until you have a total of $2m+1$ coin flips. The two sides count the number of ways for heads to win. For the left side: Condition on the number of tails $j$ obtained before head $m+1$. There are $\binom{m+j}{j}$ ways to choose the positions at which these $j$ tails occurred from the $m+j$ total options, and then $2^{m-j}$ possibilities for the remaining flips after head $m+1$. Summing up yields the left side. For the right side: Heads wins on half of the total number of sequences; i.e., $\frac{1}{2}(2^{2m+1}) = 4^m$. Added: Byron Schmuland has recently answered this question on math.SE as well. My answer is similar to his.
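Both the identity and the limit are easy to check numerically (a quick sketch):

```python
from math import comb

# The identity sum_{j=0}^m C(m+j, j) * 2^(m-j) == 4^m, checked exactly
for m in range(12):
    assert sum(comb(m + j, j) * 2 ** (m - j) for j in range(m + 1)) == 4 ** m

# The OP's sum equals 1/2 + C(2n, n) / (2 * 4^n), which tends to 1/2
def s(n):
    return sum(comb(j + n - 1, j) / 2 ** (j + n) for j in range(n + 1))

print([round(s(n), 4) for n in (1, 10, 100, 1000)])
```

The printed values decrease toward $1/2$ at the rate $\frac{1}{2\sqrt{\pi n}}$ predicted by the central binomial asymptotic.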
37,213
Post hoc test in a 2x3 mixed design ANOVA using SPSS?
Answer edited to implement encouraging and constructive comment by @Ferdi. I would like to:

provide an answer with a fully self-contained script
mention that one can also test more general custom contrasts using the /TEST command
argue that this is necessary in some cases (i.e. the EMMEANS COMPARE combination is not enough)

I assume a database with columns depV, Group, F1, F2. I implement a 2x2x2 mixed design ANOVA where depV is the dependent variable, F1 and F2 are within-subject factors and Group is a between-subject factor. I further assume the F test has revealed that the interaction Group*F2 is significant. I therefore need to use post hoc t-tests to understand what drives the interaction.

MIXED depV BY Group F1 F2
  /FIXED=Group F1 F2 Group*F1 Group*F2 F1*F2 Group*F1*F2 | SSTYPE(3)
  /METHOD=REML
  /RANDOM=INTERCEPT | SUBJECT(Subject) COVTYPE(VC)
  /EMMEANS=TABLES(Group*F2) COMPARE(Group) ADJ(Bonferroni)
  /TEST(0) = 'depV(F2=1)-depV(F2=0) differs between groups'
    Group*F2 1/4 -1/4 -1/4 1/4
    Group*F1*F2 1/8 -1/8 1/8 -1/8 -1/8 1/8 -1/8 1/8
  /TEST(0) = 'depV(Group1, F2=1)-depV(Group2, F2=1)'
    Group 1 -1
    Group*F1 1/2 1/2 -1/2 -1/2
    Group*F2 1 0 -1 0
    Group*F1*F2 1/2 0 1/2 0 -1/2 0 -1/2 0 .

In particular, the second t-test corresponds to the one performed by the EMMEANS command. The EMMEANS comparison could reveal, for example, that depV was bigger in Group 1 in the condition F2=1. However, the interaction could also be driven by something else, which is verified by the first test: the difference depV(F2=1)-depV(F2=0) differs between groups, and this is a contrast you cannot verify with the EMMEANS command (at least I did not find an easy way). Now, in models with many factors it is a bit tricky to write down the /TEST line, i.e. the sequence of 1/2, 1/4, etc., called the L matrix. Typically, if you get the error message "the L matrix is not estimable", you are forgetting some elements.
One link that explains the recipe is this one: https://stats.idre.ucla.edu/spss/faq/how-can-i-test-contrasts-and-interaction-contrasts-in-a-mixed-model/
37,214
Post hoc test in a 2x3 mixed design ANOVA using SPSS?
I don't know SPSS syntax particularly well, but, if I understand your situation correctly, the significant interaction means that, in order to adequately assess the significance of your main effects, you'll need to do separate analyses. I think the best way to proceed is to do separate repeated-measures analyses for each level of your grouping factor. Perhaps someone else can speak better to the question of how to correct for multiple comparisons during post hoc analysis, but I'm pretty sure you still need to use a correction. You might try Tukey's as a multiple comparison correction!
37,215
Post hoc test in a 2x3 mixed design ANOVA using SPSS?
In short: there is no globally accepted convention for these situations. Some will use Bonferroni corrections. Some will force the Tukey HSD framework to dance for them (e.g. Maxwell & Delaney). Adding "COMPARE(time) ADJ(BONFERRONI)" just after "/EMMEANS=TABLES(newgroup*time)" does seem to use the Bonferroni correction. However, this approach is likely conservative, especially compared with Holm-Sidak style corrections (ESPECIALLY if you don't use the MSW as the error term for your post hoc comparisons).
37,216
LARS - LASSO with weights
The glmnet package solves the lasso problem using coordinate descent. It also provides support for observation weights.
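To make "coordinate descent with weights" concrete, here is a minimal pure-Python sketch of the weighted-lasso coordinate update (my own toy code, not glmnet's implementation; in glmnet itself you would simply pass the `weights` argument):

```python
def weighted_lasso_cd(X, y, w, lam, n_iter=200):
    """Coordinate descent for
        min_b  (1/(2n)) * sum_i w_i * (y_i - X_i . b)^2 + lam * |b|_1.
    Pure-Python toy: X is a list of rows, w are observation weights."""
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial residual excluding feature j
            r = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(w[i] * X[i][j] * r[i] for i in range(n)) / n
            z = sum(w[i] * X[i][j] ** 2 for i in range(n)) / n
            # soft-thresholding update for coordinate j
            if rho > lam:
                b[j] = (rho - lam) / z
            elif rho < -lam:
                b[j] = (rho + lam) / z
            else:
                b[j] = 0.0
    return b
```

With a small penalty the fit recovers a strong true coefficient almost exactly while zeroing out an irrelevant feature, which is the qualitative behavior the weighted objective is meant to deliver.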
37,217
Why are there so many random generators in R?
One case where this RNG cornucopia is handy is when you're rewriting or comparing software that relies on different RNGs. One example might be porting R code to C++. You want to pin down why you're getting different results, and it helps to hold as many things constant as possible.
37,218
Variables for post-stratification weights?
It is an interesting twist to find a case where the demographic data will be seen as less reliable than the behavioral data. There isn't much good advice on how to select the calibration variables, other than that they should correlate with both the (non-)response process and the variables of interest. The reasoning behind the widespread use of demographic variables for calibration is that age, education, race and gender affect pretty much everything in any social science. You can make a very simple case with your data, however, by modeling the probability of response as a function of all the variables that you are thinking about including -- a propensity model, if you like. If you can demonstrate that donations are more significant in your model than age, nobody would have grounds to object to your use of the former in calibration. The question of how much calibration is enough has not been addressed much, either. I can think about this conceptually as a trade-off between improving the accuracy (which, for a given response variable $y$ and a set of calibration variables $\bf x$, is the variance of the residuals $e_i = y_i - {\bf x}_i' {\bf b}$) and the increase in the variability of the weights, and hence the design effect $1+{\rm CV}^2$. As you add predictors of diminishing strength, the precision gains taper off; the CV, though, will continue increasing, so at some point, arguably, the two curves will meet, giving you the right number of calibration variables. That's just an idea, but maybe I should write a paper about it :)
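The design effect $1+{\rm CV}^2$ is Kish's approximation for the precision loss from unequal weighting, where CV is the coefficient of variation of the weights; a quick sketch of computing it:

```python
import statistics

def kish_deff(weights):
    """Kish design effect due to unequal weighting: 1 + CV^2 of the weights.
    Algebraically equivalent form: n * sum(w^2) / (sum(w))^2."""
    mean = statistics.fmean(weights)
    cv = statistics.pstdev(weights) / mean
    return 1 + cv ** 2

# equal weights -> no loss; more variable weights -> larger design effect
print(kish_deff([1, 1, 1, 1]), kish_deff([1, 1, 2, 4]))
```
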
37,219
Can you reproduce this chi-squared test result?
It seems to me that there are three things wrong with the conclusion. First, as @caracal said: they are reporting "significance" using a one-tailed test, without saying that they are doing so. Most people, I think, recommend using two-tailed tests almost always. Certainly it is not OK to use a one-tailed test without saying so. Second, the effect is tiny. When there was a signal, the subject (there was only one) detected it 11% of the time (32/293). When there was no signal, she detected a signal 6.5% of the time. That difference seems pretty small. And the subject was not able to detect the signal 89% of the time! Third, as @oddthinking pointed out, there was some selective data reporting that was not properly explained or justified (I didn't read the paper carefully, so am simply repeating what was in the original post).
37,220
Can you reproduce this chi-squared test result?
A Fisher exact test on the given table gives, per this code

actual <- c(rep("Y", 32), rep("N", 19), rep("Y", 261), rep("N", 274))
det <- c(rep("Y", 51), rep("N", 535))
table(det, actual)
fisher.test(det, actual)

a p = 0.08
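The same two-sided p-value can be checked from first principles via the hypergeometric distribution (a from-scratch sketch; the function name is mine, and it follows the usual convention of summing over tables no more probable than the observed one):

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum of hypergeometric probabilities no larger than the observed one."""
    r1, c1, n = a + b, a + c, a + b + c + d
    def p(k):  # P(top-left cell = k) under fixed margins
        return comb(c1, k) * comb(n - c1, r1 - k) / comb(n, r1)
    p_obs = p(a)
    lo, hi = max(0, r1 - (n - c1)), min(r1, c1)
    # small relative tolerance to absorb floating-point ties
    return sum(p(k) for k in range(lo, hi + 1) if p(k) <= p_obs * (1 + 1e-9))

# the table from the answer: detected vs actual signal
print(round(fisher_exact_two_sided(32, 19, 261, 274), 3))
```
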
37,221
Is it possible to interpret standardized beta coefficients for quantile regression?
Yes, that is the interpretation. One way in which you can see this is by predicting the median for different values of your standardized variable, each one unit (in this case, one standard deviation) apart. Then you can look at how much these predicted medians differ, and you will see that that is exactly the same number as your standardized quantile regression coefficient. Here is an example:

. sysuse auto, clear
(1978 Automobile Data)

. // standardize variables
. sum price if !missing(price,weight)

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
       price |        74    6165.257    2949.496       3291      15906

. gen double z_price = ( price - r(mean) ) / r(sd)

. sum weight if !missing(price,weight)

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
      weight |        74    3019.459    777.1936       1760       4840

. gen double z_weight = ( weight - r(mean) ) / r(sd)

. // estimate the quantile regression
. qreg z_price z_weight
Iteration  1:  WLS sum of weighted deviations =  47.263794
Iteration  1: sum of abs. weighted deviations =  54.018868
Iteration  2: sum of abs. weighted deviations =  43.851751

Median regression                                    Number of obs =        74
  Raw sum of deviations 48.21332 (about -.41744651)
  Min sum of deviations 43.85175                     Pseudo R2     =    0.0905

------------------------------------------------------------------------------
     z_price |      Coef.   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
    z_weight |   .2552875   .1368752     1.87   0.066    -.0175682    .5281432
       _cons |  -.3415908   .1359472    -2.51   0.014    -.6125966     -.070585
------------------------------------------------------------------------------

. // predict the median for z_weight
. // = -2, -1, 0, 1, 2
. drop _all

. set obs 5
obs was 0, now 5

. gen z_weight = _n - 3

. predict med
(option xb assumed; fitted values)

. list

     +----------------------+
     | z_weight         med |
     |----------------------|
  1. |       -2   -.8521658 |
  2. |       -1   -.5968783 |
  3. |        0   -.3415908 |
  4. |        1   -.0863033 |
  5. |        2    .1689841 |
     +----------------------+

. // compute how much the predicted median
. // differs between cars 1 standard deviation
. // of weight apart
. gen diff = med - med[_n - 1]
(1 missing value generated)

. list

     +---------------------------------+
     | z_weight         med       diff |
     |---------------------------------|
  1. |       -2   -.8521658          . |
  2. |       -1   -.5968783   .2552875 |
  3. |        0   -.3415908   .2552875 |
  4. |        1   -.0863033   .2552875 |
  5. |        2    .1689841   .2552875 |
     +---------------------------------+
Is it possible to interpret standardized beta coefficients for quantile regression?
Yes, that is the interpretation. One way in which you can see this is by predicting the median for different values of your standardized, each 1 unit (in this case standard deviation) appart. Than you
Is it possible to interpret standardized beta coefficients for quantile regression? Yes, that is the interpretation. One way in which you can see this is by predicting the median for different values of your standardized, each 1 unit (in this case standard deviation) appart. Than you can look at how much these predicted medians differ, and you will see that that is exactly the same number as your standardized quantile regression coefficient. Here is an example: . sysuse auto, clear (1978 Automobile Data) . . // standardize variables . sum price if !missing(price,weight) Variable | Obs Mean Std. Dev. Min Max -------------+-------------------------------------------------------- price | 74 6165.257 2949.496 3291 15906 . gen double z_price = ( price - r(mean) ) / r(sd) . . sum weight if !missing(price,weight) Variable | Obs Mean Std. Dev. Min Max -------------+-------------------------------------------------------- weight | 74 3019.459 777.1936 1760 4840 . gen double z_weight = ( weight - r(mean) ) / r(sd) . . // estimate the quartile regression . qreg z_price z_weight Iteration 1: WLS sum of weighted deviations = 47.263794 Iteration 1: sum of abs. weighted deviations = 54.018868 Iteration 2: sum of abs. weighted deviations = 43.851751 Median regression Number of obs = 74 Raw sum of deviations 48.21332 (about -.41744651) Min sum of deviations 43.85175 Pseudo R2 = 0.0905 ------------------------------------------------------------------------------ z_price | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- z_weight | .2552875 .1368752 1.87 0.066 -.0175682 .5281432 _cons | -.3415908 .1359472 -2.51 0.014 -.6125966 -.070585 ------------------------------------------------------------------------------ . . // predict the predicted median for z_weight . // is -2, -1, 0, 1, 2 . drop _all . set obs 5 obs was 0, now 5 . gen z_weight = _n - 3 . predict med (option xb assumed; fitted values) . 
list +----------------------+ | z_weight med | |----------------------| 1. | -2 -.8521658 | 2. | -1 -.5968783 | 3. | 0 -.3415908 | 4. | 1 -.0863033 | 5. | 2 .1689841 | +----------------------+ . . // compute how much the predicted median . // differs between cars 1 standard deviation . // weight apart . gen diff = med - med[_n - 1] (1 missing value generated) . list +---------------------------------+ | z_weight med diff | |---------------------------------| 1. | -2 -.8521658 . | 2. | -1 -.5968783 .2552875 | 3. | 0 -.3415908 .2552875 | 4. | 1 -.0863033 .2552875 | 5. | 2 .1689841 .2552875 | +---------------------------------+
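The arithmetic behind that Stata session can be reproduced in a few lines of plain Python (a sketch for illustration; the coefficient and intercept are copied from the `qreg` output above). For any linear predictor, fitted values at x-values one unit apart always differ by exactly the coefficient:

```python
# Fitted quantile regression line from the Stata output above
b = 0.2552875   # z_weight coefficient
a = -0.3415908  # intercept

def predicted_median(z_weight):
    """Fitted median from the estimated regression line."""
    return a + b * z_weight

# Predicted medians for z_weight = -2, -1, 0, 1, 2
preds = [predicted_median(z) for z in range(-2, 3)]
# Gaps between adjacent predictions, one standard deviation apart
diffs = [round(p2 - p1, 7) for p1, p2 in zip(preds, preds[1:])]
print(diffs)  # every gap equals the coefficient b
```

This is the same check as the Stata `gen diff = med - med[_n - 1]` step.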
37,222
Combination of variational methods and empirical Bayes
One way of deciding how to run variational MLE is to look at how the experts do it. In Blei's LDA code (http://www.cs.princeton.edu/~blei/lda-c/lda-c-dist.tgz), within the "run_em" function, the "lda_inference" function (inside "doc_e_step") repeatedly maximizes with respect to each $q$ distribution until convergence. After the $q$'s converge, the algorithm maximizes with respect to the parameters in "lda_mle". The justification for this order is that by maximizing with respect to the $q$'s until convergence you get a better estimate of the expectations of hidden variables (or marginalized parameters) needed to maximize with respect to the parameters. In standard EM, of course, the expectations you are computing are exact - which is the main difference between standard and variational EM - so this is not a concern. From the perspective of EM as a maximization algorithm over the function $F(q,\theta)$ (www.cs.toronto.edu/~radford/ftp/emk.pdf) or from the perspective of maximizing the evidence lower bound, it is not clear that maximizing over the $q$'s until convergence is best in terms of computational efficiency, because the algorithm will reach a local maximum no matter the order of maximization steps.
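That last point - coordinate ascent reaches a local maximum regardless of scheduling - can be illustrated with a toy Python sketch. The quadratic objective below is a made-up stand-in for the evidence lower bound (it is not the LDA objective), chosen so both coordinate updates have closed forms:

```python
# Coordinate ascent on F(q, theta) = -(q - theta)^2 - (theta - 3)^2,
# a toy surrogate for maximizing a lower bound over (q, theta).
def q_step(theta):
    """'E-like' step: argmax over q with theta held fixed."""
    return theta

def theta_step(q):
    """'M-like' step: argmax over theta with q held fixed."""
    return (q + 3.0) / 2.0

def run(inner_iters):
    q, theta = 0.0, 0.0
    for _ in range(100):
        for _ in range(inner_iters):  # inner_iters=1: alternate steps;
            q = q_step(theta)         # larger: run q "to convergence"
        theta = theta_step(q)
    return theta

print(run(1), run(10))  # both schedules reach the same maximum
```

Here the inner step is exact after one iteration, so the schedules trivially coincide; in variational EM the $q$ update is itself iterative, which is where the scheduling choice matters for efficiency rather than for the final fixed point.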
37,223
Combination of variational methods and empirical Bayes
Generally, in empirical Bayes, you maximise the marginal likelihood (also called model evidence, or the normalising constant of the posterior) with respect to the hyperparameters and plug this estimate of the hyperparameters into the posterior. In Casella (2001) there is a derivation of an EM for empirical Bayes. Casella first writes the marginal likelihood $m(\eta ; y)$ as: \begin{align} m(\eta ; y) = \frac{p(y, \theta, z | \eta) }{p(\theta, z| y, \eta)}. \end{align} So the marginal likelihood is the joint distribution of model parameters $\theta$, latent parameters $z$ and data $y$ ($p(y,\theta,z| \eta)$) divided by the joint posterior distribution of the model and latent parameters given the data ($p(\theta, z| y, \eta)$). Take the logarithm and expectation with respect to the posterior $p(\theta, z| y, \eta^{(0)})$, for some starting value $\eta^{(0)}$: $$ E [\log m(\eta ; y)| \eta^{(0)}] = E [\log p(y, \theta, z | \eta)|\eta^{(0)}] - E [\log p(\theta, z| y, \eta) | \eta^{(0)}]. $$ By Gibbs' inequality, $E [\log p(\theta, z| y, \eta) | \eta^{(0)}] \leq E [\log p(\theta, z| y, \eta^{(0)}) | \eta^{(0)}]$, so the subtracted term can only decrease as $\eta$ moves away from $\eta^{(0)}$. Hence, maximising the first term, $E [\log p(y, \theta, z | \eta) | \eta^{(0)}]$, with respect to $\eta$ increases the marginal likelihood, such that the sequence $$ \eta^{(k+1)} = \underset{\eta}{\arg \max} \, E [\log p(y, \theta, z | \eta) | \eta^{(k)}] $$ converges to a local maximum of the marginal likelihood. In general, this expectation is not available in closed form. In Casella (2001), the expectation is approximated using $M$ Monte Carlo samples from the posterior $p(\theta, z| y, \eta^{(k)})$: \begin{align} E [\log p(y, \theta, z | \eta) | \eta^{(k)}] & \approx M^{-1} \sum_{m=1}^M \log p(y, \theta^{(m)}, z^{(m)} | \eta), \\ \theta^{(m)}, z^{(m)} & \sim p(\theta, z| y, \eta^{(k)}). \end{align} However, you could of course use other approximating methods, such as variational Bayes. 
The expectation then becomes: $$ E [\log p(y, \theta, z | \eta) | \eta^{(k)}] \approx E_Q [\log p(y, \theta, z | \eta) | \eta^{(k)}], $$ where the right-hand side expectation is now with respect to the variational posterior. So, in the EM algorithm to find the hyperparameters that maximise the marginal likelihood, the E step is now a variational Bayes EM itself, so we have an EM within an EM. Which is basically what you describe. That also explains why it doesn't work so well when you update the hyperparameters at every iteration: the expectation with respect to the variational Bayes posterior is not very accurate after one iteration. Updating the hyperparameters after the variational Bayes algorithm has converged gives better performance, since the expectation is more accurate.
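As a numeric sanity check of the EM-for-empirical-Bayes recursion, here is a Python sketch on a conjugate toy model of my own choosing (not Casella's example): $\theta_i \sim N(\eta, 1)$ and $y_i \mid \theta_i \sim N(\theta_i, 1)$, so marginally $y_i \sim N(\eta, 2)$ and the marginal-likelihood MLE of the hyperparameter is known in closed form, $\hat\eta = \bar y$:

```python
import statistics

y = [1.2, -0.4, 2.1, 0.7, 1.5]  # made-up data
eta = 0.0                        # starting value eta^(0)
for _ in range(200):
    # E step: posterior mean of theta_i given y_i and eta is (y_i + eta)/2
    post_means = [(yi + eta) / 2 for yi in y]
    # M step: maximise E[log p(y, theta | eta)] over eta,
    # which for this model is the mean of the posterior means
    eta = statistics.mean(post_means)

print(round(eta, 6))  # EM recovers the closed-form MLE, mean(y)
```

The iterates satisfy $\eta^{(k+1)} = (\bar y + \eta^{(k)})/2$, so the sequence converges geometrically to $\bar y$, matching the general argument above.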
37,224
Getting probabilities over 1 in positive and unlabeled learning
I tried to follow the authors' instructions with a very simple test case. Here is the R code that I used. set.seed(123) # Create the data according to authors' notations. # For the first 100 observations y = 1, and for the last 100 y = 0. # Only the first 25 observations are labelled, according to their # "chosen completely at random" criterion. c <- 25/100 data <- data.frame( y = c(rep(1,100), rep(0,100)), x = c(rnorm(100, mean=2), rnorm(100, mean=0)), s = c(rep(1,25), rep(0, 175)) ) # Train a standard logistic classifier on s g <- glm(s ~ x, data=data, family="binomial") max(g$fitted.values / c) # 1.676602 So even on very simple cases the estimated probabilities go higher than one. The reason this happens is that the fitted values $g(x)$ are only estimates of $p(s=1|x)$. Since all we have is $g(x) \approx p(s=1|x)$, and $c$ itself is estimated, nothing prevents $g(x) / c$ from being greater than one. Actually the method has very strong practical limits. First you need to have a good estimate of $c$, which you can only get with a traditional dataset. Second, there has to be a single sample "chosen completely at random". Third... well you just pointed out the third point ;-) Note: from your description, I got the feeling that your sampling scheme is not "chosen completely at random", as the authors insist heavily. There is a subtle but important difference between the scenario considered here, and the scenario considered in [21]. The scenario here is that the training data are drawn randomly from $p(x, y, s)$, but for each tuple $\langle x, y, s\rangle$ that is drawn, only $\langle x, s\rangle$ is recorded. The scenario of [21] is that two training sets are drawn independently from $p(x, y, s)$.
37,225
How to generate nice summary table?
If you have the R package Hmisc and a working LaTeX installation you can do: x=rnorm(1000) y=rnorm(1000) lm1=lm(y~x) slm1=summary(lm1) latex(slm1) It works the same with datasets, e.g. latex(summary(cars))
37,226
Why do we need sparsity for auto-encoders?
Remember that auto-encoders are trying to come up with the best summarized/compressed (hidden/latent) representation of the data such that the reconstruction error is minimized. This can be expressed in equations as follows: $$ \min_{\hat{x} \in \mathcal{F}} \frac{1}{n} \sum^{n}_{t=1} cost(x^{(t)}, \hat{x}^{(t)} )$$ where $\hat{x}^{(t)} = \hat{x}(x^{(t)})$ is the reconstruction for data point $t$, $\mathcal{F}$ is the space of functions we are considering and $cost$ is some cost/distance function of your choice. If you choose a model as powerful as a multi-layer neural network, which is a universal approximator, then it can basically copy any input data that you give it. Therefore, the solution to the above problem can become vacuous and trivial, since your neural network might learn the identity mapping: $\hat{x}^{(i)} = \hat{x}(x^{(i)}) = x^{(i)}$. This happens whenever the cost function satisfies $cost(x^{(t)}, x^{(t)} ) = 0$, which is true for a lot of commonly used cost functions, like Euclidean distance. For example: $$ \min_{\hat{x} \in \mathcal{F}} \frac{1}{n} \sum^{n}_{t=1} \frac{1}{2} \| x^{(t)} - \hat{x}^{(t)} \|^2_{2}$$ is zero when $x^{(t)} - \hat{x}^{(t)} = 0 \iff x^{(t)} = \hat{x}^{(t)}$. Therefore, without any further constraints on the space of functions you are considering, you would not learn a meaningful compressed/hidden representation of the data (since your system/algorithm is too powerful and is essentially "overfitting"). Requiring a sparse solution is just one way of getting around this problem. It is simply imposing a prior on your given problem so that you can get a meaningful solution. Intuitively, it's similar to getting around the "no-free-lunch" issue. Without some kind of prior, it's hard to get something useful.
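The degenerate solution can be made concrete in a few lines of Python (a toy sketch - no network is trained here, the "autoencoder" is just the identity map, which an unconstrained universal approximator could learn):

```python
def identity_autoencoder(x):
    """Encoder and decoder collapse to the identity mapping."""
    return x

def cost(x, x_hat):
    """Squared Euclidean reconstruction cost, 0.5 * ||x - x_hat||^2."""
    return 0.5 * sum((a - b) ** 2 for a, b in zip(x, x_hat))

data = [[0.1, 2.0, -1.3], [4.2, 0.0, 0.5]]  # made-up data points
total = sum(cost(x, identity_autoencoder(x)) for x in data)
print(total)  # 0.0 -- a "perfect" but useless representation
```

The cost is exactly zero, yet nothing has been compressed or summarized - which is why an extra constraint such as sparsity is needed.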
37,227
Does using box-cox transformation on individual data sets prevent these data from being comparable?
Yes, if they are on different scales you logically cannot compare them. Had the transformations all been the same you could, since the power transform is monotonic. However, the sample variance changes because of the transformation and that would need to be accounted for. But in your situation, you cannot compare them.
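A small Python sketch of why different $\lambda$'s break comparability (the `boxcox` helper and the sample values below are made up for illustration): within each series the transform is monotonic, so ranks are preserved, but two series transformed with different $\lambda$'s land on entirely different scales.

```python
import math

def boxcox(x, lam):
    """Box-Cox power transform: log(x) at lambda=0, else (x^lam - 1)/lam."""
    return math.log(x) if lam == 0 else (x ** lam - 1) / lam

xs = [1.0, 2.0, 5.0, 10.0]              # same raw values for both
t_log = [boxcox(x, 0) for x in xs]      # lambda = 0 (log transform)
t_sq  = [boxcox(x, 2) for x in xs]      # lambda = 2

# Monotonic within each series: ordering of the values is preserved
print(t_log == sorted(t_log), t_sq == sorted(t_sq))  # True True
# But the scales diverge wildly across series
print(max(t_log), max(t_sq))  # ~2.30 vs 49.5
```

So comparing a value from the $\lambda=0$ series with one from the $\lambda=2$ series is meaningless, even though the raw data were identical.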
37,228
Posterior simulations of the variances with the mcmcsamp function
The short(ish) answer is that as.data.frame(mm,type="varcov") should extract the chains for the fixed effects and for the random-effect and residual variances in the form of a data frame. For example: library(lme4.0) ## I am using the r-forge version fm2 <- lmer(Reaction ~ Days + (1|Subject) + (0+Days|Subject), sleepstudy) mm <- mcmcsamp(fm2,1000) dd <- as.data.frame(mm,type="varcov") burnin <- 100 ## probably unnecessary summary(dd[-(1:burnin),]) Unfortunately this vector doesn't get useful names for the variance components. You can use vnames <- c(names(getME(fm2,"theta")),"sigma^2") names(dd)[3:5] <- vnames to remedy this (instead of hard-coding the positions in the last step you could do something with -1:(length(fixef(fm2)))) The other part of this answer is that I am having some serious doubts/questions about the behavior of the mcmcsamp chains, but I will correspond off-list: a partial/preliminary (and possibly wrong!) discussion of my confusion is posted at http://www.math.mcmaster.ca/~bolker/R/misc/mcmcsampex.pdf .
37,229
Jack-knife with time series models
I don't understand why you think $q>0$ is a problem for prediction. It is easy enough to forecast using an ARIMA model with MA terms and you don't need to use the innovations algorithm of Brockwell and Davis. That algorithm is useful for estimation; in particular, in getting starting values when optimizing the likelihood. To answer your specific questions: auto.arima() calls arima() which uses a state space representation for computing the likelihood. Missing values are handled naturally in a state space format. So, yes, they are handled correctly. Missing historical values are not estimated by arima(). If you want to forecast them (i.e., using only past data), just fit a model up to the start of the missing sequence and then forecast from it. If you want to estimate them (using data before and afterwards), you would need to use a Kalman smoother based on the equivalent state space model. An alternative fudge that gives almost the same results is to average the forecasts using data up to the last non-missing data with the backcasts using data up to the first non-missing data after the missing sequence.
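The averaging fudge in the last sentence can be sketched in Python (a toy AR(1) with a made-up coefficient standing in for a fitted ARIMA model; this is the approximation, not the Kalman smoother):

```python
phi = 0.8                        # hypothetical fitted AR(1) coefficient
series = [1.0, 0.9, None, 0.5]   # index 2 is missing

# Forecast the gap using only data before it, and "backcast" it
# using only data after it, then average the two estimates.
forecast = phi * series[1]       # one step ahead from the prior value
backcast = phi * series[3]       # one step back from the following value
estimate = (forecast + backcast) / 2
print(round(estimate, 3))
```

In practice each direction would come from a model re-fitted up to the relevant non-missing data, as described above; the averaging step itself is this simple.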
37,230
How to analyse a ranking and rating scale together?
I assume you can't just ask them if they prefer a laptop or a tablet; or you want to check what they think they prefer against what you think they should prefer... There are a number of ways to do this. This is in fact a version of the very common real life problem of evaluating job applicants, or tenders for contracting work - you need to decide on criteria, weight them, and rate the candidates against the criteria. You have emphasised the problem of weighting the criteria, but the rating of candidates (laptop and tablet) against the criteria is crucial, as was the choice of the six criteria in the first place. These are largely judgement rather than statistical questions. There are two steps necessary: combine the information in the two questions to give you weightings for the criteria; and then compare the importance given to the six qualities with the performance of the two products against those six qualities. Your first problem is that you have two questions that are apparently (see my comment) getting at basically the same underlying factor and that respondents will inevitably not be completely consistent in their answers (although hopefully not as much as in your example, where storage capacity is the lowest priority but "very important"!) One approach to combining these two is to convert the ranking to a rating on the same scale as the second question and then take an average. You could do this for example by $rate_{new}=\frac{rate+rank\times\frac{4}{5}+0.2}{2}$. This is a bit crude, but the fact is there is no really satisfactory way of combining the two without drawbacks of some sort. 
Converting rankings to ratings and vice versa is a problem however you do it, and some kind of rule of thumb is needed to deal with ties in the ratings (if you want to turn them into rankings) or unknown range behind the rankings (if you want to turn them into ratings, i.e. the user has been forced to rank from 1 to 6, but really might think they are all really important - or unimportant...). The next crudity is you will need to score the products against the six qualities. Often subjects would have been asked to do this, but in this case it looks like you have to do it yourself. You will produce a matrix like: Tablet Laptop Storage capacity 4 2 Portability 1 2 Touch interface 1 4 Keyboard 5 1 Long battery life 3 2 Entertainment on the go 1 3 I've kept to the convention you have of low scores being good. Then you just multiply and sum your importance ratings by these quality scores and you get a score for tablet and one for laptop. The one with the lowest score is the preference - you don't need a threshold, just to compare the two scores. Note that how you score the two products against the six qualities will be crucial in this - probably more important than how you generated the weightings. So you'd want to try a range of different scores and see which ones give plausible results. There's no statistical way of getting the "right" scores, with the information you've got. If you knew people's actual laptop/tablet preferences, you could perhaps generate a set of scores that produced those preferences, but then the whole exercise would be a different one. 
See below for some R code and output that implements this and suggests that your somewhat confused subject might actually want a laptop: > r1 <- c(6,5,1,4,2,3) > r2 <- c(1,3,1,1,2,4) > newrate <- (r2+r1*4/5+.2)/2 > products <- as.matrix(data.frame(Tablet=c(4,1,1,5,3,1), Laptop=c(2,2,4,1,2,3))) > cbind(products, newrate) Tablet Laptop newrate [1,] 4 2 3.0 [2,] 1 2 3.6 [3,] 1 4 1.0 [4,] 5 1 2.2 [5,] 3 2 1.9 [6,] 1 3 3.3 > newrate%*%products Tablet Laptop [1,] 36.6 33.1
How to analyse a ranking and rating scale together?
I assume you can't just ask them if they prefer a laptop or a tablet; or you want to check what they think they prefer with what you think they should prefer... There are a number of ways to do this.
How to analyse a ranking and rating scale together? I assume you can't just ask them if they prefer a laptop or a tablet; or you want to check what they think they prefer with what you think they should prefer... There are a number of ways to do this. This is in fact a version of the very common real life problem of evaluating job applicants, or tenders for contracting work - you need to decide on criteria, weight them, and rate the candidates against the criteria. You have emphasised the problem of weighting the criteria, but the rating of candidates (laptop and tablet) against the criteria is crucial, as was the choice of the six criteria in the first place. These are largely judgement rather than statistical questions. There are two steps necessary: combine the information in the two questions to give you weightings for the criteria; and the compare the importance given to the six qualities to the performance of the two products against those six qualities. Your first problem is that you have two questions that are apparently (see my comment) getting at basically the same underlying factor and that respondents will inevitably not be completely consistent in their answers (although hopefully not as much as in your example, where storage capacity is the lowest priority but "very important"!) One approach to combining these two is to convert the ranking to a rating on the same scale as the second question and then take an average. You could do this for example by $rate_{new}=\frac{rate+rank*\frac{4}{5}+0.2}{2}$. This is a bit crude, but the fact is there is no really satisfactory way of combining the two without drawbacks of some sort. 
Converting rankings to ratings and vice versa is a problem however you do it, and some kind of rule of thumb is needed to deal with ties in the ratings (if you want to turn them into rankings) or unknown range behind the rankings (if you want to turn them into ratings ie the user has been forced to rank from one to 6, but really might think they are all really important - or unimportant...). The next crudity is you will need to score the products against the six qualities. Often subjects would have been asked to do this, but in this case it looks like you have to do it yourself. You will produce a matrix like: Tablet Laptop Storage capacity 4 2 Portability 1 2 Touch interface 1 4 Keyboard 5 1 Long battery life 3 2 Entertainment on the go 1 3 I've kept to the convention you have of low scores being good. Then you just multiply and sum your importance ratings by these quality scores and you get a score for tablet and one for laptop. The one with the lowest score is the preference - you don't need a threshold, just to compare the two scores. Note that how you score the two products against the six qualities will be crucial in this - probably more important than how you generated the weightings. So you'd want to try a range of different scores and see which ones give plausible results. There's no statistical way of getting the "right" scores, with the information you've got. If you knew people's actual laptop/table preferences, you could perhaps generate a set of scores that produced those preferences, but then the whole exercise would be a different one. 
See below for some R code and output that implements this and suggests that your somewhat confused subject might actually want a laptop:

    > r1 <- c(6,5,1,4,2,3)
    > r2 <- c(1,3,1,1,2,4)
    > newrate <- (r2+r1*4/5+.2)/2
    > products <- as.matrix(data.frame(Tablet=c(4,1,1,5,3,1),
    +                                  Laptop=c(2,2,4,1,2,3)))
    > cbind(products, newrate)
         Tablet Laptop newrate
    [1,]      4      2     3.0
    [2,]      1      2     3.6
    [3,]      1      4     1.0
    [4,]      5      1     2.2
    [5,]      3      2     1.9
    [6,]      1      3     3.3
    > newrate %*% products
         Tablet Laptop
    [1,]   36.6   33.1
37,231
How to analyse a ranking and rating scale together?
To think that you can learn what is important in people's decisions simply by asking expresses unjustified optimism. But there are some sound methods of "deriving" the importance of different factors. Years and years of research in psychology and behavioural economics have borne this out. A colleague and I summarized some findings from the literature on this topic and explored some ways to apply them (in a higher education context) here.
37,232
How to analyse a ranking and rating scale together?
This is an unsupervised learning task. Here is a very simple idea, which, if incorrect, I hope someone else points out. Feed your ten variables into a PCA to extract 2 PCs. Use the two principal components in a 2-means clustering algorithm to define boundaries for the assignment to each of two groups. Examine the PCs, and name them Tablet and Laptop if it makes sense to do so. You now have a criterion based on a linear combination of your 10 variables. The main problem I see with this is that you won't necessarily end up with a definite "laptop" versus "tablet" prediction algorithm. To get something like that you would ideally have at least a few data points with outcomes to train on.
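A minimal sketch of that pipeline in plain NumPy (the data are simulated stand-ins for the ten survey variables, and both the PCA and the 2-means step are hand-rolled here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for 200 respondents' answers on 10 variables, with two loose groups
ratings = rng.normal(size=(200, 10))
ratings[:100] += 1.5

# PCA via SVD of the centered data: scores on the first two components
X = ratings - ratings.mean(axis=0)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
pcs = X @ Vt[:2].T

# Plain 2-means (Lloyd's algorithm) on the two PC scores
centers = pcs[rng.choice(len(pcs), size=2, replace=False)]
for _ in range(50):
    labels = np.argmin(((pcs[:, None, :] - centers) ** 2).sum(axis=2), axis=1)
    new_centers = []
    for k in range(2):
        pts = pcs[labels == k]
        new_centers.append(pts.mean(axis=0) if len(pts) else centers[k])
    centers = np.array(new_centers)
# `labels` now assigns each respondent to one of two clusters; inspecting the
# loadings in Vt[:2] is what would let you call them "Tablet" and "Laptop".
```

Whether the resulting clusters actually mean "tablet person" vs "laptop person" is a judgement call made by reading the loadings, which is exactly the caveat above.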
37,233
Tracking down the assumptions made by SciPy's ttest_ind() function
By using the SciPy built-in function source(), I could see a printout of the source code for the function ttest_ind(). Based on the source code, the SciPy built-in is performing the t-test assuming that the variances of the two samples are equal. It is not using the Welch-Satterthwaite degrees of freedom. I just want to point out that, crucially, this is why you should not just trust library functions. In my case, I actually do need the t-test for populations of unequal variances, and the degrees of freedom adjustment might matter for some of the smaller data sets I will run this on. SciPy assumes equal variances but does not state this assumption. As I mentioned in some comments, the discrepancy between my code and SciPy's is about 0.008 for sample sizes between 30 and 400, and then slowly goes to zero for larger sample sizes. This is an effect of the extra (1/n1 + 1/n2) term in the equal-variances t-statistic denominator. Accuracy-wise, this is pretty important, especially for small sample sizes. It definitely confirms to me that I need to write my own function. (Possibly there are other, better Python libraries, but this at least should be known. Frankly, it's surprising this isn't anywhere up front and center in the SciPy documentation for ttest_ind()).
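For reference, a hand-rolled Welch's t-test is only a few lines. This sketch implements the unequal-variances statistic and the Welch-Satterthwaite degrees of freedom described above (note that later SciPy releases added an `equal_var=False` option to `ttest_ind` for exactly this case):

```python
import numpy as np
from scipy import stats

def welch_ttest(a, b):
    """Two-sample t-test without assuming equal variances (Welch's t-test)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    n1, n2 = len(a), len(b)
    v1, v2 = a.var(ddof=1), b.var(ddof=1)   # unbiased sample variances
    # Welch's t-statistic: each sample contributes its own variance term
    t = (a.mean() - b.mean()) / np.sqrt(v1 / n1 + v2 / n2)
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = (v1 / n1 + v2 / n2) ** 2 / (
        (v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1)
    )
    p = 2 * stats.t.sf(abs(t), df)          # two-sided p-value
    return t, df, p
```

The degrees-of-freedom adjustment is what matters most for small samples, which is where the equal-variance assumption bites hardest.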
37,234
Relationship between correlation and sample variance
This can be explained as follows. Mathematically, given two variables X and Y, their correlation is defined as covariance(X,Y)/(Standard Deviation(X)*Standard Deviation(Y)). In other words, the correlation is proportional to the covariance of the two variables. The divisor in the equation has a scaling effect on the covariance so that the resulting correlation will lie between -1 and +1. So, all other things being equal, reducing the covariance will reduce the correlation.

The effect of having similar school achievement is to reduce the covariance between IQ and school achievement. For example, given a wide range of IQs, if school achievement is similar then school achievement doesn't co-vary with IQ, i.e. there is a relatively random relationship between achievement and IQ, i.e. the correlation is close to zero, indicating (relatively speaking) no relationship. On the other hand, given a wide range of IQs, if school achievement is also spread over a wide range then the correlation can still take any value between -1 (a negative relationship) and +1 (a positive relationship), including 0 (indicating no relationship).

Getting back to your question, it is the reduction in covariance that is important here rather than the reduction in variance.
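A quick simulation (with made-up effect sizes, not the actual IQ/achievement data) shows the mechanism: selecting people with similar achievement shrinks the covariance, and the sample correlation drops with it:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
iq = rng.normal(100, 15, n)
# achievement related to IQ plus independent variation (invented coefficients)
achievement = 0.6 * (iq - 100) / 15 + 0.8 * rng.normal(size=n)

r_full = np.corrcoef(iq, achievement)[0, 1]
# keep only people with very similar achievement levels
band = np.abs(achievement) < 0.25
r_band = np.corrcoef(iq[band], achievement[band])[0, 1]
# r_band comes out much closer to zero than r_full
```

IQ still varies widely inside the band, but achievement barely does, so the covariance (and hence the correlation) collapses.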
37,235
Relationship between correlation and sample variance
Yes, there is a mathematical, although rather conceptual, explanation. I was puzzled by the same question until now. First, why we were puzzled:

1) If you are calculating the correlation coefficient in a sample with lower variance (e.g. all with similar school achievements) BUT which truly and perfectly represents the larger population to which it belongs (the city's population, which has higher variance), the correlation coefficients should be very similar, since the covariance and SDs will change together. This could hold for simulated data.

2) Real samples almost never represent the population perfectly, so the sample's correlation coefficient can be either higher or lower than the population's, depending on which section of the population you selected (that is, if the population correlation coefficient is not perfect, i.e. less than 1, of course). However, the overwhelming tendency is for the lower-variance sample's coefficient to be lower than that of the higher-variance population (or of another same-sized sample with higher variance). Why?

My opinion (and answer): noise. Every measuring tool has a degree of error and a degree of precision. Measurement error explains the reduced coefficients in a thin slice of scale/continuous data mentioned before. While the absolute size of the error is always the same, its relative size increases as you "zoom in". The "shrinking variance" will approach the size of the error itself, thus increasing the contribution of noise and decreasing the measured correlation (not the true correlation!), even if everything else is controlled for. Blunt instruments such as questionnaires suffer more from imprecision, where a measured point, post-graduation for example, is too coarse, representing a wide variety of achievements, and might have blurred boundaries (is any course taken after graduation a post-graduation course?).

Plus, and very frequently, people use Pearson's correlation coefficient to measure these relationships, which is inappropriate for ordinal data and further contributes to the dampening of coefficients in the face of lower variance.
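The noise argument can also be simulated (again with invented numbers): add a fixed-size measurement error to the predictor, and the observed correlation in a thin slice of the trait falls much further than the full-range correlation, because the same absolute error is now large relative to the remaining spread:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
x_true = rng.normal(size=n)                      # true standardized trait
y = 0.8 * x_true + 0.6 * rng.normal(size=n)      # outcome, true r = 0.8
x_obs = x_true + rng.normal(scale=0.4, size=n)   # same-sized error everywhere

r_full = np.corrcoef(x_obs, y)[0, 1]
# "zoom in" on a thin slice of the trait: the absolute error is unchanged,
# but it now dominates the (much smaller) remaining true variation
thin = np.abs(x_true) < 0.3
r_thin = np.corrcoef(x_obs[thin], y[thin])[0, 1]
```

This is the classical attenuation effect: the error term stays the same size while the signal variance shrinks, so the measured correlation sinks toward zero.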
37,236
Relationship between correlation and sample variance
The correlation coefficient (I frequently use the intraclass as a measure of test-retest reliability) is often defined as the ratio of between-subject variation to the total variation (between-subject + within-subject). If the between-subject variation is high (e.g., persons with very different school types) compared to the within-subject variation, then the correlation coefficient would be high.
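That ratio is the one-way intraclass correlation, ICC(1). A small NumPy sketch of the standard ANOVA-based estimator (function name and layout are my own):

```python
import numpy as np

def icc_oneway(data):
    """ICC(1): between-subject variance as a share of total variance,
    estimated from a one-way ANOVA decomposition.
    `data` is an (n_subjects, k_measurements) array."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    subj_means = data.mean(axis=1)
    ms_between = k * ((subj_means - data.mean()) ** 2).sum() / (n - 1)
    ms_within = ((data - subj_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

With a wide spread between subjects relative to retest noise, the estimate approaches 1; with no stable subject effect it hovers near 0, matching the verbal description above.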
37,237
Calculate average of a set numbers with reported standard errors
"The variance of the sums is the sum of the variances". So: Square each of the 365 standard errors so they become variances. Add them together; this will give you the variance of the annual total. Divide that variance by 365^2; this will give you the variance of the annual average. Take the square root of that variance; this will give you the standard error of your annual average. From there, I suspect your sample size is big enough (bigger than 500 in total, right?) it doesn't matter too much what the underlying population is (log normal etc) as your estimate is probably roughly normally distributed due to the central limit theorem. So multiply the standard error calculated above by 1.96 to give the +/- of your 95 percent confidence interval. Edit / addition On reflection, my answer above is probably incomplete. I should have asked you for more context. Most importantly there is a question about where your original data come from. Are they themselves the averages of sets of independent observations? (or something similar, eg output from a regression) If so, are they each based on the same number of observations? If not ie if they are based on different numbers of observations, you will need a weighted average, and hence a weighted estimate of the variance. Arguably, you may want to do this anyway. In this case, weighting should be proportional to the inverse of the variance of each "prediction" in your data set; or to the number of observations behind each "prediction".
37,238
Calculate average of a set numbers with reported standard errors
In Ken Tatebe’s “Combining Multiple Averaged Data Points And Their Errors” (PDF—linked in Steven Howell’s answer), Tatebe shows that given two "bins" of averaged data points $a$ and $b$, each reported as an average ($\bar a$, $\bar b$) and an error ($\varepsilon_a$, $\varepsilon_b$), with bin sizes $n_a$ and $n_b$ (where $n_a$ + $n_b$ = $n$ is the number of original data points), the average of the entire data set $\bar S$ is given by the weighted average of the averages: $$ \bar S = (\frac{n_a}{n}) \bar a + (\frac{n_b}{n}) \bar b $$ and the error $\varepsilon_S$ is given by $$ \varepsilon_S = \sqrt{\frac{N_a}{N}\varepsilon^2_a+\frac{N_b}{N}\varepsilon^2_b+\frac{n_an_b(\bar a - \bar b)^2}{nN}} $$ Tatebe goes on to state: If more than two averaged points need to be combined, the formulae may be used repeatedly to combine multiple sets. For example, if one has three averaged data points $\bar a$, $\bar b$ and $\bar c$ one may use Equations 10 and 48… (equations 10 and 48 being $\bar S$ and $\varepsilon_S$ above, respectively) …to combine $\bar a$ and $\bar b$. This result can then be combined with $\bar c$ to get the final composite point. I’ve written a Python implementation; it’s untested, but in case it’s useful to anyone else, it’s available here as a gist.
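Under the simplest reading of the notation (taking $N = n$, $N_a = n_a$, $N_b = n_b$, and the errors as the within-bin population standard deviations of the raw points; check this against the paper's own definitions before relying on it), the combination is exact, as this sketch verifies:

```python
import numpy as np

def combine_bins(mean_a, err_a, n_a, mean_b, err_b, n_b):
    """Combine two bins of averaged points into one mean and error,
    reading the errors as within-bin (population) standard deviations."""
    n = n_a + n_b
    mean = (n_a * mean_a + n_b * mean_b) / n
    var = (n_a * err_a ** 2 + n_b * err_b ** 2) / n \
        + n_a * n_b * (mean_a - mean_b) ** 2 / n ** 2
    return mean, np.sqrt(var)
```

Applied pairwise, as Tatebe suggests, this folds any number of bins into one composite point.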
37,239
How to find implementation details for a SAS procedure?
This information is not documented because SAS is a proprietary statistical software suite. Sharing such info (from their perspective) would compromise the integrity of their license. Because statistics lack any kind of IEEE-like standard for numerical routines, nothing can be inferred about the methods used without inspecting the source code. I get the sense from your question that you are doing this for your own enrichment. If that's the case, I recommend doing this in R (it will benefit you in the long run as well). To answer your specific question, predicted values from (linear) regression models are based on the covariance matrix of the parameter estimates. By predicted probabilities, I'm not sure if you have some regression routine like logistic or probit GLMs, which estimate probabilities. By writing the fitted values as a linear combination of the model parameters, $x^t\hat{\beta}$, you compute its variance directly: $ \mbox{var}(x^t\hat{\beta}) = x^t \mbox{var}(\hat{\beta}) x $ This gives the confidence band. One can use the estimated residual variance to account for resampling variability for predictions. I recommend taking a look at Seber and Lee's ancient text on Linear Regression Analysis if you want more info on this. This is slightly more complicated in the GLM case.
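The variance formula above is easy to apply directly. This NumPy sketch (simulated data, names my own) computes a fitted value, the confidence band from $x^t \mbox{var}(\hat{\beta}) x$, and the wider prediction interval that also carries the residual variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.uniform(0, 10, n)])  # intercept + x
y = X @ np.array([2.0, 0.5]) + rng.normal(0, 1, n)

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
s2 = resid @ resid / (n - X.shape[1])        # estimated residual variance
cov_beta = s2 * np.linalg.inv(X.T @ X)       # var(beta_hat)

x0 = np.array([1.0, 5.0])                    # prediction at x = 5
fit = x0 @ beta_hat
se_mean = np.sqrt(x0 @ cov_beta @ x0)        # x^t var(beta_hat) x
se_pred = np.sqrt(s2 + x0 @ cov_beta @ x0)   # adds resampling variability
band = (fit - 1.96 * se_mean, fit + 1.96 * se_mean)
```

The confidence band covers the mean response; the prediction interval (using `se_pred`) covers a new observation, which is the "resampling variability" mentioned above.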
37,240
Accuracy of a random classifier
I am not sure I understand the last part of your question But, how can I compare the accuracy of my classifier without citing a test set? but I think I understand your concern. A given binary classifier's accuracy of 90% may be misleading if the natural frequency of one case vs the other is 90/100. If the classifier simply always chooses the most common case then it will, on average, be correct 90% of the time. A useful score to account for this issue is the Information score. A paper describing the score and its rationale can be found here. I learned about this score because it is part of the cross-validation suite in the excellent Orange data mining tools (you can use no-coding-needed visual programming or call libraries from Python).
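The no-skill baselines being corrected for are easy to write down. For a binary problem with class prior p (a tiny helper of my own, purely for illustration):

```python
def baseline_accuracies(p):
    """Expected accuracy of two no-skill baselines given class prior p:
    guessing classes at random in proportion to the priors, and always
    guessing the majority class."""
    random_guess = p ** 2 + (1 - p) ** 2
    majority = max(p, 1 - p)
    return random_guess, majority
```

With a 90/10 split, prior-matched random guessing is already right 82% of the time and the majority rule 90%, so a reported 90% accuracy carries essentially no information by itself.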
37,241
Will quantum computing allow new statistical techniques?
Frankly I doubt this would ever work -- IMO every more complex structure will just melt from decoherence. Nevertheless, probably the most obvious use is doing complete combinatorial sweeps over larger numbers of variables, or huge Monte Carlo simulations. Yet those are things achievable by molecular computers -- imagine $10^{23}$ combinations evaluated at once (-; And such stuff is more realistic; a similar setup has already been used for the travelling salesman problem. Of course there are problems, like the fact that the assembly step is still longer than the solving step and that those are one-time devices, but we are at the very beginning of this path.
37,242
Will quantum computing allow new statistical techniques?
If it actually worked, and was something you could implement statistical code on in one way or another? Absolutely. There are undoubtedly new techniques that could emerge from throwing yet more computational firepower at something. Or, as importantly, from making currently bleeding-edge, computationally intensive techniques accessible. Just think about current computers - Bayesian estimation isn't exactly new. But being able to run MCMC-based analysis on massively complex data sets where that's not the focus of the paper, but just something that happened along the way, is a profoundly powerful thing. So even if they don't bring about new techniques (which they will), being able to say "yeah, sure we can do that" to computationally intensive techniques on huge data sets is a big deal.
37,243
Bootstrapped parameter and fit estimates with non-normality for structural equation models
The following are just a few points:

If you have departure from normality then bootstrapping is often a good idea.

You mention using "1000" replicates. Increasing the number of replicates increases computational time and accuracy. Thus, sometimes when first setting up your model, you'll set the number of replicates at a level that is relatively quick to run. However, for the final model that you report, you may want to push the number of replicates up to 10,000 or more.

If the departure of your data from normality is mild, then coefficient and model fit tests that assume normality are often a reasonable approximation. In particular, when you have a big sample, as is often the case with structural equation modelling, significance tests with normality as the null hypothesis are often overly sensitive for the purpose of deciding whether to persist with methods that assume normality. I would pay more attention to actual indices of non-normality like skewness and kurtosis values (or, if your intuition is sufficiently trained, check out histograms of the variables).

If the departure from normality is mild, I would expect both standard and bootstrapped approaches to yield similar results. Showing that your results are robust to such analytic decisions may give you greater confidence in your results.
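As a rough illustration of inspecting skewness and excess kurtosis directly, rather than leaning on a significance test, here is a minimal Python sketch; the function names, thresholds, and simulated data are mine, not from the original answer:

```python
import random

def skewness(xs):
    """Sample skewness: m3 / m2^(3/2); roughly 0 for symmetric data."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def excess_kurtosis(xs):
    """Sample excess kurtosis: m4 / m2^2 - 3; roughly 0 for a normal."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    return m4 / m2 ** 2 - 3.0

rng = random.Random(1)
normal_sample = [rng.gauss(0, 1) for _ in range(5000)]   # mild/no departure
skewed_sample = [rng.expovariate(1.0) for _ in range(5000)]  # strong departure
```

With large samples like these, a formal normality test would reject even trivial departures, whereas the indices themselves show directly how severe the departure actually is.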
37,244
Unexpected singularities in the Hessian matrix error in multinomial logistic regression
I think the key you may be looking for can be found on the UCLA website for Multinomial Logistic Regression, where it states:

Perfect prediction: Perfect prediction means that only one value of a predictor variable is associated with only one value of the response variable. You can tell from the output of the regression coefficients that something is wrong. You can then do a two-way tabulation of the outcome variable with the problematic variable to confirm this and then rerun the model without the problematic variable.

I would recommend running a two-way table for each of the predictors (vs. the response) to determine if one level of the response occurs with only one level of your predictor.
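The two-way tabulation check can be automated; as an illustration only, here is a hypothetical Python helper (not part of any regression package) that flags predictors whose levels each co-occur with exactly one level of the response:

```python
def perfect_predictors(y, X, names):
    """Flag predictors where every predictor level co-occurs with
    exactly one response level (perfect prediction)."""
    flagged = []
    for j, name in enumerate(names):
        seen = {}            # predictor level -> response level
        perfect = True
        for yi, row in zip(y, X):
            level = row[j]
            if level in seen and seen[level] != yi:
                perfect = False
                break
            seen[level] = yi
        if perfect:
            flagged.append(name)
    return flagged

# hypothetical example: x1 perfectly predicts y, x2 does not
y = ["a", "a", "b", "b", "c", "c"]
X = [(0, 5), (0, 7), (1, 5), (1, 7), (2, 5), (2, 7)]
print(perfect_predictors(y, X, ["x1", "x2"]))  # -> ['x1']
```

Any flagged variable is a candidate to drop before rerunning the model.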
37,245
How to perform text mining, sentiment mining, and business category identification, and where to obtain a categorization library
One solution mentioned by Jeffrey Breen is to use Hu and Liu's lexicon. He also gives a cool tutorial for sentiment mining on Twitter.
37,246
How to perform text mining, sentiment mining, and business category identification, and where to obtain a categorization library
A few alternatives:

1) Supervised learning: Ideally you'd want to have in your data the text and a label, where the label refers to the category you are interested in. For this you'd need to manually label the data. Then you can train a statistical/machine learning algorithm to classify the categories you've labeled. For this, a simple approach in R is to use the text2vec and glmnet packages.

2) Unsupervised learning: Another alternative, if you are not willing to label the data, is to fit a statistical model to find topics; this is called latent Dirichlet allocation. It's also supported by text2vec, and there are other packages for this such as topicmodels.

3) Transfer learning: Finally, another option is transfer learning. The idea is that you get labeled data from another problem, train a classifier, and use the model to make predictions on your data. The problem is finding data from the same domain...

4) Using a dictionary (hack): Below is the original answer I wrote, which uses a look-up table of a "dictionary" of positive and negative words. I'd now say this option is a last resort; it's sort of a hack as it is too broad. I think there are packages that implement this now.

EDIT

Below is my previous answer. I've used Jeffrey's function to compute the score using the "look up" method. I made a corruption index of politicians from Argentina: the more people said they were corrupt, the higher the score. For the data I had, as the tweets were in Spanish, I just wrote a small dictionary more coherent with the needs of the problem. I guess it depends what you want to analyze. Besides this approach, there are more sophisticated methods. I think there is a Coursera course about NLP.

Here is another resource: https://sites.google.com/site/miningtwitter/basics/text-mining It says it's deprecated because of the changes to the Twitter API, but the author of the twitteR package adapted the package to the new API. So you first need to install the updated package, perhaps the source version of it. Here is the author's web page: http://geoffjentry.hexdump.org/ The latter is needed in order not to get duplicated tweets when making a call bigger than 100 tweets.

Jeffrey's function:

score.sentiment = function(sentences, pos.words, neg.words, .progress='none')
{
    require(plyr)
    require(stringr)
    # we got a vector of sentences. plyr will handle a list
    # or a vector as an "l" for us
    # we want a simple array ("a") of scores back, so we use
    # "l" + "a" + "ply" = "laply":
    scores = laply(sentences, function(sentence, pos.words, neg.words) {
        # clean up sentences with R's regex-driven global substitute, gsub():
        sentence = gsub('[[:punct:]]', '', sentence)
        sentence = gsub('[[:cntrl:]]', '', sentence)
        sentence = gsub('\\d+', '', sentence)
        # and convert to lower case:
        sentence = tolower(sentence)
        # split into words. str_split is in the stringr package
        word.list = str_split(sentence, '\\s+')
        # sometimes a list() is one level of hierarchy too much
        words = unlist(word.list)
        # compare our words to the dictionaries of positive & negative terms
        pos.matches = match(words, pos.words)
        neg.matches = match(words, neg.words)
        # match() returns the position of the matched term or NA
        # we just want a TRUE/FALSE:
        pos.matches = !is.na(pos.matches)
        neg.matches = !is.na(neg.matches)
        # and conveniently enough, TRUE/FALSE will be treated as 1/0 by sum():
        score = sum(pos.matches) - sum(neg.matches)
        return(score)
    }, pos.words, neg.words, .progress=.progress)
    scores.df = data.frame(score=scores, text=sentences)
    return(scores.df)
}
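For illustration only, the same dictionary look-up idea can be sketched outside of R; this Python port is my own simplification of Jeffrey's function, not a drop-in replacement:

```python
import re

def score_sentiment(sentences, pos_words, neg_words):
    """Dictionary look-up scoring: positive hits minus negative hits
    per sentence, after stripping punctuation/digits and lower-casing."""
    scores = []
    for sentence in sentences:
        cleaned = re.sub(r"[^a-zA-Z\s]", " ", sentence).lower()
        words = cleaned.split()
        score = (sum(w in pos_words for w in words)
                 - sum(w in neg_words for w in words))
        scores.append(score)
    return scores

print(score_sentiment(["Good, very good service", "bad and corrupt"],
                      {"good"}, {"bad", "corrupt"}))  # -> [2, -2]
```

As with the R version, the quality of the result depends entirely on how well the dictionary fits the domain.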
37,247
Help in drawing confidence ellipse
An ellipse can be parametrized as the affine image of a circle. An origin-centered ellipse can be written as $$x=a \cos (t)$$ $$y=b \sin (t)$$ with the unit circle as the special case $a=b=1$.

ellipse(center, shape, radius, log="", center.pch=19, center.cex=1.5,
        segments=51, add=TRUE, xlab="", ylab="", col=palette()[2],
        lwd=2, fill=FALSE, fill.alpha=0.3, grid=TRUE, ...)

You can see the ellipse function asks for the center and the radius of the circle, as well as the covariance matrix, which is equivalent to giving the parameters of the affine transformation.

center: 2-element vector with coordinates of the center of the ellipse.
shape: 2 x 2 shape (or covariance) matrix.
radius: radius of the circle generating the ellipse.

Let us have a look at the car package function:

ellipse <- t(center + radius * t(unit.circle %*% chol(shape)))

The radius parameter can be set to 1 if you want to use the covariance matrix directly as the shape parameter. I believe it was introduced to help people use normalized matrices instead if they prefer.

Edit: As mentioned in whuber's comment, the two ellipses below are the same.

> library(car)
> s=matrix(c(1,0,0,1), nrow=2, ncol=2)
> plot(0, 0, xlim=c(-5,5), ylim=c(-5,5))
> ellipse(c(0,0), 4*s, 1)
> ellipse(c(0,0), s, 2)
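To see numerically why radius and shape trade off this way, here is a small self-contained sketch (illustrative Python mimicking the R line center + radius * t(unit.circle %*% chol(shape))): scaling the covariance by 4 with radius 1 gives the same points as radius 2 with the unscaled covariance.

```python
import math

def chol2x2(s):
    """Upper-triangular Cholesky factor R of a 2x2 SPD matrix
    (S = R'R), matching R's chol()."""
    r11 = math.sqrt(s[0][0])
    r12 = s[0][1] / r11
    r22 = math.sqrt(s[1][1] - r12 * r12)
    return [[r11, r12], [0.0, r22]]

def ellipse_points(center, shape, radius, n=16):
    """Points of center + radius * (unit circle) @ chol(shape)."""
    R = chol2x2(shape)
    pts = []
    for i in range(n):
        t = 2 * math.pi * i / n
        u, v = math.cos(t), math.sin(t)       # unit circle
        x = center[0] + radius * (u * R[0][0] + v * R[1][0])
        y = center[1] + radius * (u * R[0][1] + v * R[1][1])
        pts.append((x, y))
    return pts

# 4*s with radius 1 draws the same ellipse as s with radius 2
a = ellipse_points((0, 0), [[4, 0], [0, 4]], 1)
b = ellipse_points((0, 0), [[1, 0], [0, 1]], 2)
```

This is because chol(c * S) = sqrt(c) * chol(S), so the scale factor can sit in either the radius or the shape matrix.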
37,248
Scalable multinomial regression implementation
I've had good experiences with Madigan's and Lewis's BMR and BBR packages for multiple category dependent variables, lasso or ridge priors on parameters, and high dimensional input data. Not quite as high as yours, but it might still be worth a look. Instructions are here: http://bayesianregression.com/bmr.html
37,249
Are there bounds on the Spearman correlation of a sum of two variables?
Spearman's rank correlation is just the Pearson product-moment correlation between the ranks of the variables. Shabbychef's extra constraint means that $y_1$ and $y_2$ are the same as their ranks and that there are no ties, so they have equal standard deviation $\sigma_y$ (say). If we also replace x by its ranks, the problem becomes the equivalent problem for the Pearson product-moment correlation. By definition of the Pearson product-moment correlation, $$\begin{align} \rho(x,y_1+y_2) &= \frac{\operatorname{Cov}(x,y_1+y_2)} {\sigma_x \sqrt{\operatorname{Var}(y_1+y_2)}} \\ &= \frac{\operatorname{Cov}(x,y_1) + \operatorname{Cov}(x,y_2)} {\sigma_x \sqrt{\operatorname{Var}(y_1)+\operatorname{Var}(y_2) + 2\operatorname{Cov}(y_1,y_2)}} \\ &= \frac{\rho_1\sigma_x\sigma_y + \rho_2\sigma_x\sigma_y} {\sigma_x \sqrt{2\sigma_y^2 + 2\sigma_y^2\rho(y_1,y_2)}} \\ &= \frac{\rho_1 + \rho_2} {\sqrt{2}\left(1+\rho(y_1,y_2)\right)^{1/2}}. \\ \end{align}$$ For any set of three variables, if we know two of their three correlations we can put bounds on the third correlation (see e.g. Vos 2009, or from the formula for partial correlation): $$\rho_1\rho_2 - \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2} \leq \rho(y_1,y_2) \leq \rho_1\rho_2 + \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2} $$ Therefore $$\frac{\rho_1 + \rho_2} {\sqrt{2}\left(1+\rho_1\rho_2 + \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2}\right)^{1/2}} \leq \rho(x,y_1+y_2) \leq \frac{\rho_1 + \rho_2} {\sqrt{2}\left(1+\rho_1\rho_2 - \sqrt{1-\rho_1^2}\sqrt{1-\rho_2^2}\right)^{1/2}} $$ if $\rho_1 + \rho_2 \geq 0$; if $\rho_1 + \rho_2 \le 0$ you need to switch the bounds around.
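A quick numerical check of this derivation (illustrative Python, using tie-free rank permutations as in Shabbychef's constraint): the identity $\rho(x,y_1+y_2)=(\rho_1+\rho_2)/(\sqrt{2}\sqrt{1+\rho_{12}})$ and the resulting bounds both hold.

```python
import math
import random

def pearson(a, b):
    """Plain Pearson product-moment correlation."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / math.sqrt(va * vb)

def sum_corr_bounds(r1, r2):
    """Bounds on rho(x, y1+y2) given rho(x,y1)=r1, rho(x,y2)=r2."""
    s = math.sqrt((1 - r1 ** 2) * (1 - r2 ** 2))
    lo = (r1 + r2) / (math.sqrt(2) * math.sqrt(1 + r1 * r2 + s))
    hi = (r1 + r2) / (math.sqrt(2) * math.sqrt(1 + r1 * r2 - s))
    if r1 + r2 < 0:            # switch the bounds for a negative sum
        lo, hi = hi, lo
    return lo, hi

# x, y1, y2 equal their ranks: random tie-free permutations of 1..n
rng = random.Random(0)
n = 50
x = rng.sample(range(1, n + 1), n)
y1 = rng.sample(range(1, n + 1), n)
y2 = rng.sample(range(1, n + 1), n)
r1, r2 = pearson(x, y1), pearson(x, y2)
r12 = pearson(y1, y2)
rsum = pearson(x, [a + b for a, b in zip(y1, y2)])
lo, hi = sum_corr_bounds(r1, r2)
```

Because y1 and y2 are both permutations of 1..n, their standard deviations are exactly equal, which is what makes the identity exact here.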
37,250
Brown-Forsythe and Welch f-ratios in two-way ANOVAs?
I found this article by Algina & Olejnik (1984). The abstract: The Welch-James procedure may be used to test hypotheses on means, when independent samples from populations with heterogeneous variances are available. Until recently the complexity of the available presentations of this procedure limited the application of this procedure. To resolve this state of affairs, summation formulas for the Welch-James procedure are presented for the 2 x 2 design. In addition, matrix formulas that permit routine application of the procedure to crossed factorial designs are presented. It frankly looks a little hairy, but I thought it might be a start. Citation Algina, J., & Olejnik, S. F. (1984). Implementing the Welch-James procedure with factorial designs. Educational and psychological measurement, 44(1), 39-48.
37,251
Brown-Forsythe and Welch f-ratios in two-way ANOVAs?
The Brown-Forsythe F* can be used even in two-way ANOVAs. From what I can tell, the F statistic is the same as for classic two-way ANOVA; the only difference is in how the degrees of freedom are calculated. Of course it gets more complicated for unbalanced factorial designs. (The original Brown & Forsythe paper is available at http://www.jstor.org/stable/2529238.)
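For reference, here is the one-way Brown-Forsythe statistic from the 1974 paper, sketched in Python for illustration (the Satterthwaite-style degrees of freedom are omitted); with balanced groups, F* algebraically reduces to the classic one-way F:

```python
def bf_fstar(groups):
    """One-way Brown-Forsythe statistic (Brown & Forsythe, 1974):
    F* = sum n_j (m_j - m)^2 / sum (1 - n_j/N) s_j^2."""
    ns = [len(g) for g in groups]
    N = sum(ns)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / N
    s2 = [sum((x - m) ** 2 for x in g) / (len(g) - 1)
          for g, m in zip(groups, means)]
    num = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    den = sum((1 - n / N) * v for n, v in zip(ns, s2))
    return num / den

def classic_f(groups):
    """Ordinary one-way ANOVA F, for comparison."""
    k = len(groups)
    ns = [len(g) for g in groups]
    N = sum(ns)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / N
    ssb = sum(n * (m - grand) ** 2 for n, m in zip(ns, means))
    ssw = sum(sum((x - m) ** 2 for x in g)
              for g, m in zip(groups, means))
    return (ssb / (k - 1)) / (ssw / (N - k))

# balanced groups: the two statistics coincide even with unequal variances
balanced = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0], [1.0, 5.0, 9.0]]
```

The heterogeneity of variance only changes the answer relative to the classic F when group sizes differ, which matches the remark about unbalanced designs above.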
37,252
How can one extract meaningful factors from a sparse matrix?
I might suggest non-negative matrix factorization. The iterative algorithm of Lee and Seung is easy to implement and should be amenable to sparse matrices (although it involves Hadamard products, which some sparse matrix packages may not support).
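A minimal dense sketch of the Lee-Seung multiplicative updates (illustrative Python; a real application on a large sparse matrix would use a sparse linear algebra library):

```python
import random

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def frob(A):
    return sum(x * x for row in A for x in row) ** 0.5

def nmf(V, r, iters=1000, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F;
    factors stay non-negative by construction."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        # H <- H * (W'V) / (W'WH)   (elementwise Hadamard product/ratio)
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, matmul(W, H))
        H = [[H[a][b] * num[a][b] / (den[a][b] + eps) for b in range(m)]
             for a in range(r)]
        # W <- W * (VH') / (WHH')
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(matmul(W, H), Ht)
        W = [[W[a][b] * num[a][b] / (den[a][b] + eps) for b in range(r)]
             for a in range(n)]
    return W, H

# demo: recover an exactly rank-2 non-negative matrix
rng = random.Random(1)
Wtrue = [[rng.random() for _ in range(2)] for _ in range(4)]
Htrue = [[rng.random() for _ in range(5)] for _ in range(2)]
V = matmul(Wtrue, Htrue)
W, H = nmf(V, 2)
WH = matmul(W, H)
err = frob([[V[i][j] - WH[i][j] for j in range(len(V[0]))]
            for i in range(len(V))])
```

Note the num/den ratios are exactly the Hadamard products the answer warns about; on sparse inputs these are the operations that need library support.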
37,253
How can one extract meaningful factors from a sparse matrix?
One has to be careful about the meaning of the word sparse. Your matrix contains many zeroes and one may represent such a matrix in a sparse way (to save on storage). But since the figures represent co-occurrences these zeroes are still to be considered informative (they are not missing; they are not structurally zero) and should therefore be taken into account when modeling the content of the matrix. The many zeroes and the skewness (approximately geometric) would suggest to use generalized forms of bilinear models (see de Falguerolles/Gabriel : Generalized Linear-Bilinear Models). The R-package gnm supports this type of models. The sparse variants of PCA/SVD you are referring to rather relate to L1-regularisations of the factorial representation such that estimated loadings come out as sparse (many zeroes).
37,254
How can one extract meaningful factors from a sparse matrix?
I had the same problem with a sparse matrix in NLP, and what we did was select the columns that were most useful to classify our rows (those that gave the most information for discerning the result); if you want, I can explain it in more detail, but it is really simple and you can figure it out. But your problem does not seem to be a classification one; I am actually a little confused about what you said about above the diagonal and below it. But I was thinking that you can use the Apriori data mining algorithm to discover the more important alliances between any number of items.
37,255
How can one extract meaningful factors from a sparse matrix?
I suggest that you look at the 2009 paper by Leng and Wang in JCGS: http://pubs.amstat.org/toc/jcgs/18/1 If this is what you want, the authors supply R code in the supplementary materials.
37,256
Random generation of scores similar to those of a classification model
Some time has passed and I think I might have a solution at hand. I will describe my approach briefly to give you the general idea; the code should be enough to figure out the details. I am of course happy to answer any comments, and I appreciate any criticism. The code can be found below.

The strategy: approximate a smooth ROC curve by using the logistic function on the interval [0,6]. By adding a parameter k, one can influence the shape of the curve to fit the desired model quality, measured by AUC (Area Under Curve). The resulting function is $f_k(x)=\frac{1}{1+\exp(-kx)}$. If k -> 0, the AUC approaches 0.5 (no discrimination); if k -> Inf, the AUC approaches 1 (optimal model). In practice, k should lie in the interval [0.0001, 100]. With some basic calculus, one can write a function to map k to AUC and vice versa.

Now, given a ROC curve that matches the desired AUC, determine a score by sampling from [0,1] uniformly. This sample represents the fpr (false-positive rate) on the ROC curve. For simplicity, the score is then calculated as 1-fpr. The label is determined by sampling from a Bernoulli distribution with p calculated from the slope of the ROC curve at this fpr and the desired overall precision of the scores. In detail: weight(label="1") := slope(fpr) multiplied by overallPrecision; weight(label="0") := 1 multiplied by (1-overallPrecision). Normalize the weights so that they sum to 1 to obtain p and 1-p.

Here is an example ROC curve for AUC = 0.6 and overall precision = 0.1 (also produced by the code below).

Notes: the resulting AUC is not exactly the same as the input AUC; in fact, there is a small error (around 0.02). This error originates from the way the label of a score is determined. An improvement could be to add a parameter to control the size of the error. The score is set to 1-fpr.
This is arbitrary, since the ROC curve does not care what the scores look like as long as they can be sorted.

The code:

# This function creates a set of random scores together with a binary label
# n = sample size
# basePrecision = ratio of positives in the sample (also called overall precision on stats.stackexchange)
# auc = Area Under Curve, i.e. the quality of the simulated model. Must be in [0.5, 1].
binaryModelScores <- function(n, basePrecision = 0.1, auc = 0.6) {
  # determine parameter of logistic function
  k <- calculateK(auc)
  res <- data.frame("score" = rep(-1, n), "label" = rep(-1, n))
  randUniform <- runif(n, 0, 1)
  runIndex <- 1
  for (fpRate in randUniform) {
    tpRate <- roc(fpRate, k)
    # slope of the ROC curve at this fpr
    slope <- derivRoc(fpRate, k)
    labSampleWeights <- c((1 - basePrecision) * 1, basePrecision * slope)
    labSampleWeights <- labSampleWeights / sum(labSampleWeights)
    res[runIndex, 1] <- 1 - fpRate                                   # score
    res[runIndex, 2] <- sample(c(0, 1), 1, prob = labSampleWeights)  # label
    runIndex <- runIndex + 1
  }
  res
}

# min-max normalization of x (fpr): [0,6] -> [0,1]
transformX <- function(x) {
  (x - 0) / (6 - 0) * (1 - 0) + 0
}

# inverse min-max normalization of x (fpr): [0,1] -> [0,6]
invTransformX <- function(invx) {
  (invx - 0) / (1 - 0) * (6 - 0) + 0
}

# min-max normalization of y (tpr): [0.5, logistic(6,k)] -> [0,1]
transformY <- function(y, k) {
  (y - 0.5) / (logistic(6, k) - 0.5) * (1 - 0) + 0
}

# logistic function
logistic <- function(x, k) {
  1 / (1 + exp(-k * x))
}

# integral of the logistic function
intLogistic <- function(x, k) {
  1 / k * log(1 + exp(k * x))
}

# derivative of the logistic function
derivLogistic <- function(x, k) {
  numerator <- k * exp(-k * x)
  denominator <- (1 + exp(-k * x))^2
  numerator / denominator
}

# roc function, mapping fpr to tpr
roc <- function(x, k) {
  transformY(logistic(invTransformX(x), k), k)
}

# derivative of the roc function
derivRoc <- function(x, k) {
  scalFactor <- 6 / (logistic(6, k) - 0.5)
  derivLogistic(invTransformX(x), k) * scalFactor
}

# calculate the AUC for a given k
calculateAUC <- function(k) {
  ((intLogistic(6, k) - intLogistic(0, k)) - (0.5 * 6)) / ((logistic(6, k) - 0.5) * 6)
}

# calculate k for a given auc
calculateK <- function(auc) {
  f <- function(k) {
    return(calculateAUC(k) - auc)
  }
  if (f(0.0001) > 0) {
    return(0.0001)
  } else {
    return(uniroot(f, c(0.0001, 100))$root)
  }
}

# Example
require(ROCR)
x <- seq(0, 1, by = 0.01)
k <- calculateK(0.6)
plot(x, roc(x, k), type = "l", xlab = "fpr", ylab = "tpr",
     main = paste("ROC-Curve for AUC=", 0.6, " <=> k=", k))
dat <- binaryModelScores(1000, basePrecision = 0.1, auc = 0.6)
pred <- prediction(dat$score, as.factor(dat$label))
performance(pred, measure = "auc")@y.values[[1]]
perf <- performance(pred, measure = "tpr", x.measure = "fpr")
plot(perf, main = "approximated ROC-Curve (random generated scores)")
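As a cross-check of the k <-> AUC mapping used above, here is a small Python sketch of the same closed-form calculation (my own translation of calculateAUC/calculateK; the bisection bounds are an assumption, chosen to mirror the [0.0001, 100] interval):

```python
import math

def logistic(x, k):
    # logistic function used to build the smooth ROC curve
    return 1.0 / (1.0 + math.exp(-k * x))

def auc_for_k(k):
    # closed-form AUC of the logistic ROC curve on [0, 6]:
    # (area under the logistic minus the 0.5*6 triangle under the diagonal),
    # normalised by the min-max rescaling of the tpr axis
    int_logistic = (math.log(1.0 + math.exp(6.0 * k)) - math.log(2.0)) / k
    return (int_logistic - 3.0) / ((logistic(6.0, k) - 0.5) * 6.0)

def k_for_auc(target, lo=1e-4, hi=50.0, iters=80):
    # bisection in place of R's uniroot; auc_for_k is increasing in k
    if auc_for_k(lo) > target:
        return lo
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if auc_for_k(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

k = k_for_auc(0.6)   # parameter whose ROC curve has AUC 0.6
```

This confirms the limiting behaviour described in the answer: as k -> 0 the AUC goes to 0.5, and for large k it approaches 1.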
37,257
How to explain null hypothesis testing using Bayesian terms?
...the idea to describe NHST as something like a posterior predictive check, but with a "null hypothesis" pseudo-posterior that is a Dirac $\delta$ density about $H_0$. Does that make sense...? Yes. I'm not sure whether to call NHST a prior or posterior predictive check, but it is fair to see it as a form of model check. That said, a Bayesian PPC is often used to check "Is my model large enough yet, or do I need to add more nuance?" By contrast, you could say classical NHST is typically used to check "Is my model small enough yet to match the capacity of my sample size / study design, or should I simplify it further because (especially without believing in an informative prior) I just don't have enough data to estimate some parameters with adequate precision?" Ultimately, the usual scientific reason for running an NHST is to answer the question "Is my sample size large enough to rule out sampling variation as a major concern?" We deliberately set up a too-simple straw-man model, under which the effect we hope to learn about (say, a difference in means between treatment and control groups) isn't a true effect in the population, but could show up as an apparent effect in finite samples: $\mu_1=\mu_2$, but $\bar{x}_1\neq\bar{x}_2$. If our sample is small, this "PPC" might lead us to conclude: "Even though we don't believe this straw-man model, the data aren't inconsistent with it. Let's design our next experiment to collect more data, so that we can rule out sampling variation as a reason to disbelieve our results." But if our sample is large enough, we should see that the too-simple model $\mu_1=\mu_2$ typically leads to datasets that don't look like our actual sample. Then we can say, "OK, sampling variation isn't a major concern here. Now we can focus on all the other concerns: Was there random assignment? Are the measurements valid for the construct we are trying to study? etc."
Perhaps it's worth framing this a 2nd way too: From the Bayesian point of view, a point prior often makes no sense. If your prior puts all its weight on a single $\theta$ value, the posterior will be the same, so there's no point in collecting data at all. In this sense, classical NHST is not the same as using data to update your prior probabilities for $H_0$ and $H_A$ into posterior probabilities, because it starts with a point prior solely on $H_0$. But since Bayesian methods are largely meant for updating priors into posteriors, NHST seems like nonsense to many Bayesians. However, if you're a Bayesian who is willing to run a PPC, you are willing to admit your prior might be wrong. Maybe your initial prior is your first attempt at pinning down your beliefs on this topic, and you run the PPC to see if your prior beliefs lead to an adequately realistic model that generates adequately realistic data. If they do, you'll keep using your prior. If they don't, it might convince you that your initial prior was inadequate, and you'll revise your prior (again, NOT the same thing as updating from a prior to a posterior). In that sense, the purpose of NHST is similar to a PPC attempting to find convincing evidence that a prior of "no effect" is unreasonable. You might not actually hold such a prior yourself, but some readers or reviewers might. By reporting a NHST, you hope to tell them: "If we had started with a simple prior of 'no effect,' a PPC would have told us that our prior was inadequate" (if you reject $H_0$), or "...was not inadequate" (if you fail to reject $H_0$). In either case, NHST is not meant as an answer to the Bayesian's usual question "Which values of $\theta$ should I believe in?" NHST is about the study design, not really about $\theta$ itself. 
The Greenland & Poole article mentioned in the comments does a nice job of trying to frame p-values in more Bayesian ways, but I don't know how useful that is, because (outside of PPCs) Bayesian methods are simply tackling a very different question than NHST is.
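The "straw-man check" described above can be sketched as a small simulation. This is my own hedged illustration, not code from the answer: it approximates the NHST logic by comparing the observed difference in means against datasets generated under $\mu_1=\mu_2$ via label shuffling (a permutation-style p-value):

```python
import random
import statistics

random.seed(1)

def straw_man_check(x1, x2, n_sim=2000):
    # Under the straw-man model mu1 = mu2 the group labels are
    # exchangeable, so reshuffle the pooled data and count how often
    # sampling variation alone produces a difference in means at least
    # as large as the observed one.
    observed = abs(statistics.mean(x1) - statistics.mean(x2))
    pooled = list(x1) + list(x2)
    n1 = len(x1)
    hits = 0
    for _ in range(n_sim):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:n1]) - statistics.mean(pooled[n1:]))
        if diff >= observed:
            hits += 1
    return hits / n_sim

# two groups whose true means differ by one standard deviation
treatment = [random.gauss(1.0, 1.0) for _ in range(50)]
control = [random.gauss(0.0, 1.0) for _ in range(50)]
p = straw_man_check(treatment, control)
```

With this sample size and effect, the check returns a small p: the straw-man model rarely generates a difference this large, so sampling variation can be ruled out as a major concern, exactly in the sense discussed above.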
37,258
How to explain null hypothesis testing using Bayesian terms?
Null hypothesis significance testing, NHST, expresses an observed effect in terms of a probabilistic comparison with the hypothesis of an absence of the effect. An observation is statistically significant if there is a clear distinction in the support of the data for absence versus presence of an effect.

An example of performing NHST with Bayesian techniques is the following. Imagine there is a lady who claims that she can taste whether the tea or the milk was added to the cup first. We would like to test that claim by having her perform a blind taste test: we present her 100 cups of tea, for each of which she has to guess whether it was tea or milk first, and we record the number of correct guesses. For simplicity, let's assume that the probability of a correct guess is symmetric (independent of whether the cup was tea first or milk first). Say that based on prior information we know that there is a probability of 0.99 that a person can't taste anything (the null hypothesis), and a probability of 0.01 that a person can taste something, with the ability of such a person following a uniform distribution:

$$\begin{array}{rl} H_0:&p=0.5\\ H_a:&p\sim U(0.5,1) \end{array}$$

We have the following likelihoods as a function of the number of correct guesses $k$ out of $n=100$:

$$\begin{array}{rcl} \mathcal{L}(H_0,k) &=& {n \choose k} 0.5^n\\ \mathcal{L}(H_a,k) &=& \int_{0.5}^1 2 {n \choose k} p^{k}(1-p)^{n-k} dp \end{array}$$

a likelihood ratio (whose denominator can be computed with the incomplete beta function)

$$ \Lambda = \frac{\mathcal{L}(H_0,k)}{ \mathcal{L}(H_a,k)} = \frac{1}{2^{n+1}\int_{0.5}^1 p^{k}(1-p)^{n-k} dp}$$

and posterior odds as a function of $k$:

$$ \frac{P(H_0;k)}{P(H_a;k)} = \frac{P(H_0)}{P(H_a)} \frac{\mathcal{L}(H_0,k)}{ \mathcal{L}(H_a,k)} $$

If, for example, the lady guesses two thirds (67) of the tea cups correctly, this indicates an effect: her ability appears better than guessing half correctly. But it is not significant: the null hypothesis is just as likely as the alternative hypothesis (posterior odds around one, or even slightly above it).

Classical null hypothesis significance testing does not use these priors; it instead uses probability statements based on a fiducial distribution or p-value. Those statements are independent of a prior distribution (but not of prior information, e.g. assumptions about the model describing the likelihood function); they regard only the likelihood of the null hypothesis, and the aim is to make this a small value in order to declare a test statistically significant. In a way, NHST has an implicit Bayesian reasoning: it assumes that data that do not support the null hypothesis are instead supporting some alternative, but unknown, hypothesis. Neyman and Pearson make this more explicit by defining the fiducial distribution or p-values (which can be computed in different ways) based on a specific alternative hypothesis.

Possibly a simpler way to regard statistical significance, and how I interpret Fisher's approach to it, is that the fiducial distribution has a probability density concentrated in a small region (in a Bayesian analysis one could use the posterior distribution in place of the fiducial distribution). An effect is statistically significant if the highest-density region (or some other region) of a certain large amount, say 95%, does not include the parameter value corresponding to a zero/null effect.

Expressions of statistical significance are useful when people make point estimates. A point estimate could, for instance, be the maximum of the posterior distribution. But such a point estimate alone does not give an indication of the entire posterior or of how the estimate differs from other hypotheses. If we give a point estimate along with a region, then we have a better idea of the information that the data contain about a particular parameter/hypothesis.
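The posterior odds in this example can be computed numerically. Here is a hedged Python sketch (my own code; a midpoint-rule integration stands in for the incomplete beta function):

```python
import math

def posterior_odds(k, n=100, prior_h0=0.99):
    # likelihood under H0: p = 0.5 exactly
    lik_h0 = math.comb(n, k) * 0.5 ** n
    # likelihood under Ha: p ~ U(0.5, 1), density 2 on that interval;
    # integrate 2 * C(n,k) * p^k * (1-p)^(n-k) over [0.5, 1] numerically
    m = 20000
    step = 0.5 / m
    integral = 0.0
    for i in range(m):
        p = 0.5 + (i + 0.5) * step
        integral += p ** k * (1.0 - p) ** (n - k) * step
    lik_ha = 2.0 * math.comb(n, k) * integral
    prior_odds = prior_h0 / (1.0 - prior_h0)
    return prior_odds * lik_h0 / lik_ha

odds = posterior_odds(67)   # 67 of 100 cups guessed correctly
```

For 67 correct guesses the posterior odds come out close to one (slightly above), matching the statement in the text; for a much better performance, say 90 correct, the odds collapse far below one despite the heavy 0.99 prior on $H_0$.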
37,259
Why does collider adjustment in a shielded triplet tend to cause independence?
Here you can find a formal example of a linear causal model that shares your DAG: Which OLS assumptions are colliders violating? I consider (there and here) the case where all the noises are independent of each other, and here the particular case where the noises are standard normal. Moreover, like in your example, I start by taking all three causal parameters equal to $1$. As shown in my example (in the link), the causal coefficient/effect of $X$ on $Y$ can be consistently estimated from the regression of $Y$ on $X$ alone. No controls are needed; indeed, if we add the collider ($Z$) as a control, the regression coefficient of $X$ is no longer a consistent estimator of the causal coefficient of $X$ on $Y$. Worse, it does not represent any causal parameter of the SCM: it is a useless regression coefficient. Under the parametrization just suggested, this useless regression coefficient converges precisely to $0$; by the normality assumption, we can then say that $X$ and $Y$ are independent conditional on $Z$.

Now you said:

I tried changing the equations for $Y$ and $Z$ and yet, the zero is always there (almost always, check the end of the question) for $I(X;Y|Z)$, whenever it's a shielded triplet. Even if I do it with continuous variables and normal distributions for the noise, I still find the same thing. ... But then, if I change a bit the structural equation for $Z$, I get something different from zero. ... A hypothesis would be some sort of cancelling of effects between the two paths, but I changed the equations in ways that I didn't expect it to happen and still... The 0 is there.

This does not seem to me entirely clear, or true. Indeed, in my example, if the causal parameter in the structural equation for $Y$ differs from $1$, the useless regression coefficient becomes different from zero. For example, if the causal parameter in that structural equation is $2$, the claimed conditional independence does not hold: the useless regression coefficient converges to $0.5$. Moreover, under the alternative parameterization that you suggest ($2$ and $3$ as causal parameters in the structural equation for $Z$, retaining $1$ in the equation for $Y$), the useless regression coefficient converges to $-0.5$. In general, different values of the causal parameters can produce a null, positive, or negative value for the useless regression coefficient. Even the distribution of the structural errors matters.

Finally:

Why does collider adjustment in a shielded triplet tend to cause independence?

The independence you claim can occur for particular parameter combinations, but in general it does not hold. The general message is that controlling for a collider is a bad idea.
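A quick Monte Carlo check of the special parametrization with all causal coefficients equal to $1$ and standard normal noises (my own Python sketch, not part of the original answer): regressing $Y$ on $X$ and the collider $Z$, the coefficient on $X$ converges to $0$, as claimed above, while the coefficient on $Z$ converges to $0.5$.

```python
import random

random.seed(0)

def ols3(y, x1, x2):
    # OLS of y on an intercept, x1 and x2 by solving the 3x3 normal
    # equations with Cramer's rule (enough for a quick check)
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    ones = [1.0] * len(y)
    X = [ones, x1, x2]
    A = [[dot(u, v) for v in X] for u in X]   # X'X
    b = [dot(u, y) for u in X]                # X'y
    d = det3(A)
    coefs = []
    for j in range(3):
        M = [row[:] for row in A]
        for i in range(3):
            M[i][j] = b[i]
        coefs.append(det3(M) / d)
    return coefs  # [intercept, coefficient on x1, coefficient on x2]

n = 100000
x = [random.gauss(0, 1) for _ in range(n)]
y = [xi + random.gauss(0, 1) for xi in x]                    # Y = X + eY
z = [xi + yi + random.gauss(0, 1) for xi, yi in zip(x, y)]   # Z = X + Y + eZ

_, b_x, b_z = ols3(y, x, z)   # b_x -> 0 and b_z -> 0.5 as n grows
```

The limits follow from the covariance matrix of $(X, Z)$ and the covariances with $Y$; changing the structural coefficients (e.g. $Y = 2X + e_Y$) shifts the coefficient on $X$ away from zero, in line with the discussion above.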
Why does collider adjustment in a shielded triplet tend to cause independence?
Here you can find a formal example of linear causal model that share your DAG. Which OLS assumptions are colliders violating? I consider (there and here) the case where all noises are independent each
Why does collider adjustment in a shielded triplet tend to cause independence?
Here you can find a formal example of a linear causal model that shares your DAG: "Which OLS assumptions are colliders violating?". I consider (there and here) the case where all the noises are independent of each other. Moreover, here I consider the particular case where the noises are standard Normal and, as in your example, I start by setting all three causal parameters equal to $1$. As shown in my example (in the link), the causal coefficient/effect of $X$ on $Y$ can be consistently estimated from the regression of $Y$ on $X$; no controls are needed. Indeed, if we add the collider ($Z$) as a control, the regression coefficient of $X$ is no longer a consistent estimator of the causal coefficient of $X$ on $Y$; worse, it does not represent any causal parameter of the SCM at all. It is a useless regression coefficient. Under the parametrization just suggested, this useless regression coefficient converges precisely to $0$, and from the normality assumption we can say that $X$ and $Y$ are independent conditional on $Z$. Now, you said that: "I tried changing the equations for $Y$ and $Z$ and yet, the zero is always there (almost always, check the end of the question) for $I(X;Y|Z)$, whenever it's a shielded triplet. Even if I do it with continuous variables and normal distributions for the noise, I still find the same thing. ... But then, if I change a bit the structural equation for $Z$, I get something different from zero. ... A hypothesis would be some sort of cancelling of effects between the two paths, but I changed the equations in ways that I didn't expect it to happen and still... The 0 is there." This does not seem to me to be entirely clear or true. Indeed, in my example, if the causal parameter in the structural equation for $Y$ is different from $1$, the useless regression coefficient becomes different from zero.
For example, if the causal parameter in that structural equation is $2$, the claimed conditional independence does not hold: the useless regression coefficient converges to $0.5$. Moreover, following the alternative parameterization that you suggest ($2$ and $3$ as causal parameters in the structural equation for $Z$, keeping $1$ in the equation for $Y$), the useless regression coefficient converges to $-0.5$. In general, different values of the causal parameters can produce a null, positive, or negative value for the useless regression coefficient; even the distribution of the structural errors matters. Finally, about "Why does collider adjustment in a shielded triplet tend to cause independence?": the independence you describe can occur in particular cases (particular combinations of parameters), but in general it does not hold. The general message remains that controlling for a collider is a bad idea.
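As a check on the claims above, here is a minimal simulation sketch (my own illustration, not code from the linked posts) of the linear SCM $Y = cX + u_y$, $Z = aX + bY + u_z$ with independent standard normal noises; the parameter names $a, b, c$ and the helper `simulate` are made up for the example. Regressing $Y$ on $X$ and the collider $Z$ gives a coefficient on $X$ of roughly $0$, $0.5$, and $-0.5$ under the three parametrizations discussed:

```python
import random

def simulate(c=1.0, a=1.0, b=1.0, n=200_000, seed=0):
    """Linear SCM:  X = u_x,  Y = c*X + u_y,  Z = a*X + b*Y + u_z,
    all noises independent standard normal. Returns the OLS coefficient
    on X from regressing Y on (X, Z), i.e. controlling for the collider."""
    rng = random.Random(seed)
    X = [rng.gauss(0, 1) for _ in range(n)]
    Y = [c * x + rng.gauss(0, 1) for x in X]
    Z = [a * x + b * y + rng.gauss(0, 1) for x, y in zip(X, Y)]

    # Normal equations for the two-regressor OLS (all variables have mean ~0)
    sxx = sum(x * x for x in X) / n
    szz = sum(z * z for z in Z) / n
    sxz = sum(x * z for x, z in zip(X, Z)) / n
    sxy = sum(x * y for x, y in zip(X, Y)) / n
    szy = sum(z * y for z, y in zip(Z, Y)) / n
    det = sxx * szz - sxz * sxz
    return (sxy * szz - szy * sxz) / det

print(round(simulate(c=1, a=1, b=1), 1))  # ~ 0: the "independence" case
print(round(simulate(c=2, a=1, b=1), 1))  # ~ 0.5
print(round(simulate(c=1, a=2, b=3), 1))  # ~ -0.5
```

The values match the population calculation: the collider-adjusted coefficient is zero only for the special parameter combination, not in general.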
37,260
Why does collider adjustment in a shielded triplet tend to cause independence?
This is called collider bias, also known as Berkson's paradox, and it arises when the relationship between X and Y is estimated conditional on Z. Conditioning on Z induces a negative association between X and Y, so if X affects Y positively, this positive causal effect will be attenuated, possibly all the way to 0, as you showed.
37,261
How to design experiment and holdout for two types of treatment at the same time
OP mentioned that they ended up selecting control groups for the treatments independently. In this case, if there is no clear bias in the assignment mechanisms of treatment A and treatment B (e.g. somehow applying treatment B also increases probability of treatment A), the subtraction in the calculation of average treatment effect should naturally cancel out the impact of the other treatment on both treatment and control group. If one suspects that there is some dependency between treatment A and treatment B, it is always possible to manually test for the independence of the two random events. In the case of treatment A and treatment B being dependent, without loss of generality, one can adjust for the bias from treatment B by conducting regression adjustment with the propensity scores of treatment A given treatment B. I suspect that's what OP went for directly without testing for independence.
37,262
How to design experiment and holdout for two types of treatment at the same time
When conducting these kinds of experiments there is a natural control-group for each marginal and conditional effect in the experiment. The figure below shows an experimental flowchart for the possible categories of each participant. The "No Treat" category acts as the control group for determining the marginal effects of Treatment A or B, the "Treat A" category acts as the control group for determining the conditional effect of Treatment B given Treatment A, and the "Treat B" category acts as the control group for determining the conditional effect of Treatment A given Treatment B. (And of course, these last two groups might end up being effectively the same thing if the order of the treatments makes no difference; in case the order of treatment does make a difference, we keep them separate for now.) Now, ideally you would be able to randomly assign your participants to these five groups and then undertake estimation of all the marginal and conditional causal effects in the process. However, if you have no control over the occurrence of a second treatment given a first treatment then you cannot make reliable causal inferences about the conditional causal effects. You can still estimate the relevant conditional distributions in a statistical sense, but the connection to causality is effectively lost. As to references for causal inference, this depends largely on your existing knowledge of statistics and mathematics, and the degree to which you want a formalisation of the rules of causality. A standard text in the field is Pearl (2009), but this is written for people who have some quantitative training in mathematics or statistics. The main thing to remember when conducting causal inference is that it works by undertaking regular statistical modelling/inference (usually using regression analysis) in a context where we can sever causal relationships between variables using randomisation of assigned variables and blinding protocols.
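As a rough numerical companion to the control-group logic above, here is a sketch with simulated outcomes. Everything here is invented for illustration (group names, the additive effect sizes, and the helper `run_experiment`), and only four of the five categories are shown; each estimated effect is a difference in means against the natural control group named in the text.

```python
import random

def run_experiment(n_per_group=50_000, seed=1):
    """Simulate a fully randomized version of the design with made-up
    additive effects (A: +2, B: +3, B-after-A: +1) and unit-variance noise."""
    rng = random.Random(seed)
    base = 10.0
    outcome = {
        "no_treat":       lambda: base + rng.gauss(0, 1),
        "treat_a":        lambda: base + 2.0 + rng.gauss(0, 1),
        "treat_b":        lambda: base + 3.0 + rng.gauss(0, 1),
        "treat_a_then_b": lambda: base + 2.0 + 1.0 + rng.gauss(0, 1),
    }
    means = {g: sum(f() for _ in range(n_per_group)) / n_per_group
             for g, f in outcome.items()}
    return {
        # marginal effects: "no_treat" is the control group
        "marginal_A": means["treat_a"] - means["no_treat"],
        "marginal_B": means["treat_b"] - means["no_treat"],
        # conditional effect of B given A: "treat_a" is the control group
        "B_given_A": means["treat_a_then_b"] - means["treat_a"],
    }

effects = run_experiment()
print({k: round(v, 1) for k, v in effects.items()})  # recovers ~2, ~3, ~1
```

Under randomization each difference in means recovers its target effect; without control over the second-treatment assignment, the last contrast would lose its causal interpretation, exactly as described above.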
37,263
How to design experiment and holdout for two types of treatment at the same time
I opted to construct the control groups for treatment A and treatment B independently, with each control group randomly selected. It allowed the two teams at my work to save time and effort on communication. I used matching to adjust for any potential bias in the measurement of the treatment effect of A and B introduced by the other treatment (which turned out to be an insignificant adjustment). I won't go into the specifics here, because it requires context that I am not allowed to share. The interaction effect is estimated with a CUPED-like method. If there is a better answer that goes beyond the basics, I will accept it; if not, I will leave this question open. Special thanks to the paper shared in the comment section for giving me a starting point for thinking about the problem.
37,264
The law of total probability: $P(T > t, Z = 1) = \int_t^\infty P(\cap_{j = 2}^k \{ T_j > x \} ) \lambda_1 e^{- \lambda_1 x} \ dx$?
The notation here is more complicated than it needs to be. A simpler way to frame this is to say that: $$\mathbb{P}(T>t, Z=1) = \mathbb{P}(t<T_1< \min(T_2,...,T_k)).$$ Now, for all values $r \in \mathbb{R}$ we clearly have the event equivalence: $$\{ r< \min(T_2,...,T_k) \} = \bigcap_{i=2}^k \{ T_i > r \}.$$ Thus, applying the law of total probability to the latter expression (conditioning on $T_1=r$), we have: $$\begin{align} \mathbb{P}(T>t, Z=1) &= \mathbb{P}(t<T_1< \min(T_2,...,T_k)) \\[12pt] &= \int \limits_t^\infty \mathbb{P}(r < \min(T_2,...,T_k)) f_{T_1}(r) \ dr \\[6pt] &= \int \limits_t^\infty \mathbb{P} \Bigg( \bigcap_{i=2}^k \{ T_i > r \} \Bigg) \lambda_1 e^{-\lambda_1 r} \ dr. \\[6pt] \end{align}$$
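As a sanity check of this result (my own addition, with made-up rates): for independent exponentials the final integral evaluates in closed form to $\frac{\lambda_1}{\sum_j \lambda_j} e^{-t\sum_j \lambda_j}$, which a quick Monte Carlo confirms. The helper name `mc_joint` is invented for the sketch.

```python
import math
import random

def mc_joint(lams, t, n=200_000, seed=0):
    """Monte Carlo estimate of P(T > t, Z = 1): the minimum of independent
    exponentials exceeds t AND the minimum is the first one."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        draws = [rng.expovariate(lam) for lam in lams]
        m = min(draws)
        if m > t and draws[0] == m:
            hits += 1
    return hits / n

lams = [1.0, 2.0, 3.0]   # made-up rates lambda_1, ..., lambda_k
t = 0.2
total = sum(lams)
closed_form = lams[0] / total * math.exp(-total * t)
print(mc_joint(lams, t), closed_form)  # both ~ 0.05
```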
37,265
The law of total probability: $P(T > t, Z = 1) = \int_t^\infty P(\cap_{j = 2}^k \{ T_j > x \} ) \lambda_1 e^{- \lambda_1 x} \ dx$?
Since no one has attempted an answer, I will say that if $X,Y$ are random variables, then $P(t < X < Y) = P(X > t, X < Y) = \int_t^\infty P(X < Y \mid X = x)f(x) \ dx$ by an adjusted version of the law of total probability (adjusting the limits of integration for $X$). Here we have $P(t < T_1 < T_j ,\ j = 2 ,\dots, k) = \int_t^\infty P(\cap_{j = 2}^k \{ x < T_j \} ) \lambda_1 e^{- \lambda_1 x} \ dx$ by the same adjusted law of total probability.
37,266
The law of total probability: $P(T > t, Z = 1) = \int_t^\infty P(\cap_{j = 2}^k \{ T_j > x \} ) \lambda_1 e^{- \lambda_1 x} \ dx$?
There is a good discussion of this theorem in Wikipedia / Exponential Distribution / Distribution of the minimum of exponential random variables. How exactly was the law of total probability used ... ? Total probability in the integral you provide sums over all possible outcomes ("total probability") $x$ of the conditioning variable $T_1$, whose density $\lambda_1 e^{-\lambda_1 x}$ appears in the integrand; the remaining probability term accounts for the other independent variables $T_i,~ i \neq 1$.
37,267
The law of total probability: $P(T > t, Z = 1) = \int_t^\infty P(\cap_{j = 2}^k \{ T_j > x \} ) \lambda_1 e^{- \lambda_1 x} \ dx$?
The answer is in your question: where the probability in the integrand is $P(T_1 > t \ \& \ T_1 < T_j, (j = 2, \dots, k) \mid T_1 = x)$. With your notation, $A = \{T_1 > t \ \& \ T_1 < T_j, (j = 2, \dots, k)\}$. So $$ P(A\mid T_1 = x) = I(x > t)P(T_j > x, \,\,(j = 2, \dots, k)) $$ Thus, $$ P(A) = \int_{-\infty}^{\infty}P(A \mid T_1 = x)f_{T_1}(x)\,dx= \int_{t}^{\infty}P(\cap_j \{T_j > x\})f_{T_1}(x)\,dx $$ Note, $A$ is the event where $T_1$ exceeds $t$ and is smaller than all the other $T_j$. In symbols: $A = \{T_1 > t\} \cap \bigcap_{j>1} \{T_j > T_1\}$.
37,268
What does non-linearity mean when using only binary predictors?
Aside from interactions, which could be vitally important, I do not see it, either. Let's consider two features that are $0/1$ binary variables. $$ x_{i1}\in\{0,1\}\\ x_{i2}\in\{0,1\}\\ $$ Now consider some functions $\phi_i:\{0,1\}\rightarrow\mathbb R$ of those variables. $$ \phi_1(x_{i1}) \in\{a_1, b_1\}\\ \phi_2(x_{i2}) \in\{a_2, b_2\}\\ $$ No matter how nonlinear the $\phi_i$ are, the new features are linear/affine transformations of the original variables. You do not get any kind of bending, curving, or discontinuity of the regression by using these new variables. Where you might be able to say there is something nonlinear happening is when it comes to interactions. $\hat y = \hat\beta_0 +\hat\beta_1x_1 + \hat\beta_2x_2 + \hat\beta_3x_1x_2$ is a linear model, sure, but it is not linear in the data. A tree-based model might be able to pick up on this without being explicitly told to look for an interaction, while a generalized linear model will miss it unless you set the $x_1x_2$ interaction as a feature.
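To make the interaction point concrete, here is a small illustration with made-up data: a balanced $2\times 2$ design whose outcome follows an XOR pattern. Because the design is balanced and orthogonal, the additive OLS coefficients are just the marginal mean differences, and they come out flat; adding the $x_1x_2$ term reproduces the four cell means exactly.

```python
# XOR-patterned cell means on a balanced 2x2 binary design (invented data)
cells = {(0, 0): 0.0, (1, 0): 1.0, (0, 1): 1.0, (1, 1): 0.0}

# Additive fit  y ~ b0 + b1*x1 + b2*x2: for a balanced, orthogonal design
# the slopes are the marginal mean differences
b1 = (cells[1, 0] + cells[1, 1]) / 2 - (cells[0, 0] + cells[0, 1]) / 2
b2 = (cells[0, 1] + cells[1, 1]) / 2 - (cells[0, 0] + cells[1, 0]) / 2
b0 = sum(cells.values()) / 4 - b1 / 2 - b2 / 2
print(b0, b1, b2)  # 0.5 0.0 0.0 -> predicts 0.5 everywhere, misses the pattern

# Saturated fit  y = c0 + c1*x1 + c2*x2 + c3*x1*x2  built from the cell means
c0 = cells[0, 0]
c1 = cells[1, 0] - cells[0, 0]
c2 = cells[0, 1] - cells[0, 0]
c3 = cells[1, 1] - cells[1, 0] - cells[0, 1] + cells[0, 0]
pred = {k: c0 + c1 * k[0] + c2 * k[1] + c3 * k[0] * k[1] for k in cells}
print(pred == cells)  # True: the interaction term recovers the pattern exactly
```

Transforming $x_1$ or $x_2$ individually can never fix the additive fit here; only the product term (or a model that discovers it, like a tree) can.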
37,269
Heterogeneous Treatment Effects with Continuous Treatment (e.g. using BART)
The first part is to identify the exact estimand you are interested in. In the context of $T \in \{0, 1\}$, things are fairly simple. However, you can picture different variations for a continuous $T$. For example, you can imagine 'shifting' all the observed $T$ values by some constant $\alpha$. See Díaz-Muñoz & van der Laan 2011 for a detailed discussion of this problem (and an example using several different estimators). You can also look at 'setting' everyone to $T=t$. This option is usually referred to as estimating the dose-response curve; see Kennedy et al. 2017 for a discussion of this approach. Based on your description, it sounds like you are more interested in the latter. I can't speak to BART, but the previous procedures allow for general functions to estimate the counterfactual quantity; the Kennedy et al. paper in particular describes a non-parametric approach. See the papers above for alternative approaches. The proposed estimators in the above linked papers are general. You can also use super-learner with a 'binning' approach, which allows for any classification algorithm you would want; this procedure is described in Chapter 14.
37,270
Heterogeneous Treatment Effects with Continuous Treatment (e.g. using BART)
With respect to BART's ability to handle continuous treatments, I have found two useful references: (Hill, 2011): Section 6 of the paper (named "Estimating Dosage Effects") uses BART to estimate the causal effect of getting a dosage level $d\in \mathbb{R}$ compared to getting a dosage of 0. (Woody et al, 2020): The paper develops a BART model with a linear causal effect of a continuous variable, which is non-linearly moderated by a set of moderator variables. References: Jennifer L. Hill (2011) Bayesian Nonparametric Modeling for Causal Inference, Journal of Computational and Graphical Statistics, 20:1, 217-240, DOI: 10.1198/jcgs.2010.08162 Spencer Woody, Carlos M. Carvalho, P. Richard Hahn, & Jared S. Murray. (2020). Estimating heterogeneous effects of continuous exposures using Bayesian tree ensembles: revisiting the impact of abortion rates on crime. https://arxiv.org/abs/2007.09845
37,271
On solving ode/pde with Neural Networks
The procedure presented in the paper seems to be slightly different from the one above. In the paper the authors make an ansatz that explicitly fulfills the initial conditions. For a second order differential equation of the form $$ \Psi''(t)=f(t,\Psi(t),\Psi'(t)) $$ with $\Psi(0)=A$ and $\Psi'(0)=B$ they suggest to use (see section 3.1 and specifically equation (13) in the preprint) $$\Psi(t)=A+Bt+t^2N(t),$$ where $N(t)$ is the neural net. Note that this form is not unique, but it will have the correct initial values no matter what $N(0)$ is. The cost function to optimize, on the other hand, is $$ C=\sum_i(\Psi''(t_i)-f(t_i,\Psi(t_i),\Psi'(t_i)))^2, $$ where $\{t_i\}_i$ is a set of collocation points that are sampled from the domain of $\Psi$. So for your example problem you have $A=0$, $B=-3$, and $C=\sum_i(\Psi''(t_i)+14\Psi'(t_i)+49\Psi(t_i))^2$.
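A minimal numerical sketch of this loss for the example problem (my own illustration, not code from the paper): here $N$ is an ordinary function standing in for the neural net, derivatives are central finite differences, and the collocation points are arbitrary. The exact solution of $\Psi''+14\Psi'+49\Psi=0$ with $\Psi(0)=0$, $\Psi'(0)=-3$ is $\Psi(t)=-3te^{-7t}$, which corresponds to $N(t)=3(1-e^{-7t})/t$ and drives the cost to (numerically) zero, while an untrained $N$ leaves it large.

```python
import math

A, B = 0.0, -3.0  # initial conditions psi(0) = A, psi'(0) = B

def trial(N):
    """Trial form psi(t) = A + B*t + t^2 * N(t); satisfies the initial
    conditions for ANY choice of N."""
    return lambda t: A + B * t + t * t * N(t)

def cost(psi, ts, h=1e-4):
    """Collocation cost C = sum_i (psi'' + 14 psi' + 49 psi)^2 at points ts,
    with derivatives approximated by central finite differences."""
    c = 0.0
    for t in ts:
        d1 = (psi(t + h) - psi(t - h)) / (2 * h)
        d2 = (psi(t + h) - 2 * psi(t) + psi(t - h)) / (h * h)
        c += (d2 + 14 * d1 + 49 * psi(t)) ** 2
    return c

ts = [0.1 * i for i in range(1, 11)]  # collocation points in (0, 1]

exact_N = lambda t: 3.0 * (1.0 - math.exp(-7.0 * t)) / t
print(cost(trial(exact_N), ts) < 1e-3)       # True: exact solution -> ~zero cost
print(cost(trial(lambda t: 0.0), ts) > 1.0)  # True: untrained N -> large cost
```

Training would replace the fixed `exact_N` with a parameterized network whose weights are adjusted to minimize `cost`.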
37,272
Wikipedia Proof About Minimum of Exponential Random Variables
$P(X_k=x)=0$ for every $x$, but you can condition on $X_k=x$, $x \in [0,\infty)$: \begin{align*} P(I=k) &= P(X_i>X_k, i\ne k) \\ &= \int_0^\infty P(X_i>X_k, i\ne k\mid X_k=x)\lambda_ke^{-\lambda_k x}\,dx\\ &= \int_0^\infty P(X_i>x, i\ne k)\lambda_ke^{-\lambda_k x}\,dx \\ &= \int_0^\infty \lambda_ke^{-\lambda_k x}\left(\prod_{i\ne k}e^{-\lambda_i x}\right)dx \\ &\text{etc.} \end{align*} See https://mast.queensu.ca/~stat455/lecturenotes/set4.pdf
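Completing the "etc." (my own check, with made-up rates): the remaining integral is $\int_0^\infty \lambda_k e^{-(\sum_i \lambda_i)x}\,dx = \lambda_k/\sum_i \lambda_i$, and a quick Monte Carlo confirms that $P(I=k)$ matches this value. The helper name `p_is_min` is invented for the sketch.

```python
import random

def p_is_min(lams, k, n=200_000, seed=0):
    """Monte Carlo estimate of P(I = k), i.e. X_k is the smallest of
    independent exponentials with rates lams."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        draws = [rng.expovariate(lam) for lam in lams]
        if draws[k] == min(draws):
            hits += 1
    return hits / n

lams = [0.5, 1.5, 3.0]  # made-up rates
for k, lam in enumerate(lams):
    print(k, round(p_is_min(lams, k), 2), lam / sum(lams))  # MC vs lambda_k / sum
```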
37,273
Should we really do Re-Sampling in Class Imbalance data?
In the real world, many imbalanced-class problems carry a heavy cost for misclassification. The minority class might be rare, but one occurrence of that class can have a really great impact. The minority class is oftentimes "the goal/point" to avoid or to obtain, not "some useless noise class". This is enough to justify resampling: you'll want the algorithm to be able to not misclassify the minority class. An algorithm that sees imbalanced class data will have less information on whether it should classify an observation as the minority or not. In the end, it will often just label everything the majority class. As for "this is biasing the data if real-world data is going to see less of the minority class": yes, in training we are biasing the data by making the algorithm see more of it than it would see in real life. But the point of having the algorithm is to use its predictive ability. You want the algorithm predicting correctly, that's it. Whether or not the algorithm sees the data as it is in real life is not the point. If it were the point, say goodbye to feature engineering as well. p.s.: We can stretch this and extrapolate to how humans see imbalanced data. Humans also (kind of) do "resampling/weighting", by remembering more intensely things that are "rare but have great impact", and not the "things that happen every day and are boring". It balances out, so a human remembers both "the one thing that happened and changed my life" and "the thing I do every day, generally".
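A toy illustration of why this matters, using synthetic data (class ratio and sample size invented): with a roughly 99:1 imbalance, a "classifier" that always predicts the majority class scores near-perfect accuracy while catching none of the rare-but-important minority cases.

```python
import random

rng = random.Random(0)
# ~1% of labels are the minority class (label 1); the rest are majority (0)
y_true = [1 if rng.random() < 0.01 else 0 for _ in range(100_000)]
y_pred = [0] * len(y_true)  # the "always predict majority" classifier

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
minority_recall = (
    sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    / max(1, sum(y_true))
)
print(f"accuracy={accuracy:.3f}  minority recall={minority_recall:.3f}")
# accuracy is ~0.99, but minority recall is exactly 0
```

This is the failure mode resampling (or class weighting, or a cost-sensitive loss) is meant to prevent: accuracy alone hides the fact that the costly class is never detected.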
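A minimal sketch of the kind of resampling discussed above, i.e. random oversampling of the minority class with plain NumPy (the class sizes here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)
y = np.array([0] * 95 + [1] * 5)           # 95:5 imbalance
X = rng.normal(size=(100, 3))              # dummy feature matrix

minority = np.flatnonzero(y == 1)
majority = np.flatnonzero(y == 0)

# Sample minority indices with replacement until both classes match in size.
boot = rng.choice(minority, size=len(majority), replace=True)
idx = np.concatenate([majority, boot])

X_bal, y_bal = X[idx], y[idx]
print(np.bincount(y_bal))                  # [95 95]
```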
37,274
Should we really do Re-Sampling in Class Imbalance data?
I'm not sure if this is an answer or not, but I'll throw in my two cents. Real world test data is imbalanced. Shouldn't we then not modify the training data to make it balanced, so that it still mimics real world data? You're referring to the prevalence of classes in the real world. This is an important point when you're doing something like risk modelling for medical diagnoses (e.g. your risk of heart attack). If the prevalence of the positive class is low, your risk model should respect that. Resampling for the sake of having a class balance artificially inflates the baseline risk to 50%. Classification is something different, however. Frank Harrell writes that classification should really only be used when the class is quite obvious and there is a high signal-to-noise ratio (e.g. is this a picture of a dog or not). In that case, prevalence shouldn't really be an issue. You want your algorithm to be able to learn the differences between classes, and in my own opinion, their prevalence in the real world is orthogonal to that goal. So, as with everything, the answer depends on what you're doing. If the risk of an event occurring is important, and the classes are rare, then resampling can turn a perfectly good model bad. However, if you just want your computer to distinguish chihuahuas from blueberry muffins, then the real-world prevalence of either is not important.
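The claim that resampling inflates baseline risk can be seen directly: a model fit to oversampled 50/50 data has an average predicted probability near 0.5 instead of the true prevalence. A sketch with hypothetical simulated data and a hand-rolled logistic regression (gradient descent, no regularization):

```python
import numpy as np

def fit_logistic(X, y, lr=0.3, steps=5000):
    """Plain gradient-descent logistic regression (no regularization)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
n = 4000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + 1 feature
logits = X[:, 1] - 3.0                                  # true prevalence ~7%
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

w = fit_logistic(X, y)
mean_p = (1 / (1 + np.exp(-X @ w))).mean()              # ~ observed prevalence

# Oversample the positives to a 50/50 balance and refit:
# the baseline risk jumps toward 0.5.
pos, neg = np.flatnonzero(y == 1), np.flatnonzero(y == 0)
idx = np.concatenate([neg, rng.choice(pos, size=len(neg), replace=True)])
w_bal = fit_logistic(X[idx], y[idx])
mean_p_bal = (1 / (1 + np.exp(-X[idx] @ w_bal))).mean()

print(mean_p, mean_p_bal)   # small prevalence vs ~0.5
```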
37,275
Why does PCA often perform comparably well to nonlinear models on nonlinear problems?
The reason you are not breaking PCA is that your data is still "simple" and has strong "linear properties". In your first example, the line example, we can summarize the data as follows: the regression target gets larger as x and y get larger, i.e., toward the upper right corner of the original feature space. In your second example, the S-shaped example, we can summarize the data as: the regression target gets larger when x is small and y is small, i.e., in the lower left corner of the original feature space. What would break linear PCA is an example where there is no linear relationship/feature we can find to separate the different classes. (Similarly, the Pearson correlation coefficient will be close to 0 for such data.)
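A concrete version of such a PCA-breaking example (my own illustration, assuming an XOR-style labeling by quadrant): each feature is uncorrelated with the class, so no linear projection, including the leading principal component, carries class information, and the Pearson correlations are near zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
X = rng.uniform(-1, 1, size=(n, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)   # XOR-like labeling by quadrant

# Pearson correlation of each raw feature with the label: ~0.
corrs = [np.corrcoef(X[:, j], y)[0, 1] for j in range(2)]
print(corrs)

# The leading PCA direction carries no class information either: the
# projection onto the top eigenvector of the covariance is uncorrelated with y.
cov = np.cov(X.T)
w = np.linalg.eigh(cov)[1][:, -1]
c_proj = np.corrcoef(X @ w, y)[0, 1]
print(c_proj)                               # also ~0
```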
37,276
Seeking an intuitive understanding of independence for random variables
Welcome to this community! The following answer comes from the point of view of probability as a generalization of logic, and as an expression of a person's degrees of belief. First of all I'd like to emphasize that independence is not a physical property of objects or quantities. It is a property of a person's degree of belief about those quantities. It depends on a person's knowledge about those quantities. So two quantities may have independent probabilities for one person, with one state of knowledge, and non-independent probabilities for another person, with a different state of knowledge. Here's a simple example. Consider a specific individual. Thanks to your background information $I$, you know their height $X$, let's say 170 cm (within 1 cm), and their weight, let's say 70 kg. So for you $$ \begin{aligned} \mathrm{P}(X\!=\!x | I) &= \delta(x, \text{170 cm}), \\ \mathrm{P}(Y\!=\!y | I) &= \delta(y, \text{70 kg}). \end{aligned} $$ You can easily verify that these probabilities are independent: $$ \begin{aligned} \mathrm{P}(X\!=\!x, Y\!=\!y | I) &= \mathrm{P}(X\!=\!x| I)\times \mathrm{P}(Y\!=\!y | I)\\ &=\delta(x,\text{170 cm}) \times \delta(y,\text{70 kg}) \end{aligned} $$ I, on the other hand, don't have any information about this person (and don't know the information you have either), and I assign a joint probability to $X$ and $Y$ that doesn't factorize, because my background knowledge tells me that information about one quantity can improve my information about the other. In your case this doesn't happen, because your information is complete. Note that the same would happen if you had complete information about one quantity but not about the other. This example may appear a little trivial, but I really invite you to think about it. Another important point is that independence may hold for the probabilities of some specific values of two quantities, but not for all of their values. Here's an example. Suppose $X\in\{1,2,3\}$ and $Y\in\{1,2\}$.
Consider the following joint distribution, written in matrix form: $$ \mathrm{P}(X\!=\!x, Y\!=\!y) = \begin{matrix} &{\scriptstyle y}\\ {\scriptstyle x}\!\!\!\!\mbox{}& \begin{pmatrix} \frac{1}{2} \left(-\frac{\sqrt{41}}{10}-\frac{1}{2}\right)+\frac{3}{5} & \frac{1}{10} \\ \frac{1}{10} & \frac{1}{2} \left(\frac{\sqrt{41}}{10}+\frac{1}{2}\right) \\ \frac{1}{10} & \frac{1}{10} \end{pmatrix} \end{matrix} \approx \begin{pmatrix} 0.0298438 & 0.1 \\ 0.1 & 0.570156 \\ 0.1 & 0.1 \end{pmatrix}. $$ If you do the calculations of the marginal probabilities you'll find that $$\mathrm{P}(X\!=\!1, Y\!=\!1) =\mathrm{P}(X\!=\!1) \times \mathrm{P}(Y\!=\!1) $$ but, for example, $$\mathrm{P}(X\!=\!2, Y\!=\!1) = 0.10 \ne \mathrm{P}(X\!=\!2) \times \mathrm{P}(Y\!=\!1) \approx 0.15. $$ So the probabilities for the specific values $X=1$, $Y=1$ are independent, but those for the specific values $X=2$, $Y=1$ are not. So, again, independence is not the property of two quantities, but of our probabilities for specific values of those quantities. There may be a physical interdependence between two quantities, in the sense that they may be related by a physical law. But this is different from informational dependence. Often our knowledge about physical dependence leads us to have non-independent beliefs, but not always. Also, the informational dependence of two quantities may be motivated not by a mutual physical dependence, but because we know both are physically influenced by a third quantity. There's an important fact about independent vs non-independent joint probabilities: if you collect new data or information, a non-independent joint probability can be updated (via Bayes's theorem) to an independent one. But an independent joint probability can never be updated to a non-independent one. So, independent joint probabilities are "irreversible", so to speak. 
Regarding the connection with information theory, it's possible to show that the joint probability of two quantities has vanishing mutual information if and only if it is factorizable, that is, independent. Independence can also be seen as the byproduct of informational irrelevance, that is, the fact that the probability for some value of one quantity is the same whether you know that another quantity has some specific value or not: $$\mathrm{P}(X\!=\!x | Y\!=\!y) = \mathrm{P}(X\!=\!x).$$ There's a good paper by A. P. Dawid on this: Conditional Independence in Statistical Theory. And Jaynes's book Probability Theory: The Logic of Science has many insightful discussions and examples about logical and causal independence and information theory. See for example §§ 4.2–4.3, 6.11–6.12, 10.10, about the distinction between logical independence and causal independence, and chap. 11 about the connection with information theory.
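The claims about that example joint distribution can be checked numerically (a quick sketch):

```python
import numpy as np

s = np.sqrt(41) / 10
# The 3x2 joint distribution P(X=x, Y=y) from the answer above.
P = np.array([
    [0.5 * (-s - 0.5) + 0.6, 0.1],
    [0.1,                    0.5 * (s + 0.5)],
    [0.1,                    0.1],
])

px = P.sum(axis=1)   # marginal of X over its 3 values
py = P.sum(axis=0)   # marginal of Y over its 2 values

# Independence holds for the pair (X=1, Y=1)...
print(np.isclose(P[0, 0], px[0] * py[0]))   # True
# ...but fails for (X=2, Y=1): the product is ~0.154, not 0.10.
print(np.isclose(P[1, 0], px[1] * py[0]))   # False
```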
37,277
When and why would you not want to use a GAM?
You could take this to the extreme and ask why we wouldn't use a non-parametric model like $k$-NN regression. Actually, the opposite question, Why would anyone use KNN for regression?, was asked, and you can check it for a more detailed discussion. You can also make the question broader and ask why we wouldn't use more complicated models instead of simpler ones. For example, why would anyone use logistic or linear regression if they could use a neural network? The two main reasons for preferring simple models are: Interpretability. Simple models like linear regression are directly interpretable, while this does not have to be the case for more complicated models. This may be desirable in some disciplines (e.g. medicine), and even obligatory by law in others (finance). Overfitting. More complicated models are more prone to overfitting, especially for small sample sizes. A complicated model may simply memorize the training dataset and not generalize. As noticed in the comments, this also seems to be discussed in the following thread: When to use a GAM vs GLM. As a final comment, notice that using a model that is linear in its parameters is not that big a constraint. You can easily extend a linear model with polynomial components to model complex relationships, and this may even outperform neural networks in some cases (see Cheng et al, 2018 [arXiv:1806.0685]).
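The last remark, that a model linear in its parameters can still capture nonlinear relationships, can be sketched with a simple polynomial design matrix on made-up data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=300)
y = np.sin(x) + rng.normal(scale=0.1, size=x.size)   # nonlinear truth + noise

# Still a *linear* model: linear in the coefficients, nonlinear in x.
X = np.column_stack([x**k for k in range(6)])        # 1, x, ..., x^5
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

resid = y - X @ beta
print(resid.std(), y.std())   # residual spread far below the spread of y
```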
37,278
Why K and V are not the same in Transformer attention?
I guess the reason the specific terms "query", "key" and "value" were chosen is that this attention mechanism resembles a memory-access mechanism. The query is the specific element for which we seek a representation. The role of the keys is to respond more or less strongly to the query, and the values are there to compose the answer. Keys and values are necessarily related but do not play the same roles. For example, given the word query "network", you might want the key words "neural" and "social" to generate high weights, since "neural network" and "social network" are common terms. This means that the dot products between the query and these two keys are high, and thus the two key vectors are similar. Nevertheless, the values for "neural" and "social" should be dissimilar, since they don't deal with the same topic. Using the same representation for keys and values doesn't allow this. Using the same transformation for keys and values might still somehow work, but you'd lose a lot of expressiveness and might need many more parameters to achieve similar performance. EDIT: I just found a better explanation of the query, key and value terms in this post.
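A minimal numerical sketch of scaled dot-product attention with separate key and value projections (shapes and random weights are made up for illustration; this is not the exact Transformer code):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, d_k = 8, 4
X = rng.normal(size=(5, d_model))        # 5 token embeddings

W_q = rng.normal(size=(d_model, d_k))
W_k = rng.normal(size=(d_model, d_k))    # keys decide *who* responds
W_v = rng.normal(size=(d_model, d_k))    # values decide *what* is returned

Q, K, V = X @ W_q, X @ W_k, X @ W_v

scores = Q @ K.T / np.sqrt(d_k)
weights = np.exp(scores - scores.max(axis=1, keepdims=True))
weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax over keys

out = weights @ V                               # one output vector per query
print(out.shape)                                # (5, 4)
```

Tying W_k and W_v to one matrix would force "responds similarly to queries" and "returns similar content" to coincide, which is exactly the constraint the answer argues against.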
37,279
Fractional un-differencing of time-series
This paper offers one way to integrate a fractionally integrated process with $d\in(-1/2,1/2)$: Reisen, V. A. and Lopes, S. (1999) Some simulations and applications of forecasting long-memory time series models; Journal of Statistical Planning and Inference, 80, 269–287. You can find it here. The idea is that when you have an ARFIMA(p,d,q) process like: $$\Phi(B)(1-B)^dX_t=\Theta(B)\varepsilon_t$$ with $B$ the backshift operator, you can represent it as an infinite AR process: $$X_{t+k}=-\sum_{j=1}^\infty\pi_jX_{t+k-j}+\varepsilon_{t+k}$$ They give an equation for the coefficients $\pi_j$ as a function of the integration order. For instance, for an ARFIMA(0,d,0) you'd get: $$\pi_1=-\frac 1 {d+1}$$ and then $$X_{t+1}=\frac 1 {d+1}X_{t} -\dots +\varepsilon_{t+1}$$ I wouldn't do it manually, though; I'd instead get a stats package that does it for me.
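For reference, the standard binomial expansion of $(1-B)^d$ gives weights that can be computed recursively (note: parametrizations of the AR(∞) weights differ across papers, so these need not match the Reisen-Lopes coefficients exactly):

```python
import numpy as np

def frac_diff_weights(d, n):
    """Coefficients w_j of (1 - B)^d = sum_j w_j B^j, via the
    recursion w_0 = 1, w_j = w_{j-1} * (j - 1 - d) / j."""
    w = np.empty(n)
    w[0] = 1.0
    for j in range(1, n):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return w

w = frac_diff_weights(d=0.3, n=6)
print(w)   # starts 1.0, -0.3, -0.105, ...
```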
37,280
Optimal scaling of the Random Walk Metropolis-Hastings algorithm and the speed measure of the limiting diffusion
The Langevin diffusion is a process $(X_t)_{t \ge 0}$ satisfying the SDE: \begin{align*} d X_t = -\nabla h(X_t)\, dt + \sqrt{2}\, dW_t \end{align*} where $(W_t)_{t \ge 0}$ is the standard Brownian motion in $\mathbb R^d$. Under mild conditions on $h$, the above has a unique solution which is a Markov process. Also, the distribution of $X_t$ can be shown to converge to a distribution with density $\pi(x) \propto \exp(-h(x))$ as $t \to \infty$. I am going to write this as $\mathcal L(X_t) \to \exp(-h)\, dx$. In the paper you consider, they have the 1-d dynamic: \begin{align*} d V_t = d B_t + \frac{f'(V_t)}{2 f(V_t)} dt = d B_t + \frac12 g'(V_t) dt \end{align*} where $g = \log f$. Let us define $\tilde V_t = V_{\alpha t}$. Then, \begin{align*} d \tilde V_t &= \alpha^{1/2} \, d B_{t} + \frac\alpha 2 g'(\tilde V_{t}) dt \end{align*} (This uses the fact that $(B_{\alpha t})$ has the same distribution as $(\sqrt{\alpha} B_t)$.) Taking $\alpha = 2$, we have that $\tilde V_t$ satisfies the standard Langevin dynamics with $h = -g$, hence $$\mathcal L(\tilde V_t) \to \exp(g)\, dx = f\, dx \quad \text{as} \quad t \to \infty.$$ Now, as they argue in the paper, $U_t = V_{h(\ell) t}$. This is easy to see by the same argument as above: basically, setting $\alpha = h(\ell)$ shows that $U_t$ satisfies the desired SDE. In short, $U_t = \tilde V_{\frac12 h(\ell) t}$ and $\tilde V_t$ is converging in distribution to the desired law. So $\frac12 h(\ell)$ looks like a step size: the larger it is, the faster you move along the process $\tilde V_t$ for a unit step in time (say $\Delta t = 1$). EDIT: There is a lot of interesting recent activity on the convergence of Langevin dynamics and the MH algorithm. I will try to cite it once I get a chance.
Sharp Convergence Rates for Langevin Dynamics in the Nonconvex Setting
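A discretized sketch (my own, via Euler-Maruyama) of the standard Langevin dynamics above: with $h(x)=x^2/2$ the stationary density $\propto e^{-h}$ is a standard normal, which the long-run samples should match.

```python
import numpy as np

rng = np.random.default_rng(0)

grad_h = lambda x: x          # h(x) = x^2 / 2, so the drift is -x
dt, n_steps = 0.01, 200_000

x = 0.0
trace = np.empty(n_steps)
for t in range(n_steps):
    # dX = -grad h(X) dt + sqrt(2) dW, discretized with step dt
    x += -grad_h(x) * dt + np.sqrt(2 * dt) * rng.normal()
    trace[t] = x

samples = trace[n_steps // 10:]        # discard burn-in
print(samples.mean(), samples.var())   # close to 0 and 1
```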
37,281
Clarification: Are Generative Adversarial Networks an alternative to MCMC sampling?
Both Markov chain Monte Carlo (MCMC) and the generator network from a Generative Adversarial Network (GAN) return samples from a probability distribution. However, they solve different problems: MCMC works when we know the formula for the probability of each configuration (it does not need to be normalized). A classic example is an Ising model, in which the probability of a configuration is $\exp(- \beta E)$, where $\beta$ is the inverse temperature. To sample, we can use the Metropolis-Hastings MCMC algorithm - flipping a single spin, with acceptance probability related to the energies of the states. MCMC is often used for integration in Bayesian statistics. In principle (given infinite time), we always get the exact numerical result. In GANs, we do not know the probability distribution. Instead, we know a few samples, from which we want to create a sampler for this probability distribution. We don't get a function that says what the probability of a given sample is (at least, not with a typical discriminator network, which only guesses whether the sample is real or generated). Furthermore, in the case of GANs the problem is fuzzy - the result depends on the neural network architecture, the training process, and other factors. In general, there are infinitely many ways to turn a discrete set of samples into a probability distribution (sampler). GANs and MCMC are not exclusive. As already pointed out by @shimao, you can combine both approaches, as in Metropolis-Hastings Generative Adversarial Networks (2018). Also, there is an overlap of functionality, for which either can be used. For example, patterns can be generated either with GANs or with MCMC (here are some beautiful examples with ConvChain). The latter is typically more reliable and less data-hungry, but we need to translate the input into a probability (based on some assumptions).
37,282
Is a visual estimate of homoscedasticity rigorous enough?
I would say (concentrating on the second plot) that heteroskedasticity is not clear: the spread (vertical) seems larger where the density of points is higher. So to evaluate that, maybe add a local smooth of the residual standard deviation. That could be very informative.

Also answered in comments:

"I'm usually grateful that the paper's author actually even knows about the homoscedasticity assumption and gave it some thought: that puts you ahead of the vast majority of people who use OLS. Incidentally, for your data there's much more that could be said. There is a suggestion of positive skewness in the first set of residuals, making it likely that a simple nonlinear transformation of the response values might simultaneously make the residuals more symmetrically distributed and eliminate some (but not all) of that curved lack of fit you noticed." – whuber

"@whuber is it positive skewness, or do the residuals not seem to be distributed with zero mean as a function of redshift z? The issue here is not so much heteroscedasticity but much more an unequal distribution (weight) among the parameter z, plus a wrong model. So the 'erroneous' model is gonna follow the (locally linear) trend in that high-density bulk with $0.2<\log(z)<0.6$ but should not be regarded as representative for other areas." – Sextus Empiricus
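The suggested local smooth of the residual standard deviation can be sketched like this (a Python illustration on made-up data, not the questioner's; the heteroskedastic pattern is built in on purpose):

```python
import numpy as np

# Hypothetical residuals whose spread grows with |z| by construction,
# plus a rolling-window estimate of the residual standard deviation
# (the "local smooth" suggested above).
rng = np.random.default_rng(0)
z = np.sort(rng.uniform(-1, 1, 500))              # stands in for log-redshift
resid = rng.normal(scale=0.3 + 0.2 * np.abs(z))   # heteroskedastic on purpose

window = 50
local_sd = np.array([resid[max(0, i - window // 2): i + window // 2].std()
                     for i in range(len(resid))])

# Plotting local_sd against z makes the spread visible even where point
# density varies: a flat curve supports homoskedasticity, a trend does not.
```

Because the window adapts per position, this diagnostic is not fooled by uneven point density along z, which is the main worry with a purely visual check.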
37,283
What exactly should be called "projection matrix" in the context of PCA?
$U U^T$ is the projection operator. I believe calling $z = U^T x$ a projection is an abuse of terminology. It is actually a coordinate transformation, where each value in the new coordinates is computed by a scalar projection onto the principal vectors. $U U^T x$ is the orthogonal projection of the data onto the principal vectors in the original coordinates, and $U^T x$ is the orthogonal projection of the data onto the principal vectors in the new coordinates (principal components).
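A small numerical sketch of the distinction (Python, with made-up data; $U$ holds the top principal vectors as columns):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
X -= X.mean(axis=0)                     # center the data

# columns of U: top-2 principal vectors (right singular vectors of X)
_, _, Vt = np.linalg.svd(X, full_matrices=False)
U = Vt[:2].T                            # shape (5, 2), orthonormal columns

x = X[0]
z = U.T @ x                             # new coordinates (the scores), shape (2,)
x_proj = U @ U.T @ x                    # orthogonal projection, still in R^5

# U U^T is idempotent, as a projection operator must be; U^T alone is a
# coordinate transformation, not a projection (it is not even square).
```

Note that `x_proj == U @ z`: the projection in the original coordinates is just the scores re-expressed in the original basis.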
37,284
Analytically solving sampling with or without replacement after Poisson/Negative binomial
The case without replacement

If you have $n$ independent Poisson distributed variables $$Y_i \sim \text{Pois}(\lambda_i)$$ and condition on $$\sum_{j=1}^n Y_j = K$$ then $$\lbrace Y_i \rbrace \sim \text{Multinom} \left(K,\left(\frac{\lambda_i}{\sum_{j=1}^n \lambda_j} \right)\right)$$

So you could fill your urn with the $n$ kinds of $Y_i$ colored balls by first drawing the value of the total $K$ (which is Poisson distributed, cut off by the condition $K \geq k$) and then filling the urn with $K$ balls according to the multinomial distribution. This filling of the urn with $K$ balls, according to a multinomial distribution, is equivalent to drawing the color of each ball independently from the categorical distribution.

Then you can consider the first $k$ balls that have been added to the urn as defining the random sample $\lbrace Z_i \rbrace$ (when this sample is drawn without replacement), and the distribution for this is just another multinomially distributed vector:

$$\lbrace Z_i \rbrace \sim \text{Multinom} \left(k,\left(\frac{\lambda_i}{\sum_{j=1}^n \lambda_j} \right)\right)$$

simulation

##### simulating sample process for 3 variables #######

# settings
set.seed(1)
k = 10
lambda = c(4, 4, 4)
trials = 1000000

# observed counts table
Ocounts = array(0, dim = c(k+1, k+1, k+1))

for (i in c(1:trials)) {
  # draw poisson with limit sum(Y) >= k
  repeat {
    Y = rpois(3, lambda)
    if (sum(Y) >= k) {break}
  }
  # setup urn
  urn <- c(rep(1, Y[1]), rep(2, Y[2]), rep(3, Y[3]))
  # draw from urn
  s <- sample(urn, k, replace = 0)
  Z = c(sum(s == 1), sum(s == 2), sum(s == 3))
  Ocounts[Z[1]+1, Z[2]+1, Z[3]+1] = Ocounts[Z[1]+1, Z[2]+1, Z[3]+1] + 1
}

# comparison ((k+1)*(k+2)/2 = 66 possible outcome triplets)
observed = rep(0, (k+1)*(k+2)/2)
expected = rep(0, (k+1)*(k+2)/2)
t = 1
for (z1 in c(0:k)) {
  for (z2 in c(0:(k-z1))) {
    z3 = k - z1 - z2
    observed[t] = Ocounts[z1+1, z2+1, z3+1]
    expected[t] = trials * dmultinom(c(z1, z2, z3), prob = lambda)
    t = t + 1
  }
}
plot(observed, expected)
x2 <- sum((observed - expected)^2 / expected)
pvalue <- 1 - pchisq(x2, 66 - 1)

results

> # results from chi-sq test
> x2
[1] 75.49286
> pvalue
[1] 0.1754805

Negative binomial

The arguments work the same for the case of a negative binomial distribution, which results (under certain conditions) in a Dirichlet-multinomial distribution. Below is an example

simulation

##### simulating sample process for 3 variables #######

# dirichlet multinomial for vectors of size 3
ddirmultinom = function(x1, x2, x3, p1, p2, p3) {
  (factorial(x1+x2+x3)*gamma(p1+p2+p3)/gamma(x1+x2+x3+p1+p2+p3)) /
  (factorial(x1)*gamma(p1)/gamma(x1+p1)) /
  (factorial(x2)*gamma(p2)/gamma(x2+p2)) /
  (factorial(x3)*gamma(p3)/gamma(x3+p3))
}

# settings
set.seed(1)
k = 10
theta = 1
lambda = c(4, 4, 4)
trials = 1000000

# calculating negative binomial pars
means = lambda
vars = lambda*(1+theta)
ps = means/vars            # prob = size/(size+mean) in R's rnbinom parametrization
rs = means^2/(vars - means)

# observed counts table
Ocounts = array(0, dim = c(k+1, k+1, k+1))

for (i in c(1:trials)) {
  # draw negative binomial with limit sum(Y) >= k
  repeat {
    Y = rnbinom(3, rs, ps)
    if (sum(Y) >= k) {break}
  }
  # setup urn
  urn <- c(rep(1, Y[1]), rep(2, Y[2]), rep(3, Y[3]))
  # draw from urn
  s <- sample(urn, k, replace = 0)
  Z = c(sum(s == 1), sum(s == 2), sum(s == 3))
  Ocounts[Z[1]+1, Z[2]+1, Z[3]+1] = Ocounts[Z[1]+1, Z[2]+1, Z[3]+1] + 1
}

# comparison
observed = rep(0, (k+1)*(k+2)/2)
expected = rep(0, (k+1)*(k+2)/2)
t = 1
for (z1 in c(0:k)) {
  for (z2 in c(0:(k-z1))) {
    z3 = k - z1 - z2
    observed[t] = Ocounts[z1+1, z2+1, z3+1]
    expected[t] = trials * ddirmultinom(z1, z2, z3, lambda[1]/theta, lambda[2]/theta, lambda[3]/theta)
    t = t + 1
  }
}
plot(observed, expected)
x2 <- sum((observed - expected)^2 / expected)
pvalue <- 1 - pchisq(x2, 66 - 1)

# results from chi-sq test
x2
pvalue
37,285
Is there any such thing as "polar regression"?
There is an enormous amount of statistics related to this under the name circular or directional statistics. Your case is not so well described, though. What is the 'switch' that you were thinking of by introducing polar coordinates? Do you mean to handle a non-negative variable, e.g. some count, as a function of a cyclical variable, e.g. hour of the day, by picturing it as a curve and fitting it with the usual curve-fitting machinery in Euclidean coordinates?
37,286
What are the recent works and research scope in asymptotic inference (large sample theory)?
I am probably less up-to-date than you in this field, so rather than giving you some fish, I am going to try to teach you to fish. I also hope that this answer might be more broadly interesting to readers who also want to look up statistical literature but are interested in a different topic than you. Please forgive me if any of this is well known to you; it is not intended to be condescending, but merely to give some general advice that might be useful to many readers of this site.

Your question is essentially asking for a recent literature review of a field of interest to you, where you have some partial familiarity with the subject. There are a lot of resources you can use to give you suggestions on conducting a literature review, and in fact there are also a few book sections on the topic (see e.g., O'Leary 2004, Jesson 2011). Since we live in the internet age, much of this is a matter of becoming skilled at using search techniques to identify useful literature. If you are at a university then you probably have access to the Web of Science portal, where you can search for literature via keywords and also analyse the results by year of publication and other variables. If you do not have access to this then you can also use Google Scholar, which also has substantial search facilities. (Google Scholar has a broad search net, including academic articles, books, conference proceedings and pre-prints, and it also auto-updates citation metrics. The wide scope of this search engine is both a blessing and a curse depending on context.)

Finding important literature in a desired field of study is really just a matter of learning good search techniques and then having a lot of tenacity. Initial search results lead to more citations, which lead to more results, which lead to more citations, virtually ad infinitum. Once you have extended your search widely, you will usually be able to find the items that come up again and again in searches, and this will usually give you a reasonable idea of the most "significant" works.

An example of searching for your literature of interest: Here are some steps you could take to find what you're looking for through Google Scholar:

Read up on how to do advanced Google Scholar search queries.

Start with searches using basic keywords you expect to see in that field. For example, for your query, I would start with "statistics asymptotic theory", and maybe also search with a restriction to works published since 2014. Note that some works will be republished books that were initially published prior to the date restriction, but these can easily be identified by clicking on the tab that says X related versions.

Go through the pages of search results and pull out the ones that look like they fall within the field you are interested in. If you only want to look at "significant" works, this is usually identifiable prima facie by looking at the number of citations relative to age. The most highly cited works should show up near the top of your search results, and these are the most "significant" works, in the sense of being cited most often.

Read some of the identified papers/books and check their citations for more leads to other papers. You can also go the other way and use Google Scholar to get a list of all the publications that this one was cited by. (This latter technique is usually a bit less useful, because a lot of papers cite things you are looking at without being focused on the same subject area of interest.)

Sometimes you get especially lucky and you find that there has been a recent published literature review of the field you are interested in. For example, on the second page of my search results, I find that Gomes and Guillou (2015) is a review of literature and results in extreme value theory, with a healthy emphasis on asymptotics. One more Google search finds me an accessible pdf version, and now I have a whole paper reviewing the subject, with another 258 citations! (Perhaps this is not quite what you're looking for?)

Continue this game of whack-a-mole until you find what you need or pass out from exhaustion. Every new paper you find leads to a new list of citations, and every new citation leads to a new paper!
37,287
What are the recent works and research scope in asymptotic inference (large sample theory)?
I would point out that "Asymptotics/Limit Theory" is the general term covering all cases where we study approximation theory, while "sample size goes to infinity" asymptotics is just one particular subfield within it. Viewing the field as a user of its results, I would not say that major things and breakthroughs have been happening for some time now (of the variety that will spill over to Statistics etc.). What one could see as a largely open direction is limiting theory for non-stationary and non-ergodic processes, since so much non-stationarity and non-ergodicity exists in the real world. Anirban DasGupta's book "Asymptotic Theory of Statistics and Probability" (2008) is perhaps the best panorama of the field.
37,288
Standardized and unstandardized variables yield different results for mixed regression model
As pointed out by @BenBolker, uncorrelated random slopes are independent terms. Because the random effects are specified as uncorrelated, an additive transformation of the predictor will result in a change in the estimated correlations, as well as in the likelihood and predictions of the resulting model (Bates, Mächler, Bolker, & Walker, 2015).

Edit: Updated to reflect @BenBolker's comment - an additive transformation will cause the problems, not a linear one.

Bates, D., Mächler, M., Bolker, B., & Walker, S. (2015). Fitting Linear Mixed-Effects Models Using lme4. Journal of Statistical Software, 67(1), 1-48. doi:http://dx.doi.org/10.18637/jss.v067.i01
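The mechanism can be illustrated outside lme4 with ordinary least squares (a hedged Python analogy, not the lme4 computation itself): additively shifting a predictor changes the correlation between the intercept and slope estimates, which is why a "zero correlation between random effects" specification is not invariant to such shifts.

```python
import numpy as np

def coef_corr(x):
    """Correlation between intercept and slope estimates in y = b0 + b1*x + e,
    read off (X'X)^{-1}, which is proportional to the coefficient covariance."""
    X = np.column_stack([np.ones_like(x), x])
    C = np.linalg.inv(X.T @ X)
    return C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])

x = np.arange(10, dtype=float)
r_raw      = coef_corr(x)             # predictor in original units
r_centered = coef_corr(x - x.mean())  # the same predictor, additively shifted

# r_raw is strongly negative, while centering makes the estimates
# uncorrelated, so forcing the correlation to zero means different
# things under the two codings.
```

The same covariance geometry underlies random intercepts and slopes, which is why standardizing (an additive shift plus scaling) changes the fit when the random effects are forced to be uncorrelated.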
37,289
Definition of softmax function
Yes, you are correct that there is a lack of identifiability unless one of the coefficient vectors is fixed. There are some references that don't mention this. I can't speak to why they omit this detail, but here's an explanation of what it is and how to fix it.

Description

Say you have observations $y_i \in \{0, 1, 2, \ldots, K-1\}$ and predictors $\mathbf{x}_i^\intercal \in \mathbb{R}^p$, where $i$ goes from $1$ to $n$ and denotes the observation number/index. You will need to estimate $K$ $p$-dimensional coefficient vectors $\boldsymbol{\beta}^0, \boldsymbol{\beta}^1, \ldots, \boldsymbol{\beta}^{K-1}$. The softmax function is indeed defined as $$ \text{softmax}(\mathbf{z})_i = \frac{\exp(z_i)}{\sum_{l=0}^{K-1}\exp(z_l)}, $$ which has nice properties such as differentiability, summing to $1$, etc. Multinomial logistic regression uses the softmax function for each observation $i$ on the vector $$ \begin{bmatrix} \mathbf{x}_i^\intercal \boldsymbol{\beta}^0 \\ \mathbf{x}_i^\intercal \boldsymbol{\beta}^1 \\ \vdots \\ \mathbf{x}_i^\intercal \boldsymbol{\beta}^{K-1} \end{bmatrix}, $$ which means $$ \begin{bmatrix} P(y_i = 0) \\ P(y_i = 1) \\ \vdots \\ P(y_i = K-1) \end{bmatrix} = \begin{bmatrix} \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^0] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^1] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \vdots \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^{K-1}] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \end{bmatrix}. $$

The problem

However, the likelihood is not identifiable, because multiple parameter collections will give the same likelihood. For example, shifting all the coefficient vectors by the same vector $\mathbf{c}$ will produce the same likelihood.
This can be seen by multiplying the numerator and denominator of each element of the vector by the constant $\exp[-\mathbf{x}_i^\intercal \mathbf{c}]$, after which nothing changes: $$ \begin{bmatrix} \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^0] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^1] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \vdots \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^{K-1}] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \end{bmatrix} = \begin{bmatrix} \frac{\exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^0-\mathbf{c})] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^k-\mathbf{c})] } \\ \frac{\exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^1-\mathbf{c})] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^k-\mathbf{c})] } \\ \vdots \\ \frac{\exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^{K-1} - \mathbf{c})] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^k -\mathbf{c}) ] } \end{bmatrix}. $$ Fixing it The way to fix this is to constrain the parameters. Fixing one of them will lead to identifiability, because shifting all of them will no longer be permitted. There are two common choices: set $\mathbf{c} = \boldsymbol{\beta}^0$, which means $\boldsymbol{\beta}^0 = \mathbf{0}$ (you mention this one), and set $\mathbf{c} = \boldsymbol{\beta}^{K-1}$, which means $\boldsymbol{\beta}^{K-1} = \mathbf{0}$. Ignoring it Sometimes the restriction isn't necessary, though. For instance, if you were interested in forming a confidence interval for the quantity $\beta^0_1 - \beta^2_1$, then this is the same as $\beta^0_1 - c - [\beta^2_1-c]$, so inference on relative quantities is unaffected. 
Also, if your task is prediction instead of parameter inference, your predictions will be unaffected if all coefficient vectors are estimated (without constraining one).
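The shift invariance described above is easy to verify numerically. A minimal sketch in Python (the dimensions and data here are made up for illustration): subtracting the same vector $\mathbf{c}$ from every coefficient vector leaves the softmax probabilities unchanged.

```python
import numpy as np

def softmax(z):
    # subtract the row-wise max for numerical stability; this itself
    # exploits the same shift invariance and does not change the result
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n, p, K = 5, 3, 4
X = rng.normal(size=(n, p))            # predictors
B = rng.normal(size=(p, K))            # one coefficient vector per class
c = rng.normal(size=p)                 # arbitrary shift vector

P_original = softmax(X @ B)
P_shifted = softmax(X @ (B - c[:, None]))   # subtract c from every beta^k

# identical class probabilities despite different coefficients
print(np.allclose(P_original, P_shifted))
```

Because `X @ (B - c[:, None])` only shifts every row of the logit matrix by the same constant `X @ c`, the normalization in the denominator cancels it exactly.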
Definition of softmax function
Yes, you are correct that there is a lack of identifiability unless one of the coefficient vectors is fixed. There are some references that don't mention this. I can't speak to why they omit this detail,
Definition of softmax function Yes, you are correct that there is a lack of identifiability unless one of the coefficient vectors is fixed. There are some references that don't mention this. I can't speak to why they omit this detail, but here's an explanation of what it is and how to fix it. Description Say you have observations $y_i \in \{0, 1, 2, \ldots, K-1\}$ and predictors $\mathbf{x}_i^\intercal \in \mathbb{R}^p$, where $i$ goes from $1$ to $n$ and denotes the observation number/index. You will need to estimate $K$ $p$-dimensional coefficient vectors $\boldsymbol{\beta}^0, \boldsymbol{\beta}^1, \ldots, \boldsymbol{\beta}^{K-1}$. The softmax function is indeed defined as $$ \text{softmax}(\mathbf{z})_i = \frac{\exp(z_i)}{\sum_{l=0}^{K-1}\exp(z_l)}, $$ which has nice properties such as differentiability, summing to $1$, etc. Multinomial logistic regression uses the softmax function for each observation $i$ on the vector $$ \begin{bmatrix} \mathbf{x}_i^\intercal \boldsymbol{\beta}^0 \\ \mathbf{x}_i^\intercal \boldsymbol{\beta}^1 \\ \vdots \\ \mathbf{x}_i^\intercal \boldsymbol{\beta}^{K-1}, \end{bmatrix} $$ which means $$ \begin{bmatrix} P(y_i = 0) \\ P(y_i = 1) \\ \vdots \\ P(y_i = K-1) \end{bmatrix} = \begin{bmatrix} \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^0] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^1] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \vdots \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^{K-1}] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \end{bmatrix}. $$ The problem However, the likelihood is not identifiable because multiple parameter collections will give the same likelihood. For example, shifting all the coefficient vectors by the same vector $\mathbf{c}$ will produce the same likelihood. 
This can be seen by multiplying the numerator and denominator of each element of the vector by the constant $\exp[-\mathbf{x}_i^\intercal \mathbf{c}]$, after which nothing changes: $$ \begin{bmatrix} \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^0] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^1] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \\ \vdots \\ \frac{\exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^{K-1}] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal \boldsymbol{\beta}^k] } \end{bmatrix} = \begin{bmatrix} \frac{\exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^0-\mathbf{c})] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^k-\mathbf{c})] } \\ \frac{\exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^1-\mathbf{c})] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^k-\mathbf{c})] } \\ \vdots \\ \frac{\exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^{K-1} - \mathbf{c})] }{ \sum_{k=0}^{K-1} \exp[\mathbf{x}_i^\intercal (\boldsymbol{\beta}^k -\mathbf{c}) ] } \end{bmatrix}. $$ Fixing it The way to fix this is to constrain the parameters. Fixing one of them will lead to identifiability, because shifting all of them will no longer be permitted. There are two common choices: set $\mathbf{c} = \boldsymbol{\beta}^0$, which means $\boldsymbol{\beta}^0 = \mathbf{0}$ (you mention this one), and set $\mathbf{c} = \boldsymbol{\beta}^{K-1}$, which means $\boldsymbol{\beta}^{K-1} = \mathbf{0}$. Ignoring it Sometimes the restriction isn't necessary, though. For instance, if you were interested in forming a confidence interval for the quantity $\beta^0_1 - \beta^2_1$, then this is the same as $\beta^0_1 - c - [\beta^2_1-c]$, so inference on relative quantities is unaffected. 
Also, if your task is prediction instead of parameter inference, your predictions will be unaffected if all coefficient vectors are estimated (without constraining one).
Definition of softmax function Yes, you are correct that there is a lack of identifiability unless one of the coefficient vectors is fixed. There are some references that don't mention this. I can't speak to why they omit this detail,
37,290
Whether to use EFA or CFA to predict latent variables scores?
As Jeremy pointed out, EFA, CFA, and IRT model scores would usually be in close agreement. This is especially true in the case of unidimensional scales, or 2nd-order factor models (since this will take you back to almost the same configuration when working on the higher-order factor). Moreover, PCA, which does not take into account measurement error but is often used to select the number of relevant factors, will also be highly correlated to those factor scores, as is the case for the raw summated scale score, provided the scale is truly unidimensional --- after all, a simple or weighted sum of all item scores is all that is needed to summarize a latent trait. In the case of multi-dimensional scales, you can consider each scale separately, if this makes sense. For an illustration, here is one of the three subscales of the Holzinger & Swineford (1939) study, available in lavaan. I chose a simple correlated factor model, although several other CFA models could be built (and would be equally valid). I used principal axis factoring for extracting factors in the case of (oblique) EFA. Both the EFA and CFA models were estimated on all items (3 subscales). For PCA, I restricted the computation to the single "visual" subscale (to avoid rotation after PCA). As can be seen in the picture below (EFA factor scores are on the horizontal axis and PCA or CFA scores on the vertical axis), the correlation is above 0.95 in both cases. Of course, there are many ways to construct factor scores in the EFA framework; see, e.g., Understanding and Using Factor Scores: Considerations for the Applied Researcher, by DiStefano et al. I'm almost sure I came across papers dealing with the correlation between EFA and CFA scores, but I can't get my hands on them anymore. 
It's not so "weird to try both and select the one that works the best" --- what is really problematic is to force a factor structure without testing its relevance on independent samples; this is just capitalizing on chance, IMO. I would simply suggest using CFA factor scores if the factor structure is already defined, or EFA scores if the interest is simply in feature reduction (as you would use PCA scores in PCR in a regression context). Differences between EFA and CFA are often overstated, as both methods are useful, even in an exploratory approach (CFA has model fit indices, which may be helpful, or not).
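The close agreement between factor scores and PCA (or summated) scores on a unidimensional scale can be reproduced with simulated data. The original illustration uses lavaan in R; the sketch below is an illustrative Python analogue with made-up loadings, not the Holzinger & Swineford data.

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

# Simulate a unidimensional scale: 6 items loading on one latent trait
rng = np.random.default_rng(42)
n = 2000
latent = rng.normal(size=n)
loadings = np.array([0.9, 0.8, 0.8, 0.7, 0.6, 0.5])
uniqueness = np.sqrt(1 - loadings**2)          # item-specific error SDs
X = latent[:, None] * loadings + rng.normal(size=(n, 6)) * uniqueness

fa_scores = FactorAnalysis(n_components=1, random_state=0).fit_transform(X).ravel()
pc_scores = PCA(n_components=1).fit_transform(X).ravel()
sum_scores = X.sum(axis=1)                     # raw summated scale score

# All three summaries of the latent trait agree almost perfectly
# (absolute value because the sign of a factor/component is arbitrary)
print(abs(np.corrcoef(fa_scores, pc_scores)[0, 1]))
print(abs(np.corrcoef(fa_scores, sum_scores)[0, 1]))
```

With a truly unidimensional scale, all three correlations typically exceed 0.95, matching the figure described above.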
Whether to use EFA or CFA to predict latent variables scores?
As Jeremy pointed out, EFA, CFA, and IRT models scores would usually be in close agreement. This is especially true in the case of unidimensional scales, or 2nd-order factor models (since this will ta
Whether to use EFA or CFA to predict latent variables scores? As Jeremy pointed out, EFA, CFA, and IRT model scores would usually be in close agreement. This is especially true in the case of unidimensional scales, or 2nd-order factor models (since this will take you back to almost the same configuration when working on the higher-order factor). Moreover, PCA, which does not take into account measurement error but is often used to select the number of relevant factors, will also be highly correlated to those factor scores, as is the case for the raw summated scale score, provided the scale is truly unidimensional --- after all, a simple or weighted sum of all item scores is all that is needed to summarize a latent trait. In the case of multi-dimensional scales, you can consider each scale separately, if this makes sense. For an illustration, here is one of the three subscales of the Holzinger & Swineford (1939) study, available in lavaan. I chose a simple correlated factor model, although several other CFA models could be built (and would be equally valid). I used principal axis factoring for extracting factors in the case of (oblique) EFA. Both the EFA and CFA models were estimated on all items (3 subscales). For PCA, I restricted the computation to the single "visual" subscale (to avoid rotation after PCA). As can be seen in the picture below (EFA factor scores are on the horizontal axis and PCA or CFA scores on the vertical axis), the correlation is above 0.95 in both cases. Of course, there are many ways to construct factor scores in the EFA framework; see, e.g., Understanding and Using Factor Scores: Considerations for the Applied Researcher, by DiStefano et al. I'm almost sure I came across papers dealing with the correlation between EFA and CFA scores, but I can't get my hands on them anymore. 
It's not so "weird to try both and select the one that works the best" --- what is really problematic is to force a factor structure without testing its relevance on independent samples; this is just capitalizing on chance, IMO. I would simply suggest using CFA factor scores if the factor structure is already defined, or EFA scores if the interest is simply in feature reduction (as you would use PCA scores in PCR in a regression context). Differences between EFA and CFA are often overstated, as both methods are useful, even in an exploratory approach (CFA has model fit indices, which may be helpful, or not).
Whether to use EFA or CFA to predict latent variables scores? As Jeremy pointed out, EFA, CFA, and IRT models scores would usually be in close agreement. This is especially true in the case of unidimensional scales, or 2nd-order factor models (since this will ta
37,291
Whether to use EFA or CFA to predict latent variables scores?
It looks as though you are interested in extracting (i.e., observing) latent variable scores for prediction purposes (i.e., not necessarily to make inferences). Given this, I would not rule out PCA either (while duly noting its similarity to EFA; see link below for more details). Also, since your goal is prediction, I would not suggest using CFA, as the most interpretable model may not be the best for your purposes. Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?
Whether to use EFA or CFA to predict latent variables scores?
It looks as though you are interested in extracting (i.e., observing) latent variable scores for prediction purposes (i.e., not necessarily to make inferences). Given this, I would not rule out PCA ei
Whether to use EFA or CFA to predict latent variables scores? It looks as though you are interested in extracting (i.e., observing) latent variable scores for prediction purposes (i.e., not necessarily to make inferences). Given this, I would not rule out PCA either (while duly noting its similarity to EFA; see link below for more details). Also, since your goal is prediction, I would not suggest using CFA, as the most interpretable model may not be the best for your purposes. Is there any good reason to use PCA instead of EFA? Also, can PCA be a substitute for factor analysis?
Whether to use EFA or CFA to predict latent variables scores? It looks as though you are interested in extracting (i.e., observing) latent variable scores for prediction purposes (i.e., not necessarily to make inferences). Given this, I would not rule out PCA ei
37,292
Post-hoc test to determine difference in variance
I worked on a project not too long ago where I approached this type of problem within the framework of generalized least squares regression. In other words, I fitted a model to my data which estimated the means of the groups while allowing the standard deviations of groups 2, 3, 4 and 5 to be equal to: the standard deviation of group 1 times a factor delta2 (for group 2); the standard deviation of group 1 times a factor delta3 (for group 3); the standard deviation of group 1 times a factor delta4 (for group 4); the standard deviation of group 1 times a factor delta5 (for group 5). If I recall correctly, the software I used produced estimates and confidence intervals for each of these factors and that enabled me to get all the pairwise comparisons of interest between groups in terms of standard deviations. I had to transform the outcome variable to get well-behaved residuals and also fight against the software because it didn't let me choose what group would be treated as a reference (e.g., group 1), but would instead choose the reference group based on the data. The R syntax for fitting this type of model would be something like: library(nlme) model <- gls(outcome ~ group, weights = varIdent(form = ~1|group), data=mydata) Then use something like: summary(model) to see a summary of the model fit. Furthermore, use: model$modelStruct$varStruct to get estimates of delta1 (which will be 1 for the reference group 1), delta2, delta3, delta4 and delta5. Finally, use: intervals(model, which="var-cov") to get 95% confidence intervals for delta2, delta3, delta4, delta5 and for the standard deviation sigma of the reference group (i.e., group 1). The latter will be listed under the heading Residual standard error. See pages 159-162 of the book Linear Mixed-Effects Models Using R: A Step-by-Step Approach for an example. The book was written by Galecki and Burzykowski. 
Section 7.6.2 of the book gives the formula for a confidence interval for the logarithm of delta_s, where s can be 2, 3, 4 or 5. Maybe other people here will give you other ideas for how to proceed, but I thought I would share my idea in case it might spark further conversation.
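To make the delta parameterization concrete, here is a small simulation sketch in Python. It is not a reimplementation of nlme's gls/varIdent; it only shows what the delta factors estimate: the ratio of each group's standard deviation to that of the reference group (the group sizes and deltas below are made up).

```python
import numpy as np

# Simulate 5 groups whose standard deviations are sigma1 * delta_k,
# mirroring the varIdent parameterization described above.
rng = np.random.default_rng(1)
sigma1 = 2.0
true_delta = np.array([1.0, 1.5, 0.5, 2.0, 1.2])   # delta1 is fixed at 1
n = 20000                                          # observations per group

samples = [rng.normal(loc=10.0, scale=sigma1 * d, size=n) for d in true_delta]

# Estimate each delta_k as the ratio of group k's SD to group 1's SD
sds = np.array([s.std(ddof=1) for s in samples])
delta_hat = sds / sds[0]
print(np.round(delta_hat, 2))
```

In the gls fit, the same quantities are estimated jointly with the group means, and `intervals(model, which="var-cov")` provides the confidence intervals for them.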
Post-hoc test to determine difference in variance
I worked on a project not too long ago where I approached this type of problem within the framework of generalized least squares regression. In other words, I fitted a model to my data which estimate
Post-hoc test to determine difference in variance I worked on a project not too long ago where I approached this type of problem within the framework of generalized least squares regression. In other words, I fitted a model to my data which estimated the means of the groups while allowing the standard deviations of groups 2, 3, 4 and 5 to be equal to: the standard deviation of group 1 times a factor delta2 (for group 2); the standard deviation of group 1 times a factor delta3 (for group 3); the standard deviation of group 1 times a factor delta4 (for group 4); the standard deviation of group 1 times a factor delta5 (for group 5). If I recall correctly, the software I used produced estimates and confidence intervals for each of these factors and that enabled me to get all the pairwise comparisons of interest between groups in terms of standard deviations. I had to transform the outcome variable to get well-behaved residuals and also fight against the software because it didn't let me choose what group would be treated as a reference (e.g., group 1), but would instead choose the reference group based on the data. The R syntax for fitting this type of model would be something like: library(nlme) model <- gls(outcome ~ group, weights = varIdent(form = ~1|group), data=mydata) Then use something like: summary(model) to see a summary of the model fit. Furthermore, use: model$modelStruct$varStruct to get estimates of delta1 (which will be 1 for the reference group 1), delta2, delta3, delta4 and delta5. Finally, use: intervals(model, which="var-cov") to get 95% confidence intervals for delta2, delta3, delta4, delta5 and for the standard deviation sigma of the reference group (i.e., group 1). The latter will be listed under the heading Residual standard error. See pages 159-162 of the book Linear Mixed-Effects Models Using R: A Step-by-Step Approach for an example. The book was written by Galecki and Burzykowski. 
Section 7.6.2 of the book gives the formula for a confidence interval for the logarithm of delta_s, where s can be 2, 3, 4 or 5. Maybe other people here will give you other ideas for how to proceed, but I thought I would share my idea in case it might spark further conversation.
Post-hoc test to determine difference in variance I worked on a project not too long ago where I approached this type of problem within the framework of generalized least squares regression. In other words, I fitted a model to my data which estimate
37,293
If I have a vector of $N$ correlated probabilities. How can I turn them into binary $0,1$ without destroying the correlation?
I don't understand Gaussian Copula enough to know what the problem is. But I found a way to generate correlated Bernoulli vectors. Following https://mathoverflow.net/a/19436/105908 if we take a set of fixed vectors $v_1 ... v_n$ and a random vector on the unit sphere $u$, we can transform $u$ into binary $X$ where $X_i = (u \cdot v_i > 0)$. In this setup, $cor(X_i,X_j) = \frac{\pi - 2 * \theta(i,j)}{\pi}$ where $\theta(i,j)$ is the angle between $v_i$ and $v_j$. How can we find a suitable matrix $V = |v_1 ... v_n|$ to produce a desired correlation matrix $R$? The angle condition translates to $VV^T = cos(-\frac{\pi R - \pi}{2})$ and thus we can find $V$ with Cholesky decomposition. An example code in R follows: #Get a simple correlation matrix N = 3 cor_matrix <- matrix(c(1,0.5,0.8,0.5,1,0.4,0.8,0.4,1), N, N) #Calculate the vectors with desired angles vector_matrix <- chol(cos( (pi * cor_matrix - pi) * -0.5)) #You can generate random unit vectors by normalizing a vector #of normally distributed variables, note however that the normalization #does not affect the sign of the dot product and so we ignore it num_samples <- 10000 normal_rand <- matrix(rnorm(num_samples * N), num_samples, N) #Generate the target variables B <- (normal_rand %*% vector_matrix) > 0 #See for yourself that it works cor(B) cor(B) - cor_matrix Thanks @jakub-bartczuk for linking to the MO question - I wouldn't have found that on my own. The above code has one big limitation: the marginal distributions are fixed at $X_i \sim Bernoulli(0.5)$. I am currently unaware of how to extend this approach to fit both correlations and marginal distributions. Another answer has an approach for the general case, but it loses a lot of simplicity (it involves numerical integration). There is also a paper called Generating Spike Trains with Specified Correlation Coefficients and an accompanying Matlab package, where the sampling involves "only" finding numerically the unique zero of a monotonic function by bisection.
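For readers working in Python, the R construction above ports over directly; a sketch with NumPy (same correlation matrix, same angle condition):

```python
import numpy as np

N = 3
cor_matrix = np.array([[1.0, 0.5, 0.8],
                       [0.5, 1.0, 0.4],
                       [0.8, 0.4, 1.0]])

# Vectors whose pairwise angles theta satisfy cor = (pi - 2*theta)/pi;
# the rows of L are the unit vectors v_1 ... v_N, since L @ L.T equals
# cos(-(pi*R - pi)/2)
L = np.linalg.cholesky(np.cos(-0.5 * (np.pi * cor_matrix - np.pi)))

rng = np.random.default_rng(0)
num_samples = 200_000
# Normalizing to the unit sphere is unnecessary: it does not change
# the sign of the dot product, so plain Gaussian draws suffice.
Z = rng.normal(size=(num_samples, N))
B = (Z @ L.T > 0).astype(float)        # X_i = (u . v_i > 0)

# Empirical correlations should match the target matrix closely
print(np.max(np.abs(np.corrcoef(B, rowvar=False) - cor_matrix)))
```

As in the R version, each marginal is Bernoulli(0.5); only the correlation structure is controlled.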
If I have a vector of $N$ correlated probabilities. How can I turn them into binary $0,1$ without de
I don't understand Gaussian Copula enough to know what the problem is. But I found a way to generate correlated Bernoulli vectors. Following https://mathoverflow.net/a/19436/105908 if we take a set of
If I have a vector of $N$ correlated probabilities. How can I turn them into binary $0,1$ without destroying the correlation? I don't understand Gaussian Copula enough to know what the problem is. But I found a way to generate correlated Bernoulli vectors. Following https://mathoverflow.net/a/19436/105908 if we take a set of fixed vectors $v_1 ... v_n$ and a random vector on the unit sphere $u$, we can transform $u$ into binary $X$ where $X_i = (u \cdot v_i > 0)$. In this setup, $cor(X_i,X_j) = \frac{\pi - 2 * \theta(i,j)}{\pi}$ where $\theta(i,j)$ is the angle between $v_i$ and $v_j$. How can we find a suitable matrix $V = |v_1 ... v_n|$ to produce a desired correlation matrix $R$? The angle condition translates to $VV^T = cos(-\frac{\pi R - \pi}{2})$ and thus we can find $V$ with Cholesky decomposition. An example code in R follows: #Get a simple correlation matrix N = 3 cor_matrix <- matrix(c(1,0.5,0.8,0.5,1,0.4,0.8,0.4,1), N, N) #Calculate the vectors with desired angles vector_matrix <- chol(cos( (pi * cor_matrix - pi) * -0.5)) #You can generate random unit vectors by normalizing a vector #of normally distributed variables, note however that the normalization #does not affect the sign of the dot product and so we ignore it num_samples <- 10000 normal_rand <- matrix(rnorm(num_samples * N), num_samples, N) #Generate the target variables B <- (normal_rand %*% vector_matrix) > 0 #See for yourself that it works cor(B) cor(B) - cor_matrix Thanks @jakub-bartczuk for linking to the MO question - I wouldn't have found that on my own. The above code has one big limitation: the marginal distributions are fixed at $X_i \sim Bernoulli(0.5)$. I am currently unaware of how to extend this approach to fit both correlations and marginal distributions. Another answer has an approach for the general case, but it loses a lot of simplicity (it involves numerical integration). 
There is also a paper called Generating Spike Trains with Specified Correlation Coefficients and an accompanying Matlab package, where the sampling involves "only" finding numerically the unique zero of a monotonic function by bisection.
If I have a vector of $N$ correlated probabilities. How can I turn them into binary $0,1$ without de I don't understand Gaussian Copula enough to know what the problem is. But I found a way to generate correlated Bernoulli vectors. Following https://mathoverflow.net/a/19436/105908 if we take a set of
37,294
Invertibility in Reinsch form Derivation (Smoothing Splines)
I was hoping to find a more intuitive proof, but the matrix can be shown to be invertible by directly showing that its determinant is nonzero. First note that the knots are located at the $K$ unique values of $\mathbf{x}$. Let $\boldsymbol{\xi}$ represent these knots, in ascending order. Also, $d_j(\xi_i) = 0$ if $i \le j$, and so $N_{i,j} = 0$ if $i > j + 1$ and so $\mathbf{N}$ is "almost lower triangular". In fact, it's a lower triangular matrix with a column of 1's appended to the front of it. We can take advantage of this structure to obtain a simple expression for the determinant. First we transform $\mathbf{N}$ by subtracting the first row from all others. Call this matrix $\mathbf{M}$. (These row operations do not affect the determinant, so $\text{det}(\mathbf{M}) = \text{det}(\mathbf{N})$.) We have: $$ \begin{aligned} \mathbf{M} &= \begin{pmatrix} 1 & \xi_1 & 0 & 0 & \cdots & 0 \\ 0 & \xi_2 - \xi_1 & 0 & 0 & \cdots & 0 \\ 0 & \xi_3 - \xi_1 & N_3(\xi_3) & 0 & \cdots & 0 \\ 0 & \xi_4 - \xi_1 & N_3(\xi_4) & N_4(\xi_4) & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \xi_K - \xi_1 & N_3(\xi_K) & N_4(\xi_K) & \cdots & N_K(\xi_K) \end{pmatrix} \\ \end{aligned} $$ By expanding along the first column, we see that the determinant of $\mathbf{M}$ is equal to that of the sub-matrix formed by deleting the first column and the first row. Because this sub-matrix is lower triangular, the determinant is the product of its diagonal entries: $$ \text{det}(\mathbf{M}) = (\xi_2 - \xi_1) \prod_{i=3}^K N_i(\xi_i) $$ We know that $\xi_2 - \xi_1 \ne 0$. 
Thus it suffices to show that $N_i(\xi_i) \ne 0$ for all $i \in \{3, \dots, K\}$, as follows: \begin{aligned} N_i(\xi_i) &= d_{i-2}(\xi_i) - d_{K-1}(\xi_i) \\ &= \frac{(\xi_i - \xi_{i-2})_+^3 - (\xi_i-\xi_{K})_+^3}{\xi_{K}-\xi_{i-2}} - \frac{(\xi_i-\xi_{K-1})_+^3 - (\xi_i-\xi_{K})_+^3}{\xi_{K}-\xi_{K-1}} \\ &= \frac{(\xi_i - \xi_{i-2})_+^3}{\xi_{K}-\xi_{i-2}} - \frac{(\xi_i-\xi_{K-1})_+^3}{\xi_{K}-\xi_{K-1}} \\ \end{aligned} If $i < K$, then we have $$ N_i(\xi_i) = \frac{(\xi_i - \xi_{i-2})_+^3}{\xi_{K}-\xi_{i-2}} \ne 0 $$ by the fact that the $\xi_i$ are strictly increasing. If $i = K$, then we have \begin{aligned} N_K(\xi_K) &= (\xi_K - \xi_{K-2})^2 - (\xi_K - \xi_{K-1})^2 \\ &> (\xi_K - \xi_{K-2})^2 - (\xi_K - \xi_{K-2})^2 = 0 \end{aligned} So $N_K(\xi_K) \ne 0$ and the proof is complete.
Invertibility in Reinsch form Derivation (Smoothing Splines)
I was hoping to find a more intuitive proof, but the matrix can be shown to be invertible by directly showing that its determinant is nonzero. First note that the knots are located at the $K$ unique v
Invertibility in Reinsch form Derivation (Smoothing Splines) I was hoping to find a more intuitive proof, but the matrix can be shown to be invertible by directly showing that its determinant is nonzero. First note that the knots are located at the $K$ unique values of $\mathbf{x}$. Let $\boldsymbol{\xi}$ represent these knots, in ascending order. Also, $d_j(\xi_i) = 0$ if $i \le j$, and so $N_{i,j} = 0$ if $i > j + 1$ and so $\mathbf{N}$ is "almost lower triangular". In fact, it's a lower triangular matrix with a column of 1's appended to the front of it. We can take advantage of this structure to obtain a simple expression for the determinant. First we transform $\mathbf{N}$ by subtracting the first row from all others. Call this matrix $\mathbf{M}$. (These row operations do not affect the determinant, so $\text{det}(\mathbf{M}) = \text{det}(\mathbf{N})$.) We have: $$ \begin{aligned} \mathbf{M} &= \begin{pmatrix} 1 & \xi_1 & 0 & 0 & \cdots & 0 \\ 0 & \xi_2 - \xi_1 & 0 & 0 & \cdots & 0 \\ 0 & \xi_3 - \xi_1 & N_3(\xi_3) & 0 & \cdots & 0 \\ 0 & \xi_4 - \xi_1 & N_3(\xi_4) & N_4(\xi_4) & \cdots & 0 \\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & \xi_K - \xi_1 & N_3(\xi_K) & N_4(\xi_K) & \cdots & N_K(\xi_K) \end{pmatrix} \\ \end{aligned} $$ By expanding along the first column, we see that the determinant of $\mathbf{M}$ is equal to that of the sub-matrix formed by deleting the first column and the first row. Because this sub-matrix is lower triangular, the determinant is the product of its diagonal entries: $$ \text{det}(\mathbf{M}) = (\xi_2 - \xi_1) \prod_{i=3}^K N_i(\xi_i) $$ We know that $\xi_2 - \xi_1 \ne 0$. 
Thus it suffices to show that $N_i(\xi_i) \ne 0$ for all $i \in \{3, \dots, K\}$, as follows: \begin{aligned} N_i(\xi_i) &= d_{i-2}(\xi_i) - d_{K-1}(\xi_i) \\ &= \frac{(\xi_i - \xi_{i-2})_+^3 - (\xi_i-\xi_{K})_+^3}{\xi_{K}-\xi_{i-2}} - \frac{(\xi_i-\xi_{K-1})_+^3 - (\xi_i-\xi_{K})_+^3}{\xi_{K}-\xi_{K-1}} \\ &= \frac{(\xi_i - \xi_{i-2})_+^3}{\xi_{K}-\xi_{i-2}} - \frac{(\xi_i-\xi_{K-1})_+^3}{\xi_{K}-\xi_{K-1}} \\ \end{aligned} If $i < K$, then we have $$ N_i(\xi_i) = \frac{(\xi_i - \xi_{i-2})_+^3}{\xi_{K}-\xi_{i-2}} \ne 0 $$ by the fact that the $\xi_i$ are strictly increasing. If $i = K$, then we have \begin{aligned} N_K(\xi_K) &= (\xi_K - \xi_{K-2})^2 - (\xi_K - \xi_{K-1})^2 \\ &> (\xi_K - \xi_{K-2})^2 - (\xi_K - \xi_{K-2})^2 = 0 \end{aligned} So $N_K(\xi_K) \ne 0$ and the proof is complete.
Invertibility in Reinsch form Derivation (Smoothing Splines) I was hoping to find a more intuitive proof, but the matrix can be shown to be invertible by directly showing that its determinant is nonzero. First note that the knots are located at the $K$ unique v
37,295
Would you flag this data as fraudulent?
The ethical issue is paramount here. On this information, I'd agree in regarding the data as highly suspect and reporting it as such; I would never call it "fraudulent", because that's for others to determine after investigation, and in some countries doing so could lay you open to legal action or its threat. As for attaching a probability estimate, I would back off from that. The case is clear(er) without it, and just about every assumption or approximation is open to challenge.
Would you flag this data as fraudulent?
The ethical issue is paramount here. On this information, I'd agree in regarding the data as highly suspect and reporting it as such; I would never call it "fraudulent", because that's for others to determin
Would you flag this data as fraudulent? The ethical issue is paramount here. On this information, I'd agree in regarding the data as highly suspect and reporting it as such; I would never call it "fraudulent", because that's for others to determine after investigation, and in some countries doing so could lay you open to legal action or its threat. As for attaching a probability estimate, I would back off from that. The case is clear(er) without it, and just about every assumption or approximation is open to challenge.
Would you flag this data as fraudulent? The ethical issue is paramount here. On this information, I'd agree in regarding the data as highly suspect and reporting it as such; I would never call it "fraudulent", because that's for others to determin
37,296
Fitting Gaussian mixture models with dirac delta functions
I am confused why EM doesn't encounter this problem, as fitting GMM with Dirac delta functions is not typically discussed in textbooks. The unboundedness of the likelihood of a Gaussian mixture model is discussed in many textbooks (incl. mine). It is rarely a problem for EM as the corresponding modes are very narrow and hence do not constitute domains of attraction for most starting values of EM (unless one starts with $\mu_1=x_1$, say). the singularity problem occurs for whatever algorithms we use to fit GMMs. But still, MLE is a practical criterion for learning GMMs. The problem only occurs when considering the likelihood function alone. Moment estimators are not facing this difficulty and neither do Bayesian methods, since the vicinity of $\sigma_1=0$ gets zero prior probability. Here is an illustration from Chopin and Robert (2010) of a posterior sample obtained by nested sampling for the Gaussian mixture $$0.3{\cal N}(0,1)+0.7{\cal N}(\mu,\sigma^2)$$ While some particles are located close to $\sigma=0$, they soon escape this vicinity and concentrate on another mode of the likelihood. Note also that a result by Redner and Walker (1984) demonstrates that there exist consistent EM solutions.
Fitting Gaussian mixture models with dirac delta functions
I am confused why EM doesn't encounter this problem, as fitting GMM with Dirac delta functions is not typically discussed in textbooks. The unboundedness of the likelihood of a Gaussian mixture mode
Fitting Gaussian mixture models with dirac delta functions I am confused why EM doesn't encounter this problem, as fitting GMM with Dirac delta functions is not typically discussed in textbooks. The unboundedness of the likelihood of a Gaussian mixture model is discussed in many textbooks (incl. mine). It is rarely a problem for EM as the corresponding modes are very narrow and hence do not constitute domains of attraction for most starting values of EM (unless one starts with $\mu_1=x_1$, say). the singularity problem occurs for whatever algorithms we use to fit GMMs. But still, MLE is a practical criterion for learning GMMs. The problem only occurs when considering the likelihood function alone. Moment estimators are not facing this difficulty and neither do Bayesian methods, since the vicinity of $\sigma_1=0$ gets zero prior probability. Here is an illustration from Chopin and Robert (2010) of a posterior sample obtained by nested sampling for the Gaussian mixture $$0.3{\cal N}(0,1)+0.7{\cal N}(\mu,\sigma^2)$$ While some particles are located close to $\sigma=0$, they soon escape this vicinity and concentrate on another mode of the likelihood. Note also that a result by Redner and Walker (1984) demonstrates that there exist consistent EM solutions.
Fitting Gaussian mixture models with dirac delta functions I am confused why EM doesn't encouter this problem, as fitting GMM with Dirac delta functions is not typically discussed in textbooks. The unboundedness of the likelihood of a Gaussian mixture mode
37,297
Sum of linear combination of product of exponentials is exponential
Not a complete answer, sorry, but a few ideas (too long for a comment). Note that what you have is a product of $K+1$ iid random variables, where $K$ is a random variable (rv) with a Poisson distribution with parameter $\lambda$. That can be used for another "sanity check", a simulation (using exponentials of rate 1): set.seed(7*11*13) N <- 1000000 prods <- rep(0, N) ks <- rpois(N, 1)+1 for (i in 1:N) { k <- ks[i] prods[i] <- prod( rexp(k, 1)) } qqplot( qexp(ppoints(N)), prods) The resulting qqplot (not shown here) is far from a straight line, so this does not look like an exponential of rate 1. The mean is right, but the variance is too large, and there is a right tail much longer than for an exponential. What can be done theoretically? The Mellin transform https://en.wikipedia.org/wiki/Mellin_transform is adapted to products of independent random variables. I will compute only for the exponential with rate 1. The Mellin transform of $V_0$ then is $$ \DeclareMathOperator{\E}{\mathbb{E}} M_1(s) = \E V_0^s = \int_0^\infty x^s e^{-x}\; dx = \Gamma(s+1) $$ so the Mellin transform of a product of $k+1$ iid exponentials is $$ M_{k+1}(s) = \Gamma(s+1)^{k+1} $$ Since $K$ has a Poisson distribution with parameter $\lambda$, the Mellin transform of the product of a random number $K+1$ of factors is $$ M(s) = \E M_{K+1}(s) = \E \Gamma(s+1)^{K+1}= \Gamma(s+1)e^{-\lambda}\sum_{k=0}^\infty \frac{\lambda^k}{k!}\Gamma(s+1)^k=e^{-\lambda}\Gamma(s+1) e^{\lambda \Gamma(s+1)} $$ but I cannot find an inverse of this transform. But note that if $X$ is a nonnegative random variable with Mellin transform $M_X(t)$, then defining $Y=\log X$ we find that $$ M_Y(t)=\E e^{tY} = \E e^{t\log X}=\E e^{\log (X^t)}=\E X^t =M_X(t) $$ so the Mellin transform of $X$ is the moment generating function of its logarithm $Y$. So, using that, we can approximate the distribution of $X$ with saddlepoint approximation methods; see How does saddlepoint approximation work? and search this site.
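As a cross-check (a Python translation of the same sanity-check idea, not part of the original answer), one can verify the closed-form Mellin transform $M(s)=e^{-\lambda}\Gamma(s+1)e^{\lambda\Gamma(s+1)}$ against a Monte Carlo estimate of $\E X^s$; $\lambda$, $N$, and $s$ below are arbitrary illustrative choices.

```python
import math
import numpy as np

rng = np.random.default_rng(0)
lam, N, s = 1.0, 200_000, 0.5

# K ~ Poisson(lam); each product has K + 1 iid Exp(1) factors.
ks = rng.poisson(lam, size=N) + 1
prods = np.array([rng.exponential(1.0, k).prod() for k in ks])

# Monte Carlo estimate of E[X^s] versus the closed-form Mellin transform
# M(s) = exp(-lam) * Gamma(s + 1) * exp(lam * Gamma(s + 1)).
mc = (prods ** s).mean()
exact = math.exp(-lam) * math.gamma(s + 1) * math.exp(lam * math.gamma(s + 1))
print(mc, exact)  # the two values agree to about two decimals
```

Note that $s=1$ recovers $\E X = 1$ (matching the exponential's mean), while the second moment $M(2)=2e^{\lambda}$ exceeds the exponential's, confirming the heavier right tail seen in the qqplot.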
37,298
Confidence Interval Widths as Measure for Variability
In brief, it's not a bad measure, but there are some things to be aware of. For (almost) any given data-generating process, increasing the number of data points will decrease the width of the confidence interval. Under some extra assumptions (those necessary for the Central Limit Theorem), the width will, asymptotically, follow an inverse square-root law. Even if the underlying data are normal, the maximum likelihood estimator of the variance and the unbiased estimator are not the same (which is the source of the systematic bias you show in your plot). This is due to the denominators being $n$ and $n-1$ respectively, so if you use the MLE you are out by a factor of $n/(n-1)$. Even though the distribution of the estimator of the variance is not normal, it does asymptotically approach normality and still satisfies the Central Limit Theorem as long as your data satisfy $E(x^2)<\infty$ (i.e. the data actually have a variance, so the estimator has something to estimate).
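The $(n-1)/n$ bias factor can be checked directly; here is a short Python simulation (an illustration with arbitrary parameter choices, not taken from the question):

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 5, 100_000   # small samples of size n, many repetitions

# reps samples of size n from a standard normal (true variance 1).
x = rng.normal(size=(reps, n))
var_mle = x.var(axis=1, ddof=0)       # MLE: divides by n
var_unbiased = x.var(axis=1, ddof=1)  # unbiased: divides by n - 1

print(var_mle.mean())       # close to (n - 1) / n = 0.8
print(var_unbiased.mean())  # close to 1.0
```

With $n=5$ the MLE undershoots the true variance by about 20% on average, which is exactly the kind of systematic offset the question's plot exhibits.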
37,299
Relationship between variance of mean and mean variance
$$\newcommand{\Var}{\operatorname{Var}}$$ The article compares two different estimates for the true population mean, $\mu$. The mean computed from five random samples $x_i$ $$\hat{\mu}_x = \frac{x_1 + x_2 + x_3 + x_4 + x_5}{5}$$ and the mean computed from five random quintiles $y_i$ (five quintiles obtained by five times drawing a random sample, ordering it, and taking the 1st quintile from the 1st draw, the 2nd from the 2nd, and so on) $$\hat{\mu}_y = \frac{y_1 + y_2 + y_3 + y_4 + y_5}{5}$$ With those expressions we should be able to see what they mean. The variance of the stochastic variables $\hat{\mu}_x$ and $\hat{\mu}_y$ (they are not fixed values like $\mu$) can be described by the sum of the variances of the variables $x_i$ and $y_i$. Using the general summation law for the variance of uncorrelated variables, $\Var(a z_1 + b z_2) = a^2 \Var(z_1) + b^2 \Var(z_2)$, you get: $$\Var(\hat{\mu}_x) = \frac{1}{25}\Var(x_1) + \frac{1}{25}\Var(x_2) + \frac{1}{25}\Var(x_3) + \frac{1}{25}\Var(x_4) + \frac{1}{25}\Var(x_5)$$ and $$\Var(\hat{\mu}_y) = \frac{1}{25}\Var(y_1) + \frac{1}{25}\Var(y_2) + \frac{1}{25}\Var(y_3) + \frac{1}{25}\Var(y_4) + \frac{1}{25}\Var(y_5)$$ now notice that all $\Var(x_i)$ are the same, thus: $$\Var(\hat{\mu}_x) = \frac{1}{5}\Var(x_i)$$ and also notice that $\Var(y_i)<\Var(x_i)$ for each $i$ (the variance of a randomly sampled quintile is smaller than that of a random sample, which Figure 1 in the reference shows very well). Thus: $$\Var(\hat{\mu}_y) < \frac{1}{5}\Var(x_i)$$ which is the increased efficiency they were looking for (yes, you need five times more samples to get those quintiles, but that is what they explain at the end, where they state that the method is especially suited for situations where ordering is much easier than exact quantitative measurement). 
And finally, to get even closer to their statement, we can use something like $$\begin{align} \Var(\hat{\mu}_y) &= \frac{1}{5} \frac{\Var(y_1)+\Var(y_2)+\Var(y_3)+\Var(y_4)+\Var(y_5)}{5} \\&= \frac{1}{5}\overline{\Var(y_i)} < \frac{1}{5}\Var(x_i) \end{align}$$ (1) The equality in the second line relates to: "The variance of the mean of five quadrats one from each subdistribution is one-fifth of the mean variance of these distributions." (2) The inequality in the second line relates to: "This may be contrasted with the variance of the mean of five random samples, that is, one-fifth of the variance of the parent population."
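A quick Python simulation of this scheme (an illustrative sketch, not from the article, using a standard normal parent and ranked sets of size 5) confirms that the ranked-set mean has smaller variance than the simple-random-sample mean:

```python
import numpy as np

rng = np.random.default_rng(1)
m, reps = 5, 20_000

# Simple random sampling: the mean of 5 draws from N(0, 1).
srs_means = rng.normal(size=(reps, m)).mean(axis=1)

# Ranked-set style sampling: for each position i, draw a fresh set of 5,
# sort it, keep its i-th order statistic; average the 5 kept values.
sets_sorted = np.sort(rng.normal(size=(reps, m, m)), axis=2)
rss_means = sets_sorted[:, np.arange(m), np.arange(m)].mean(axis=1)

print(srs_means.var())  # close to 1/5 = 0.2
print(rss_means.var())  # noticeably smaller
```

The second variance is the sum of the five order-statistic variances divided by 25, i.e. $\frac{1}{5}\overline{\Var(y_i)}$, matching the inequality above.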
37,300
Calculating Probability of x1 > x2
The setting is conventionally expressed in the form $$y = X\beta + \varepsilon$$ for an $n$-vector $y$ of responses, an $n\times k$ model matrix $X$, and a $k$-vector of parameters $\beta$, under the assumptions that the random errors $\varepsilon = (\varepsilon_i)$ are uncorrelated with equal variances $\sigma^2$ and zero means: that is, $$E(\varepsilon)=0; \ \operatorname{Var}(\varepsilon) = \sigma^2 I_{n}.$$ When this is the case, the ordinary least squares estimate is $$\hat\beta = (X^\prime X)^{-} X^\prime y.$$ Let $Z$ be a $2\times k$ matrix whose rows $z_R$ and $z_T$ give the values of the regressors for Rachel and Thomas, respectively. The predicted responses are in the $2$-vector $Z\hat\beta$. The actual responses are $z_R\beta+\varepsilon_R$ and $z_T\beta+\varepsilon_T$ where these new epsilons are zero-mean uncorrelated random variables, independent of the original $\varepsilon$, and with common variances $\sigma^2$. The difference between those values for Rachel minus Thomas, which I will call $\delta$, is simply $$\delta=(z_R\beta+\varepsilon_R ) - (z_T\beta + \varepsilon_T) = (1,-1)Z\beta + \varepsilon_R - \varepsilon_T.$$ Both sides are $1\times 1$ matrices--that is, numbers--and evidently they are random by virtue of the appearance of $y$ on the right hand side. (The right hand side is the estimated difference between Rachel's and Thomas's responses, plus the deviation $\varepsilon_R$ between Rachel's actual and predicted responses, minus the deviation $\varepsilon_T$ between Thomas's actual and predicted responses.) We may compute its expectation term by term: $$\eqalign{ E(\delta) &= E\left((1,-1)Z\beta + \varepsilon_R - \varepsilon_T\right)\\ &= (1,-1)Z\beta +0 - 0\\ &= z_R\beta - z_T\beta. }$$ This is exactly what one would suppose: the expected difference is the difference in predicted values. It can be estimated by replacing the parameters by their estimates. 
To indicate this, let's place a hat over the "$E$": $$\hat{E}(\delta) = (1,-1)Z\hat\beta = z_R\hat\beta - z_T\hat\beta.\tag{1}$$ That's the $2.88-2.51$ appearing in the question. We may continue the analysis of the difference between Rachel and Thomas by expressing the two components of uncertainty about that distribution: one is because $\beta$ and $\sigma$ are estimated from random data and the other is the appearance of those random deviations $\varepsilon_R$ and $\varepsilon_T$. $$\eqalign{ \operatorname{Var}(\text{Rachel}-\text{Thomas}) &= \operatorname{Var}\left((1,-1)Z\hat\beta + \varepsilon_R - \varepsilon_T\right) \\ &= (1,-1)Z \operatorname{Var}(\hat\beta) Z^\prime (1,-1)^\prime + \operatorname{Var}(\varepsilon_R) + \operatorname{Var}(\varepsilon_T) \\ &=(1,-1)Z \operatorname{Var}(\hat\beta) Z^\prime (1,-1)^\prime + 2\hat\sigma^2.\tag{2} }$$ The variances of the epsilons are estimated by $\hat\sigma^2$. We don't know $\operatorname{Var}(\hat\beta)$ because it depends on $\sigma$. It is routine to estimate this variance by replacing $\sigma^2$ by its least-squares estimate $\hat\sigma^2$, producing a quantity sometimes written $\widehat{\operatorname{Var}}(\hat\beta)$. These estimates can be converted into probabilities only by making more specific assumptions about the conditional distributions of $y$ on $X$. By far the simplest is to assume $y$ is multivariate Normal, for then $\delta$ (being a linear transform of the vector $y$) itself is Normal and therefore its mean and variance completely determine its distribution. Its estimated distribution is obtained by placing the hats on $E$ and $\operatorname{Var}$. Finally we have assembled all the information needed for a solution. 
The OLS procedure estimates the distribution of Rachel's response minus Thomas's response to be Normal with a mean equal to the difference in predicted values $(1)$ and with a variance estimated by $(2)$, which involves the estimated error variance $\hat\sigma^2$ and the variance-covariance matrix of the coefficient estimates, $\operatorname{Var}(\hat\beta)$. This R code directly carries out the calculations exhibited in formulas $(1)$ and $(2)$: fit <- lm(cgpa ~ hgpa + sat + ltrs, data=df) # model to predict College GPA Z <- as.matrix(data.frame(intercept=1, hgpa=c(4,3), sat=c(1168,1168),ltrs=c(6,6))) cont <- matrix(c(1,-1), 1, 2) # Rachel - Thomas "contrast". beta.hat <- coef(fit) # Estimated coefficients for prediction delta.hat <- cont %*% Z %*% beta.hat # Predicted mean difference sigma.hat <- sigma(fit) # Estimated error SD var.delta.hat <- cont %*% Z %*% vcov(fit) %*% t(Z) %*% t(cont) + 2 * sigma.hat^2 pnorm(0, -delta.hat, sqrt(var.delta.hat)) # Chance Rachel > Thomas The output for these data is $0.67$: OLS estimates that there is a $67\%$ chance that Rachel's CGPA exceeds that of Thomas. (It turns out in this case, because Rachel and Thomas are so similar, the model fits so well, and the amount of data is so large, that $\widehat{\operatorname{Var}}(\hat\delta)$ is tiny compared to $2\hat\sigma^2$ and so could be neglected. That will not always be the case.) This is the mechanism that underlies the computation of prediction intervals: we can compute prediction intervals for the difference between Rachel's and Thomas's CGPA using this distribution.
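For readers without R, here is a rough Python equivalent of the same calculation. The data are synthetic stand-ins for the original df (so the resulting probability will not be $0.67$), the coefficient values are made up for the illustration, and the variable names mirror the R code.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the GPA data: y = X beta + Normal noise.
n = 200
X = np.column_stack([
    np.ones(n),                 # intercept
    rng.uniform(2, 4, n),       # hgpa
    rng.uniform(900, 1400, n),  # sat
    rng.integers(1, 10, n),     # ltrs
])
beta_true = np.array([0.5, 0.5, 0.0005, 0.02])
y = X @ beta_true + rng.normal(0, 0.3, n)

# OLS: coefficient estimates, residual variance, vcov of the coefficients.
k = X.shape[1]
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - k)
vcov = sigma2_hat * np.linalg.inv(X.T @ X)

# Regressor rows for Rachel and Thomas, and the (1, -1) contrast.
Z = np.array([[1.0, 4.0, 1168.0, 6.0],
              [1.0, 3.0, 1168.0, 6.0]])
c = np.array([1.0, -1.0])
delta_hat = c @ Z @ beta_hat                          # formula (1)
var_delta = c @ Z @ vcov @ Z.T @ c + 2 * sigma2_hat   # formula (2)

# P(Rachel > Thomas) = Phi(delta_hat / sqrt(var_delta)).
prob = 0.5 * (1 + math.erf(delta_hat / math.sqrt(2 * var_delta)))
print(prob)
```

As in the R version, the contrast-variance term is dwarfed here by $2\hat\sigma^2$; swapping in a real dataset only requires replacing the synthetic `X`, `y`, and the rows of `Z`.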