29,401
PyMC for nonparametric clustering: Dirichlet process to estimate Gaussian mixture's parameters fails to cluster
I'm not sure if anyone is looking at this question any more, but I ported your question to rjags to test Tom's Gibbs sampling suggestion, while incorporating Guy's insight about the flat prior for the standard deviation. This toy problem may be difficult because 10 or even 40 data points are not enough to estimate variance without an informative prior. The current prior σ_zi ∼ Uniform(0, 100) is not informative, which might explain why nearly all draws of μ_zi are the expected mean of the two distributions. If it does not alter your question too much, I will use 100 and 400 data points respectively. I also did not use the stick-breaking process directly in my code; the Wikipedia page for the Dirichlet process made me think p ~ Dir(a/k) would be acceptable. Finally, it is only a semi-parametric implementation, since it still takes a number of clusters k. I don't know how to build an infinite mixture model in rjags.

    library("rjags")

    set1 <- rnorm(100, 0, 1)
    set2 <- rnorm(400, 4, 1)
    data <- c(set1, set2)

    plot(data, type='l', col='blue', lwd=3,
         main='gaussian mixture model data',
         xlab='data sample #', ylab='data value')
    points(data, col='blue')

    cpd.model.str <- 'model {
      a ~ dunif(0.3, 100)
      for (i in 1:k) {
        alpha[i] <- a/k
        mu[i] ~ dnorm(0.0, 0.001)
        sigma[i] ~ dunif(0, 100)
      }
      p[1:k] ~ ddirich(alpha[1:k])
      for (i in 1:n) {
        z[i] ~ dcat(p)
        y[i] ~ dnorm(mu[z[i]], pow(sigma[z[i]], -2))
      }
    }'

    cpd.model <- jags.model(textConnection(cpd.model.str),
                            data=list(y=data, n=length(data), k=5))
    update(cpd.model, 1000)
    chain <- coda.samples(model = cpd.model, n.iter = 1000,
                          variable.names = c('p', 'mu', 'sigma'))
    rchain <- as.matrix(chain)
    apply(rchain, 2, mean)
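As a sanity check on the p ~ Dir(a/k) shortcut (outside JAGS), one can compare weights drawn from the finite symmetric Dirichlet with a truncated stick-breaking draw. A minimal sketch in Python with numpy — my own illustration, not part of the original answer:

```python
import numpy as np

rng = np.random.default_rng(0)

def finite_dirichlet_weights(a, k, rng):
    """Mixture weights from the finite symmetric Dirichlet Dir(a/k, ..., a/k),
    which approximates the Dirichlet-process weights as k grows."""
    return rng.dirichlet(np.full(k, a / k))

def stick_breaking_weights(a, k, rng):
    """Truncated stick-breaking: v_i ~ Beta(1, a), w_i = v_i * prod_{j<i}(1 - v_j);
    the last stick is forced to 1 so the truncated weights sum to exactly 1."""
    v = rng.beta(1.0, a, size=k)
    v[-1] = 1.0
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining

# Both constructions yield valid mixture weights over k components; with a
# small concentration parameter most of the mass falls on a few of them.
w_dir = finite_dirichlet_weights(a=1.0, k=5, rng=rng)
w_stick = stick_breaking_weights(a=1.0, k=5, rng=rng)
```

Either set of weights could be fed to a categorical draw for cluster assignments, which is what `z[i] ~ dcat(p)` does in the JAGS model above.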
29,402
PyMC for nonparametric clustering: Dirichlet process to estimate Gaussian mixture's parameters fails to cluster
The poor mixing that you are seeing is most likely because of the way that PyMC draws samples. As explained in section 5.8.1 of the PyMC documentation, all elements of an array variable are updated together. In your case, that means it will try to update the entire clustermean array in one step, and similarly for clusterid. PyMC doesn't do Gibbs sampling; it does Metropolis where the proposal is chosen by some simple heuristics. This makes it unlikely to propose a good value for an entire array.
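This blocking effect is easy to reproduce with a toy Metropolis sampler, independent of PyMC. In the sketch below (plain numpy; the target, dimensions, and step size are illustrative choices of mine), proposing a whole 20-dimensional array at once accepts far less often than updating one coordinate at a time with the same step size:

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, STEPS, STEP_SIZE = 20, 2000, 1.0

def log_target(x):
    # log-density (up to a constant) of independent standard normals
    return -0.5 * np.sum(x * x)

def metropolis(blocked, rng):
    """Random-walk Metropolis; blocked=True proposes the whole vector at once,
    blocked=False updates one coordinate per inner step (same step size)."""
    x = np.zeros(DIM)
    accepted = proposed = 0
    for _ in range(STEPS):
        if blocked:
            prop = x + STEP_SIZE * rng.standard_normal(DIM)
            proposed += 1
            if np.log(rng.random()) < log_target(prop) - log_target(x):
                x, accepted = prop, accepted + 1
        else:
            for i in range(DIM):
                prop = x.copy()
                prop[i] += STEP_SIZE * rng.standard_normal()
                proposed += 1
                if np.log(rng.random()) < log_target(prop) - log_target(x):
                    x, accepted = prop, accepted + 1
    return accepted / proposed

rate_blocked = metropolis(blocked=True, rng=rng)
rate_single = metropolis(blocked=False, rng=rng)
# The single-coordinate updates accept far more often, so the chain mixes better.
```

This is the same reason updating the entire `clustermean` or `clusterid` array in one Metropolis step mixes poorly: a joint proposal is almost never good in every dimension at once.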
29,403
How to deal with very s-shaped qq plot in a linear mixed model (lme4 1.0.4)?
When in doubt, look at the raw data:

    library(ggplot2)
    theme_set(theme_bw())
    ggplot(dat, aes(pos.centered, diff, colour=cond.lag)) +
      geom_point() +
      geom_line(aes(group=sub:cond.lag), alpha=0.4)
    ggsave("SE_ex.png", width=6, height=4)

(You could also try colour=sub and facet_grid(.~cond.lag).) It looks like your problem is the explosion of variance for centered position > 5 or so (more variance than expected just from diverging individual lines). I'm not quite sure what to do about it: you could look at the individual curves a bit more and think about whether there's a good (phenomenologically or mechanistically) nonlinear model for these data. It's a bit hard at the moment to combine heteroscedasticity models (which lme can fit but lmer cannot) with crossed random effects (possible in lme, but harder than in lmer) in R, but it might (??) be possible e.g. in SAS PROC NLMIXED, Genstat/AS-REML, AD Model Builder ...
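The s-shape itself follows from this kind of variance explosion: residuals pooled across low- and high-variance regions form a scale mixture of normals, which has heavier tails than any single normal, and heavy tails are exactly what an s-shaped normal QQ plot shows. A small numpy sketch with illustrative numbers (not your data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Residuals drawn from a scale mixture: most observations have unit variance,
# but a fraction (think "centered position > 5") has inflated variance.
n = 20000
inflated = rng.random(n) < 0.2              # 20% of points in the high-variance region
scale = np.where(inflated, 3.0, 1.0)
resid = scale * rng.standard_normal(n)

# Excess kurtosis of the standardized residuals: positive means heavier tails
# than a normal, i.e. the QQ plot bends into an s-shape at both ends.
std = resid / resid.std()
excess_kurtosis = float(np.mean(std ** 4) - 3.0)
```

For this mixture the theoretical excess kurtosis is about 4.5, far from the 0 of a single normal, even though every individual observation is Gaussian.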
29,404
How to get confidence interval on population r-square change
Population $R^2$

I'm firstly trying to understand the definition of the population R-squared. Quoting your comment:

  Or you could define it asymptotically as the proportion of variance explained in your sample as your sample size approaches infinity.

I think you mean this is the limit of the sample $R^2$ when one replicates the model infinitely many times (with the same predictors at each replicate). So what is the formula for the asymptotic value of the sample $R^2$? Write your linear model $\boxed{Y=\mu+\sigma G}$ as in https://stats.stackexchange.com/a/58133/8402, and use the same notations as in that link. Then one can check that the sample $R^2$ goes to $\boxed{popR^2:=\dfrac{\lambda}{n+\lambda}}$ when one replicates the model $Y=\mu+\sigma G$ infinitely many times. As an example:

    > ## design of the simple regression model lm(y~x0)
    > n0 <- 10
    > sigma <- 1
    > x0 <- rnorm(n0, 1:n0, sigma)
    > a <- 1; b <- 2  # intercept and slope
    > params <- c(a,b)
    > X <- model.matrix(~x0)
    > Mu <- (X%*%params)[,1]
    >
    > ## replicate this experiment k times
    > k <- 200
    > y <- rep(Mu,k) + rnorm(k*n0)
    > # the R-squared is:
    > summary(lm(y~rep(x0,k)))$r.squared
    [1] 0.971057
    >
    > # theoretical asymptotic R-squared:
    > lambda0 <- crossprod(Mu-mean(Mu))/sigma^2
    > lambda0/(lambda0+n0)
              [,1]
    [1,] 0.9722689
    >
    > # other approximation of the asymptotic R-squared for simple linear regression:
    > 1-sigma^2/var(y)
    [1] 0.9721834

Population $R^2$ of a submodel

Now assume the model is $\boxed{Y=\mu+\sigma G}$ with $H_1\colon\mu \in W_1$, and consider the submodel $H_0\colon \mu \in W_0$. Then I said above that the population $R^2$ of model $H_1$ is $\boxed{popR^2_1:=\dfrac{\lambda_1}{n+\lambda_1}}$ where $\boxed{\lambda_1=\frac{{\Vert P_{Z_1} \mu\Vert}^2}{\sigma^2}}$ and $Z_1=[1]^\perp \cap W_1$, and then one simply has ${\Vert P_{Z_1} \mu\Vert}^2=\sum(\mu_i - \bar \mu)^2$.

Now do you define the population $R^2$ of the submodel $H_0$ as the asymptotic value of the $R^2$ calculated with respect to model $H_0$, but under the distributional assumption of model $H_1$? The asymptotic value (if there is one) seems more difficult to find.
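The same numerical check as the R snippet above can be run in Python, comparing the replicated-sample $R^2$ against $\lambda/(n_0+\lambda)$. This is my own translation of the R example (numpy only), not additional theory:

```python
import numpy as np

rng = np.random.default_rng(3)

# Fixed design of a simple regression, replicated k times (mirrors the R example)
n0, sigma, k = 10, 1.0, 2000
x0 = rng.normal(np.arange(1, n0 + 1), sigma)
a, b = 1.0, 2.0
mu = a + b * x0                              # the fixed mean vector X %*% params

# Replicate the experiment k times and fit one big regression
x = np.tile(x0, k)
y = np.tile(mu, k) + rng.standard_normal(k * n0)
X = np.column_stack([np.ones(k * n0), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
fitted = X @ beta
r2 = 1.0 - np.sum((y - fitted) ** 2) / np.sum((y - y.mean()) ** 2)

# Theoretical asymptotic value popR^2 = lambda / (n0 + lambda),
# with lambda = ||mu - mean(mu)||^2 / sigma^2
lam = np.sum((mu - mu.mean()) ** 2) / sigma ** 2
pop_r2 = lam / (lam + n0)
```

With k large, the fitted $R^2$ and $\lambda/(n_0+\lambda)$ agree to a few decimal places, as in the rjags-free R run above.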
29,405
How to get confidence interval on population r-square change
Rather than answer the question you asked, I'm going to ask why you ask that question. I assume you want to know whether

    mod.small <- lm(y ~ x1a + x1b + x1c, data=x)

is at least as good as

    mod.large <- lm(y ~ ., data=x)

at explaining y. Since these models are nested, the obvious way to answer this question would seem to be to run an analysis of variance comparing them, in the same way as you might run an analysis of deviance for two GLMs:

    anova(mod.small, mod.large)

Then you could use the sample R-square improvement between the models as your best guess at what the fit improvement would be in the population, always assuming you can make sense of population R-squared. Personally I'm not sure I can, but with this approach it doesn't matter either way. More generally, if you're interested in population quantities you're presumably interested in generalisation, so a sample fit measure is not quite what you want, however 'corrected'. For example, cross-validating some quantity that estimates the sort and amount of error you could expect to make out of sample, like MSE, would seem to get at what you want. But it's quite possible I'm missing something here...
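The cross-validation suggestion can be sketched concretely. Below is a minimal numpy version with simulated data standing in for `mod.small` and `mod.large` (the data-generating process, fold scheme, and variable names are all my own illustration):

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated data: y depends only on the first 3 predictors, so the small
# (nested) model should generalise about as well as the large one.
n, p = 200, 6
X = rng.standard_normal((n, p))
y = X[:, 0] + 0.5 * X[:, 1] - 0.5 * X[:, 2] + rng.standard_normal(n)

def cv_mse(X, y, folds=5):
    """K-fold cross-validated MSE of an OLS fit (intercept included)."""
    idx = np.arange(len(y))
    mses = []
    for f in range(folds):
        test = idx % folds == f
        train = ~test
        Xtr = np.column_stack([np.ones(train.sum()), X[train]])
        Xte = np.column_stack([np.ones(test.sum()), X[test]])
        beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)
        mses.append(np.mean((y[test] - Xte @ beta) ** 2))
    return float(np.mean(mses))

mse_small = cv_mse(X[:, :3], y)   # analogue of mod.small
mse_large = cv_mse(X, y)          # analogue of mod.large
# Out of sample, the three extra noise predictors buy essentially nothing.
```

Comparing the two cross-validated MSEs answers the "is the small model good enough?" question on the out-of-sample error scale, with no appeal to a population R-squared.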
29,406
How to get confidence interval on population r-square change
The following represent a few possibilities for calculating confidence intervals on $\rho^2$.

Double adjusted r-square bootstrap

My current best guess at an answer is to do a double adjusted r-square bootstrap. I've implemented the technique. It involves the following:

1. Generate a set of bootstrap samples from the current data.
2. For each bootstrapped sample, calculate the first adjusted r-square for the two models.
3. For each bootstrapped sample, calculate a second adjusted r-square on the adjusted r-square values from the previous step.
4. Subtract the model 2 from the model 1 second adjusted r-square values to get an estimate of $\Delta \rho^2$.

The rationale is that the first adjusted r-square removes the bias introduced by bootstrapping (i.e., bootstrapping assumes that the sample r-square is the population r-square). The second adjusted r-square performs the standard correction that is applied to a normal sample to estimate the population r-square. At this point, all I can see is that applying this algorithm generates estimates that seem about right (i.e., the mean theta_hat in the bootstrap is very close to the sample theta_hat), and the standard error aligns with my intuition. I haven't yet tested whether it provides proper frequentist coverage when the data generating process is known, and I'm also not entirely sure at this point how the argument could be justified from first principles. If anyone sees any reasons why this approach would be problematic, I'd be grateful to hear about them.

Simulation by Algina et al.

Stéphane mentioned the article by Algina, Keselman and Penfield. They performed a simulation study to examine the 95% confidence interval coverage of bootstrapping and asymptotic methods for estimating $\Delta \rho^2$. Their bootstrapping methods involved only a single application of adjusted r-square, rather than the double adjustment of r-square that I mention above. They found that bootstrap estimates only provided good coverage when the number of additional predictors in the full model was one or perhaps two. My hypothesis is that this is because, as the number of predictors increases, so does the difference between the single and double adjusted r-square bootstraps.

Smithson (2001) on using the noncentrality parameter

Smithson (2001) discusses calculating confidence intervals for the partial $R^2$ based on the noncentrality parameter; see pages 615 and 616 in particular. He suggests that "it is straightforward to construct a CI for $f^2$ and partial $R^2$ but not for the squared semipartial correlation." (p. 615)

References

Algina, J., Keselman, H. J., & Penfield, R. D. Confidence intervals for the squared multiple semipartial correlation coefficient. PDF

Smithson, M. (2001). Correct confidence intervals for various regression effect sizes and parameters: The importance of noncentral distributions in computing intervals. Educational and Psychological Measurement, 61(4), 605-632.
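The double-adjustment steps described above can be sketched in code. The version below is my own numpy illustration (the data, the nested models, and the `adj` helper are assumptions for the sketch, not the original implementation):

```python
import numpy as np

rng = np.random.default_rng(5)

def r2(X, y):
    """Sample R^2 of an OLS fit with intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    res = y - X1 @ beta
    return 1.0 - res @ res / np.sum((y - y.mean()) ** 2)

def adj(r2_val, n, p):
    """Standard adjusted-R^2 correction for p predictors on n observations."""
    return 1.0 - (1.0 - r2_val) * (n - 1) / (n - p - 1)

# Illustrative data: the full model adds one genuinely useful predictor
n = 150
X = rng.standard_normal((n, 3))
y = X[:, 0] + 0.4 * X[:, 2] + rng.standard_normal(n)
p1, p2 = 2, 3   # predictor counts in the nested and full models

boots = []
for _ in range(500):
    i = rng.integers(0, n, n)                 # bootstrap resample
    # First adjustment: correct each bootstrap-sample R^2 for both models
    a1 = adj(r2(X[i, :2], y[i]), n, p1)
    a2 = adj(r2(X[i], y[i]), n, p2)
    # Second adjustment: apply the standard correction again, treating the
    # first-adjusted values as if they were sample R^2 values;
    # the difference (full minus nested) estimates Delta rho^2
    boots.append(adj(a2, n, p2) - adj(a1, n, p1))

boots = np.asarray(boots)
delta_rho2_est = float(boots.mean())
ci = np.percentile(boots, [2.5, 97.5])        # percentile bootstrap CI
```

Whether the percentile interval from this double-adjusted bootstrap has correct coverage is exactly the open question raised above; the sketch only makes the algorithm concrete.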
29,407
Naive Bayes on continuous variables
From the R package (e1071) and the function naiveBayes that you're using:

  The standard naive Bayes classifier (at least this implementation) assumes independence of the predictor variables, and Gaussian distribution (given the target class) of metric predictors. For attributes with missing values, the corresponding table entries are omitted for prediction.

It's pretty standard for continuous variables in a naive Bayes classifier that a normal distribution is assumed; a mean and standard deviation can then be calculated for each class, and, using some standard z-table calculations, probabilities can be estimated for each of your continuous variables. I thought that it was possible to change the distributional assumption in this package, but apparently I'm wrong. There is another R package (klaR) where you can change the density kernel (the function is NaiveBayes). From the package:

    NaiveBayes(x, grouping, prior, usekernel = FALSE, fL = 0, ...)

usekernel: if TRUE, a kernel density estimate (density) is used for density estimation. If FALSE, a normal density is estimated.

    density(x, bw = "nrd0", adjust = 1,
            kernel = c("gaussian", "epanechnikov", "rectangular",
                       "triangular", "biweight", "cosine", "optcosine"))
29,408
Naive Bayes on continuous variables
I was working on a project not that long ago for which I needed to compute a naive Bayes classifier (from scratch). I started out in R, but once I had the process down, I moved the code to Python. Here's the code that I began with; don't expect it to be polished. For the most part, I followed Wikipedia's example (https://en.wikipedia.org/wiki/Naive_Bayes_classifier#Examples). The steps are simple:

1. Calculate the a priori probabilities, which are the class proportions.
2. For your continuous data, assume a normal distribution and calculate the mean and standard deviation per class.
3. To classify observations, take the new input x, calculate dnorm(x, mu, sigma) where mu and sigma come from step 2, and sum up log(apriori) + log(dnorm(...)). At this point, log(dnorm(...)) contains two log-values (in my example), one per class. This is the point Eric Peterson makes in his second paragraph.
4. Calculate the posterior probabilities.

I also compared my results to the R library e1071. My probability results do not line up with theirs for this simple case, though the classification does. In their predict.naiveBayes function, they have something like

    log(apriori) + apply(log(sapply(...compute dnorm code here...)), 1, sum)

which returns log(apriori) + log(1) = log(apriori), which is an error, so their classification is based solely on the a priori probabilities (actually, they use counts, not the probabilities). Anyway, I hope this helps you (and anyone else) see what's under the hood, as it was not clear to me either.

    n = 30  ## samples
    set.seed(123)
    x = c(rnorm(n/2, 10, 2), rnorm(n/2, 0, 2))
    ## note: the first n/2 = 15 draws come from N(10, 2),
    ## but the labels below mark the first 20 points as class 0
    y = as.factor(c(rep(0, 20), rep(1, 10)))
    y

    #library(e1071)
    #nb = naiveBayes(x, y, laplace = 0)
    #nb
    #nb_predictions = predict(nb, x[1], type='raw')
    #nb_predictions

    library(dplyr)

    nbc <- function(x, y){
      df <- as.data.frame(cbind(x, y))
      a_priori <- table(y)  #/length(y)
      cond_probs <- df %>% group_by(y) %>% summarise(means = mean(x), var = sd(x))
      print("A Priori Probabilities")
      print(a_priori/sum(a_priori))
      print("conditional probabilities \n")
      print(cond_probs)
      return(list(apriori = a_priori, tables = cond_probs))
    }

    predict_nbc <- function(model, new_x){
      apriori = as.matrix(model$apriori)
      a = log(apriori/sum(apriori))
      msd = as.matrix(model$tables)[, c(2, 3)]  ## drop the first (class-label) column
      probs = sapply(new_x, function(v) dnorm(x = v, mean = msd[,1], sd = msd[,2]))
      b = log(probs)
      #L = a + b  ## works for 1 new obs
      L = apply(X = b, MARGIN = 2, FUN = function(v) a + v)
      results <- apply(X = L, MARGIN = 2, function(x){
        sapply(x, function(lp){
          1/sum(exp(x - lp))  ## numerically stable normalization
        })
      })
      return(results)
    }

    fit = nbc(x, y)
    fit  ## my naive Bayes classifier model
    myres = predict_nbc(fit, new_x = x[1:4])
    myres
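The answer mentions moving the code to Python but doesn't show it; a minimal translation of the same Gaussian steps might look like the sketch below (function names and toy data are my own, not the author's Python version):

```python
import math

def fit_gaussian_nb(x, y):
    """Per-class prior, mean, and sample sd for a single continuous feature."""
    model = {}
    for cls in set(y):
        vals = [xi for xi, yi in zip(x, y) if yi == cls]
        m = sum(vals) / len(vals)
        sd = math.sqrt(sum((v - m) ** 2 for v in vals) / (len(vals) - 1))
        model[cls] = (len(vals) / len(y), m, sd)
    return model

def predict_gaussian_nb(model, new_x):
    """Posterior class probabilities, normalized stably on the log scale."""
    logs = {}
    for cls, (prior, m, sd) in model.items():
        # log of the normal density at new_x (the dnorm(...) of step 3)
        log_dens = -math.log(sd * math.sqrt(2 * math.pi)) - (new_x - m) ** 2 / (2 * sd ** 2)
        logs[cls] = math.log(prior) + log_dens
    mx = max(logs.values())
    norm = sum(math.exp(v - mx) for v in logs.values())
    return {cls: math.exp(v - mx) / norm for cls, v in logs.items()}

# Toy setup in the spirit of the R code: class 0 centred near 10, class 1 near 0
x = [9.5, 10.2, 11.0, 8.8, -0.3, 0.4, 1.1, -1.0]
y = [0, 0, 0, 0, 1, 1, 1, 1]
model = fit_gaussian_nb(x, y)
post = predict_gaussian_nb(model, 10.0)   # a point near the class-0 mean
```

The log-space normalization mirrors the `1/sum(exp(x - lp))` trick in the R `predict_nbc` above.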
29,409
Is there a general definition of the effect size?
I don't think there can be a general and precise answer. There can be general answers that are loose, and specific answers that are precise. Most generally (and most loosely), an effect size is a statistical measure of how big some relationship or difference is. In regression-type problems, one type of effect size is a measure of how much of the dependent variable's variance is accounted for by the model. But this is only precisely answerable (AFAIK) in OLS regression - by $R^2$. There are "pseudo-$R^2$" measures for other regression types. There are also effect size measures for individual independent variables - these are the parameter estimates (and transformations of them). In a t-test, a good effect size is the standardized difference of the means (this also works in ANOVA, and may work in regression if we pick particular values of the independent variables), and so on. There are whole books on the subject; I used to have one, and I believe the book by Ellis is an updated version of it (the title sounds familiar)
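To make the two examples concrete, here is a quick sketch of how those effect sizes are computed (the standardized mean difference with a pooled standard deviation, commonly called Cohen's d, and $R^2$ as the proportion of variance accounted for):

```python
import math

def cohens_d(a, b):
    """Standardized difference of means, using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / sp

def r_squared(y, y_hat):
    """Proportion of the dependent variable's variance accounted for by the model."""
    ybar = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot
```

A perfect fit gives $R^2 = 1$; predicting the mean for every observation gives $R^2 = 0$.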
Interpreting time series decomposition using TBATS from R forecast package
In the user comments on this page, somebody asks about the interpretation of the level and slope, and also how to get the trend and residuals that the decompose() function provides. Hyndman remarks that there isn't a straight translation, as decompose() and tbats() use different models. But if your TBATS model doesn't have a Box-Cox transformation, then the TBATS level is roughly the same as the decompose() trend. If on the other hand the model does apply the Box-Cox transformation, then you have to undo the transformation before interpreting the level as (roughly) the trend. At least that's how I interpret his response. As for residuals and slope, they're not the same. You can think of a basic decomposition as having a trend component, a seasonal component, and a residual component. You can break the trend down further into a level and a slope. The level is essentially a baseline for the trend, and the slope is the change per unit time. The reason for breaking the trend down into a level and a slope is that some models support damped growth. Maybe you observe current growth, but you expect growth to diminish gradually over time, and you want your forecasts to reflect that expectation. The model supports this by allowing you to damp growth by applying a damping factor to the slope, making it converge toward zero, which means that the trend converges toward its level component. There's not a straightforward answer to the question of how the level and slope combine to yield the trend. It depends on the type of model you are using. As a general statement, additive trend models combine them in an additive fashion and multiplicative trend models combine them in a multiplicative way. The damped variants of models combine the level with a damped slope. Hyndman's Forecasting with Exponential Smoothing book (hope it's ok to include the Amazon link; I have no affiliation whatsoever with the author) provides the exact equations on a per-model basis in Table 2.1.
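To make the level/slope/damping relationship concrete, here is how an additive damped trend combines them at forecast horizon h. This is the generic damped-trend (Holt-style) forecast form, not TBATS's full state space:

```python
def damped_trend_forecast(level, slope, phi, h):
    """Additive damped trend: y_hat(h) = level + (phi + phi^2 + ... + phi^h) * slope.
    With phi = 1 this reduces to a plain linear trend (level + h * slope);
    with phi < 1 the forecast converges toward level + slope * phi / (1 - phi)
    as h grows, i.e. growth flattens out toward the level."""
    return level + sum(phi ** i for i in range(1, h + 1)) * slope
```

For example, with level 100, slope 2, and phi = 0.8, long-horizon forecasts converge toward 100 + 2 * 0.8 / 0.2 = 108 instead of growing without bound.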
How to specify a contrast matrix (in R) for the difference between one level and an average of the others?
That comparison of one level with the mean of all later levels is (aside from scale) called Helmert coding, or Helmert contrasts. The one you give is the first contrast; the other would be a scaled version of $(0, 1, -1)^\top$. What R calls Helmert coding is elsewhere called 'reverse Helmert'. They're equivalent up to reversing the order of the levels.
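For a concrete picture, here is a small sketch that builds the contrast columns in the "one level vs. the mean of the later levels" form, scaled so each column is 1 at the compared level and -1/(number of later levels) at the rest (the helper name is my own):

```python
from fractions import Fraction

def helmert_contrasts(k):
    """Contrast columns j = 1..k-1 for a k-level factor.
    Column j: 1 at level j, -1/(number of later levels) at each later level,
    0 at earlier levels. For k = 3 this gives (1, -1/2, -1/2) and (0, 1, -1)."""
    cols = []
    for j in range(k - 1):
        col = [Fraction(0)] * k
        col[j] = Fraction(1)
        for i in range(j + 1, k):
            col[i] = Fraction(-1, k - j - 1)
        cols.append(col)
    return cols
```

The first column for k = 3 is exactly the $(1, -1/2, -1/2)$ contrast in the question; each column sums to zero, as a contrast must.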
Using EM algorithm for record linking
Absolutely, the EM algorithm has been used for probabilistic linking. There are a lot of articles on the subject, the following by Winkler may be helpful regarding theoretical details: http://www.census.gov.edgekey.net/srd/papers/pdf/rr2000-05.pdf Also there is data linking software developed by Kevin Campbell already available here: http://the-link-king.com/ The software can be freely downloaded & Kevin Campbell offers support for a fee. The code is written in SAS, so you'll need the base SAS package.
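To give a flavor of how EM fits in, here is a toy sketch of the Fellegi-Sunter setup: each candidate record pair yields a vector of 0/1 field-agreement indicators, assumed conditionally independent given match status, and EM estimates the match proportion p and the per-field agreement probabilities m (among matches) and u (among non-matches). This is my own minimal illustration, not Winkler's or Campbell's implementation:

```python
def em_fellegi_sunter(pairs, k, p=0.5, iters=300):
    """pairs: list of length-k tuples of 0/1 agreement indicators, one per pair.
    Returns (p, m, u): match proportion, P(field agrees | match) per field,
    and P(field agrees | non-match) per field."""
    m = [0.9] * k  # starting guesses: matches usually agree,
    u = [0.1] * k  # non-matches usually don't
    for _ in range(iters):
        # E-step: posterior probability each pair is a true match
        w = []
        for g in pairs:
            lm, lu = p, 1 - p
            for j in range(k):
                lm *= m[j] if g[j] else 1 - m[j]
                lu *= u[j] if g[j] else 1 - u[j]
            w.append(lm / (lm + lu))
        # M-step: re-estimate parameters from the posterior weights
        n, sw = len(pairs), sum(w)
        p = sw / n
        m = [sum(wi * g[j] for wi, g in zip(w, pairs)) / sw for j in range(k)]
        u = [sum((1 - wi) * g[j] for wi, g in zip(w, pairs)) / (n - sw)
             for j in range(k)]
    return p, m, u
```

On simulated data the estimates land close to the generating parameters, which is essentially what the probabilistic linkage software does at scale (plus blocking, string comparators, and so on).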
Using EM algorithm for record linking
We have recently published Splink, a Python/Spark library for record linkage, which includes an implementation of the EM algorithm. You can try it in your browser here. There's more information on the underlying theory here
Using EM algorithm for record linking
There is a software package, RELAIS, that does record linkage with: 6) Probabilistic record linkage (estimation of the Fellegi and Sunter model parameters via EM (Expectation-Maximization)). RELAIS has been implemented in Java and R and has a database architecture (MySQL). There is some more documentation about record linkage available from the ESSnet Data Integration project.
Controlling False Discovery Rate in Stages
There is no universal answer to your question. Global B-H would control the FDR over all the 24516 hypotheses. B-H within each of the 81 sets, will give you FDR control within each slice, but no overall guarantees. If you want both within slice and overall FDR control, have a look at this paper.
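For reference, the global B-H option is short to implement. A sketch of the step-up procedure (reject all hypotheses up to the largest rank i with p_(i) <= (i/m) * q):

```python
def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean rejection list for FDR control at level q.
    Step-up rule: find the largest rank i (in sorted order) with
    p_(i) <= (i / m) * q, then reject the i smallest p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k = rank
    reject = [False] * m
    for i in order[:k]:
        reject[i] = True
    return reject
```

Note the step-up character: a p-value can be rejected even when it fails its own threshold, provided a larger p-value passes a later one. Running this once over all 24516 p-values gives the global control; running it within each of the 81 slices gives the per-slice control described above.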
How to perform Gaussian process regression when function being approximated changes over time?
You could try this method: Predictive Active Set Selection Methods for Gaussian Processes We propose an active set selection framework for Gaussian process classification for cases when the dataset is large enough to render its inference prohibitive. Our scheme consists of a two step alternating procedure of active set update rules and hyperparameter optimization based upon marginal likelihood maximization. The active set update rules rely on the ability of the predictive distributions of a Gaussian process classifier to estimate the relative contribution of a datapoint when being either included or removed from the model.
How to perform Gaussian process regression when function being approximated changes over time?
If you want a fixed budget algorithm, see for e.g., M. Lázaro-Gredilla, S. Van Vaerenbergh and I. Santamaría, "A Bayesian Approach to Tracking with Kernel Recursive Least-Squares", IEEE International Workshop on Machine Learning for Signal Processing (MLSP 2011), Beijing, China, September, 2011.
Seasonally adjusted month-to-month growth with underlying weekly seasonality
I model this kind of data all the time. You need to incorporate: day-of-the-week effects; holiday effects (lead, contemporaneous and lag effects); special days-of-the-month, perhaps a Friday before a holiday or a Monday after a holiday; weekly effects; monthly effects; ARIMA structure to render the errors white noise; et al. The statistical approach is called Transfer Function Modelling with Intervention Detection. If you want to share your data, either privately via dave@autobox.com or preferably via SE, I would be more than glad to actually show you the specifics of a final model and further your ability to do it yourself, or at least to help you and others understand what needs to be done and what can be done. In either case you come out smarter without spending any treasure, be it coin or time. You might read some of my other responses to time series questions to learn more.
Incremental training of Neural Networks
I would suggest you use transfer learning techniques. Basically, they transfer the knowledge in your big, old dataset to your fresh, small dataset. Try reading: A Survey on Transfer Learning, and the algorithm TrAdaBoost.
Incremental training of Neural Networks
This branch of ML is referred to as Continual Learning (or Incremental Learning). It is effectively multi-task transfer learning, where the tasks come sequentially, and we wish to use an architecture which does not "catastrophically forget" the information from the previous tasks. See the following paper: Continual Lifelong Learning with Neural Networks: A Review (2019) And the following related questions: https://ai.stackexchange.com/questions/14047/what-are-the-state-of-the-art-approaches-for-continual-learning-with-neural-netw https://ai.stackexchange.com/questions/23567/is-continuous-learning-possible-with-a-deep-convolutional-neural-network-withou/24529#24529
What is a main difference between RBF neural networks and SVM with a RBF kernel?
An RBF SVM would be virtually equivalent to an RBF neural net where the weights of the first layer are fixed to the feature values of all the training samples. Only the second-layer weights are tuned by the learning algorithm. This allows the optimization problem to be convex and hence admit a single global solution. The fact that the number of potential hidden nodes can grow with the number of samples makes this hypothetical neural network a non-parametric model (which is usually not the case when we train neural nets: we tend to fix the architecture in advance as a hyper-parameter of the algorithm, independently of the number of samples). Of course, in practice SVM implementations do not treat all the samples from the training set as concurrently active support vectors / hidden nodes. Samples are incrementally selected into the active set if they contribute enough, and pruned as soon as they are shadowed by a more recent set of support vectors (thanks to the combined use of a margin loss function such as the hinge loss and a regularizer such as l2). That allows the number of parameters of the model to be kept low enough. Classical RBF neural networks, on the other hand, are trained with a fixed architecture (a fixed number of hidden nodes) but with tunable input-layer parameters. If we allow the neural network to have as many hidden nodes as samples, then the expressive power of such an RBF NN would be much higher than the SVM model's, as the weights of the first layer are tunable; but that comes at the price of a non-convex objective function that can get stuck in local optima, which would prevent the algorithm from converging to good parameter values. Furthermore, this increased expressive power comes with a serious capacity to overfit: reducing the number of hidden nodes can help decrease overfitting. 
To summarize from a practical point of view: RBF neural nets have a higher number of hyper-parameters (the bandwidth of the RBF kernel, the number of hidden nodes, the initialization scheme of the weights, the strengths of the regularizer a.k.a. weight decay for the first and second layers, the learning rate, the momentum), plus the local-optima convergence issues (which may or may not be a problem in practice, depending on the data and the hyper-parameters). An RBF SVM has 2 hyper-parameters to grid search (the bandwidth of the RBF kernel and the strength of the regularizer), and the convergence is independent of the initialization (convex objective function). BTW, both should have scaled features as input (e.g. unit variance scaling).
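The equivalence in the first paragraph can be made concrete: an RBF-SVM decision function is a weighted sum of RBF units centered at the (support) training points, i.e. a one-hidden-layer RBF network whose first-layer "weights" are the training samples themselves. A 1-D sketch (names and toy values are my own):

```python
import math

def rbf(x, c, gamma):
    """One 'hidden unit': an RBF kernel evaluation centered at training point c."""
    return math.exp(-gamma * (x - c) ** 2)

def svm_decision(x, support_x, dual_coef, b, gamma):
    """f(x) = sum_i alpha_i * y_i * K(x, x_i) + b.
    support_x plays the role of the fixed first-layer weights (the centers);
    dual_coef (= alpha_i * y_i) are the learned second-layer weights."""
    return sum(a * rbf(x, c, gamma) for a, c in zip(dual_coef, support_x)) + b
```

With centers at 0 and 4 and dual coefficients +1 and -1, the decision score is positive near 0 and negative near 4; only dual_coef and b are learned, while the centers stay fixed at the training points.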
Using Kolmogorov–Smirnov test
1) The null hypothesis is that the data is distributed according to the theoretical distribution. 2) Let $N$ be your sample size, $D$ be the observed value of the Kolmogorov-Smirnov test statistic, and define $\lambda = D(0.12 + \sqrt{N} + 0.11 / \sqrt{N})$. Then the p-value for the test statistic is approximately: $Q = 2 \sum_{j=1}^{\infty}(-1)^{j-1}\exp\{-2j^2\lambda^2\}$ Obviously you can't calculate the infinite sum, but if you sum over 100 values or so this will get you very, very, very close. This approximation is quite good even for small values of $N$, as low as 5 if I recall correctly, and gets better as $N$ increases. Note, however, that @whuber in comments proposes a better approach. This is a perfectly reasonable alternative to the Shapiro-Wilk test I suggested in answer to your other question, by the way. Shapiro-Wilk is more powerful, but if your sample size is in the high hundreds, the Kolmogorov-Smirnov test will have quite a bit of power too.
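In code, that approximation is (summing the first 100 terms, as suggested; the function names are my own):

```python
import math

def kolmogorov_q(lam, terms=100):
    """Q(lambda) = 2 * sum_{j>=1} (-1)^(j-1) * exp(-2 j^2 lambda^2),
    truncated after `terms` terms."""
    return 2 * sum((-1) ** (j - 1) * math.exp(-2 * j * j * lam * lam)
                   for j in range(1, terms + 1))

def ks_pvalue(d, n):
    """Approximate p-value for the one-sample KS statistic d with sample size n,
    using lambda = D * (0.12 + sqrt(N) + 0.11 / sqrt(N))."""
    lam = d * (0.12 + math.sqrt(n) + 0.11 / math.sqrt(n))
    return min(max(kolmogorov_q(lam), 0.0), 1.0)  # clamp tiny numerical overshoot
```

Larger observed statistics give smaller p-values, as expected; the truncation error after 100 terms is negligible for any lambda of practical interest.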
Using Kolmogorov–Smirnov test
No. The null hypothesis is that the empirical data is distributed according to the theoretical distribution. I'm not familiar with the Java function, but KS test critical values are available online. They are also available in the appendices of statistics books that deal with nonparametric tests. You can compare a few values between the Java function and the table. (Please let us know if it is different.)
Change of variable with non bijective function
The generalization of the change of variable formula to the non-bijective case is generally hard to write out explicitly, check http://en.wikipedia.org/wiki/Probability_density_function#Multiple_variables which essentially formalizes mpiktas's suggestion
29,425
Change of variable with non bijective function
I found this article https://en.wikibooks.org/wiki/Probability/Transformation_of_Probability_Densities excellent!
29,426
"Normalized" standard deviation
If all your measurements are using the same units, then you've already addressed the scale problem; what's bugging you is degrees of freedom and precision of your estimates of standard deviation. If you recast your problem as comparing variances, then there are plenty of standard tests available. For two independent samples, you can use the F test; its null distribution follows the (surprise) F distribution which is indexed by degrees of freedom, so it implicitly adjusts for what you're calling a scale problem. If you're comparing more than two samples, either Bartlett's or Levene's test might be suitable. Of course, these have the same problem as one-way ANOVA, they don't tell you which variances differ significantly. However, if, say, Bartlett's test did identify inhomogeneous variances, you could do follow-up pairwise comparisons with the F test and make a Bonferroni adjustment to maintain your experimentwise Type I error (alpha). You can get details for all of this stuff in the NIST/SEMATECH e-Handbook of Statistical Methods.
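As a sketch of the workflow described above (using SciPy rather than the handbook's tables; the three samples here are made up, with the middle one given a larger spread):

```python
import random
import statistics
from scipy import stats

rnd = random.Random(0)
a = [rnd.gauss(0.0, 1.0) for _ in range(200)]  # sd = 1
b = [rnd.gauss(0.0, 3.0) for _ in range(200)]  # sd = 3
c = [rnd.gauss(0.0, 1.0) for _ in range(200)]  # sd = 1

# Bartlett's test across all three groups (assumes normality)
bartlett_stat, p_bartlett = stats.bartlett(a, b, c)

# Levene's test, more robust to departures from normality
levene_stat, p_levene = stats.levene(a, b, c)

# Follow-up pairwise F test for a vs b: ratio of sample variances vs. F(n1-1, n2-1)
F = statistics.variance(a) / statistics.variance(b)
p_f = 2 * min(stats.f.cdf(F, len(a) - 1, len(b) - 1),
              stats.f.sf(F, len(a) - 1, len(b) - 1))  # two-sided
# Bonferroni: with 3 pairwise comparisons, compare p_f against alpha / 3
```

With these (deliberately inhomogeneous) samples, all three p-values come out tiny, so each test flags the unequal variances.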
29,427
"Normalized" standard deviation
How about using the mean of the absolute value, i.e., $$\sigma_\textrm{normalized}(X) = \frac{\sigma(X)}{\mathbb{E}[|X|]}$$?
29,428
"Normalized" standard deviation
Harvey: You're absolutely right that the F and Bartlett's tests won't work to compare raw data with smoothed data! Once the data has been smoothed, there's all manner of autocorrelation in there, and the testing becomes much more complicated. Better to compare separate--and hopefully independent--sequences.
29,429
Longitudinal comparison of two distributions
This is not a complete answer but I hope it gives you some ideas as to how to model the situation in a coherent manner. Assumptions The values at the lower end of the scale follow a normal distribution truncated from below. The values at the upper end of the scale follow a normal distribution truncated from above. (Note: I know that you said that the data is not normal but I am assuming that you are referring to the distribution of all the values whereas the above assumptions pertain to the values at the lower and the upper end of the scale.) A person's underlying state (whether they have TB or not) follows a first-order Markov chain. Model Let: $D_i(t)$ be 1 if at time $t$ the $i^\mbox{th}$ person has TB and 0 otherwise, $RTB_i(t)$ be the test response to the TB test at time $t$ of the $i^\mbox{th}$ person, $RN_i(t)$ be the test response to the NILL test at time $t$ of the $i^\mbox{th}$ person, $f(RN_i(t) | D_i(t)=0) \sim N(\mu_l,\sigma_l^2) I(RN_i(t) > R_l)$ $f(RN_i(t) | D_i(t)=1) \sim N(\mu_l,\sigma_l^2) I(RN_i(t) > R_l)$ Points 4 and 5 capture the idea that a person's response to the NILL test is not dependent on disease status. $f(RTB_i(t) | D_i(t)=0) \sim N(\mu_l,\sigma_l^2) I(RTB_i(t) > R_l)$ $f(RTB_i(t) | D_i(t)=1) \sim N(\mu_u,\sigma_u^2) I(RTB_i(t) < R_u)$ $\mu_u > \mu_l$ Points 6, 7 and 8 capture the idea that a person's response to the TB test is dependent on disease status. $p(t)$ be the probability that a person catches TB during the 6 months preceding time $t$ given that they were disease free during the previous test period.
Thus, the state transition matrix would look like the one below: $\begin{bmatrix} 1-p(t) & p(t) \\ 0 & 1 \end{bmatrix}$ In other words, $Prob(D_i(t)=1 | D_i(t-1) = 0) = p(t)$ $Prob(D_i(t)=0 | D_i(t-1) = 0) = 1-p(t)$ $Prob(D_i(t)=1 | D_i(t-1) = 1) = 1$ $Prob(D_i(t)=0 | D_i(t-1) = 1) = 0$ Your test criterion states that: $\hat{D}_i(t) = \begin{cases} 1, & RTB_i(t) - RN_i(t) \ge 0.35 \\ 0, & otherwise \end{cases}$ However, as you see from the structure of the model you can actually parameterize the cut-offs and change the whole problem to that of what your cut-offs should be to accurately diagnose patients. Thus, the wobbler problem seems to be more an issue with your choice of cut-offs than anything else. In order to choose the 'right' cut-offs, you can take historical data about patients definitively identified as having TB and estimate the resulting parameters of the above setup. You could use some criterion such as the number of patients correctly classified as having TB or not as a metric to identify the 'best' model. For simplicity, you could assume $p(t)$ to be a time-invariant parameter, which seems reasonable in the absence of epidemics etc. Hope that is useful.
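To make the transition structure concrete, here is a small simulation of the disease-state chain with a time-invariant $p$ (the value $p = 0.1$ and the period counts are my own arbitrary choices):

```python
import random

def simulate_disease(p, n_periods, n_people, seed=0):
    """Simulate the absorbing two-state chain: a healthy person becomes infected
    with probability p each test period; an infected person stays infected
    (transition matrix [[1-p, p], [0, 1]])."""
    rnd = random.Random(seed)
    infected = 0
    for _ in range(n_people):
        state = 0  # start disease-free
        for _ in range(n_periods):
            if state == 0 and rnd.random() < p:
                state = 1
        infected += state
    return infected / n_people

# P(still healthy after 6 periods) = (1 - p)^6, so about 53% healthy for p = 0.1
frac_infected = simulate_disease(p=0.1, n_periods=6, n_people=10_000)
```

The simulated infected fraction should sit close to the closed-form value $1 - (1-p)^6 \approx 0.47$.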
29,430
Longitudinal comparison of two distributions
Tricky Matt, as many real-world stats problems are! I would start by defining your study aims/objectives. Without knowing the true status of the subjects it will be hard to define the probability distributions for the TB+ and TB- test. Do you have questionnaires regarding previous TB infection (or better, medical histories)? Also I still test TB+ due to an immunisation in childhood - several decades ago - so previous immunisations need to be considered. It seems to me your intrinsic question is: Does repeated TB testing affect test outcome? It would be worth getting a copy of Peter Diggle's Analysis of Longitudinal Data. Do some exploratory data analysis, particularly scatter plot matrices of the nil-test results at each time versus each other, and the TB test results at each time versus each other; and the TB vs nil scatter plots (at each time). Also take the differences (TB test - Nil test) and do the scatter plot matrices. Try transformations of the data and redo these - I imagine log(TB) - log(Nil) may help if the TB results are very large relative to Nil. Look for linear relations in the correlation structure. Another approach would be to take the defined test result (positive/negative) and model this longitudinally using a non-linear mixed effects model (logit link). Do some individuals flip between testing TB+ and TB-, and is this related to their Nil test, TB test, TB - Nil or some transformation of test results?
29,431
Kernel bandwidth in Kernel density estimation
One place to start would be Silverman's nearest-neighbor estimator, but to add in the weights somehow. (I am not sure exactly what your weights are for here.) The nearest neighbor method can evidently be formulated in terms of distances. I believe your first and second nearest neighbor method are versions of the nearest-neighbor method, but without a kernel function, and with a small value of $k$.
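A rough pure-Python sketch of a kth-nearest-neighbour adaptive-bandwidth KDE in one dimension (the Gaussian kernel and the particular choice of $k$ are my own illustration, not Silverman's exact estimator, and there is no weighting here):

```python
import math

def knn_kde(x, data, k=3):
    """Adaptive-bandwidth KDE: point i's bandwidth h_i is the distance to its
    k-th nearest neighbour, plugged into a Gaussian kernel."""
    n = len(data)
    density = 0.0
    for xi in data:
        dists = sorted(abs(xi - xj) for xj in data)  # dists[0] == 0 (self)
        h = max(dists[min(k, n - 1)], 1e-9)          # k-th nearest neighbour
        u = (x - xi) / h
        density += math.exp(-0.5 * u * u) / (h * math.sqrt(2.0 * math.pi))
    return density / n

# Two clusters of points; the estimate should be high near a cluster, low between
points = [-0.1, 0.0, 0.1, 0.2, 9.9, 10.0, 10.1]
```

Points in dense regions get small bandwidths (sharp local detail), points in sparse regions get wide ones, which is the usual motivation for this family of estimators.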
29,432
Kernel bandwidth in Kernel density estimation
On Matlab File Exchange, there is a kde function that provides the optimal bandwidth with the assumption that a Gaussian kernel is used: Kernel Density Estimator. Even if you don't use Matlab, you can parse through this code for its method of calculating the optimal bandwidth. This is a highly rated function on file exchange and I have used it many times.
29,433
What is a sepset in a probabilistic graphical model?
The term sepset is used in connection with cluster graphs. A cluster graph is a graph with nodes $C$ including a subset of variables $\{X_1, \dots, X_n\}$. A sepset $S_{ij}$ is the subset of variables between nodes $C_i$ and $C_j$ that are in the intersection of the scopes of both nodes (scope simply means the list of variables node $C$ depends on). I.e., $S_{ij} \subseteq (Scope(C_i) \bigcap Scope(C_j)).$ If $C_i = \phi(A, B, C)$ and $C_j = \phi(B, C, D)$, then possible sepsets are: $S_{ij}^1 = \{\}$ - this means there is no edge between $C_i, C_j$ $S_{ij}^2 = \{B\}$ $S_{ij}^3 = \{C\}$ $S_{ij}^4 = \{B,C\}$ The relevance of sepsets is that they determine e.g. in belief propagation whether a node $C_i$ sends a message to $C_j$ about a given variable (they only send a message containing information about a variable if it is in $S_{ij}$).
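The containment condition is just a set intersection, so the example above can be checked with plain Python sets (the helper name is my own):

```python
# Scopes of the two clusters from the example above
scope_i = {"A", "B", "C"}   # C_i = phi(A, B, C)
scope_j = {"B", "C", "D"}   # C_j = phi(B, C, D)

intersection = scope_i & scope_j  # the largest possible sepset

def is_valid_sepset(sepset, scope_i, scope_j):
    """A sepset must be a subset of the intersection of the two scopes."""
    return sepset <= (scope_i & scope_j)

# Enumerate the candidates from the answer; {"A"} should be rejected
candidates = [set(), {"B"}, {"C"}, {"B", "C"}, {"A"}]
valid = [s for s in candidates if is_valid_sepset(s, scope_i, scope_j)]
```

The four valid candidates match $S_{ij}^1$ through $S_{ij}^4$ in the answer, while $\{A\}$ fails because $A$ is not in both scopes.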
29,434
Odds ratio vs probability ratio
I think the reason that OR is far more common than PR comes down to the standard ways in which different types of quantity are typically transformed. When working with normal quantities, like temperature, height, weight, then the standard assumption is that they are approximately Normal. When you take contrasts between these sorts of quantities, then a good thing to do is take the difference. Equally if you fit a regression model to it you don't expect a systematic change in the variance. When you are working with quantities that are "rate like", that is they are bounded at zero and typically come from calculating things like "number per day", then taking raw differences is awkward. Since the variance of any sample is proportional to the rate, the residuals of any fit to count or rate data won't generally have constant variance. However, if we work with the log of the mean, then the variances will be "stabilized" – that is they add rather than multiply. Thus for rates we typically handle them as the log. Then when you form contrasts you are taking differences of logs, and that is the same as a ratio. When you are working with probability like quantities, or fractions of a cake, then you are now bounded above and below. You now also have an arbitrary choice what you code as 1 and 0 (or more in multi-class models). Differences between probabilities are invariant to switching 1 to 0, but have the problem of rates that the variance changes with the mean again. Logging them wouldn't give you invariance for 1s and 0s, so instead we tend to logit them (log-odds). Working with log-odds you are now back on the full real line, the variance is the same all along the line, and differences of log-odds behave a bit like normal quantities.
Gaussian: variance does not depend on $\mu$; canonical link for GLM is $x$; transformation not helpful.
Poisson: variance is proportional to the rate $\lambda$; canonical link for GLM is $\ln(x)$; logging should result in residuals of constant variance.
Binomial: variance is proportional to $p(1-p)$; canonical link for GLM is the logit $\ln\left(\frac{p}{1-p}\right)$; taking the logit (log-odds) of the data should result in residuals of constant variance.
So I think that the reason you see lots of RR, but very little PR is that PR is constructed from probability/Binomial type quantities, while RR is constructed from rate type quantities. In particular note that incidence can exceed 100% if people can catch the disease multiple times per year, but probability can never exceed 100%. Is odds the only way? No, the general messages above are just useful rules of thumb, and these "canonical" forms are just convenient mathematically – hence why you tend to see it most. The probit function is used instead for probit regression, so in principle differences of probit would be just as valid as OR. Similarly, despite best efforts to word it carefully, the text above still sort of suggests that logging and logiting your raw data, and then fitting a model to it is a good idea – it's not a terrible idea, but there are better things that you can do (GLM etc.).
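Two identities used in this answer can be checked numerically: a difference of logs equals the log of the ratio, and a log-odds contrast merely flips sign when the 0/1 coding is swapped (the particular numbers below are arbitrary):

```python
import math

def logit(p):
    """Log-odds of a probability p."""
    return math.log(p / (1 - p))

# Differences of logs are log-ratios (the rate case)
rate_a, rate_b = 3.0, 1.5
diff_of_logs = math.log(rate_a) - math.log(rate_b)
log_of_ratio = math.log(rate_a / rate_b)

# Log-odds contrasts only change sign when 1s and 0s are swapped (p -> 1 - p),
# so the magnitude of the contrast is invariant to the arbitrary coding
p1, p2 = 0.7, 0.4
contrast = logit(p1) - logit(p2)
contrast_swapped = logit(1 - p1) - logit(1 - p2)
```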
29,435
Odds ratio vs probability ratio
Underlying models for probabilities Odds relate well to logistic models $$p = \frac{1}{1 + e^{-(a+bx)}}$$ Probability relates well to exponential models $$p = e^{a+bx}$$ Comparison Let's see how the curves of these models compare to each other. For small values of $p$ the difference between odds and probability is not so large. It is at larger values of $p$ that the $(1-p)$ term in the denominator of the odds expression becomes important. The log probabilities are linear for the exponential model. The log odds are linear for the logistic model. The linearity for log-odds means that the change in the odds ratio is constant for a change in $x$. If the probability follows a logistic model, then $\frac{odds(x)}{odds(x+\Delta)}$ is independent of $x$ and depends only on the size of the change $\Delta$. Thus with logistic models, a change of the parameter $x$ by some step $\Delta$ means the same change in log-odds creates the same odds-ratio, independent of $x$. Why odds Logistic models are more typical (or related shapes like logit models). This makes comparisons with differences in log-odds (or equivalent ratios of odds) an intuitive way to express changes. But for small probabilities, the odds ratios and probability ratios are very similar. $$ \frac{odds(x)}{odds(y)} = \frac{p_x/(1-p_x)}{p_y/(1-p_y)} = \frac{p_x}{p_y} \frac{1-p_y}{1-p_x} \approx \frac{p_x}{p_y} $$
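Both claims can be verified numerically; a sketch with arbitrary coefficients $a = -2$, $b = 0.5$ of my own choosing:

```python
import math

def logistic_p(x, a=-2.0, b=0.5):
    """p under the logistic model p = 1 / (1 + exp(-(a + b x)))."""
    return 1.0 / (1.0 + math.exp(-(a + b * x)))

def odds(p):
    return p / (1 - p)

# 1) The odds ratio for a step of Delta in x is exp(b * Delta), independent of x
delta = 1.0
or_low = odds(logistic_p(0.0 + delta)) / odds(logistic_p(0.0))
or_high = odds(logistic_p(5.0 + delta)) / odds(logistic_p(5.0))
# both equal exp(b * delta) = exp(0.5)

# 2) For small probabilities the odds ratio is close to the probability ratio
or_small = odds(0.02) / odds(0.01)  # = 2 * (0.99 / 0.98), only ~1% above 2
pr_small = 0.02 / 0.01              # = 2
```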
29,436
Transforming a distribution into another one?
As pointed out in the comment, the question makes more sense if we consider converting random variables. As for your statement about transforming into normal distribution, I hope you're referring to something like Box-Cox (which is an approximate method), not feature normalization with $(x-\mu)/\sigma$, since that does not transform to a normal distribution. A standard way for converting a RV from some distribution into another is the inverse CDF method. Typically, many random number generators use this method to convert the uniform distribution into an arbitrary one. Sometimes, this might not be enough since we can't get an analytical inverse of $F(x)$, as with the normal RV, and other methods exist, e.g. Box-Muller for uniform(s) to normal(s) conversion. When the transforms can be inverted, we can first convert $X_1$ to $U$ (via the CDF of $X_1$), then $U$ to $X_2$ (via the inverse CDF of $X_2$).
29,437
Automated ML vs the entire replicability/reproducibility crisis
I agree with Alex R's comments, and I'm expanding them into a full answer. I'll be talking about "black box" models in this answer, by which I mean machine learning (ML) models whose internal implementations are either not known or not understood. Using some sort of "Auto ML" framework would produce a black box model. More generally, many people would consider hard-to-interpret methods such as deep learning and large ensembles as black boxes. It's certainly possible that people could use black boxes in a statistically unrigorous way, but I think the question somewhat misunderstands what I believe to be the typical use case. Are your model's components important, or just its outputs? In many fields, we use regression techniques as a way to try to understand the world. Having a super accurate prediction is not the main goal. Usually the goal is more explanatory, e.g. trying to see the effect dosage has on survival rates. Here, getting rigorous, un-hacked measures of significance (p-values) for the components of your model (e.g. your coefficients/biases) is extremely important. Since the components are what's important, you should not use a black box! But there are also many other areas where the main goal is simply the most "accurate" (substitute accuracy for your favorite performance metric) prediction. In this case, we don't really care about the p-value of specific components of our model. What we should care about is the p-value of our model's performance metric compared to a baseline. This is why you will see people split the data into a training set, a validation set, and a held out test set. That held out test set should be looked at only a very small number of times to avoid p-hacking and/or overfitting. In short, if you care about using the internal components of your model to make statements about our world, then obviously you should know what the internals are and probably not be using hard-to-interpret or even unknown-to-you techniques.
But if all you care about is the output of your model, then make sure you have a robust test set (no overlap with training/validation sets, i.i.d., etc.) that you don't look at too much, and you are likely good to go even if your model is a black box. So there are no reproducibility problems in performance-oriented machine learning? I wanted to be clear about this -- there are definitely reproducibility problems in performance-oriented machine learning. If you train thousands of models and see their performance on the same test set, you are likely getting non-reproducible results. If you take a biased sample for your test set, you are likely getting non-reproducible results. If you have "data leakage", i.e. overlap between your train/validation set and your test set, you are likely getting non-reproducible results. But none of these problems are inherent to the use of black box models. They are the problems of the craftsman, not the tools!
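A toy simulation of the test-set reuse problem described above (all sizes and seeds are arbitrary): on a problem where labels are pure coin flips, picking the best of many random "models" on the same test set makes performance look far above chance, while fresh data reveals chance-level accuracy.

```python
import random

random.seed(1)
n_test = 200
# null problem: binary labels are pure coin flips, so no model can truly beat 50%
test_labels = [random.randint(0, 1) for _ in range(n_test)]

def accuracy(preds, labels):
    return sum(p == l for p, l in zip(preds, labels)) / len(labels)

# "train" 1000 random models and pick the one that looks best on the SAME test set
models = [[random.randint(0, 1) for _ in range(n_test)] for _ in range(1000)]
best = max(models, key=lambda m: accuracy(m, test_labels))
print(accuracy(best, test_labels))   # noticeably above 0.5: pure selection bias

# the same "best" model evaluated on fresh labels drops back to chance
fresh_labels = [random.randint(0, 1) for _ in range(n_test)]
print(accuracy(best, fresh_labels))  # ~ 0.5
```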
29,438
Automated ML vs the entire replicability/reproducibility crisis
Preamble: Models (including those constructed by Auto-ML) can be used for many aims, not just for running tests and p-values. The first issue when investigating reproducibility is to define what exactly you want to do, how you interpret your result, and what you expect to be reproduced; all further considerations depend on that. Now let's assume you are in fact interested in a test/p-value, and let's say AutoML comes up with an in some sense optimal model, chosen from the given data, and then you run a test on the same data based on that model (I do realise that this is not 100% in line with the linked Wikipedia page, but from your posting I guess that you have something like this in mind, and a version of AutoML that does this is at least conceivable; software like this actually exists, I just don't know whether people would call it "AutoML"). The issue here is that the theory behind the p-value assumes that the model and the test were fixed independently of the data. Since the data chose your model, this assumption is violated here. This means that the p-value is technically invalid. In some cases this may be relatively harmless (in case your test is independent or approximately independent of what was done during model selection), but you cannot take this for granted, and as long as you don't know precisely what your AutoML does, there is no way to find out. More generally, if what you do with the selected model is independent of the model selection (for example if it was done on new data, such as prediction quality evaluation on independent data put aside and not used for AutoML), this evaluation is unaffected by the model selection; otherwise it is affected, and can therefore not be trusted to generalise to new data.
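A small simulation of why selection invalidates the naive p-value, using a deliberately simplified stand-in for "AutoML picks the best model" (the numbers $k=10$ candidates and 2000 repetitions are arbitrary): keep the largest of $k$ null test statistics and test it as if it had been pre-specified. The nominal 5% test then rejects far more than 5% of the time.

```python
import math
import random

random.seed(2)

def two_sided_p(z):
    # two-sided p-value for a standard-normal test statistic
    return math.erfc(abs(z) / math.sqrt(2))

k, reps, rejections = 10, 2000, 0
for _ in range(reps):
    # k candidate null test statistics; "model selection" keeps the largest |z|
    zs = [random.gauss(0.0, 1.0) for _ in range(k)]
    z_best = max(zs, key=abs)
    # naive p-value that ignores the selection step
    if two_sided_p(z_best) < 0.05:
        rejections += 1

print(rejections / reps)  # far above the nominal 0.05 (around 1 - 0.95**k)
```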
29,439
Do Stochastic Processes such as the Gaussian Process/Dirichlet Process have densities? If not, how can Bayes rule be applied to them?
A "density" or "likelihood" relates to the Radon-Nikodym theorem in measure theory. As noted by @Xi'an, when you consider a finite set of so-called partial observations of a stochastic process, the likelihood corresponds to the usual notion of derivative w.r.t. the Lebesgue measure. For instance, the likelihood of a Gaussian process observed at a known finite set of indices is that of a Gaussian random vector with its mean an covariance deduced from that of the process, which can both take parameterized forms. In the idealized case where an infinite number of observations is available from a stochastic process, the probability measure is on an infinite-dimensional space, for instance a space of continuous functions if the stochastic process has continuous paths. But nothing exists like a Lebesgue measure on an infinite-dimensional space, hence there is no straightforward definition of the likelihood. For Gaussian processes there are some cases where we can define a likelihood by using the notion of equivalence of Gaussian measures. An important example is provided by Girsanov's theorem, which is widely used in financial math. This defines the likelihood of an Itô diffusion $Y_t$ as the derivative w.r.t the probability distribution of a standard Wiener process $B_t$ defined for $t \geq 0$. A neat math exposition is found in the book by Bernt Øksendal. The (upcoming) book by Särkkä and Solin provides a more intuitive presentation which will help practitioners. A brilliant math exposition on Analysis and Probability on Infinite-Dimensional Spaces by Nate Elderedge is available. Note that the likelihood of a stochastic process that would be completely observed is sometimes called infill likelihood by statisticians.
29,440
On the existence of UMVUE and choice of estimator of $\theta$ in $\mathcal N(\theta,\theta^2)$ population
Update: Consider the estimator $$\hat 0 = \bar{X} - cS$$ where $c$ is given in your post. This is an unbiased estimator of $0$ and will clearly be correlated with the estimator given below (for any value of $a$). Theorem 6.2.25 from C&B shows how to find complete sufficient statistics for the exponential family so long as $$\{(w_1(\theta), \cdots, w_k(\theta))\}$$ contains an open set in $\mathbb R^k$. Unfortunately this distribution yields $w_1(\theta) = \theta^{-2}$ and $w_2(\theta) = \theta^{-1}$, which does NOT form an open set in $\mathbb R^2$ (since $w_1(\theta) = w_2(\theta)^2$). It is because of this that the statistic $(\bar{X}, S^2)$ is not complete for $\theta$, and it is for the same reason that we can construct an unbiased estimator of $0$ that will be correlated with any unbiased estimator of $\theta$ that is based on the sufficient statistics. Another Update: From here, the argument is constructive. It must be the case that there exists another unbiased estimator $\tilde\theta$ such that $Var(\tilde\theta) < Var(\hat\theta)$ for at least one $\theta \in \Theta$. Proof: Suppose that $E(\hat\theta) = \theta$, $E(\hat 0) = 0$ and $Cov(\hat\theta, \hat 0) < 0$ (for some value of $\theta$). Consider a new estimator $$\tilde\theta = \hat\theta + b\hat0$$ This estimator is clearly unbiased, with variance $$Var(\tilde\theta) = Var(\hat\theta) + b^2Var(\hat0) + 2bCov(\hat\theta,\hat0)$$ Let $M(\theta) = \frac{-2Cov(\hat\theta, \hat0)}{Var(\hat0)}$. By assumption, there must exist a $\theta_0$ such that $M(\theta_0) > 0$. If we choose $b \in (0, M(\theta_0))$, then $Var(\tilde\theta) < Var(\hat\theta)$ at $\theta_0$. Therefore $\hat\theta$ cannot be the UMVUE. $\quad \square$ In summary: The fact that $\hat\theta$ is correlated with $\hat0$ (for any choice of $a$) implies that we can construct a new estimator which is better than $\hat\theta$ for at least one point $\theta_0$, contradicting the claim that $\hat\theta$ is uniformly best among unbiased estimators.
Let's look at your idea of linear combinations more closely. $$\hat\theta = a \bar X + (1-a)cS$$ As you point out, $\hat\theta$ is a reasonable estimator since it is based on Sufficient (albeit not complete) statistics. Clearly, this estimator is unbiased, so to compute the MSE we need only compute the variance. \begin{align*} MSE(\hat\theta) &= a^2 Var(\bar{X}) + (1-a)^2 c^2 Var(S) \\ &= \frac{a^2\theta^2}{n} + (1-a)^2 c^2 \left[E(S^2) - E(S)^2\right] \\ &= \frac{a^2\theta^2}{n} + (1-a)^2 c^2 \left[\theta^2 - \theta^2/c^2\right] \\ &= \theta^2\left[\frac{a^2}{n} + (1-a)^2(c^2 - 1)\right] \end{align*} By differentiating, we can find the "optimal $a$" for a given sample size $n$. $$a_{opt}(n) = \frac{c^2 - 1}{1/n + c^2 - 1}$$ where $$c^2 = \frac{n-1}{2}\left(\frac{\Gamma((n-1)/2)}{\Gamma(n/2)}\right)^2$$ A plot of this optimal choice of $a$ is given below. It is somewhat interesting to note that as $n\rightarrow \infty$, we have $a_{opt}\rightarrow \frac{1}{3}$ (confirmed via Wolframalpha). While there is no guarantee that this is the UMVUE, this estimator is the minimum variance estimator of all unbiased linear combinations of the sufficient statistics.
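The optimal-$a$ formula above is easy to check numerically; this sketch evaluates $c^2$ via `math.lgamma` (for numerical stability at large $n$) and confirms $a_{opt}(n)\to 1/3$.

```python
import math

def c_squared(n):
    # c^2 = ((n - 1) / 2) * (Gamma((n - 1) / 2) / Gamma(n / 2))^2, via log-gamma
    return ((n - 1) / 2.0) * math.exp(2.0 * (math.lgamma((n - 1) / 2.0) - math.lgamma(n / 2.0)))

def a_opt(n):
    # a_opt(n) = (c^2 - 1) / (1/n + c^2 - 1)
    k = c_squared(n) - 1.0
    return k / (1.0 / n + k)

for n in (5, 20, 100, 10_000):
    print(n, a_opt(n))  # approaches 1/3 as n grows
```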
29,441
On the existence of UMVUE and choice of estimator of $\theta$ in $\mathcal N(\theta,\theta^2)$ population
Let me directly find the UMVUE of $\theta$. However, I cannot answer "can the BLUE be the UMVUE", as I do not have enough knowledge of BLUE. It can be shown that $$\mathbb{E}(\bar{X})=\mathbb{E}(cS)=\theta,$$ where $c$ is a constant chosen so that $c\,\mathbb{E}(S)=\theta$, i.e. $c=1/\mathbb{E}\left[\frac{S}{\theta}\right]$, with $S^2=\sum_{i=1}^n(X_i-\bar{X})^2$. Thus, the totality of unbiased estimators of $\theta$ is given by $$\mathcal{U}(\theta)=\{a\bar{X}+(1-a)cS, a\in\mathbb{R}\},$$ following Lemma 1.4 from Lehmann and Casella's Theory of Point Estimation, 2nd edition. Based on the definition of the UMVUE, our next goal is to find the estimator $\delta_a(X)\in\mathcal{U}(\theta)$ whose variance is minimized, i.e. $$\text{Var}[\delta_a(X)]=\min\{\text{Var}[\delta(X)]:\delta(X)\in\mathcal{U}(\theta)\},$$ which is equivalent to finding $a$ so that $$a^*=\underset{a\in\mathbb{R}}{\arg\min}\,\text{Var}[a\bar{X}+(1-a)cS].$$ Since $\bar{X}$ is independent of $S$, and $\text{Var}(S)=\mathbb{E}(S^2)-\mathbb{E}(S)^2=(n-1)\theta^2-\theta^2/c^2$, we can show $$\text{Var}[a\bar{X}+(1-a)cS]=a^2\frac{\theta^2}{n}+(1-a)^2\theta^2\left[c^2(n-1)-1\right].$$ Taking the derivative, setting it to zero and solving for $a^*$, we get $$a^*=\frac{c^2(n-1)-1}{\frac{1}{n}+c^2(n-1)-1},$$ which means $a^*\bar{X}+(1-a^*)cS$ is the UMVUE of $\theta$. ---- Update ---- Lemma 1.4 If $\delta_0$ is any unbiased estimator of $g(\theta)$, the totality of unbiased estimators is given by $\delta=\delta_0+U$, where $U$ is any unbiased estimator of zero, that is, it satisfies $$E_\theta(U)=0, \forall \theta\in\Theta.$$ In our case, $U$ can be $\{a\bar{X}-acS,a\in\mathbb{R}\}$ and $\delta_0$ can be $cS$. The totality of unbiased estimators is thus $\mathcal{U}(\theta)=\{a\bar{X}+(1-a)cS, a\in\mathbb{R}\}$, and the UMVUE (if it exists) must live in $\mathcal{U}(\theta)$.
29,442
Is the OLS estimator the UMVUE (assuming Normality)?
Under the assumptions $$ \begin{align} &\mathbf{y} = \mathbf{X} \mathbf{b} + \mathbf{e}, \;\mathbf X \;\text{full column rank},\\ &\mathbf e \mid \mathbf{X} \sim \mathop{\mathcal{N}}\left(\mathbf 0,\sigma^2\mathbf{I}\right),\;\sigma^2 \in \mathbb{R}_{>0} \end{align} $$ the OLS estimator $\hat{\mathbf{b}}=\left(\mathbf X^{\mathsf T}\mathbf X\right)^{-1}\mathbf X^{\mathsf T}\mathbf y$ is the UMVUE of $\mathbf b$. This is clear from the facts that $\hat{\mathbf b}$ is unbiased and that $\mathop{\mathbb{V}}\left(\hat{\mathbf b}\right) = \sigma^2 \left(\mathbf X^{\mathsf T}\mathbf X\right)^{-1}$ is the inverse expected Fisher information of $\mathbf b$, i.e., $\hat{\mathbf b}$ attains the Cramér–Rao lower bound. This result is in a sense more general than the Gauss–Markov theorem in that it's not restricted to linear estimators. On the other hand it's about linear regression with i.i.d. normal errors only. Interestingly, under the more general assumptions $$ \begin{align} &\mathbf{y} = \mathbf{X} \mathbf{b} + \mathbf{e}, \;\mathbf X \;\text{full column rank},\\ &\mathbf e = \left(e_1,\ldots, e_n\right)^\mathsf{T},\\ &e_1,\ldots, e_n \mid \mathbf X \overset{\text{(c.)i.i.d.}}{\sim} \left(0, \sigma^2\right),\;\sigma^2 \in \mathbb{R}_{>0}, \end{align} $$ the OLS estimator $\hat{\mathbf{b}}=\left(\mathbf X^{\mathsf T}\mathbf X\right)^{-1}\mathbf X^{\mathsf T}\mathbf y$ is also the UMVUE of $\mathbf b$ if $\hat{\mathbf{b}}$ is unbiased for all regression models that satisfy $$ \begin{align} &\mathbf{y} = \mathbf{X} \mathbf{b} + \mathbf{e}, \;\mathbf X \;\text{full column rank},\\ &\mathop{\mathbb{E}}\left(\mathbf e\mid \mathbf{X}\right)=\mathbf 0,\\ &\mathbf e = \left(e_1,\ldots, e_n\right)^\mathsf{T},\\ &e_1,\ldots, e_n \mid \mathbf X \;\text{(conditionally) independent},\\ &\mathop{\mathbb{V}}\left(\mathbf e \mid \mathbf{X}\right)=\mathop{\mathrm{diag}}\left(\sigma^2_1,\ldots,\sigma^2_n\right),\; \sigma^2_i \in \mathbb{R}_{>0},\\ \end{align} $$ i.e., for all linear 
regression models with independent and homo- or heteroscedastic errors (in particular, not only for the data-generating class of linear regression models with independent and homoscedastic errors). References Hansen, B. E. (2022). A modern Gauss–Markov theorem. Econometrica, 90(3), 1283–1294. Pötscher, B. M., & Preinerstorfer, D. (2022). A Modern Gauss-Markov Theorem? Really?. arXiv:2203.01425v3
Is the OLS estimator the UMVUE (assuming Normality)?
Under the assumptions $$ \begin{align} &\mathbf{y} = \mathbf{X} \mathbf{b} + \mathbf{e}, \;\mathbf X \;\text{full column rank},\\ &\mathbf e \mid \mathbf{X} \sim \mathop{\mathcal{N}}\left(\mathbf 0,\
Is the OLS estimator the UMVUE (assuming Normality)? Under the assumptions $$ \begin{align} &\mathbf{y} = \mathbf{X} \mathbf{b} + \mathbf{e}, \;\mathbf X \;\text{full column rank},\\ &\mathbf e \mid \mathbf{X} \sim \mathop{\mathcal{N}}\left(\mathbf 0,\sigma^2\mathbf{I}\right),\;\sigma^2 \in \mathbb{R}_{>0} \end{align} $$ the OLS estimator $\hat{\mathbf{b}}=\left(\mathbf X^{\mathsf T}\mathbf X\right)^{-1}\mathbf X^{\mathsf T}\mathbf y$ is the UMVUE of $\mathbf b$. This is clear from the facts that $\hat{\mathbf b}$ is unbiased and that $\mathop{\mathbb{V}}\left(\hat{\mathbf b}\right) = \sigma^2 \left(\mathbf X^{\mathsf T}\mathbf X\right)^{-1}$ is the inverse expected Fisher information of $\mathbf b$, i.e., $\hat{\mathbf b}$ attains the Cramér–Rao lower bound. This result is in a sense more general than the Gauss–Markov theorem in that it's not restricted to linear estimators. On the other hand it's about linear regression with i.i.d. normal errors only. Interestingly, under the more general assumptions $$ \begin{align} &\mathbf{y} = \mathbf{X} \mathbf{b} + \mathbf{e}, \;\mathbf X \;\text{full column rank},\\ &\mathbf e = \left(e_1,\ldots, e_n\right)^\mathsf{T},\\ &e_1,\ldots, e_n \mid \mathbf X \overset{\text{(c.)i.i.d.}}{\sim} \left(0, \sigma^2\right),\;\sigma^2 \in \mathbb{R}_{>0}, \end{align} $$ the OLS estimator $\hat{\mathbf{b}}=\left(\mathbf X^{\mathsf T}\mathbf X\right)^{-1}\mathbf X^{\mathsf T}\mathbf y$ is also the UMVUE of $\mathbf b$ if $\hat{\mathbf{b}}$ is unbiased for all regression models that satisfy $$ \begin{align} &\mathbf{y} = \mathbf{X} \mathbf{b} + \mathbf{e}, \;\mathbf X \;\text{full column rank},\\ &\mathop{\mathbb{E}}\left(\mathbf e\mid \mathbf{X}\right)=\mathbf 0,\\ &\mathbf e = \left(e_1,\ldots, e_n\right)^\mathsf{T},\\ &e_1,\ldots, e_n \mid \mathbf X \;\text{(conditionally) independent},\\ &\mathop{\mathbb{V}}\left(\mathbf e \mid \mathbf{X}\right)=\mathop{\mathrm{diag}}\left(\sigma^2_1,\ldots,\sigma^2_n\right),\; \sigma^2_i \in 
\mathbb{R}_{>0},\\ \end{align} $$ i.e., for all linear regression models with independent and homo- or heteroscedastic errors (in particular, not only for the data-generating class of linear regression models with independent and homoscedastic errors). References Hansen, B. E. (2022). A modern Gauss–Markov theorem. Econometrica, 90(3), 1283–1294. Pötscher, B. M., & Preinerstorfer, D. (2022). A Modern Gauss-Markov Theorem? Really?. arXiv:2203.01425v3
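The claim that $\hat{\mathbf b}$ attains the Cramér–Rao bound can be checked by simulation. Below is a quick pure-Python sketch (the data-generating values and variable names are my own, not from the answer): for a simple regression $y_i = b_0 + b_1 x_i + e_i$ with normal errors, the empirical variance of $\hat b_1$ over many replications should match $\sigma^2/\sum_i (x_i-\bar x)^2$, the corresponding entry of $\sigma^2\left(\mathbf X^{\mathsf T}\mathbf X\right)^{-1}$.

```python
import random

random.seed(0)

# Fixed design: intercept plus one regressor, so Var(b1_hat) should equal
# sigma^2 / sum((x - xbar)^2), the slope entry of sigma^2 (X'X)^{-1}.
x = [i / 10 for i in range(30)]
xbar = sum(x) / len(x)
sxx = sum((xi - xbar) ** 2 for xi in x)
b0, b1, sigma = 1.0, 2.0, 1.5

reps = 20000
slopes = []
for _ in range(reps):
    y = [b0 + b1 * xi + random.gauss(0.0, sigma) for xi in x]
    ybar = sum(y) / len(y)
    # OLS slope for simple linear regression
    b1_hat = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    slopes.append(b1_hat)

mean_slope = sum(slopes) / reps
emp_var = sum((s - mean_slope) ** 2 for s in slopes) / (reps - 1)
theo_var = sigma ** 2 / sxx   # Cramer-Rao bound for the slope

print(mean_slope, emp_var, theo_var)
```

With 20,000 replications the empirical variance lands within a few percent of the theoretical bound.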
29,443
Why are random Fourier features non-negative?
Apparently, the highlighted sentence is wrong (or at least confusing): $z(x)$ can be negative. This isn't a problem, because we only care about the inner product of $z$, not $z$ itself. The inner product of $z$ only seemed incorrect when I used this method because I mixed up $z'z$ and $zz'$, not because $z$ was wrong.
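To see concretely that the features themselves go negative while their inner product still approximates the kernel, here is a small pure-Python sketch of the standard random Fourier feature construction $z(x) = \sqrt{2/D}\,\cos(wx+b)$, with $w$ drawn from the Gaussian spectral density and $b \sim U[0,2\pi]$ (the specific dimensions and test points below are my own toy choices):

```python
import math
import random

random.seed(1)

def rff(x, ws, bs):
    # z(x) = sqrt(2/D) * [cos(w_1 x + b_1), ..., cos(w_D x + b_D)]
    D = len(ws)
    return [math.sqrt(2.0 / D) * math.cos(w * x + b) for w, b in zip(ws, bs)]

D = 5000
ws = [random.gauss(0.0, 1.0) for _ in range(D)]       # spectral samples for the unit-bandwidth RBF kernel
bs = [random.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x, y = 0.3, 1.1
zx, zy = rff(x, ws, bs), rff(y, ws, bs)

approx = sum(a * b for a, b in zip(zx, zy))           # z(x)'z(y)
exact = math.exp(-(x - y) ** 2 / 2.0)                 # k(x, y) = exp(-|x-y|^2 / 2)

print(min(zx), approx, exact)
```

Many entries of $z(x)$ are negative (the minimum is close to $-\sqrt{2/D}$), yet $z(x)^{\mathsf T}z(y)$ is within Monte Carlo error of the Gaussian kernel value.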
29,444
Reverse birthday problem with multiple collisions
The expected value of a distribution is calculated as $E(X) = \sum p_i x_i$. For this problem, we want the distribution of $N$ given some collision criteria, i.e., we want $E(N) = \sum_{n=0}^{\infty}p_n n$, where $p_n=P(N=n).$ Suppose you have some collision criteria as stated above, and let $q_n$ be the probability that the collision criteria are met given that the length of the year is $n.$ Then $q_n$ can be found by simply dividing the number of ways the collision criteria can be met by the number of ways birthdays can be arranged in general. Once $q_n$ is found for each possible $n$, the only piece that is missing is translating $q_n$ to $p_n.$ If we assume that $p_n$ is proportional to $q_n$, then $p_n= \alpha q_n.$ Since $\sum_{n=0}^{\infty} p_n=1$, we have $\alpha \sum_{n=0}^{\infty} q_n=1$ and $\alpha=\frac{1}{\sum_{n=0}^{\infty} q_n}.$ Therefore, we just need a formula for $q_n$ to solve this problem. For your example, let us first count the ways the collision criteria can be satisfied given $N=n.$ The first alien singleton can land on any day, so there are $n$ possibilities. The next singleton can land on any day but the birthday of the first alien, so there are $n-1$ possibilities. Completing this for the 84 singletons, we get $n(n-1)(n-2)...(n-83)$ possible ways. We also have 5 pairs and 2 triplets, and the "first" alien of each of those groups must avoid every previously used birthday as well. This gives $n(n-1)(n-2)...(n-84-5-2 + 1)$ ways to place all $84+5+2=91$ distinct birthdays (the clumsy syntax is for easier generalization later). Next, the second alien of each pair or triplet must land on one of those 91 occupied days, each on a distinct one: the first has 91 choices, the next has 90, and so on, so given the birthdays of the first 91 aliens there are $91(91-1)(91-2)...(91-7+1)$ ways. Finally, the remaining members of the triplets must fall on days that are already occupied by two aliens, contributing a factor of $7 \cdot 6$. 
Multiplying these counts together gives the total number of ways for the collision criteria to be met: $$r_n=n(n-1)...(n-84-5-2+1)(84+5+2)(84+5+2-1)...(84+1)(5+2)(5+1)$$ At this point the pattern is clear: if we have $a$ singletons, $b$ pairs, and $c$ triplets, we replace 84 with $a,$ 5 with $b,$ and 2 with $c$ to get a generalized formula. It is also clear that the number of possible ways for the birthdays to be arranged in general is $n^m$, where $m$ is the total number of aliens in the problem. Therefore, the probability of meeting the collision criteria is the number of ways to meet the collision criteria divided by the number of ways the aliens could be born, or $q_n=\frac{r_n}{n^m}$. Another interesting thing appears in the formula for $r_n$. Let $y_n=n(n-1)...(n-(a+b+c)+1)=\frac{n!}{(n-(a+b+c))!}$, and let $z_n$ be the remaining portion of $r_n$, so that $r_n=y_nz_n$. Note that $z_n$ does not actually depend on $n$, so we can simply write $z_n=z$, a constant! Since $p_n=q_n/\sum_{i=0}^\infty q_i$ and $q_n=\frac{zy_n}{n^m}$, we can factor $z$ out of the sum in the denominator, where it cancels with the $z$ in the numerator to give $p_n=\frac{y_n}{n^m}/\sum_{i=0}^\infty (\frac{y_i}{i^m})$. We can simplify $y_n$ further by letting $s=a+b+c$ (which can be thought of as the number of unique birthdays in the group of aliens), so that we get: $$ p_n=\frac{\frac{n!}{(n-s)!}}{n^m}/\sum_{i=0}^\infty (\frac{\frac{i!}{(i-s)!}}{i^m}) $$ Now we have a (fairly) simple formula for $p_n$, and therefore a (fairly) simple formula for $E(N)$, where the only assumption made is that $P(N=n)$ is proportional to $q_n$ (the probability of meeting the collision criteria given that $N=n$). I think this is a fair assumption to make, and someone smarter than me might even be able to prove that this assumption is associated with the birthday counts following a multinomial distribution. 
At this point we can calculate $E(N)$ using numerical methods or make some approximation assumptions, as $p_n$ will approach 0 as $n$ approaches $\infty$.
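As a sketch of the numerical route (my own code; the truncation point of the infinite sum is chosen arbitrarily, which is safe here because $p_n$ decays like $n^{s-m}=n^{-9}$), the posterior $p_n$ can be evaluated with log-factorials and summed directly:

```python
import math

m = 100          # total number of aliens
s = 91           # unique birthdays: 84 singletons + 5 pairs + 2 triplets

def log_q(n):
    # log of q_n up to a constant:  n! / ((n - s)! * n^m)
    return math.lgamma(n + 1) - math.lgamma(n - s + 1) - m * math.log(n)

ns = range(s, 20001)            # q_n = 0 for n < s; truncate far into the tail
logs = [log_q(n) for n in ns]
c = max(logs)                   # subtract the max for numerical stability
weights = [math.exp(l - c) for l in logs]
total = sum(weights)
p = [w / total for w in weights]

mode = max(ns, key=log_q)
mean = sum(n * pi for n, pi in zip(ns, p))
print(mode, mean)
```

The mode of this distribution agrees with the maximum-likelihood answer below (516 for your example), and the mean sits somewhat above it because the distribution is right-skewed.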
29,445
Reverse birthday problem with multiple collisions
The excellent answer from Cody provides a nice way to express the likelihood function for $N$, the number of days in the year (or the posterior distribution based on a flat prior), by factoring out the part of the probability that is independent of $N$. In this answer I would like to write it down more concisely and also provide a way to compute the maximum of this likelihood function (rather than the expected value, which is much more difficult to compute). Likelihood function for N The number of ways to draw a sequence of $a+2b+3c$ birthdays out of a set of $n$ birthdays, with the restriction that $a$ is the number of single birthdays, $b$ the number of duplicate birthdays, and $c$ the number of triple birthdays, is equal to $$\begin{array}{ccl} r_n &=& \underbrace{{n}\choose{a+b+c}}_{\small\begin{array}{ccc}&\text{number of ways to} &\\ &\text{pick $s=a+b+c$ unique birthdays}& \\ &\text{out of $n$ days}&\end{array}} \underbrace{\frac{(a+b+c)!}{a!b!c!}}_{\small\begin{array}{ccc}&\text{number of ways to} &\\ &\text{distribute these $s$ birthdays}& \\ &\text{among groups of size $a$, $b$ and $c$}&\end{array}} \underbrace{\frac{(a+2b+3c)!}{1!^a2!^b3!^c}}_{\small\begin{array}{ccc}&\text{number of ordered ways to} &\\ &\text{arrange the specific }& \\&\text{single, duplicate, and triplicate birthdays}& \\ &\text{among the aliens }&\end{array}} \\ &=& \frac{n!}{(n-a-b-c)!} \times \frac{(a+2b+3c)!}{a!b!c!\,1!^a2!^b3!^c} \end{array} $$ and only the first term on the right-hand side depends on $n$, so by factoring out the other terms we end up with a simple expression for the likelihood function $$\begin{array}{rcl} \mathcal{L}(n|a,b,c) &=& n^{-(a+2b+3c)} \frac{n!}{(n-a-b-c)!} = n^{-m} \frac{n!}{(n-s)!}\\ & \propto & P(a,b,c|n) \end{array}$$ where we follow the notation from Cody and use $m$ to denote the number of aliens and $s$ the number of unique birthdays. Maximum likelihood estimate for N We can use this likelihood function to derive the maximum likelihood estimate for $N$. 
Note that $$\mathcal{L}(n) = \mathcal{L}(n-1) \left(\frac{n-1}{n}\right)^m\frac{n}{n-s}$$ and the maximum will occur just before the $n$ for which $$\left(\frac{n-1}{n}\right)^m\frac{n}{n-s} = 1$$ or $$s = n \left(1-\left(1-1/n\right)^m \right)$$ which for large $n$ is approximately (using a series expansion which you can find by substituting $x = 1/n$ and writing the Taylor series in $x$ around the point $x=0$) $$s \approx \sum_{k=0}^l {{m}\choose{k+1}} (-n)^{-k} + \mathcal{O} \left( n^{-(l+1)} \right) $$ Using only the first-order term $s \approx m - \frac{m(m-1)}{2 n}$ you get: $$n_1 \approx \frac{ {{m}\choose{2}}}{m-s} $$ Using the second-order term as well, $s \approx m - \frac{m(m-1)}{2 n} + \frac{m(m-1)(m-2)}{6 n^2}$, you get: $$n_2 \approx \frac{ {{m}\choose{2}} + \sqrt{{{m}\choose{2}}^2- 4 (m-s){{m}\choose{3}} } }{2(m-s)} $$ So in the case of the $m=100$ aliens among which there are $s=91$ unique birthdays, the approximations give $n_1 \approx 550$ and $n_2 \approx 515.1215$. Solving the equation numerically gives $n=516.82$, which we round down to $n=516$ to get the MLE.
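Both approximations and the exact integer maximizer are easy to check numerically (my own sketch, using log-gamma to evaluate the likelihood stably):

```python
import math

m, s = 100, 91   # aliens and unique birthdays from the example

def log_lik(n):
    # log L(n) = log( n! / ((n - s)! * n^m) )
    return math.lgamma(n + 1) - math.lgamma(n - s + 1) - m * math.log(n)

# First-order approximation: n1 = C(m,2) / (m - s)
n1 = math.comb(m, 2) / (m - s)

# Second-order approximation from the quadratic in 1/n
c2, c3 = math.comb(m, 2), math.comb(m, 3)
n2 = (c2 + math.sqrt(c2 ** 2 - 4 * (m - s) * c3)) / (2 * (m - s))

# Exact integer MLE by direct search over n >= s
n_mle = max(range(s, 5000), key=log_lik)
print(n1, n2, n_mle)
```

This reproduces $n_1 = 550$, $n_2 \approx 515.12$, and the exact integer MLE $n = 516$.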
29,446
How does Lasso scale with the design matrix size?
The answers from the references, $\mathcal{O}(d^2n)$ for least angle regression and $\mathcal{O}(dn)$ for coordinate descent, are correct. The difference is that the LARS equations are written in closed form and find an exact solution (and do so across the entire path of possible $\lambda$, while the computational complexity scales the same as solving the ordinary least squares problem, which is also $\mathcal{O}(d^2n)$), whereas coordinate descent is an iterative scheme that approximates the solution. The referred step (whose computational cost scales as $\mathcal{O}(dn)$) is "only" a single approximation step, converging/'descending' closer to the minimum of the LASSO problem. LARS uses (exactly) $d$ steps to find the solution (with the complexity of the $k$-th step scaling as $\mathcal{O}((d-k)n+k^2)$: the first term for computing $d-k$ inner products in the inactive set, and the second for solving for the new angle among the $k$ active variables). With coordinate descent, nobody really knows the convergence rate and the number of required/expected steps for 'sufficient' convergence (or at least it has not been described well). On the other hand, the cost $d^2n$ increases a lot in high dimensions (while there is no strong reason to expect that coordinate descent's convergence rate degrades similarly, i.e., linearly, as $d$ increases). So intuitively coordinate descent will perform better above a certain limit for $d$. This has also been shown by case studies (see also the reference, which shows that glmnet mostly performs better than LARS when $d \gg 100$, while for $d=100$ the algorithms perform similarly). Scaling LARS is a problem of computational complexity. Scaling coordinate descent is a problem of computational complexity and convergence.
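To make the $\mathcal{O}(dn)$-per-sweep claim concrete, here is a toy coordinate-descent implementation for the LASSO (my own sketch, not glmnet's actual code; the data and $\lambda$ are arbitrary): keeping the residual vector up to date makes each coordinate update an $\mathcal{O}(n)$ operation, so one full sweep over the $d$ coordinates costs $\mathcal{O}(dn)$. Because each coordinate step is an exact one-dimensional minimization, the objective can never increase.

```python
import random

random.seed(2)

def soft_threshold(rho, lam):
    # proximal operator of lam * |.|
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

def objective(X, y, beta, lam):
    # 0.5 * ||y - X beta||^2 + lam * ||beta||_1
    n, d = len(y), len(beta)
    rss = 0.0
    for i in range(n):
        pred = sum(X[i][j] * beta[j] for j in range(d))
        rss += (y[i] - pred) ** 2
    return 0.5 * rss + lam * sum(abs(b) for b in beta)

# Toy data: n = 50 samples, d = 5 features, sparse ground truth
n, d = 50, 5
X = [[random.gauss(0.0, 1.0) for _ in range(d)] for _ in range(n)]
true_beta = [3.0, 0.0, -2.0, 0.0, 0.0]
y = [sum(X[i][j] * true_beta[j] for j in range(d)) + random.gauss(0.0, 0.1)
     for i in range(n)]

lam = 1.0
beta = [0.0] * d
col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(d)]
r = [y[i] - sum(X[i][j] * beta[j] for j in range(d)) for i in range(n)]

objs = [objective(X, y, beta, lam)]
for sweep in range(20):                       # one sweep = d coordinate updates
    for j in range(d):
        # O(n) work per coordinate: one pass over the residuals
        rho = sum(X[i][j] * r[i] for i in range(n)) + col_sq[j] * beta[j]
        new_bj = soft_threshold(rho, lam) / col_sq[j]
        delta = new_bj - beta[j]
        if delta != 0.0:
            for i in range(n):                # O(n) incremental residual update
                r[i] -= X[i][j] * delta
        beta[j] = new_bj
    objs.append(objective(X, y, beta, lam))

print([round(b, 3) for b in beta], objs[0], objs[-1])
```

The objective decreases monotonically across sweeps and the estimates land close to the (slightly shrunken) true coefficients.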
29,447
Why are deep belief networks (DBN) rarely used?
Remember that backpropagation used to come with one big problem: the vanishing gradient. I think the main reason why deep belief networks are rarely used is that backpropagation combined with ReLU (Rectified Linear Unit) activations solves the vanishing gradient problem, so it is no longer an issue and you don't need to implement a DBN. The second reason is that, even though you could solve the same problem with similar approaches, large deep network architectures become much more complex to train with deep belief networks. Using backpropagation with ReLU you can train in one shot.
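The vanishing-gradient point can be illustrated with a toy chain of 50 nested activations (my own sketch, not from the answer): the chain rule multiplies one local derivative per layer, and since $\sigma'(x) \le 1/4$ the sigmoid product shrinks geometrically, while the ReLU derivative stays at 1 along an active path.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def d_sigmoid(x):
    s = sigmoid(x)
    return s * (1.0 - s)        # bounded above by 0.25

def d_relu(x):
    return 1.0 if x > 0 else 0.0

# Backpropagated gradient through a 50-layer chain f(f(...f(x))) with unit
# weights: the chain rule multiplies one local derivative per layer.
depth = 50
x = 0.5
grad_sig, grad_relu = 1.0, 1.0
a_sig, a_relu = x, x
for _ in range(depth):
    grad_sig *= d_sigmoid(a_sig)
    a_sig = sigmoid(a_sig)
    grad_relu *= d_relu(a_relu)
    a_relu = max(0.0, a_relu)

print(grad_sig, grad_relu)
```

The sigmoid chain's gradient collapses to something astronomically small, while the ReLU chain's gradient is exactly 1, which is the intuition behind why ReLU made end-to-end backpropagation practical without layer-wise pre-training.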
29,448
When to use Gaussian mixture model?
In my opinion, you can perform GMM when you know that the data points are a mixture of Gaussian distributions, basically forming clusters with different means and standard deviations. There's a nice diagram (GMM classification) on the scikit-learn website. One approach is to find the clusters using soft clustering methods and then check whether they are Gaussian. If they are, you can apply a GMM model that represents the whole dataset.
29,449
When to use Gaussian mixture model?
GMMs are usually a good place to start if your goal is to either (1) cluster observations, (2) specify a generative model, or (3) estimate densities. In fact, for clustering, GMMs are a superset of k-means.
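For illustration, here is a minimal EM fit of a two-component 1-D GMM in pure Python (my own sketch; the data, seed, and initialization are arbitrary, and real applications would typically use a library implementation such as scikit-learn's GaussianMixture):

```python
import math
import random

random.seed(3)

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Data drawn from a two-component Gaussian mixture
data = ([random.gauss(-3.0, 1.0) for _ in range(300)]
        + [random.gauss(3.0, 1.0) for _ in range(300)])

# EM for a 2-component 1-D GMM, initialized from data quantiles
k = 2
srt = sorted(data)
mu = [srt[len(srt) // 4], srt[3 * len(srt) // 4]]
sigma = [1.0, 1.0]
pi = [0.5, 0.5]

for _ in range(50):
    # E-step: responsibilities (soft cluster assignments)
    resp = []
    for x in data:
        w = [pi[j] * normal_pdf(x, mu[j], sigma[j]) for j in range(k)]
        tot = sum(w)
        resp.append([wj / tot for wj in w])
    # M-step: update mixing weights, means, and standard deviations
    for j in range(k):
        nj = sum(r[j] for r in resp)
        pi[j] = nj / len(data)
        mu[j] = sum(r[j] * x for r, x in zip(resp, data)) / nj
        var = sum(r[j] * (x - mu[j]) ** 2 for r, x in zip(resp, data)) / nj
        sigma[j] = math.sqrt(var)

print(sorted(mu), pi)
```

The soft responsibilities are what distinguish this from k-means: shrinking the component variances toward zero and hardening the assignments recovers the k-means updates.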
29,450
Sampling distribution of the mean of a Beta
I thought this was an interesting question so here's a quick visual exploration. For $X\sim Beta(\alpha_1,\alpha_2)$, I first selected 4 separate Beta distributions (PDFs shown below). Then I collected sample means, $\bar X = \frac{1}{n}\sum_{i=1}^n x_i$ and plotted the corresponding histograms as shown below. The results look Normal and I'm inclined to believe @ChristophHanck's assertion that the Central Limit Theorem (CLT) is at work here. MATLAB code % Parameters n = 5000; K = 5000; % Define Beta distributions pd1 = makedist('Beta',0.25,0.45); pd2 = makedist('Beta',0.25,2.5); pd3 = makedist('Beta',4,0.15); pd4 = makedist('Beta',3.5,5); % Collect Sample Means X1bar = zeros(K,1); X2bar = zeros(K,1); X3bar = zeros(K,1); X4bar = zeros(K,1); for k = 1:K % get K sample means X1bar(k) = mean(random(pd1,n,1)); % take mean of n samples X2bar(k) = mean(random(pd2,n,1)); X3bar(k) = mean(random(pd3,n,1)); X4bar(k) = mean(random(pd4,n,1)); end % Plot Beta distribution PDFs Xsupport = 0:.01:1; figure, hold on, box on title('Beta(\alpha_1,\alpha_2) PDFs') plot(Xsupport,pdf(pd1,Xsupport),'r-','LineWidth',2.2) plot(Xsupport,pdf(pd2,Xsupport),'b-','LineWidth',2.2) plot(Xsupport,pdf(pd3,Xsupport),'k-','LineWidth',2.2) plot(Xsupport,pdf(pd4,Xsupport),'g-','LineWidth',2.2) legend('(0.25,0.45)','(0.25,2.5)','(4,0.15)','(3.5,5)') figure s(1) = subplot(2,2,1), hold on, box on histogram(X1bar,'FaceColor','r') s(2) = subplot(2,2,2), hold on, box on histogram(X2bar,'FaceColor','b') s(3) = subplot(2,2,3), hold on, box on histogram(X3bar,'FaceColor','k') s(4) = subplot(2,2,4), hold on, box on histogram(X4bar,'FaceColor','g') title(s(1),'(0.25,0.45)') title(s(2),'(0.25,2.5)') title(s(3),'(4,0.15)') title(s(4),'(3.5,5)') Edit: This post was a quick attempt to provide the OP something. As pointed out, we know the Central Limit Theorem (CLT) implies these results will hold for any distribution with a finite variance.
29,451
Sampling distribution of the mean of a Beta
Note: see also for the same question: Sum of n i.i.d Beta-distributed variables. For the case of a uniform distribution, $\text{Beta}(1,1)$, the distribution of the sum of a number of independent variables (and the mean is related) has been described as the Irwin–Hall distribution. If $$X_n = \sum_{i=1}^n Y_i \quad \text{ with } \quad Y_i \sim \text{Beta}(1,1)$$ then you have a spline of degree $n-1$ $$f_X(x;n) = \frac{1}{(n-1)!} \sum_{j=0}^{n-1} a_j(k,n)x^j \quad \text{ for } \quad k \leq x \leq k+1$$ where the $a_j(k,n)$ can be described by a recurrence relation: $$a_j(k,n) = \begin{cases} 1 & \quad k=0,j=n-1 \\ 0 & \quad k=0,j< n-1 \\ a_j(k-1,n) + (-1)^{n+k-j-1} {{n}\choose{k}} {{n-1}\choose{j}} k^{n-j-1} & \quad k>1 \end{cases}$$ You could see the above formula as being constructed by a repeated convolution of $X_{n-1}$ with $Y_n$, where the integral is solved piecewise. Can we possibly generalize this to Beta-distributed variables with any $\alpha$ and $\beta$? Let $$X_n(\alpha,\beta) = \sum_{i=1}^n Y_i \quad \text{ with } \quad Y_i \sim \text{Beta}(\alpha,\beta)$$ We expect the function $f_X(x;n,\alpha,\beta)$ to be split up into $n$ pieces (though possibly not a spline anymore). The convolution to compute the distribution of $X_{n}(\alpha,\beta) = X_{n-1}(\alpha,\beta)+Y_n$ will be something like: $$f_X(x;n,\alpha,\beta) = \int^{\text{min}(1,x)}_{1-\text{min}(1,n-x)} f_X(x-y;n-1,\alpha,\beta) y^{\alpha-1}(1-y)^{\beta-1} dy$$ For $n=2$: $$f_X(x;n,\alpha,\beta) = \begin{cases} \int_{0\phantom{-x}}^{x} ((x-y)y)^{\alpha-1}((1-x+y)(1-y))^{\beta-1} dy & \quad \text{if $0 \leq x \leq 1$} \\ \int_{x-1}^{1} ((x-y)y)^{\alpha-1}((1-x+y)(1-y))^{\beta-1} dy & \quad \text{if $1 \leq x \leq 2$} \end{cases}$$ For integer $\alpha$ and $\beta$: terms like $((x-y)y)^{\alpha-1}$ and $((1-x+y)(1-y))^{\beta-1}$ can be expanded for integer values of $\alpha$ and $\beta$, so the integral is straightforward to solve. 
For example (again up to the omitted normalizing constant, here $B(\alpha,\beta)^{-2}$, i.e. $36$ and $900$ respectively): $$\begin{array}{} f_X(x;2,2,2) &=& \begin{cases} \frac{1}{30} x^3(x^2-5x+5) & \quad \text{if $x \leq 1$} \\ \frac{1}{30}(2-x)^3(x^2+x-1) & \quad \text{if $x \geq 1$} \end{cases}\\ \\ f_X(x;2,3,3) &=& \begin{cases} \frac{1}{630} x^5(x^4-9x^3+30x^2-42x+21) & \quad \text{if $x \leq 1$} \\ \frac{1}{630}(2-x)^5(x^4+x^3-2x+1) & \quad \text{if $x \geq 1$} \end{cases} \end{array}$$ The solution for integer values of $\alpha$ and $\beta$ will be a spline as well. Possibly this could be cast in some nice (or more likely not so nice) formula for more general situations (not just $n=2$ and $\alpha=\beta=2$ or $\alpha=\beta=3$). But at this point one needs quite a few cups of coffee, or better an infusion, to tackle this stuff.
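Since the convolution integral above drops the Beta normalizing constants, the quoted piecewise polynomial for $\alpha=\beta=2$ has to be multiplied by $1/B(2,2)^2 = 36$ to be a proper density. A quick numerical sketch (function names are mine) checks the $n=2$, $\alpha=\beta=2$ case against a brute-force numerical convolution of two $\text{Beta}(2,2)$ densities:

```python
import numpy as np

def f_sum_beta22(x):
    """Density of Y1 + Y2 with Y1, Y2 i.i.d. Beta(2,2): the quoted
    piecewise polynomial times the normalizing factor 1/B(2,2)^2 = 36."""
    x = np.asarray(x, dtype=float)
    lower = x**3 * (x**2 - 5*x + 5) / 30        # piece for 0 <= x <= 1
    upper = (2 - x)**3 * (x**2 + x - 1) / 30    # piece for 1 <= x <= 2
    return 36 * np.where(x <= 1, lower, upper)

def beta22_pdf(y):
    return 6 * y * (1 - y)                      # Beta(2,2) density

def conv_density(x, n=20001):
    """Brute-force convolution integral over the feasible range of y,
    using the trapezoid rule."""
    lo, hi = max(0.0, x - 1.0), min(1.0, x)
    y = np.linspace(lo, hi, n)
    vals = beta22_pdf(x - y) * beta22_pdf(y)
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(y)) / 2)

xs = np.linspace(0.05, 1.95, 39)
err = max(abs(conv_density(float(x)) - float(f_sum_beta22(float(x)))) for x in xs)
print(err)   # tiny: the closed form matches the numerical convolution
```

The same check (with factor $900$) works for the $\alpha=\beta=3$ piece.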
29,452
Model averaging approach -- averaging coefficient estimates vs. model predictions?
In linear models, averaging across coefficients will give you the same predicted values as averaging across predictions, but conveys more information. Many expositions deal with linear models and therefore average across coefficients. You can check the equivalence with a bit of linear algebra. Say you have $T$ observations and $N$ predictors. You gather the latter in the $T\times N$ matrix $\mathbf{X}$. You also have $M$ models, each of which assigns a coefficient estimate $\beta_m$ to the $N$ predictors. Stack these coefficient estimates in the $N \times M$ matrix $\mathbf{\beta}$. Averaging means that you assign weights $w_m$ to each model $m$ (weights are typically non-negative and sum up to one). Put these weights in the vector $\mathbf{w}$ of length $M$. Predicted values for each model are given by $\mathbf{\hat{y}}_m = \mathbf{X}\beta_m$, or, in the stacked notation $$ \mathbf{\hat{y}} = \mathbf{X}\mathbf{\beta} $$ Predicted values from averaging across predictions are given by $$ \mathbf{\hat{y}} \mathbf{w} = (\mathbf{X}\mathbf{\beta})\mathbf{w} $$ When you average across coefficient estimates instead, you compute $$ \mathbf{\beta}_w = \mathbf{\beta}\mathbf{w} $$ And the predicted values from the averaged coefficients are given by $$ \mathbf{X\beta}_w = \mathbf{X}(\mathbf{\beta}\mathbf{w}) $$ Equivalence between the predicted values for either approach follows from the associativity of the matrix product. Since the predicted values are the same, you may as well just compute the average of the coefficients: this gives you more information, in case you e.g. want to look at coefficients for individual predictors. In non-linear models, the equivalence typically does not hold anymore, and there it indeed makes sense to average across predictions instead. The vast literature on averaging across predictions (forecast combinations) is for instance summarized here.
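A minimal numerical illustration of the equivalence (the dimensions, weights, and random data below are invented for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, M = 50, 3, 4                      # observations, predictors, models
X = rng.normal(size=(T, N))             # T x N predictor matrix
B = rng.normal(size=(N, M))             # N x M stacked coefficient estimates
w = rng.random(M)
w /= w.sum()                            # non-negative weights summing to one

avg_of_preds = (X @ B) @ w              # average the M models' predictions
pred_of_avg = X @ (B @ w)               # predict from the averaged coefficients

print(np.allclose(avg_of_preds, pred_of_avg))   # True, by associativity
```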
29,453
How to optimally spread draws when calculating multiple expectations
This is a very interesting question with little documentation in the Monte Carlo literature, except in connection with stratification and Rao-Blackwellisation. This is possibly due to the fact that the computations of the expected conditional variance and of the variance of the conditional expectation are rarely feasible. First, let us assume you run $R$ simulations from $\pi_X$, $x_1,\ldots,x_R$, and for each simulated $x_r$, you run $S$ simulations from $\pi_{Y|X=x_r}$, $y_{r1},\ldots,y_{rS}$. Your Monte Carlo estimate is then $$\delta(R,S)=\frac{1}{RS}\sum_{r=1}^R\sum_{s=1}^S f(x_r,y_{rs})$$ The variance of this estimate is decomposed as follows \begin{align*} \text{var} \{\delta(R,S)\} &= \frac{1}{R^2S^2} R\text{var} \left\{\sum_{s=1}^S f(x_r,y_{rs})\right\}\\ &= \frac{1}{RS^2} \text{var}_X\mathbb{E}_{Y|X}\left\{\sum_{s=1}^S f(x_r,y_{rs})\big|x_r\right\}+\frac{1}{RS^2}\mathbb{E}_{X}\text{var}_{Y|X} \left\{\sum_{s=1}^S f(x_r,y_{rs})\big|x_r\right\}\\ &=\frac{1}{RS^2} \text{var}_X\{ S \mathbb{E}_{Y|X}[f(x_r,Y)|x_r]\}+ \frac{1}{RS^2} \mathbb{E}_{X}[S\text{var}_{Y|X}\{f(x_r,Y)|x_r\}]\\ &=\frac{1}{R} \text{var}_X\{\mathbb{E}_{Y|X}[f(x_r,Y)|x_r]\}+ \frac{1}{RS} \mathbb{E}_{X}[\text{var}_{Y|X}\{f(x_r,Y)|x_r\}]\\ &\stackrel{K=RS}{=}\frac{1}{R}\text{var}_X\{\mathbb{E}_{Y|X}[f(x_r,Y)|x_r]\}+ \frac{1}{K} \mathbb{E}_{X}[\text{var}_{Y|X}\{f(x_r,Y)|x_r\}] \end{align*} Therefore, if one wants to minimise this variance, the optimal choice is $R=K$, implying that $S=1$, except when the first variance term is null, in which case it does not matter. However, as discussed in the comments, the assumption $K=RS$ is unrealistic as it does not account for the cost of producing one $x_r$ [or assumes this comes for free]. Now let us assume different simulation costs and the budget constraint $R+aRS=b$, meaning that the $y_{rs}$'s cost $a$ times more to simulate than the $x_r$'s. 
The above decomposition of the variance then becomes (using $RS=(b-R)/a$) $$\frac{1}{R}\text{var}_X\{\mathbb{E}_{Y|X}[f(x_r,Y)|x_r]\}+ \frac{a}{b-R}\, \mathbb{E}_{X}[\text{var}_{Y|X}\{f(x_r,Y)|x_r\}]$$ which can be minimised in $R$ as $$R^*=b\Big/\left(1+\left\{a\,\mathbb{E}_{X}[\text{var}_{Y|X}\{f(x_r,Y)|x_r\}]\big/\text{var}_X\{\mathbb{E}_{Y|X}[f(x_r,Y)|x_r]\}\right\}^{1/2}\right)$$ [the closest integer under the constraints $R\ge 1$ and $S\ge 1$], except when the first variance is equal to zero, in which case $R=1$. When $\mathbb{E}_{X}[\text{var}_{Y|X}\{f(x_r,Y)|x_r\}]=0$, the minimum variance corresponds to a maximum $R$, which leads to $S=1$ in the current formalism. Note also that this solution should be compared with the symmetric solution when the inner integral is in $X$ given $Y$ and the outer integral is against the marginal in $Y$ (assuming the simulations are also feasible in this order). An interesting extension to the question would be to consider a different number of simulations $S(x_r)$ for each simulated $x_r$, depending on the value $\text{var}_{Y|X}\{f(x_r,Y)|x_r\}$.
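As a sanity check on this allocation, with made-up variance components $v_1 = \text{var}_X\{\mathbb{E}_{Y|X}[f|X]\}$ and $v_2 = \mathbb{E}_X[\text{var}_{Y|X}(f|X)]$ and cost ratio $a$, one can brute-force the budget constraint $R + aRS = b$ and compare the minimiser with the closed form $R^* = b/(1+\sqrt{a v_2/v_1})$:

```python
import math

def estimator_variance(R, S, v1, v2):
    # var{delta(R, S)} = v1/R + v2/(R*S), from the decomposition above
    return v1 / R + v2 / (R * S)

def optimal_R(b, a, v1, v2):
    # closed-form minimiser under the budget R + a*R*S = b
    return b / (1 + math.sqrt(a * v2 / v1))

b, a, v1, v2 = 1000, 4.0, 1.0, 9.0
# Under the budget, S = (b - R) / (a * R); feasibility requires S >= 1.
best_var, best_R = min(
    (estimator_variance(R, (b - R) / (a * R), v1, v2), R)
    for R in range(1, b)
    if (b - R) / (a * R) >= 1
)
print(best_R, optimal_R(b, a, v1, v2))   # brute force lands next to R* = 1000/7
```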
29,454
t-statistic for linear regression
$$ Z = \frac{\widehat\beta - \beta}{\sigma \Big/ \sqrt{ n \left( \,\overline{x^2} - \left(\,\overline{x}\,\right)^2 \right)}} \sim \mathrm{N}(0,1). $$ And $$ (n-2) \frac{\widehat{\sigma}^2}{\sigma^2} \sim \chi^2_{n-2}. $$ Notice that $\sigma$ appears in both the numerator and the denominator of $Z/\sqrt{\chi^2_k/k}$ and cancels out. Independence of these two things is seen by observing that the vector of residuals is independent of the vector of fitted values. To see that, find the covariance between the vector of residuals and the vector of fitted values, and recall that if two random vectors are jointly normally distributed then they are independent if they are uncorrelated. (The whole story of why these things have the distributions asserted here would take somewhat longer.)
29,455
t-statistic for linear regression
I know a way to show you why you get a t distribution for this statistic, but it's going to require some linear algebra. You are working with the model $$y_i = \beta_0 + \beta_1 x_i + \epsilon_i,$$ and I will assume from now on that $\{\epsilon_1,\ldots,\epsilon_n\}$ are i.i.d. from the $N(0,\sigma^2)$ distribution. Step 1 - distribution of $\hat\beta_1$: You know that the least squares estimate of $\beta_1$ can be written as: $$\hat\beta_1 = \sum_{i=1}^n\frac{x_i - \bar x}{SXX}y_i,$$ and you can show from this equation that $\hat{\beta}_1 \sim N_1(\beta_1,\frac{\sigma^2}{SXX}).$ Step 2 - distribution of $RSS$: Now to continue my answer it is convenient to rewrite our model in matrix form: $$y = X\beta + \epsilon,$$ And remember that the residual sum of squares can be written as: $$RSS = y^T(I - P_X)y$$ where $P_X = X(X^TX)^{-1}X^T$. Now since $P_X$ is a projection matrix of rank two, $I-P_X$ is also a projection matrix but of rank $n-2$. Now we can see that $RSS \sim \sigma^2\chi^2_{n-2}$ because it is a quadratic form of independent normal variables with common variance on a projection matrix. (The non-centrality parameter is 0 because $(I-P_X)X = 0$). Step 3 - Independence of $\hat\beta_1$ and $RSS$: It remains to prove that $\hat\beta_1$ and $RSS$ are independent. Remember that: $$\hat\beta_1 = (0,1)(X^TX)^{-1}X^Ty,$$ $$RSS = y^T(I - P_X)y = y^T(I - P_X)(I - P_X)y,$$ but $(X^TX)^{-1}X^Ty$ and $(I - P_X)y$ are independent since $(X^TX)^{-1}X^T(I - P_X) = 0$. Now we have $\hat\beta_1$ and $RSS$ as functions of independent variables and thus they are independent. Final Step: The usual T statistic for testing $H_0: \beta_1 = 0$ vs. $H_1: \beta_1 \neq 0$ is: $$T = \frac{\hat\beta_1}{\text{se}(\hat\beta_1)}$$ where $\text{se}(\hat\beta_1) = \sqrt{\frac{RSS}{(n-2)SXX}}$. 
After some algebraic manipulations we get: $$T = \frac{\hat\beta_1}{\text{se}(\hat\beta_1)}=\frac{\frac{\sqrt{SXX}}{\sigma}\hat\beta_1}{\sqrt{\frac{RSS/\sigma^2}{n-2}}}$$ And from the previous steps you can see that we have the ratio $\frac{N(0,1)}{\sqrt\frac{\chi^2_{n-2}}{n-2}}\sim t_{n-2}.$ Proofs for the affirmations I made and a detailed discussion can be found in the book "A Primer on Linear Models" by John F. Monahan or any other book on linear regression analysis.
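A simulation sketch of the final result (all constants below are arbitrary choices): under $H_0:\beta_1=0$, the statistic $T$ computed from the least-squares fit should behave like a $t_{n-2}$ variable, e.g. with mean near $0$ and variance near $(n-2)/(n-4)$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps, beta0, sigma = 12, 20000, 2.0, 1.5
x = np.linspace(0.0, 1.0, n)
sxx = np.sum((x - x.mean()) ** 2)            # SXX

ts = np.empty(reps)
for r in range(reps):
    y = beta0 + sigma * rng.standard_normal(n)   # data with true beta1 = 0
    b1 = np.sum((x - x.mean()) * y) / sxx        # least-squares slope
    b0 = y.mean() - b1 * x.mean()                # least-squares intercept
    rss = np.sum((y - b0 - b1 * x) ** 2)         # residual sum of squares
    se = np.sqrt(rss / ((n - 2) * sxx))          # se(beta1_hat)
    ts[r] = b1 / se

df = n - 2
print(ts.mean(), ts.var(), df / (df - 2))   # mean near 0, variance near 1.25
```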
29,456
Recurrent Neural Network (RNN) topology: why always fully-connected?
One reason might be mathematical convenience. The vanilla recurrent neural network (Elman-type) can be formulated as: $\vec{h}_t = f(\vec{x}_t, \vec{h}_{t-1})$, where $f(\cdot)$ can be written as $\sigma(W\vec{x}_t + U\vec{h}_{t-1})$. The above equation corresponds to your first picture. Of course you can make the recurrent matrix $U$ sparse to restrict the connections, but that does not affect the core idea of the RNN. BTW: There exist two kinds of memories in an RNN. One is the input-to-hidden weight matrix $W$, which mainly stores the information from the input. The other is the hidden-to-hidden matrix $U$, which is used to store the histories. Since we do not know which parts of the history will affect our current prediction, the most reasonable way might be to allow all possible connections and let the network learn for itself.
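A minimal sketch of this update in code (dimensions and weights are invented; $\sigma=\tanh$ is assumed), showing that sparsifying $U$ only restricts the recurrent connections without changing the form of the update:

```python
import numpy as np

def elman_step(x_t, h_prev, W, U):
    """One vanilla (Elman) RNN update: h_t = tanh(W x_t + U h_prev)."""
    return np.tanh(W @ x_t + U @ h_prev)

rng = np.random.default_rng(0)
n_in, n_hidden = 3, 5
W = rng.normal(scale=0.5, size=(n_hidden, n_in))       # input-to-hidden memory
U = rng.normal(scale=0.5, size=(n_hidden, n_hidden))   # hidden-to-hidden memory
U[np.abs(U) < 0.2] = 0.0   # a sparse U = restricted recurrent connections

h = np.zeros(n_hidden)
for x_t in rng.normal(size=(4, n_in)):   # run a length-4 input sequence
    h = elman_step(x_t, h, W, U)
print(h.shape)   # (5,)
```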
29,457
Recurrent Neural Network (RNN) topology: why always fully-connected?
You can use 2-input neurons if you arrange them in a similar way to the fast Walsh-Hadamard transform. An out-of-place algorithm would be to step through the input vector sequentially, 2 elements at a time. Have 2 2-input neurons act on each pair of elements. Put the output of the first neuron sequentially in the lower half of a new vector array, and the output of the second neuron sequentially in the upper half of the new vector array. Repeat using the new vector array as input. After $\log_2(n)$ repeats, a change in a single one of the input elements can potentially change all the outputs, which is the best you can do. $n$ must be a positive integer power of 2.
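A sketch of this scheme (the naming is mine): with the two 2-input units taken as a linear sum and difference, the $\log_2(n)$ passes reproduce the natural-order fast Walsh-Hadamard transform, which is easy to check against the Kronecker-product Hadamard matrix:

```python
import numpy as np

def butterfly_pass(v, unit1, unit2):
    """Apply two 2-input units to consecutive pairs of v; the first unit's
    outputs fill the lower half of the result, the second's the upper half."""
    pairs = v.reshape(-1, 2)
    return np.concatenate([unit1(pairs[:, 0], pairs[:, 1]),
                           unit2(pairs[:, 0], pairs[:, 1])])

def fwht(v):
    """log2(n) passes with sum/difference units = fast Walsh-Hadamard transform."""
    v = np.asarray(v, dtype=float)
    n = len(v)
    assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
    for _ in range(n.bit_length() - 1):   # log2(n) passes
        v = butterfly_pass(v, lambda a, b: a + b, lambda a, b: a - b)
    return v

x = np.array([1.0, 0.0, 1.0, 0.0, 0.0, 1.0, 1.0, 0.0])
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H8 = np.kron(np.kron(H2, H2), H2)        # natural-order Hadamard matrix
print(np.allclose(fwht(x), H8 @ x))      # True
```

With nonlinear 2-input units the same wiring is kept, only the sum/difference functions change.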
29,458
How can I tell if a statistical model is "identified"?
Identifiability basically refers to whether or not consistent estimators exist for the parameters of the model. Put another way, if we are told the distribution of the data, can we recover the model parameters? If not then our model is unidentifiable. Perhaps the simplest example of an unidentifiable model is the overparameterized ANOVA model. This model takes the form $$ Y_{ij} = \mu + \alpha_i + \epsilon_{ij} $$ where $\mu$ and $\{ \alpha_i \}_{i=1}^{k}$ are arbitrary constants and $\epsilon_{ij} \sim$ normal$(0, \sigma^2)$. If we are given the information that $Y_{ij} \sim$ normal$(\mu_i, \sigma^2)$ for some sets of constants $\{ \mu_i \}_{i=1}^{k}$ and $\sigma^2$, and it is important to note that this is all we can ever hope to learn from the data, then there is no unique way to translate this back into constants $\mu$, $\{ \alpha_i \}_{i=1}^{k}$ and $\sigma^2$. This is because we can always take $\mu + c$ and $\alpha_i - c$ to arrive at the same mean parameter $\mu_i = \mu + \alpha_i$ for different values of the model parameters. Even if we had infinite data we could never hope to recover these values. For this reason we impose the constraint $\sum_{i=1}^{k} \alpha_i = 0$ which guarantees a one to one mapping between model and distribution parameters.
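The non-uniqueness is easy to see numerically: two different parameter settings (a hypothetical choice below) induce exactly the same group means $\mu_i$, hence the same distribution for the data, while the sum-to-zero constraint pins down a unique representative:

```python
import numpy as np

mu_a, alpha_a = 0.0, np.array([1.0, 2.0, 3.0])
c = 5.0
mu_b, alpha_b = mu_a + c, alpha_a - c    # shift by an arbitrary constant c

# Both parameterisations give identical group means mu_i = mu + alpha_i,
# so no amount of data can distinguish them:
print(mu_a + alpha_a, mu_b + alpha_b)

# Imposing sum(alpha) = 0 picks a unique (mu, alpha) per set of group means:
mu_c = mu_a + alpha_a.mean()
alpha_c = alpha_a - alpha_a.mean()
print(alpha_c.sum(), mu_c + alpha_c)     # 0.0, and the same group means again
```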
29,459
Difference between MLP(Multi-layer Perceptron) and Neural Networks?
You are right, MLP is one kind of neural network. There are several kinds of NN, you can have a NN based on Radial Basis Function with a Soft gating strategy, for example. You can use a committee machine strategy to form a NN...
29,460
Difference between MLP(Multi-layer Perceptron) and Neural Networks?
An MLP is a fully connected feed-forward network. In particular, a CNN, which is only partially connected, and an RNN, which has feedback loops, are not MLPs.
29,461
Difference between MLP(Multi-layer Perceptron) and Neural Networks?
The main difference is that an MLP is one-way: it is a feedforward network without any loops, whereas some neural networks, such as recurrent networks, do contain loops. See more here
29,462
Difference between MLP(Multi-layer Perceptron) and Neural Networks?
The Multi-Layer Perceptron is one model of neural network (NN). There are several other models, including recurrent NNs and radial basis function networks. For an introduction to the different models and a sense of how they differ, see an overview of NN architectures.
29,463
fat-finger distribution
Since we're dealing with discrete numbers, I immediately thought of using a categorical distribution as the conditional distribution of each target key. So, if we take your example of a user's intent to press 5, and let $K$ be the key actually pressed, then we get: $$P(K=k|5) = p_{k,5}\;\;\mathrm{ where }\;\; p_{k,5} \geq 0 \;\mathrm{ and}\; \sum_{k=0}^9 p_{k,5} = 1$$ We can define such a distribution for each key. This is the empirical part.

Now, say the number actually pressed is $k$ and we want to infer the intended key $I$. This is naturally expressed as a Bayesian inference problem: $$P(I=i|k) = \frac{P(I=i)P(k|I=i)}{\sum_{i=0}^9 P(I=i)P(k|I=i)}$$ This equation tells you the probability that the user intended to press $i$ given that they pressed $k$. However, you will notice that this depends on $P(I=i)$, the prior probability that someone would intend to press $i$. I would imagine this is conditional on the actual phone number being entered (of course), but since you will not know this, you will need some way to set this prior from context.

The bottom line is that there is no single fat-finger distribution, unless we are talking about the distribution conditional on an intended number. If your error-correcting method is to be useful, it will have to guess the intended number using these conditional distributions. However, this will require some useful prior context; otherwise I'd expect the inferred key to always be the key actually pressed... not overly useful.
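As a concrete sketch (Python; the press matrix below is a made-up stand-in for the empirical $p_{k,i}$, not real data), Bayes' rule above can be implemented directly. It also illustrates the closing point: with a flat prior, the MAP intent is simply the pressed key.

```python
import numpy as np

# Hypothetical press model: press[k, i] = P(pressed k | intended i).
# We assume the intended key is hit 90% of the time and the remaining
# mass is spread uniformly over the other 9 keys.
n_keys = 10
press = np.full((n_keys, n_keys), 0.1 / (n_keys - 1))
np.fill_diagonal(press, 0.9)

def posterior_intent(k, prior=None):
    """P(I = i | K = k) by Bayes' rule, as in the formula above."""
    if prior is None:
        prior = np.full(n_keys, 1.0 / n_keys)  # flat prior over intents
    unnorm = prior * press[k, :]               # P(I=i) * P(k | I=i)
    return unnorm / unnorm.sum()

post = posterior_intent(5)
print(post.argmax())  # 5 -- with a flat prior, the pressed key is the MAP intent
```

A more informative prior (e.g. derived from valid phone numbers in the area) is what would make the inferred key ever differ from the pressed one.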
29,464
fat-finger distribution
I agree with Bey's approach, i.e. the conditional probability for each key to be pressed given the user's intention is highest for the intended key. If it weren't, equipment manufacturers would relabel the key. Some keys are more prone to be mis-pressed than others, perhaps those towards the middle. Even knowing this, because we are inputting numbers, it isn't possible to exploit context the way word correction does, since one number is as valid as the next. So error correction on single key strokes isn't feasible.

What is feasible is correcting, or perhaps the less ambitious goal of detecting, key errors in a given input data type. This is done for an ISBN or a credit card number, say. However, phone numbers do not have check sums. Perhaps the empirical distribution for each keyboard could be used to make the most efficient check of numbers - that being the best use of the added check number(s).
29,465
Help in Expectation Maximization from paper :how to include prior distribution?
If we consider the target as $$\arg\max_\theta L(\theta|x)\pi(\theta) = \arg\max_\theta \log L(\theta|x) + \log \pi(\theta)$$ the representation at the basis of EM is $$\log L(\theta|x) = \mathbb{E}[\log L(\theta|x,Z)|x,\theta^0]-\mathbb{E}[\log q(Z|x,\theta)|x,\theta^0]$$ for an arbitrary $\theta^0$, because of the decomposition $$q(z|x,\theta)=f(x,z|\theta) \big/ g(x|\theta)$$ or $$g(x|\theta) = f(x,z|\theta) \big/ q(z|x,\theta)$$ which holds for an arbitrary value of $z$ (since there is none on the lhs) and hence also under any expectation in $Z$: $$\log g(x|\theta) = \log f(x,z|\theta) - \log q(z|x,\theta) = \mathbb{E}[\log f(x,Z|\theta) - \log q(Z|x,\theta)|x]$$ for any conditional distribution of $Z$ given $X=x$, for instance $q(z|x,\theta^0)$.

Therefore, if we maximise in $\theta$ $$\mathbb{E}[\log L(\theta|x,Z)|x,\theta^0]+ \log \pi(\theta)$$ with solution $\theta^1$, we have $$\mathbb{E}[\log L(\theta^1|x,Z)|x,\theta^0]+ \log \pi(\theta^1)\ge\mathbb{E}[\log L(\theta^0|x,Z)|x,\theta^0]+ \log \pi(\theta^0)$$ while $$\mathbb{E}[\log q(Z|x,\theta^0)|x,\theta^0]\ge\mathbb{E}[\log q(Z|x,\theta^1)|x,\theta^0]$$ by the standard (Jensen) argument of EM. Combining the two inequalities through the representation above gives $$\log L(\theta^1|x)+ \log \pi(\theta^1)\ge\log L(\theta^0|x)+ \log \pi(\theta^0)$$ Hence, using as an E step the target $$\mathbb{E}[\log L(\theta|x,Z)|x,\theta^0]+ \log \pi(\theta)$$ leads to an increase in the posterior at each M step, meaning that the modified EM algorithm converges to a local MAP.
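The argument can be checked numerically. Below is a self-contained toy sketch (my own setup, not from the paper): MAP-EM for a two-component Gaussian mixture with known weights and variances, a $\mathcal{N}(\mu_0,\tau^2)$ prior on each component mean, and an M step maximising $\mathbb{E}[\log L(\theta|x,Z)|x,\theta^0]+\log\pi(\theta)$, which yields a shrinkage-style update. The log posterior is verified to be non-decreasing across iterations.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(4, 1, 200)])

mu0, tau2 = 0.0, 100.0   # Gaussian N(mu0, tau2) prior on each mean
w, sigma2 = 0.5, 1.0     # known mixture weight and component variance

def npdf(v, m, s2):
    return np.exp(-(v - m) ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

def log_posterior(mu):
    loglik = np.log(w * npdf(x, mu[0], sigma2)
                    + (1 - w) * npdf(x, mu[1], sigma2)).sum()
    logprior = sum(-(m - mu0) ** 2 / (2 * tau2) for m in mu)
    return loglik + logprior

mu = np.array([-1.0, 1.0])
history = [log_posterior(mu)]
for _ in range(50):
    # E step: responsibilities under the current parameters theta^0
    p1 = w * npdf(x, mu[0], sigma2)
    r = p1 / (p1 + (1 - w) * npdf(x, mu[1], sigma2))
    # M step: maximise E[log L(theta|x,Z)] + log pi(theta)
    # -> responsibility-weighted mean shrunk towards the prior mean mu0
    for j, rj in enumerate([r, 1 - r]):
        mu[j] = (rj @ x + (sigma2 / tau2) * mu0) / (rj.sum() + sigma2 / tau2)
    history.append(log_posterior(mu))

print(np.round(np.sort(mu), 1))  # component means recovered near 0 and 4
```

The monotone increase of `history` is exactly the inequality chain derived above.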
29,466
Help in Expectation Maximization from paper :how to include prior distribution?
I don't think showing a monotonically increasing log-posterior (or log-likelihood for MLE) is sufficient for showing convergence to a stationary point of the MAP estimate (or MLE). For example, the increments can become arbitrarily small. In the famous paper by Wu (1983), a sufficient condition for convergence to a stationary point of EM is differentiability in both arguments of the lower-bound function.
29,467
Interpretation of gam model results
tl;dr: AIC is predictive whereas p-values are for inference. Also, your test of significance may simply lack power.

One possible explanation is that the null hypothesis $s(x) = 0$ is false, but you have low power and so your p-value is not very impressive. Just because an effect is present doesn't mean it is easy to detect. That's why clinical trials must be designed with a certain effect size in mind (usually the MCID).

Another way to resolve this: different measures should give different results because they encode different priorities. AIC is a predictive criterion, and it behaves similarly to cross-validation. It may result in overly complex models that happen to have strong predictive performance. By contrast, mgcv's p-values are used to determine the presence or absence of a given effect$^*$, and predictive performance is a secondary concern.

$^*$substitute "association" for "effect" unless you're working with data from a controlled trial where $x$ was assigned randomly or you have other reasons to believe the observed association is causal.
29,468
How does one formalize a prior probability distribution? Are there rules of thumb or tips one should use?
Your idea to treat your prior information of 272 successes in 400 attempts does have fairly solid Bayesian justification. The problem you are dealing with, as you recognized, is that of estimating the success probability $\theta$ of a Bernoulli experiment. The Beta distribution is the corresponding conjugate prior. Such conjugate priors enjoy the "fictitious sample" interpretation: the Beta prior is $$ \pi(\theta)=\frac{\Gamma(\alpha_0+\beta_0)}{\Gamma(\alpha_0)\Gamma(\beta_0)}\theta^{\alpha_0-1}(1-\theta)^{\beta_0-1} $$ This can be interpreted as the information contained in a sample of size $\underline{n}=\alpha_0+\beta_0-2$ (loosely so, as $\underline{n}$ need not be an integer of course) with $\alpha_0-1$ successes: $$ \pi(\theta)=\frac{\Gamma(\alpha_0+\beta_0)}{\Gamma(\alpha_0)\Gamma(\beta_0)}\theta^{\alpha_0-1}(1-\theta)^{\underline{n}-(\alpha_0-1)} $$ Hence, if you take $\alpha_0+\beta_0-2=400$ and $\alpha_0-1=272$, this corresponds to prior parameters $\alpha_0=273$ and $\beta_0=129$. "Halving" the sample would lead to prior parameters $\alpha_0=137$ and $\beta_0=65$.

Now, recall that the prior mean and prior variance of the beta distribution are given by $$ \mu=\frac{\alpha}{\alpha+\beta}\qquad\text{and}\qquad\sigma^2=\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)} $$ Halving the sample keeps the prior mean (almost) where it is:

alpha01 <- 273
beta01 <- 129
(mean01 <- alpha01/(alpha01+beta01))
alpha02 <- 137
beta02 <- 65
(mean02 <- alpha02/(alpha02+beta02))

but increases the prior variance from

(priorvariance01 <- (alpha01*beta01)/((alpha01+beta01)^2*(alpha01+beta01+1)))
[1] 0.0005407484

to

(priorvariance02 <- (alpha02*beta02)/((alpha02+beta02)^2*(alpha02+beta02+1)))
[1] 0.001075066

as desired.
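For readers without R, the same arithmetic can be checked in Python (plain Beta-distribution moment formulas, nothing library-specific):

```python
def beta_mean_var(a, b):
    """Mean and variance of a Beta(a, b) distribution."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

m1, v1 = beta_mean_var(273, 129)   # full fictitious sample
m2, v2 = beta_mean_var(137, 65)    # halved fictitious sample
print(round(m1, 3), round(m2, 3))  # 0.679 0.678 -- the mean barely moves
print(round(v1, 7), round(v2, 7))  # 0.0005407 0.0010751 -- variance roughly doubles
```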
29,469
Logistic regression output and probability [duplicate]
What is the interpretation of the number that is the output of the logistic regression function?

Logistic regression as understood in recent decades is explicitly used as a model for Bernoulli or binomial data (with extensions to other cases such as the multinomial), where the model is for the parameter, $p$, which is indeed a probability. However, logistic regression has its origins in modelling the growth of a proportion over time [1] (which may be continuous), so in its origins it bears a close link to nonlinear models that fit logistic growth curves.

And frankly anything between 0 and 1, what else could it be other than a probability.

Well, something between 0 and 1 could be a model for a continuous fraction, such as the proportion of substance A in a mix of things. Can logistic regression model such a thing? The model for the mean makes sense, but the model for the variance doesn't necessarily make sense; in logistic regression the variance function is of the form $\mu(1-\mu)$. This is directly related to the variance of a Bernoulli. However (for example) one could consider approximating something like a beta distribution (which has variance function proportional to $\mu(1-\mu)$) by a quasi-binomial model; then we wouldn't necessarily be modelling a probability as such, but we'd still arguably be using logistic regression to do it. So while it's nearly always conceived as a model for a probability, it doesn't necessarily have to be.

Suppose it is a probability, or more exactly the probability of a 'true', '1', or 'positive' classification of a point in the domain. How is this justified?

I don't understand the question here. If it's explicitly a model for $p$ in a Bernoulli, what additional sort of justification do you seek? Of course the link function may be wrong (while that's no great difficulty - since other links could be used - we would no longer be doing logistic regression).

[1]: Cramer, J.S. (2002), "The Origins of Logistic Regression," Tinbergen Institute, December. http://papers.tinbergen.nl/02119.pdf
29,470
What is asymptotic error?
It means the error of a method when you run the entire population through it. It's a useful measure of the method, as it tells you the best you could ever get from it. You also want to know how quickly the method converges to the asymptotic error, because in most cases you can't actually run the entire population.
29,471
What is asymptotic error?
What it means is just the error to which the algorithm is asymptotic. Suppose an algorithm has a limiting error: the error it approaches as the number of iterations grows, no matter how many are run. The error at the $i^{th}$ iteration is then (typically) larger than this limiting error. The text is comparing a larger terminal error that is quickly achieved in few iterations with a smaller terminal error that takes more iterations to achieve. A problem with this is that the terminal error may be only approximately constant, so the language used is inexact. In the quote, "lower" means smaller absolute error, and "higher" means larger absolute error.
29,472
How to propagate uncertainty into the prediction of a neural network?
$\newcommand{\bx}{\mathbf{x}}$ $\newcommand{\by}{\mathbf{y}}$ I personally prefer the Monte Carlo approach because of its ease. There are alternatives (e.g. the unscented transform), but these are certainly biased.

Let me formalise your problem a bit. You are using a neural network to implement a conditional probability distribution over the outputs $\by$ given the inputs $\bx$, where the weights are collected in $\theta$: $$ p_\theta(\by~\mid~\bx). $$ Let us not care about how you obtained the weights $\theta$ (probably some kind of backprop) and just treat that as a black box that has been handed to us.

As an additional property of your problem, you assume that you only have access to some "noisy version" $\tilde \bx$ of the actual input $\bx$, where $$\tilde \bx = \bx + \epsilon$$ with $\epsilon$ following some distribution, e.g. Gaussian. Note that you then can write $$ p(\tilde \bx\mid\bx) = \mathcal{N}(\tilde \bx| \bx, \sigma^2_\epsilon) $$ where $\epsilon \sim \mathcal{N}(0, \sigma^2_\epsilon).$

Then what you want is the distribution $$ p(\by\mid\tilde \bx) = \int p(\by\mid\bx) p(\bx\mid\tilde \bx) d\bx, $$ i.e. the distribution over outputs given the noisy input and a model from clean inputs to outputs.

Now, if you can invert $p(\tilde \bx\mid\bx)$ to obtain $p(\bx\mid\tilde \bx)$ (which you can in the case of a Gaussian random variable and others), you can approximate the above with plain Monte Carlo integration through sampling: $$ p(\by\mid\tilde \bx) \approx \frac{1}{N}\sum_{i=1}^N p(\by\mid\bx_i), \quad \bx_i \sim p(\bx\mid\tilde \bx). $$ Note that this can also be used to calculate all other kinds of expectations of functions $f$ of $\by$: $$ \mathbb{E}[f(\by)\mid\tilde \bx] \approx \frac{1}{N}\sum_{i=1}^N f(\by_i), \quad \bx_i \sim p(\bx\mid\tilde \bx), \by_i \sim p(\by\mid\bx_i). $$ Without further assumptions, there are only biased approximations.
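A minimal Monte Carlo sketch in Python: the "network" here is a stand-in nonlinear function (not a trained model), and $p(\by\mid\bx)$ is collapsed to a point prediction for simplicity, so the spread in the outputs reflects only the input noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def network(x):
    """Stand-in for the trained model's prediction."""
    return np.tanh(2.0 * x) + 0.5 * x

x_tilde = 1.3     # the observed noisy input
sigma_eps = 0.2   # known noise scale, so p(x | x_tilde) = N(x_tilde, sigma_eps^2)

# Monte Carlo: sample plausible clean inputs, push each through the network
xs = rng.normal(x_tilde, sigma_eps, size=10_000)
ys = network(xs)

# predictive mean and the spread induced by the input noise
print(ys.mean(), ys.std())
```

With a probabilistic model, one would additionally sample $\by_i \sim p(\by\mid\bx_i)$ for each draw instead of taking the point prediction.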
29,473
How to propagate uncertainty into the prediction of a neural network?
It depends on what kind of error you want to determine. Training data vs applying data differences A technique used to estimate the errors on the predictions is to train several algorithms using different random seeds. For most algorithms, this will lead to different predictions: the variation may give you an estimate. Classification specific So in order to determine the classification error, there are roughly two methods: event by event: You can simply look at the predictions, create (for example) bins and divide label 1 by label 0. Having 100 events of label 1 with a prediction between 0.6-0.65 and 50 events of label 0 with a prediction in the same range simply yields a 2/3 chance for an event in that bin to be of class 1. Or, in other words, with a 1/3 chance, your events in that bin are not class 1. Total efficiency: This approach is the one to use if it fits your case; it is more specific. You first determine where you apply your cut (meaning: what's the threshold on the predictions for an event to be class 1 or 0; this is usually not 0.5 but an optimized figure of merit). Let's say you cut at 0.9 (so <0.9 -> class 0, else class 1). Then you can count: how many class 1 events are lost (lower than 0.9)? How many class 0 events are still in the sample? This gives you an estimation of the error on your classifier's output. Regression specific tag-and-probe: You can use known values, enter them and get their error. Then you may assume that values in between two of those roughly have the average error. In other words, you extrapolate the error from known values. Simple average: Simply take the average of the errors. If they are roughly equally distributed, this is a good way to go.
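The event-by-event binning idea can be sketched as follows; the helper name is mine, not from any particular library.

```python
import numpy as np

def bin_class1_rate(preds, labels, n_bins=10):
    """Fraction of class-1 events in each prediction bin (illustrative helper)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(preds, edges) - 1, 0, n_bins - 1)
    rates = np.full(n_bins, np.nan)       # NaN marks empty bins
    for b in range(n_bins):
        mask = idx == b
        if mask.any():
            rates[b] = labels[mask].mean()
    return edges, rates

# The answer's example: 100 class-1 and 50 class-0 events near 0.62
preds = np.full(150, 0.62)
labels = np.array([1] * 100 + [0] * 50)
edges, rates = bin_class1_rate(preds, labels)
```

With this data, the bin covering 0.6-0.7 gets a class-1 rate of 2/3, matching the answer's arithmetic.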
29,474
Determine outliers using IQR or standard deviation?
It seems like you have so many outliers that only looking at the residuals after fitting with, say, ordinary least squares might be misleading---some sample points could have such high influence (or "leverage") that the fit itself is changed, perhaps misleadingly, just to make the residuals corresponding to those high-leverage points small. So you probably need robust fitting methods! You can start by looking at the answers to How to optimize a regression by removing 10% "worst" data points? and maybe search this site for the robust tag: https://stats.stackexchange.com/questions/tagged/robust
29,475
Updating a Bayes factor
Presumably the purpose of a recursive equation for Bayes factor would be when you have already calculated the Bayes factor for $n$ data points, and you want to be able to update this with one additional data point. It does seem that it is possible to do this without recomputing the marginals of the previous data vector, so long as the form of the posterior function $\pi_n$ is known. Assuming we know the form of this function (and assuming IID data as in your question), the predictive density can be written as: $$\begin{equation} \begin{aligned} m(x_{n+1} | x_1,...,x_n) &= \int \limits_\Theta f(x_{n+1}|\theta) \pi_n(d \theta | x_1,...,x_n). \\[6pt] \end{aligned} \end{equation}$$ Hence, you have: $$\begin{equation} \begin{aligned} m(x_1,...,x_{n+1}) &= m(x_1,...,x_n) \int \limits_\Theta f(x_{n+1}|\theta) \pi_n(d \theta | x_1,...,x_n). \\[6pt] \end{aligned} \end{equation}$$ Comparing two model classes via Bayes factor, we then get the recursive equation: $$\begin{equation} \begin{aligned} \mathfrak{B}_{12}(x_1,...,x_{n+1}) &= \mathfrak{B}_{12}(x_1,...,x_{n}) \cdot \frac{\int _{\Theta_1} f(x_{n+1}|\theta) \pi_{1,n}(d \theta | x_1,...,x_n)}{\int _{\Theta_2} f(x_{n+1}|\theta) \pi_{2,n}(d \theta | x_1,...,x_n)}. \\[6pt] \end{aligned} \end{equation}$$ This still involves integration over the parameter range, so I agree with your view that there does not appear to be any computational advantage over just recomputing the Bayes factor via the initial formula you give. Nevertheless, you can see that this does not require you to recompute the marginals for the previous data vector. (Instead we compute the predictive densities of the new data point conditional on previous data, under each of the model classes.) Like you, I don't really see any computational advantage of this, unless it happens that this integral formula simplifies easily. In any case, I suppose it gives you another formula for updating the Bayes factor.
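As a concrete check of the recursion, here is a sketch with two conjugate normal models, where the one-step-ahead predictive densities are available in closed form. The model choices (unit observation variance, normal priors on the mean) are mine, purely for illustration.

```python
import numpy as np

def mvn_logpdf(x, mean, cov):
    # Log-density of a multivariate normal (used for the batch marginal).
    d = x - mean
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (len(x) * np.log(2 * np.pi) + logdet + d @ np.linalg.solve(cov, d))

def log_marginal(x, mu0, tau0):
    # Batch log m(x_1,...,x_n) for x_i ~ N(theta, 1), theta ~ N(mu0, tau0^2).
    n = len(x)
    cov = np.eye(n) + tau0**2 * np.ones((n, n))
    return mvn_logpdf(np.asarray(x), mu0 * np.ones(n), cov)

def sequential_log_bf(x, prior1, prior2):
    # Accumulate log BF_12 one observation at a time via predictive densities.
    log_bf = 0.0
    post = [list(prior1), list(prior2)]       # [mu_n, tau_n] for each model
    for xi in x:
        lp = []
        for mu_n, tau_n in post:
            var = 1.0 + tau_n**2              # predictive variance
            lp.append(-0.5 * (np.log(2 * np.pi * var) + (xi - mu_n)**2 / var))
        log_bf += lp[0] - lp[1]
        for k, (mu_n, tau_n) in enumerate(post):   # conjugate posterior update
            prec = 1.0 / tau_n**2 + 1.0
            post[k] = [(mu_n / tau_n**2 + xi) / prec, (1.0 / prec) ** 0.5]
    return log_bf

rng = np.random.default_rng(0)
x = rng.normal(0.2, 1.0, size=30)
batch = log_marginal(x, 0.0, 1.0) - log_marginal(x, 1.0, 1.0)
seq = sequential_log_bf(x, (0.0, 1.0), (1.0, 1.0))
```

The telescoping product of predictive densities reproduces the batch Bayes factor to floating-point precision, which is exactly the identity in the answer.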
29,476
Updating a Bayes factor
There is no computational advantage afaik but there is a conceptual one. By writing the Bayes factor as a product of partial Bayes factors and computing each one, we see how much each new observation should alter the odds, given the previous ones.
29,477
Difference between averaging data then fitting and fitting the data then averaging
Imagine we're in a panel data context where there's variation across time $t$ and across firms $i$. Think of each time period $t$ as a separate experiment. I understand your question as whether it's equivalent to estimate an effect using: Cross-sectional variation in time series averages. Time series averages of cross-sectional variation. The answer in general is no. The setup: In my formulation, we can think of each time period $t$ as a separate experiment. Let's say you have a balanced panel of length $T$ over $n$ firms. If we break each time period apart $(X_t, \mathbf{y}_t)$ etc., we can write the overall data as: $$ Y = \begin{bmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \\ \ldots \\ \mathbf{y}_T \end{bmatrix} \quad \quad X = \begin{bmatrix} X_1 \\ X_2 \\ \ldots \\ X_T \end{bmatrix} $$ Average of fits: \begin{align*} \frac{1}{T} \sum_t \mathbf{b}_t &= \frac{1}{T} \sum_t \left(X_t'X_t \right)^{-1} X_t' \mathbf{y}_t \\ &= \frac{1}{T} \sum_t S^{-1}_t \left( \frac{1}{n} \sum_i \mathbf{x}_{t,i} y_{t,i}\right) \quad \text{where } S_t = \frac{1}{n} \sum_i \mathbf{x}_{t,i} \mathbf{x}_{t,i}' \end{align*} Fit of averages: This isn't in general equal to the estimate based upon cross-sectional variation of time series averages (i.e. the between estimator). $$ \left( \frac{1}{n} \sum_i \bar{\mathbf{x}}_i \bar{\mathbf{x}}_i' \right)^{-1} \frac{1}{n} \sum_i \bar{\mathbf{x}}_i \bar{y}_i $$ Where $\bar{\mathbf{x}}_i = \frac{1}{T} \sum_t \mathbf{x}_{t, i}$ etc. Pooled OLS estimate: Something perhaps useful to think about is the pooled OLS estimate. What is it?
\begin{align*} \hat{\mathbf{b}} &= \left(X'X\right)^{-1}X'Y \\ &= \left( \frac{1}{nT} \sum_t X_t'X_t \right)^{-1} \left( \frac{1}{nT} \sum_t X_t' \mathbf{y}_t \right) \end{align*} Then use $\mathbf{b}_t = \left(X_t'X_t \right)^{-1}X_t' \mathbf{y}_t$: \begin{align*} &= \left( \frac{1}{nT} \sum_t X_t'X_t \right)^{-1} \left( \frac{1}{nT} \sum_t X_t'X_t \mathbf{b}_t \right) \end{align*} Let $S = \frac{1}{nT} X'X = \frac{1}{nT} \sum_t X_t'X_t$ and $S_t = \frac{1}{n} X_t'X_t$ be our estimates of $\operatorname{E}[\mathbf{x}\mathbf{x}']$ over the full sample and in period $t$ respectively. Then we have: \begin{align*} \hat{\mathbf{b}} &= \frac{1}{T} \sum_t \left( S^{-1} S_t \right) \mathbf{b}_t \end{align*} This is sort of like an average of the different time-specific estimates $\mathbf{b}_t$, but it's a bit different. In some loose sense, you're giving more weight to periods with higher variance of the right hand side variables. Special case: right hand side variables are time invariant and firm specific. If the right hand side variables for each firm $i$ are constant across time (i.e. $X_{t_1} = X_{t_2}$ for any $t_1$ and $t_2$), then $S = S_t$ for all $t$ and we would have: $$\hat{\mathbf{b}} = \frac{1}{T} \sum_t \mathbf{b}_t$$ Fun comment: This is the case in Fama and MacBeth (1973), where they applied this technique of averaging cross-sectional estimates to obtain consistent standard errors when estimating how expected returns vary with firms' covariance with the market (or other factor loadings). The Fama-MacBeth procedure is an intuitive way to get consistent standard errors in the panel context when error terms are cross-sectionally correlated but independent across time. A more modern technique that yields similar results is clustering on time.
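The special case at the end is easy to verify numerically. The simulation below is mine (arbitrary dimensions and coefficients): with time-invariant regressors, pooled OLS coincides with the simple average of the per-period estimates.

```python
import numpy as np

rng = np.random.default_rng(42)
n, T, k = 60, 8, 3
beta = np.array([1.0, -2.0, 0.5])

X0 = rng.normal(size=(n, k))        # firm-specific, time-invariant regressors

b_t = []                            # per-period OLS estimates
ys = []
for _ in range(T):
    y = X0 @ beta + rng.normal(size=n)
    b_t.append(np.linalg.lstsq(X0, y, rcond=None)[0])
    ys.append(y)

# Pooled OLS: stack the T "experiments" on top of each other.
X = np.vstack([X0] * T)
Y = np.concatenate(ys)
b_pooled = np.linalg.lstsq(X, Y, rcond=None)[0]
```

Because $S = S_t$ here, `b_pooled` equals the average of the `b_t` up to floating-point error, exactly as derived.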
29,478
Difference between averaging data then fitting and fitting the data then averaging
(Note: I do not have enough reputation to comment, so am posting this as an answer.) For the particular question posed, the answer by fcop is correct: fitting the average is the same as averaging the fits (at least for linear least squares). However it is worth mentioning that either of these naïve "online" approaches can give biased results, compared to fitting all the data at once. As the two are equivalent, I will focus on the "fit the average" approach. Essentially, fitting the averaged curves $\bar{y}[x]=\langle y[x]\rangle$ ignores the relative uncertainty in $y$ values between different $x$ points. For example if $y_1[x_1]=y_2[x_1]=2$, $y_1[x_2]=1$, and $y_2[x_2]=3$, then $\bar{y}[x_1]=\bar{y}[x_2]=2$, but any curve fit should care much more about misfit at $x_1$ compared to $x_2$. Note that most scientific software platforms should have tools to compute/update a true "online" least squares fit (known as recursive least squares). So all the data can be used (if this is desirable).
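The recursive ("online") least-squares update mentioned at the end can be sketched in a few lines. This is the standard textbook rank-one form; initializing P as a large multiple of the identity is an assumption that makes the recursion start out close to ordinary least squares.

```python
import numpy as np

def rls_step(theta, P, x, y):
    """One recursive-least-squares update for a new pair (x, y)."""
    Px = P @ x
    K = Px / (1.0 + x @ Px)          # gain vector
    theta = theta + K * (y - x @ theta)
    P = P - np.outer(K, Px)          # rank-one update of (X'X)^{-1}
    return theta, P

rng = np.random.default_rng(0)
n, k = 200, 3
X = rng.normal(size=(n, k))
y = X @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

theta = np.zeros(k)
P = 1e6 * np.eye(k)                  # vague initialization, roughly a flat prior
for xi, yi in zip(X, y):
    theta, P = rls_step(theta, P, xi, yi)

b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
```

Streaming through the data this way recovers the full-batch OLS solution, so no averaging of the raw data is needed.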
29,479
Does modeling with Random Forests require cross-validation?
The OOB error is calculated, for each observation, using only the trees that did not have this particular observation in their bootstrap sample; see this related question. This is very roughly equivalent to two-fold cross validation, as the probability of a particular observation being in a particular bootstrap sample is $1-(1-\frac{1}{N})^N \approx 1-e^{-1} \approx 0.63$. As @Wouter points out, you will probably want to do cross validation for parameter tuning, but as an estimate of test set error the OOB error should be fine.
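The roughly 0.63 inclusion probability is easy to check by simulation; the sample sizes below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 500, 4000

# How often does observation 0 appear in a bootstrap sample of size N?
hits = sum(0 in rng.integers(0, N, size=N) for _ in range(trials))
p_sim = hits / trials

p_theory = 1 - (1 - 1 / N) ** N      # approaches 1 - e^{-1} for large N
```

So each observation is "out of bag" for roughly a third of the trees, which is what makes the OOB error a usable held-out estimate.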
29,480
Testing Sharpe Ratio significance
Bailey and Marcos López de Prado designed a method to do exactly that. They use the fact that Sharpe ratio estimates are asymptotically normally distributed, even if the returns themselves are not. [The asymptotic-variance expression appears as an image in the original post and did not survive extraction.] Here $\gamma_3$ and $\gamma_4$ are the skewness and kurtosis of the returns. They use this expression to derive the Probabilistic Sharpe Ratio (PSR). $SR^*$ is the value of the Sharpe ratio under the null hypothesis; at the 5% significance level, the Sharpe ratio is significantly greater than $SR^*$ if the estimated PSR is larger than 0.95.
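Since the referenced expressions were images, here is a sketch of the PSR. Treat the exact form of the variance term as my recollection of Bailey and López de Prado's paper rather than a verbatim transcription; `skew` is $\gamma_3$ and `kurt` is (non-excess) kurtosis $\gamma_4$.

```python
import math

def psr(sr_hat, sr_star, n, skew=0.0, kurt=3.0):
    """Probabilistic Sharpe Ratio: confidence that the true SR exceeds sr_star."""
    # Asymptotic standard error of the SR estimate, with a Mertens-style
    # correction for non-normal returns (as recalled, not a verbatim quote).
    se = math.sqrt((1 - skew * sr_hat + (kurt - 1) / 4 * sr_hat**2) / (n - 1))
    z = (sr_hat - sr_star) / se
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
```

At `sr_star` equal to the estimate the PSR is 0.5, and for `sr_hat > sr_star` it grows with sample size, which matches the decision rule in the answer (reject the null when PSR exceeds 0.95).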
29,481
What is the advantage of non-negativity in matrix factorization?
Take a look at Jester, a well-known data set of jokes which uses continuous ratings in the range of -10.0 to +10.0. A lot of papers have used this dataset for matrix factorization techniques with no ill effects. There is no such positive-ratings-only constraint on a mathematical or technical level. The reason we typically see predominantly positive rating matrices likely has something to do with websites/services not wanting their products to have negative ratings: even one star somehow appears better than a negative rating; it's the difference between liking something less and hating it. It may also have to do with how users perceive averages: if you use negative and positive ratings, some products could on average have 0 stars, which actually means neutral on that rating scale. But if you're used to the 5-star rating scales used on other websites, neutral is 3 stars and 0 stars would mean "really bad".
29,482
What is the advantage of non-negativity in matrix factorization?
Some good points have been raised above. I'll add three ways in which non-negativity constraints improve the capture of interpretable factor models: Non-negativity encourages imputation of missing signal. Because signals can only be explained in one direction--additively--much more missing signal is imputed than in SVD, for example. If negative values are permitted in the models, the algorithm may be "tempted" to subtract away real data points rather than adding to missing points. It is often the case that data in the matrix to be factorized is not complete due to signal dropout or incomplete sampling, thus imputation is important. Non-negativity enforces sparsity. While unconstrained decompositions are almost never zero, non-negative models are exactly zero wherever a given factor does not contribute to a signal at all. Sparse representations are desirable when we want to discover distinct feature sets or sample relationships. Non-negativity ensures that factors will not cancel one another out. For instance, if one factor "overcorrects" a signal, another factor may try to "counter-correct" to balance it out. When factors are positive or zero only, they can never counter-correct, and can only explain additive signal. Generally, in a well-conditioned non-negative input matrix, an unconstrained orthogonal factorization will be nearly non-negative. However, imposing non-negativity enforces reasonable theoretical expectations and can accelerate convergence.
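The nonnegativity guarantee is visible in even a bare-bones factorization. The implementation below is the classic Lee-Seung multiplicative update for the Frobenius loss, not any particular library's NMF; the dimensions are arbitrary.

```python
import numpy as np

def nmf(V, r, n_iter=1000, seed=0):
    """Factor V ~ W @ H with W, H >= 0 (Lee-Seung multiplicative updates)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 1e-3
    H = rng.random((r, V.shape[1])) + 1e-3
    for _ in range(n_iter):
        # Multiplying by ratios of nonnegative quantities keeps W, H >= 0.
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Exact nonnegative rank-3 matrix; the factorization should recover it well.
rng = np.random.default_rng(1)
V = rng.random((20, 3)) @ rng.random((3, 15))
W, H = nmf(V, r=3)
```

Because the updates only ever multiply by nonnegative ratios, factors can never flip sign to cancel each other, which is precisely the third point above.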
29,483
Problems with Outlier Detection
@Jerome Baum's comment is spot on. To bring the Gelman quote here: Outlier detection can be a good thing. The problem is that non-statisticians seem to like to latch on to the word “outlier” without trying to think at all about the process that creates the outlier, also some textbooks have rules that look stupid to statisticians such as myself, rules such as labeling something as an outlier if it more than some number of sd’s from the median, or whatever. The concept of an outlier is useful but I think it requires context—if you label something as an outlier, you want to try to get some sense of why you think that. To add a little bit more, how about we first define outlier. Try to do so rigorously without referring to anything visual like "looks like it's far away from other points". It's actually quite hard. I'd say that an outlier is a point that is highly unlikely given a model of how points are generated. In most situations, people don't actually have a model of how the points are generated, or if they do it is so over-simplified as to be wrong much of the time. So, as Andrew says, people will do things like assume that some kind of Gaussian process is generating points and so if a point is more than a certain number of SD's from the mean, it's an outlier. Mathematically convenient, not so principled. And we haven't even gotten into what people do with outliers once they are identified. Most people want to throw these inconvenient points away, for example. In many cases, it's the outliers that lead to breakthroughs and discoveries, not the non-outliers! There's a lot of ad-hoc'ery in outlier detection, as practiced by non-statisticians, and Andrew is uncomfortable with that.
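The "k standard deviations" rule the quote pokes at can fail in an instructive way: the outlier inflates the very SD used to judge it. In fact, the largest possible z-score in a sample of size $n$ is $(n-1)/\sqrt{n}$, so with six points no observation can ever exceed $z \approx 2.04$. A quick demonstration (thresholds and data are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
data = np.append(rng.normal(size=5), 1000.0)   # five ordinary points + outlier

# Classic rule: flag points more than 3 sample SDs from the mean.
# The outlier drags both the mean and the SD, so its own z-score stays small.
z = (data - data.mean()) / data.std(ddof=1)

# A robust alternative: distance from the median in MAD units.
mad = np.median(np.abs(data - np.median(data)))
robust_z = (data - np.median(data)) / (1.4826 * mad)
```

The SD-based rule never fires here, while the median/MAD version flags the outlier overwhelmingly, which is one concrete reason the mechanical "number of SDs" rules look stupid to statisticians.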
29,484
Problems with Outlier Detection
This demonstrates the classic tug of war between the two types of objectives for statistical analyses such as regression: descriptive vs. predictive. (Pardon the generalizations in my comments below.)

From the statistician's point of view, description usually matters more than prediction. Hence, they are inherently "biased" towards explanation. Why is there an outlier? Is it really an error in data entry (extra zeros at the end of a value), or is it a valid data point which just happens to be extreme? These are important questions for a statistician.

OTOH, data scientists are more interested in prediction than description. Their objective is to develop a strong model that does a great job of predicting a future outcome (e.g., purchase, attrition). If there's an extreme value in one of the fields, a data scientist would happily cap that value (to the 98th percentile value, for instance) if that helps improve the predictive accuracy of the model.

I don't have a general inclination towards either of these two approaches. However, whether methods such as stepwise regression and outlier treatment are "a bit of a joke" or not depends on which side of the fence you are standing on.
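As an illustration of the capping step described above, a short Python sketch (a plain nearest-rank empirical percentile; real toolkits differ in their quantile conventions, and with only 10 points the 98th percentile is the maximum itself, so the demo caps at the 90th):

```python
def cap_at_percentile(values, pct=98):
    """Winsorize from above: replace values beyond the pct-th empirical
    percentile with the percentile value itself."""
    ordered = sorted(values)
    # nearest-rank empirical percentile (one simple convention among several)
    idx = max(0, int(round(pct / 100 * len(ordered))) - 1)
    cap = ordered[idx]
    return [min(v, cap) for v in values]

raw = [5, 7, 6, 8, 5, 6, 7, 9, 6, 500]   # one extreme entry
capped = cap_at_percentile(raw, pct=90)  # the 500 becomes 9
```

Whether this is a sensible thing to do is exactly the descriptive-vs-predictive question above: it improves many predictive fits while destroying the very observation a statistician would want to investigate.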
29,485
Solution to exercise 2.2a.16 of "Robust Statistics: The Approach Based on Influence Functions"
Older statistics books used "invariant" in a slightly different way than one might expect; the ambiguous terminology persists. A more modern equivalent is "equivariant" (see the references at the end of this post). In the present context it means $$T_n(X_1+c,X_2+c,\ldots,X_n+c) = T_n(X_1,X_2,\ldots,X_n) + c$$ for all real $c$.

To address the question, then, suppose that $T_n$ has the property that for sufficiently large $n$, all real $c$, and all $m \le \varepsilon^{*}n$, $$|T_n(\mathbf{X + Y}) - T_n(\mathbf{X})| = o(|c|)$$ whenever $\mathbf Y$ differs from $\mathbf{X}$ by at most $c$ in at most $m$ coordinates. (This is a weaker condition than assumed in the definition of breakdown bound. In fact, all we really need to assume is that when $n$ is sufficiently large, the expression "$o(|c|)$" is some value guaranteed to be less than $|c|/2$ in size.)

The proof is by contradiction. Assume, accordingly, that this $T_n$ is also equivariant and suppose $\varepsilon^{*} \gt 1/2$. Then for sufficiently large $n$, $m(n) = \lfloor \varepsilon^{*}n\rfloor$ is an integer for which both $m(n)/n \le \varepsilon^{*}$ and $(n-m(n))/n \le \varepsilon^{*}$. For any real numbers $a,b$ define $$t_n(a, b) = T_n(a, a, \ldots, a,\ b, b, \ldots, b)$$ where there are $m(n)$ $a$'s and $n-m(n)$ $b$'s. By changing $m(n)$ or fewer of the coordinates we conclude both $$|t_n(a,b) - t_n(0,b)| = o(|a|)$$ and $$|t_n(a,b) - t_n(a,0)| = o(|b|).$$

For $c\gt 0$ the triangle inequality asserts $$\eqalign{ c = |t_n(c, c) - t_n(0, 0)| &\le |t_n(c, c) - t_n(c, 0)| + |t_n(c, 0) - t_n(0,0)| \\&= o(c) + o(c) \\&\lt c/2 + c/2 \\ &= c}$$ The strict inequality on the penultimate line is assured for sufficiently large $n$. The contradiction it implies, $c \lt c$, proves $\varepsilon^{*} \le 1/2.$

References

E. L. Lehmann, Theory of Point Estimation. John Wiley, 1983.
In the text (chapter 3, section 1) and an accompanying footnote Lehmann writes An estimator satisfying $\delta(X_1+a, \ldots, X_n+a) = \delta(X_1,\ldots,X_n)+a$ for all $a$ will be called equivariant ... Some authors call such estimators "invariant." Since this suggests that the estimator remains unchanged under $X_i^\prime = X_i+a$, it seems preferable to reserve that term for functions satisfying $u(x+a)=u(x)$ for all $x,a$.
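The block below is not from Lehmann or Hampel; it is just a quick numerical sanity check, in Python, that the sample median is equivariant in the sense above and behaves consistently with the bound $\varepsilon^{*} \le 1/2$:

```python
import statistics

x = [1.0, 2.0, 3.0, 4.0, 100.0]

# Equivariance: T_n(x_1 + c, ..., x_n + c) = T_n(x_1, ..., x_n) + c
c = 7.5
assert statistics.median([xi + c for xi in x]) == statistics.median(x) + c

# Breakdown intuition: replacing 2 of 5 points (40%) by huge values moves
# the median only to another original data point, but replacing 3 of 5
# (60% > 1/2) drags it arbitrarily far.
corrupt_two = [1.0, 2.0, 3.0, 1e9, 1e9]
corrupt_three = [1.0, 2.0, 1e9, 1e9, 1e9]
```

This is only an illustration of the definitions, of course; the proof above is what shows no equivariant estimator can do better than the median's 1/2.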
29,486
Reference for a story about sampling from the phone book
In his article Damned Liars and Expert Witnesses (JASA 81:394, pp 269-276, 1986) Paul Meier writes (William G.) Cochran was fond of telling of the occasion when he was called on to carry out a sampling study of, I believe, a class of retail stores, and he instructed that the sample consist of every tenth establishment of that type listed in the Yellow Pages. The judge, he said, welcomed his expert testimony as a learning experience and remarked, after Cochran had been sworn, “I am glad to hear and to learn from Professor Cochran about this scientific sampling business, because I know virtually nothing about it. In fact, about the only thing I do know is that you should not just start at the beginning and take every 10th one after that.” (at p. 270, top right). Meier provides no reference and I cannot find anything like this in searches of court cases on Google Scholar.
29,487
Unbiased estimator with minimum variance for $1/\theta$
Indeed for a Geometric ${\cal G}(\theta)$ variate, $X$, $$\mathbb{E}_\theta[X]=1/\theta=g(\theta)$$and the Rao-Blackwell theorem implies that $$\hat{\theta}(T)=\mathbb{E}_\theta\left[X_1\Bigg|\sum_{i=1}^n X_i=T\right]$$is the unique minimum variance unbiased estimator.

But rather than trying to compute this conditional expectation directly, one could remark that $$\mathbb{E}_\theta\left[X_1\Bigg|\sum_{i=1}^n X_i=T\right]=\ldots=\mathbb{E}_\theta\left[X_n\Bigg|\sum_{i=1}^n X_i=T\right]$$ hence that $$\mathbb{E}_\theta\left[X_1\Bigg|\sum_{i=1}^n X_i=T\right]=\frac{1}{n}\sum_{i=1}^n \mathbb{E}_\theta\left[X_i\Bigg|\sum_{i=1}^n X_i=T\right]=\frac{T}{n}$$

Note, incidentally, that, since $\sum_{j\ge 2} X_j$ is a Negative Binomial $\cal{N}eg(n-1,\theta)$ $$\mathbb{P}\left(\sum_{j\ge 2} X_j=m\right)={m-1\choose n-2}\theta^{n-1}(1-\theta)^{m-n+1}\mathbb{I}_{m\ge n-1}$$ hence the final sum should be $$\sum_{i=1}^{t-n+1} i {\binom{t-i-1}{n-2}}\bigg/{\binom{t-1}{n-1}}$$
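A quick Monte Carlo check in Python (my own sketch, using the convention that $X$ counts trials up to and including the first success, so $\mathbb{E}[X]=1/\theta$) that the estimator $T/n$ is indeed centered at $1/\theta$:

```python
import random

def rgeom(theta, rng):
    """Number of Bernoulli(theta) trials up to and including the first
    success (support 1, 2, ...), so E[X] = 1/theta."""
    x = 1
    while rng.random() >= theta:
        x += 1
    return x

def umvue(sample):
    """The Rao-Blackwellized estimator derived above: T/n, the sample mean."""
    return sum(sample) / len(sample)

rng = random.Random(0)
theta = 0.25
estimates = [umvue([rgeom(theta, rng) for _ in range(20)])
             for _ in range(2000)]
avg = sum(estimates) / len(estimates)   # should be close to 1/theta = 4
```

Unbiasedness, not simulation, is of course what the Rao-Blackwell argument proves; the simulation only illustrates it for one $(n, \theta)$.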
29,488
Conditional expectation of $X$ given $Z = X + Y$
As @StéphaneLaurent points out, $(X,Z)$ have a bivariate normal distribution and $E[X\mid Z] = aZ+b$. But even more can be said in this case because it is known that $$a = \frac{\operatorname{cov}(X,Z)}{\sigma_Z^2}, \quad b = \mu_X - a\mu_Z = \mu_X - \frac{\operatorname{cov}(X,Z)}{\sigma_Z^2}\mu_Z,$$ and we can use the independence of $X$ and $Y$ (which implies $\operatorname{cov}(X,Y) = 0$) to deduce that $$\begin{align} \operatorname{cov}(X,Z) &= \operatorname{cov}(X,X+Y)\\ &= \operatorname{cov}(X,X) + \operatorname{cov}(X,Y)\\ &= \sigma_X^2\\ \sigma_Z^2 &= \operatorname{var}(X+Y)\\ &= \operatorname{var}(X)+\operatorname{var}(Y) + 2\operatorname{cov}(X, Y)\\ &= \sigma_X^2+\sigma_Y^2\\ \mu_Z &= \mu_X+\mu_Y. \end{align}$$

Note that the method used above can also be applied in the more general case when $X$ and $Y$ are correlated jointly normal random variables instead of independent normal random variables.

Continuing with the calculations, we see that $$E[X\mid Z] = \frac{\sigma_X^2}{\sigma_X^2+\sigma_Y^2}(Z-\mu_Z) + \mu_X \tag{1}$$ which I find comforting because we can interchange the roles of $X$ and $Y$ to immediately write down $$E[Y\mid Z] = \frac{\sigma_Y^2}{\sigma_X^2+\sigma_Y^2}(Z-\mu_Z) + \mu_Y\tag{2}$$ and the sum of $(1)$ and $(2)$ gives $E[X\mid Z] + E[Y\mid Z] = Z$ as noted in Stéphane Laurent's answer.
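A Monte Carlo spot-check of equation $(1)$ in Python (parameter values are arbitrary; the empirical conditional mean is approximated by averaging $X$ over a thin slice of $Z$ values):

```python
import random

rng = random.Random(42)
mu_x, mu_y, s_x, s_y = 1.0, -2.0, 2.0, 3.0

# independent normals X and Y
pairs = [(rng.gauss(mu_x, s_x), rng.gauss(mu_y, s_y)) for _ in range(200_000)]

def cond_mean_formula(z):
    """E[X | Z = z] from equation (1) above."""
    mu_z = mu_x + mu_y
    return mu_x + s_x**2 / (s_x**2 + s_y**2) * (z - mu_z)

# empirical conditional mean: average X over a thin slice of Z = X + Y
z0 = 0.0
slice_x = [x for x, y in pairs if abs((x + y) - z0) < 0.05]
empirical = sum(slice_x) / len(slice_x)
```

With these parameters the formula gives $E[X\mid Z=0] = 1 + \tfrac{4}{13}(0-(-1)) \approx 1.31$, and the slice average agrees up to Monte Carlo error.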
29,489
Conditional expectation of $X$ given $Z = X + Y$
Each of the pairs $(X,Z)$ and $(Y,Z)$ has a bivariate normal distribution. Then we know that $$E(X\mid Z) = a Z+b \quad\textrm{ and } \quad E(Y \mid Z)=\alpha Z + \beta.$$ Taking the expectation yields $E(X)=aE(Z)+b$ and $E(Y)=\alpha E(Z) + \beta$. But we also have $E(X\mid Z) + E(Y \mid Z) = Z$, therefore $a+\alpha=1$ and $b+\beta=0$. Finally we have to solve a linear system of two equations and two unknown variables.
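One way to obtain the missing second equation for each pair (this step is my addition, not part of the original argument) is to multiply $E(X\mid Z)=aZ+b$ by $Z$ and take expectations, using $E\left[E(X\mid Z)\,Z\right]=E(XZ)$: $$E(XZ) = aE(Z^2) + bE(Z).$$ Subtracting $\mu_X\mu_Z = a\mu_Z^2 + b\mu_Z$ (the first equation multiplied by $\mu_Z$) gives $$\operatorname{cov}(X,Z) = a\operatorname{var}(Z) \quad\Longrightarrow\quad a = \frac{\operatorname{cov}(X,Z)}{\operatorname{var}(Z)} = \frac{\sigma_X^2}{\sigma_X^2+\sigma_Y^2},$$ with $b = \mu_X - a\mu_Z$, in agreement with the standard bivariate-normal regression coefficients.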
29,490
Intervention With Differencing
Assuming this is a toy example, to answer your first question:

1) "Even though we have differenced the ARIMA errors, to assess the intervention function which was then technically fit using the differenced series ▽Xt, is there anything we need to do in order to 'change back' the estimate of ω0 or δ from using ▽Xt to Xt?"

When you difference the data, you should difference the response/intervention variables. When you back-difference (transform) after you model, this automatically takes care of the differencing. I know this is very easy when you use SAS Proc ARIMA; I don't know how to do it in R.

Second question:

2) "Is this correct: In order to determine the gain of the intervention, I constructed the intervention mt from the parameters. Once I have mt then I compare the fitted values from the model fit4 (exp() to reverse the log) to exp(fitted values minus mt) and determine that over the observed period, the intervention resulted in 3342.37 extra units."

To determine the gain of the intervention, you need to take the exponential and then subtract 1; this gives the proportional or incremental effect. To demonstrate this in your case, see below: for the first month, the impact was 55% of original sales, and it decays rapidly. Cumulatively you have 4580 units of incremental effect (Oct 2013 through Feb 2014). (I referred to Forecasting Principles and Applications by DeLurgio, p. 518; there is an excellent, voluminous chapter on intervention analysis.) Someone please correct this if the methodology is wrong.

A pulse intervention + decay is clearly not sufficient in this case; I would use a pulse + permanent level shift, as shown in diagram (e) below, which is from the classic paper by Box and Tiao.
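To make the back-transformation concrete, a small Python sketch (all numbers hypothetical; $\omega_0$ is chosen only so that the first-month effect matches the ~55% quoted above):

```python
import math

# hypothetical pulse intervention with geometric decay on the log scale
omega0 = math.log(1.55)   # first-month log effect, so exp(omega0) - 1 = 0.55
delta = 0.3               # hypothetical decay rate
m = [omega0 * delta**k for k in range(5)]   # m_t for the 5 months

# proportional (incremental) effect on the original scale: exp(m_t) - 1
pct_effect = [math.exp(mt) - 1 for mt in m]

# incremental units against hypothetical baseline (no-intervention) sales
baseline = [100, 110, 105, 120, 115]
extra_units = sum(b * p for b, p in zip(baseline, pct_effect))
```

The key line is `math.exp(mt) - 1`: on a logged series the intervention term is multiplicative, so exponentiating and subtracting 1 converts it into the percentage uplift that can then be applied to baseline sales.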
29,491
Intervention With Differencing
@forecaster After allowing AUTOBOX to identify 3 outliers using 29 values (not inappropriate in my experience), a useful model was found. The residual acf plot does not suggest an under-specified model. The Actual/Fit/Forecast plot is here, with Fit/Forecast here. Forecaster had (correctly) previously mentioned how a pulse variable can morph into a level/step variable when a denominator coefficient of nearly 1.0 is introduced. In finding two level shifts (the most recent one starting in 9/2013) and a pulse at 10/2013, the model presents a clearer picture. In terms of the impact of the pulse at 10/13, it is simply the value of the coefficient. HTH
29,492
R - power.prop.test, prop.test, and unequal sample sizes in A/B tests
"Is this method sound or at least on the right track?"

Yes, I think it's a pretty good approach.

"Could I specify alt="greater" on prop.test and trust the p-value even though power.prop.test was for a two-sided test?"

I'm not certain, but I think you'll need to use alternative="two.sided" for prop.test.

"What if the p-value was greater than .05 on prop.test? Should I assume that I have a statistically significant sample but there is no statistically significant difference between the two proportions? Furthermore, is statistical significance inherent in the p-value in prop.test - i.e. is power.prop.test even necessary?"

Yes, if the p-value is greater than .05 then you cannot conclude there is a detectable difference between the samples. Yes, statistical significance is inherent in the p-value, but power.prop.test is still necessary before you start your experiment, to determine your sample size. power.prop.test is used to set up your experiment; prop.test is used to evaluate its results.

BTW - you can calculate the confidence interval for each group and see if they overlap at your confidence level. You can do that by following these steps for Calculating Many Confidence Intervals From a t Distribution. To visualize what I mean, look at this calculator with your example data plugged in: http://www.evanmiller.org/ab-testing/chi-squared.html#!2300/20000;2100/20000@95 Notice the graphic it provides that shows the range of the confidence interval for each group.

"What if I can't do a 50/50 split and need to do, say, a 95/5 split? Is there a method to calculate sample size for this case?"

This is why you need to use power.prop.test: the split doesn't matter. What matters is that you meet the minimum sample size for each group. If you do a 95/5 split, it will just take longer to hit the minimum sample size for the variation that is getting the 5%.

"What if I have no idea what my baseline prediction should be for proportions? If I guess and the actual proportions are way off, will that invalidate my analysis?"

You'll need to draw a line in the sand, guess a reasonable detectable effect, and calculate the necessary sample size. If you don't have enough time, resources, etc. to meet the sample size calculated by power.prop.test, then you'll have to lower your detectable effect. I usually set it up like this and run through different delta values to see what sample size would be needed for each effect:

# Significance Level (alpha)
alpha <- .05
# Statistical Power (1 - Beta)
beta <- 0.8
# Baseline conversion rate
p <- 0.2
# Minimum Detectable Effect
delta <- .05

power.prop.test(p1=p, p2=p+delta, sig.level=alpha, power=beta, alternative="two.sided")
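If you want the formula itself rather than R's black box, the normal-approximation computation behind power.prop.test can be written out directly. A standard-library Python sketch (it reproduces power.prop.test's per-group n for these inputs, up to rounding):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Per-group n for a two-sided two-sample test of proportions,
    via the usual normal approximation (pooled SD under H0,
    unpooled under H1)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value
    z_b = NormalDist().inv_cdf(power)           # power quantile
    p_bar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * p_bar * (1 - p_bar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)

# baseline 20%, minimum detectable effect 5 percentage points
n = sample_size_two_proportions(0.2, 0.25)   # ~1094 per group
```

Writing it out makes the trade-off explicit: the required n scales with the inverse square of the detectable effect, which is why lowering the effect you care about inflates the sample size so quickly.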
29,493
Classification with mislabeled data
This problem is known as "label noise" and there are a number of methods of dealing with it (essentially you need to include the possibility of incorrect labelling of patterns into the model and infer whether the pattern has been mislabelled, or actually belongs on the wrong side of the decision boundary). There is a nice paper by Bootkrajang and Kaban on this topic, which would be a good place to start. This paper by Lawrence and Scholkopf is also well worth investigating. However, research on this problem has quite a long history; IIRC there is a discussion of this in McLachlan's book "Discriminant Analysis and Statistical Pattern Recognition".
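To make the "infer whether the pattern has been mislabelled" idea concrete, here is a minimal Python sketch of the probability bookkeeping such label-noise models do. The flip rates rho_01, rho_10 and the function names are hypothetical; a real method like Bootkrajang and Kaban's estimates the flip rates jointly with the classifier rather than taking them as given.

```python
def noisy_pos_prob(p_clean, rho_01, rho_10):
    """P(observed label = 1 | x), given the clean model's P(y = 1 | x) = p_clean.
    rho_01 = P(a true 0 is recorded as 1); rho_10 = P(a true 1 is recorded as 0)."""
    return (1 - rho_10) * p_clean + rho_01 * (1 - p_clean)

def flip_posterior(p_clean, rho_01, rho_10):
    """P(an observed '1' label is actually a flipped 0 | x): the per-pattern
    'was this mislabelled?' inference via Bayes' rule."""
    flipped = rho_01 * (1 - p_clean)
    return flipped / (flipped + (1 - rho_10) * p_clean)
```

The second function is exactly the inference the answer alludes to: given an observed positive label, how likely it is that the label was flipped rather than genuine.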
Classification with mislabeled data
If you have a considerable amount of data, I would suggest first making tests with subsets of the data, so that you do not have all the mislabeled data in training. Maybe using some technique with multiple weak classifiers also helps.
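As a toy illustration of the "subsets plus multiple weak classifiers" suggestion, here is a hedged Python sketch: weak one-feature threshold classifiers are trained on bootstrap subsets and combined by majority vote, so mislabeled points only sway some of the models. The classifier, data shape, and parameter choices are all invented for illustration.

```python
import random

def train_threshold(xs, ys):
    """Weak learner: pick the threshold on a single feature that minimises
    training error (predict positive when x >= threshold)."""
    best_t, best_err = xs[0], float("inf")
    for t in sorted(set(xs)):
        err = sum((x >= t) != y for x, y in zip(xs, ys))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def bagged_predict(xs, ys, x_new, n_models=25, seed=0):
    """Train weak classifiers on bootstrap subsets and combine by majority
    vote, so no single mislabeled subset dominates the prediction."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        t = train_threshold([xs[i] for i in idx], [ys[i] for i in idx])
        votes += x_new >= t
    return votes > n_models / 2
```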
Should I report credible intervals instead of confidence intervals?
The type of interval indicates what type of method you used. If a credible interval (or Bayesian variant), it means a Bayesian method was used. If a confidence interval, then a frequentist method was used. Re: Or is it that in practice, they are so often overlapping that it doesn't matter at all? As long as the conditions to use the methods are reasonably satisfied (e.g. "independence of observations" is a requirement for many methods), the Bayesian method doesn't use an informative prior, the sample isn't very small, and the models / methods are analogous, the credible and confidence intervals will be close to each other. The reason: the likelihood will dominate the Bayesian prior, and the likelihood is what is typically used in frequentist methods. I would suggest not fretting about which to use. If you want an informative prior, then be sure to use a Bayesian method. If not, then choose a suitable method and context (frequentist or Bayesian), check to make sure the conditions required to apply the method are reasonably satisfied (so important but so rarely done!), and then move forward if the method is appropriate for the type of data.
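For a concrete feel of how close the two kinds of interval get under a flat prior, here is a small Python sketch. The data (40 successes out of 100) is made up, and the credible interval is approximated by Monte Carlo draws from the Beta posterior rather than exact quantiles.

```python
import random
from statistics import NormalDist

successes, n = 40, 100          # made-up data: 40 successes in 100 trials
p_hat = successes / n

# Frequentist: 95% Wald confidence interval for a proportion.
z = NormalDist().inv_cdf(0.975)
se = (p_hat * (1 - p_hat) / n) ** 0.5
wald = (p_hat - z * se, p_hat + z * se)

# Bayesian with a flat Beta(1, 1) prior: the posterior is Beta(41, 61).
# Approximate the 95% credible interval with Monte Carlo draws.
rng = random.Random(0)
draws = sorted(rng.betavariate(successes + 1, n - successes + 1)
               for _ in range(100_000))
credible = (draws[2_499], draws[97_499])
```

Both intervals come out around (0.30, 0.50) and differ only by a few thousandths, which is the "likelihood dominates the prior" point in practice.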
How to make a Neural network understand that multiple inputs are related (to the same entity)?
Sometimes the correlation level between any two of the input variables is calculated, and the input is partitioned into several independent sub-groups before training starts, like what was implemented in this paper. But generally, as @alto says, when you provide those inputs the neurons will treat them as if they correspond to the same entity. Each neuron in the hidden layer will respond to different variables to different extents, reflected by its connection strengths to the variables (i.e., the weights). Those responses are combined to generate a final response at the output layer (a linear combination, possibly passed through an activation function). During the training process the weights are adjusted to better fit the outputs the network is given. Finally, when training is done, with the learned strengths between each neuron and each input variable, the network can respond to any other inputs to different degrees, and that is the prediction part. Note that the neurons will reduce their connection strengths to some input variables if they learn that those variables do not contribute much to the final output.
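A bare-bones forward pass makes this concrete: every hidden neuron carries one weight per input variable, so all the inputs feed the same set of neurons rather than separate models, and a zero weight is exactly the "reduced connection strength" mentioned above. All names and numbers here are illustrative.

```python
import math

def forward(x, w_hidden, b_hidden, w_out, b_out):
    """One hidden layer: each hidden neuron has its own weight for every
    input variable, combined at the output through a sigmoid activation."""
    sigmoid = lambda z: 1.0 / (1.0 + math.exp(-z))
    hidden = [sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
              for w, b in zip(w_hidden, b_hidden)]
    return sigmoid(sum(wo * h for wo, h in zip(w_out, hidden)) + b_out)
```

If every hidden neuron's weight for some input is zero, the output is completely insensitive to that input, which is the limiting case of a variable that "does not contribute much".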
Why don't asymptotically consistent estimators have zero variance at infinity?
Convergence of a sequence of random variables in probability does not imply convergence of their variances, nor even that their variances get anywhere near $0.$ In fact, their means may converge to a constant yet their variances can still diverge.

Examples and counterexamples

Construct counterexamples by creating ever more rare events that are increasingly far from the mean: the squared distance from the mean can overwhelm the decreasing probability and cause the variance to do anything (as I will proceed to show). For instance, scale a Bernoulli$(1/n)$ variate by $n^{p}$ for some power $p$ to be determined. That is, define the sequence of random variables $X_n$ by $$\begin{aligned} &\Pr(X_n=n^{p})=1/n \\ &\Pr(X_n=0)= 1 - 1/n. \end{aligned}$$ As $n\to \infty$, because $\Pr(X_n=0)\to 1$ this converges in probability to $0;$ its expectation $n^{p-1}$ even converges to $0$ provided $p\lt 1;$ but for $p\gt 1/2$ its variance $n^{2p-1}(1-1/n)$ diverges.

Comments

Many other behaviors are possible. Because negative powers $2p-1$ of $n$ converge to $0,$ the variance converges to $0$ for $p\lt 1/2:$ the variables "squeeze down" to $0$ in some sense. An interesting edge case is $p=1/2,$ for which the variance converges to $1.$ By varying $p$ above and below $1/2$ depending on $n$ you can even make the variance not converge at all. For instance, let $p(n)=0$ for even $n$ and $p(n)=1$ for odd $n.$

A direct connection with estimation

Finally, a reasonable possible objection is that abstract sequences of random variables are not really "estimators" of anything. But they can nevertheless be involved in estimation.
For instance, let $t_n$ be a sequence of statistics, intended to estimate some numerical property $\theta(F)$ of the common distribution of an (arbitrarily large) iid random sample $(Y_1,Y_2,\ldots,Y_n,\ldots)$ of $F.$ This induces a sequence of random variables $$T_n = t_n(Y_1,Y_2,\ldots,Y_n).$$ Modify this sequence by choosing any value of $p$ (as above) you like and set $$T^\prime_n = T_n + (X_n - n^{p-1}).$$ The parenthesized term makes a zero-mean adjustment to $T_n,$ so that if $T_n$ is a reasonable estimator of $\theta(F),$ then so is $T^\prime_n.$ (With some imagination we can conceive of situations where $T_n^\prime$ could yield better estimates than $T_n$ with probability close to $1.$) However, if you make the $X_n$ independent of $Y_1,\ldots, Y_n,$ the variance of $T^\prime_n$ will be the sum of the variances of $T_n$ and $X_n,$ which you thereby can cause to diverge.
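The moments in the counterexample are easy to tabulate exactly; this short Python sketch just evaluates $E[X_n]=n^{p-1}$ and $\operatorname{Var}(X_n)=n^{2p-1}(1-1/n)$ from the answer (the particular values of $p$ and $n$ checked below are arbitrary).

```python
def moments(n, p):
    """Exact mean and variance of X_n, where P(X_n = n**p) = 1/n and
    P(X_n = 0) = 1 - 1/n."""
    mean = n ** (p - 1.0)                     # E[X_n] = n**p * (1/n)
    var = n ** (2 * p - 1.0) * (1 - 1.0 / n)  # E[X_n**2] - E[X_n]**2
    return mean, var
```

For $p = 3/4$ the mean shrinks toward $0$ while the variance grows without bound, exhibiting convergence in probability without convergence of variances.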
Why discriminative models are preferred to generative models for sequence labeling tasks?
I think you pretty much nailed it in your Edit. A generative model makes a more restrictive assumption about the distribution of $x$. From Minka: "Unlike traditional generative random fields, CRFs only model the conditional distribution $p(t|x)$ and do not explicitly model the marginal $p(x)$. Note that the labels $\{t_i\}$ are globally conditioned on the whole observation $x$ in CRFs. Thus, we do not assume that the observed data $x$ are conditionally independent as in a generative random field."
Why discriminative models are preferred to generative models for sequence labeling tasks?
CRFs and HMMs are not necessarily exclusive model formulations. In the formulation you have above, X in the HMM is usually a state variable that is unobserved, so a generative model is somewhat necessary. In the CRF though, X is some feature vector that is observed and affects Y in the traditional way. But you can have a combination of both: a sequence of states and outputs where the state is unobserved, and a set of observed features that affects the conditional probabilities of the outputs given the states (or transition probabilities between states). I believe that ultimately the CRF admits some more flexible models where the conditional probabilities are more dynamic, and could be affected by, for example, the output from several observations ago, or something like that. They can get awfully large and difficult to train when they start including many more free parameters like that though.
Why discriminative models are preferred to generative models for sequence labeling tasks?
CRFs and HMMs are not necessarily exclusive model formulations. In the formulation you have above, X in the HMM is usually a state variable that is unobserved, so a generative model is somewhat necess
Why discriminative models are preferred to generative models for sequence labeling tasks? CRFs and HMMs are not necessarily exclusive model formulations. In the formulation you have above, X in the HMM is usually a state variable that is unobserved, so a generative model is somewhat necessary. In the CRF though, X is some feature vector that is observed and affects Y in the traditional way. But you can have a combination of both: a sequence of states and outputs where the state is unobserved, and a set of observed features that affects the conditional probabilities of the outputs given the states (or transition probabilities between states). I believe that ultimately the CRF admits some more flexible models where the conditional probabilities are more dynamic, and could be affected by, for example, the output from several observations ago, or something like that. They can get awfully large and difficult to train when they start including many more free parameters like that though.
Why discriminative models are preferred to generative models for sequence labeling tasks? CRFs and HMMs are not necessarily exclusive model formulations. In the formulation you have above, X in the HMM is usually a state variable that is unobserved, so a generative model is somewhat necess
29,500
Hidden Markov model for event prediction
One problem with the approach you've described is you will need to define what kind of increase in $P(O)$ is meaningful, which may be difficult as $P(O)$ will always be very small in general. It may be better to train two HMMs, say HMM1 for observation sequences where the event of interest occurs and HMM2 for observation sequences where the event doesn't occur. Then given an observation sequence $O$ you have $$ \begin{align*} P(HMM1|O) &= \frac{P(O|HMM1)P(HMM1)}{P(O)} \\ &\varpropto P(O|HMM1)P(HMM1) \end{align*} $$ and likewise for HMM2. Then you can predict the event will occur if $$ \begin{align*} P(HMM1|O) &> P(HMM2|O) \\ \implies \frac{P(HMM1)P(O|HMM1)}{P(O)} &> \frac{P(HMM2)P(O|HMM2)}{P(O)} \\ \implies P(HMM1)P(O|HMM1) &> P(HMM2)P(O|HMM2). \end{align*} $$

Disclaimer: What follows is based on my own personal experience, so take it for what it is. One of the nice things about HMMs is they allow you to deal with variable length sequences and variable order effects (thanks to the hidden states). Sometimes this is necessary (like in lots of NLP applications). However, it seems like you have a priori assumed that only the last 5 observations are relevant for predicting the event of interest. If this assumption is realistic then you may have significantly more luck using traditional techniques (logistic regression, naive bayes, SVM, etc) and simply using the last 5 observations as features/independent variables. Typically these types of models will be easier to train and (in my experience) produce better results.
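The decision rule above is simple to implement once each HMM's likelihood of $O$ is in hand. Here is a hedged Python sketch with made-up log-likelihood numbers, done in log space because $P(O|\text{model})$ underflows for long sequences.

```python
import math

def predict_event(loglik_hmm1, loglik_hmm2, prior_hmm1=0.5):
    """Predict the event iff P(HMM1)P(O|HMM1) > P(HMM2)P(O|HMM2),
    comparing in log space to avoid numerical underflow."""
    score1 = math.log(prior_hmm1) + loglik_hmm1
    score2 = math.log(1.0 - prior_hmm1) + loglik_hmm2
    return score1 > score2
```

The log-likelihoods would come from the forward algorithm on each trained HMM; the prior term lets you encode that event sequences are rarer (or more common) than non-event sequences.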