24,401
Incremental IDF (Inverse Document Frequency)
Ok, thanks to Steffen for the useful comments. I guess the answer is quite simple in the end. As he says, all we need to do is store the current denominator (call it $z$): $z(t) = |\{d:t\in d\}|$. Now, given a new document $d^*$, we update the denominator simply by: $$z^*(t) = z(t) + \left\{ \begin{array}{ll} 1 & \mbox{if}\; {t\in d^*} \\ 0 & \mbox{otherwise.} \end{array} \right.$$ We would then have to recalculate the $tf-idf$ scores based on the new $idf$ vector. Similarly, to remove an old document, we decrement the denominator in the same fashion.

This does mean that we either have to store the entire $tf$ matrix as well as the $tf-idf$ matrix (doubling the memory requirements), or we have to compute the $tf-idf$ scores when needed (increasing computational costs). I can't see any way round that.

For the second part of the question, about the evolution of $idf$ vectors over time, it seems that we can use the above method and store a set of "landmark" $z$ vectors (denominators) for different date ranges (or perhaps content subsets). Of course $z$ is a dense vector of the length of the dictionary, so storing a lot of these will be memory intensive; however, this is probably preferable to recomputing $idf$ vectors when needed (which would again require storing the $tf$ matrix as well, or instead).
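For what it's worth, here is a minimal Python sketch of this bookkeeping (the class and its names are my own illustration, not from the answer): it stores only the denominators $z(t)$ and the document count, and supports both adding and removing documents.

```python
import math

class IncrementalIdf:
    """Maintain document frequencies z(t) so idf can be updated
    incrementally as documents are added or removed."""

    def __init__(self):
        self.z = {}   # term -> number of documents containing it, i.e. z(t)
        self.n = 0    # total number of documents

    def add(self, doc_terms):
        self.n += 1
        for t in set(doc_terms):            # each term counted once per document
            self.z[t] = self.z.get(t, 0) + 1

    def remove(self, doc_terms):
        self.n -= 1
        for t in set(doc_terms):
            self.z[t] -= 1
            if self.z[t] == 0:
                del self.z[t]

    def idf(self, t):
        # plain log(N / z(t)); idf conventions vary (smoothing, +1, etc.)
        return math.log(self.n / self.z[t])

idx = IncrementalIdf()
idx.add(["the", "cat", "sat"])
idx.add(["the", "dog"])
idx.idf("the")   # log(2/2) = 0.0
idx.idf("cat")   # log(2/1) ≈ 0.693
```

As the answer notes, the $tf$ side still has to be stored (or recomputed) separately before the $tf-idf$ scores can be refreshed.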
24,402
How can I set up a zero-inflated poisson in JAGS?
Here is a simple solution using the fact that the Poisson will give you zeros when the lambda parameter is zero. Note however that JAGS tends to break if lambda is exactly zero, hence the "+ 0.00001".

model {
  for (i in 1:I) {
    y[i] ~ dpois(mu1[i])
    mu1[i] <- mu[i]*x[i] + 0.00001
    x[i] ~ dbern(pro[i])
    logit(pro[i]) <- theta[i]
    mu[i] <- bla + bla + bla + ....
    theta[i] <- bla + bla + bla + ....
  }
}
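To see that this mixture really produces excess zeros, here is a small Python simulation of the same construction (the parameter values are purely illustrative): when the Bernoulli indicator x is 0, lambda collapses to (almost) zero and the Poisson draw is forced to 0.

```python
import numpy as np

rng = np.random.default_rng(0)
n, mu, pro = 100_000, 3.0, 0.6      # illustrative values, not from the answer

# Same mixture the JAGS trick encodes: x = 0 forces lambda (and hence y) to 0.
x = rng.binomial(1, pro, size=n)    # structural-zero indicator
y = rng.poisson(mu * x + 1e-5)      # the "+ 0.00001" keeps lambda > 0

frac_zero = (y == 0).mean()
# Theoretical zero fraction of the mixture: (1 - pro) + pro * exp(-mu)
expected = (1 - pro) + pro * np.exp(-mu)
```

The observed zero fraction matches the mixture's theoretical value, which is far above what a plain Poisson with the same mu would give.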
24,403
How can I set up a zero-inflated poisson in JAGS?
C <- 10000  # Constant for the 1/0 trick

# Likelihood:
for ( i in 1:ny ) {
  # Likelihood of the count model component
  LikCountModel[i] <- pow(mu[i],y[i])/y_fact[i]*exp(-mu[i])

  # Count model component
  eta[i] <- bet0 + inprod( beta[] , B[i,] )
  mu[i] <- exp(eta[i])

  # ZI component
  zeta[i] <- gamm0 + inprod( gamma[] , G[i,] )
  w[i] <- exp(zeta[i])/(1+exp(zeta[i]))

  # 1/0 trick: ones is a column containing only ones, with the same size as y
  p[i] <- L[i] / C
  ones[i] ~ dbern(p[i])

  # Full likelihood expression
  L[i] <- LikCountModel[i] * (1-w[i]) + equals(y[i],0)*w[i]
}
# then set your priors for all beta and gamma
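As a sanity check on the likelihood expression L[i], the following Python sketch (illustrative, mirroring the JAGS formula term by term) verifies that LikCountModel * (1-w) + equals(y,0) * w is a proper probability mass function, which is what the Bernoulli-ones trick requires after scaling by C.

```python
import math

def zip_pmf(y, mu, w):
    """Zero-inflated Poisson pmf, mirroring L[i] in the JAGS code:
    LikCountModel * (1 - w) + equals(y, 0) * w."""
    lik_count = mu**y / math.factorial(y) * math.exp(-mu)
    return lik_count * (1 - w) + (1.0 if y == 0 else 0.0) * w

mu, w = 2.5, 0.3   # illustrative parameter values
# Summing over the (effectively complete) support:
total = sum(zip_pmf(y, mu, w) for y in range(60))
```

Because the pmf sums to one, p[i] = L[i] / C stays in (0, 1) for any C larger than the largest L[i], so the dbern "ones" trick contributes exactly the right likelihood factor.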
24,404
Should the difference between control and treatment be modelled explicitly or implicitly?
The $\epsilon_{ij}$ are likely to be correlated in the second model but not the first. In the first, these terms represent measurement error and deviations from the additive model. With reasonable care--such as by randomizing the sequence of measurements--those errors can be made independent when the model is accurate. Whence $$d_{ij} = Y_{ij} - Y_{i0} = \gamma_i + \delta_j + \epsilon_{ij} - (\gamma_i + \delta_0 + \epsilon_{i0}) = \delta_j + (\epsilon_{ij} - \epsilon_{i0}).$$ (Note that this contradicts the last equation in the question, because it is wrong to assume $\epsilon_{i0}=0$. Doing so would force us to concede that the $\gamma_i$ are random variables rather than parameters, at least once we acknowledge the possibility of measurement error for the control. This would lead to the same conclusions below.) For $j, k \ne 0$, $j \ne k$ this implies $$Cov(d_{ij}, d_{ik}) = Cov(\epsilon_{ij} - \epsilon_{i0}, \epsilon_{ik} - \epsilon_{i0}) = Var(\epsilon_{i0}) \ne 0.$$ The correlation can be substantial. For iid errors, a similar calculation shows it equals 0.5. Unless you are using procedures that explicitly and correctly handle this correlation, favor the first model over the second.
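The 0.5 correlation for iid errors is easy to confirm by simulation; here is a short Python sketch (sizes are illustrative) that generates iid errors, forms the differences $d_{ij} = \epsilon_{ij} - \epsilon_{i0}$ (the $\gamma_i$ and $\delta_j$ terms cancel or are constants), and measures the correlation between two treatment columns.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_treat = 50_000, 3         # illustrative sizes

# iid errors for control (j = 0) and treatments j = 1..3
eps = rng.normal(size=(n_subj, n_treat + 1))
d = eps[:, 1:] - eps[:, [0]]        # d_ij = eps_ij - eps_i0

r = np.corrcoef(d[:, 0], d[:, 1])[0, 1]
# Cov(d_ij, d_ik) = Var(eps_i0) and Var(d_ij) = 2 Var(eps), so r ≈ 0.5
```

The shared control-error term $\epsilon_{i0}$ is the sole source of the correlation, exactly as the covariance calculation above shows.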
24,405
Updating the lasso fit with new observations
The lasso is fitted through LARS (an iterative process that starts at some initial estimate $\beta^0$). By default $\beta^0=0_p$, but you can change this in most implementations (and replace it by the optimal $\beta^*_{old}$ you already have). The closer $\beta^*_{old}$ is to $\beta_{new}^*$, the smaller the number of LARS iterations you will need to reach $\beta_{new}^*$.

EDIT: Following the comments from user2763361, I add more details to my original answer. From the comments below, I gather that user2763361 suggests complementing my original answer to turn it into one that can be used directly (off the shelf) while also being very efficient.

To do the first part, I will illustrate the solution I propose step by step on a toy example. To satisfy the second part, I will do so using a recent, high-quality interior point solver. This is because it is easier to obtain a high-performance implementation of the solution I propose using a library that can solve the lasso problem by the interior point approach, rather than trying to hack the LARS or simplex algorithm to start the optimization from a non-standard starting point (though that second avenue is also possible).

Note that it is sometimes claimed (in older books) that the interior point approach to solving linear programs is slower than the simplex approach. That may have been true a long time ago, but it's generally not true today, and certainly not true for large-scale problems (this is why most professional libraries like cplex use the interior point algorithm), and the question is at least implicitly about large-scale problems. Note also that the interior point solver I use fully handles sparse matrices, so I don't think there will be a large performance gap with LARS (an original motivation for using LARS was that many popular LP solvers at the time were not handling sparse matrices well, and these are a characteristic feature of the LASSO problem).
A (very) good open-source implementation of the interior point algorithm is ipopt, in the COIN-OR library. Another reason I will be using ipopt is that it has an R interface, ipoptr. You will find a more exhaustive installation guide here; below I give the standard commands to install it in Ubuntu. In bash, do:

sudo apt-get install gcc g++ gfortran subversion patch wget
svn co https://projects.coin-or.org/svn/Ipopt/stable/3.11 CoinIpopt
cd ~/CoinIpopt
./configure
make
make install

Then, as root, in R do (I assume svn has copied the subversion file in ~/ as it does by default):

install.packages("~/CoinIpopt/Ipopt/contrib/RInterface", repos=NULL, type="source")

From here, I'm giving a small example (mostly from the toy example given by Jelmer Ypma as part of his R wrapper to ipopt):

library('ipoptr')

# Experiment parameters.
lambda <- 1                               # Level of L1 regularization.
n      <- 100                             # Number of training examples.
e      <- 1                               # Std. dev. in noise of outputs.
beta   <- c( 0, 0, 2, -4, 0, 0, -1, 3 )   # "True" regression coefficients.

# Set the random number generator seed.
ranseed <- 7
set.seed( ranseed )

# CREATE DATA SET.
# Generate the input vectors from the standard normal, and generate the
# responses from the regression with some additional noise. The variable
# "beta" is the set of true regression coefficients.
m     <- length(beta)                           # Number of features.
A     <- matrix( rnorm(n*m), nrow=n, ncol=m )   # The n x m matrix of examples.
noise <- rnorm(n, sd=e)                         # Noise in outputs.
y     <- A %*% beta + noise                     # The outputs.
# DEFINE LASSO FUNCTIONS
# m, lambda, y, A are all defined in the ipoptr_environment
eval_f <- function(x) {
    # separate x in two parts
    w <- x[ 1:m ]           # parameters
    u <- x[ (m+1):(2*m) ]
    return( sum( (y - A %*% w)^2 )/2 + lambda*sum(u) )
}

# ------------------------------------------------------------------
eval_grad_f <- function(x) {
    w <- x[ 1:m ]
    return( c( -t(A) %*% (y - A %*% w), rep(lambda,m) ) )
}

# ------------------------------------------------------------------
eval_g <- function(x) {
    # separate x in two parts
    w <- x[ 1:m ]           # parameters
    u <- x[ (m+1):(2*m) ]
    return( c( w + u, u - w ) )
}

eval_jac_g <- function(x) {
    # return a vector of 1 and minus 1, since those are the values of the non-zero elements
    return( c( rep( 1, 2*m ), rep( c(-1,1), m ) ) )
}

# ------------------------------------------------------------------
# rename lambda so it doesn't cause confusion with lambda in auxdata
eval_h <- function( x, obj_factor, hessian_lambda ) {
    H <- t(A) %*% A
    H <- unlist( lapply( 1:m, function(i) { H[i,1:i] } ) )
    return( obj_factor * H )
}
eval_h_structure <- c( lapply( 1:m, function(x) { return( c(1:x) ) } ),
                       lapply( 1:m, function(x) { return( c() ) } ) )

# The starting point.
x0 = c( rep(0, m), rep(1, m) )

# The constraint functions are bounded from below by zero.
constraint_lb = rep( 0, 2*m )
constraint_ub = rep( Inf, 2*m )

ipoptr_opts <- list( "jac_d_constant"   = 'yes',
                     "hessian_constant" = 'yes',
                     "mu_strategy"      = 'adaptive',
                     "max_iter"         = 100,
                     "tol"              = 1e-8 )

# Set up the auxiliary data.
auxdata        <- new.env()
auxdata$m      <- m
auxdata$A      <- A
auxdata$y      <- y
auxdata$lambda <- lambda

# COMPUTE SOLUTION WITH IPOPT.
# Compute the L1-regularized maximum likelihood estimator.
print( ipoptr( x0=x0,
               eval_f=eval_f,
               eval_grad_f=eval_grad_f,
               eval_g=eval_g,
               eval_jac_g=eval_jac_g,
               eval_jac_g_structure=eval_jac_g_structure,
               constraint_lb=constraint_lb,
               constraint_ub=constraint_ub,
               eval_h=eval_h,
               eval_h_structure=eval_h_structure,
               opts=ipoptr_opts,
               ipoptr_environment=auxdata ) )

My point is: if you have new data coming in, you just need to update (not replace) the constraint matrix and objective function vector to account for the new observations, and change the starting point of the interior point method from x0 = c( rep(0, m), rep(1, m) ) to the solution vector you found previously (before the new data was added in). The logic is as follows. Denote $\beta_{new}$ the new vector of coefficients (the ones corresponding to the data set after the update) and $\beta_{old}$ the original ones. Also denote $\beta_{init}$ the vector x0 in the code above (this is the usual start for the interior point method). Then the idea is that if

$$|\beta_{init}-\beta_{new}|_1>|\beta_{new}-\beta_{old}|_1\quad(1)$$

then one can get $\beta_{new}$ much faster by starting the interior point method from $\beta_{old}$ rather than the naive $\beta_{init}$. The gain will be all the more important when the dimensions of the data set ($n$ and $p$) are larger. As for the conditions under which inequality (1) holds, they are:

- when $\lambda$ is large compared to $|\beta_{OLS}|_1$ (this is usually the case when $p$, the number of design variables, is large compared to $n$, the number of observations);
- when the new observations are not pathologically influential, e.g. when they are consistent with the stochastic process that generated the existing data;
- when the size of the update is small relative to the size of the existing data.
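The warm-starting idea is not specific to interior point solvers; here is a self-contained Python sketch of it using a plain cyclic coordinate descent lasso (my own illustrative implementation, not ipopt or LARS): fit on the original data, append a few observations, then compare the number of sweeps needed from a cold start versus starting at the old solution.

```python
import numpy as np

def soft(a, t):
    # soft-thresholding operator
    return np.sign(a) * np.maximum(np.abs(a) - t, 0.0)

def lasso_cd(X, y, lam, beta0, tol=1e-8, max_iter=10_000):
    """Cyclic coordinate descent for (1/2)||y - Xb||^2 + lam ||b||_1,
    started from beta0. Returns (beta, number_of_sweeps)."""
    beta = beta0.copy()
    col_sq = (X ** 2).sum(axis=0)
    r = y - X @ beta                       # running residual
    for it in range(1, max_iter + 1):
        max_delta = 0.0
        for j in range(X.shape[1]):
            old = beta[j]
            rho = X[:, j] @ r + col_sq[j] * old
            beta[j] = soft(rho, lam) / col_sq[j]
            if beta[j] != old:
                r -= X[:, j] * (beta[j] - old)
                max_delta = max(max_delta, abs(beta[j] - old))
        if max_delta < tol:
            return beta, it
    return beta, max_iter

rng = np.random.default_rng(2)
n, p, lam = 200, 50, 5.0
true_b = np.array([2.0, -3.0, 1.5])
X = rng.normal(size=(n, p))
y = X[:, :3] @ true_b + rng.normal(size=n)

beta_old, _ = lasso_cd(X, y, lam, np.zeros(p))     # fit on original data

# a small batch of new observations arrives
Xn = np.vstack([X, rng.normal(size=(10, p))])
yn = np.concatenate([y, Xn[-10:, :3] @ true_b + rng.normal(size=10)])

b_cold, it_cold = lasso_cd(Xn, yn, lam, np.zeros(p))
b_warm, it_warm = lasso_cd(Xn, yn, lam, beta_old)  # warm start from the old fit
```

Both starts reach the same solution; the warm start simply has less distance to cover, which is exactly the condition (1) above.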
24,406
A transform to change skew without affecting kurtosis?
My answer is the beginnings of a total hack, but I am not aware of any established way to do what you ask. My first step would be to rank-order your dataset so you can find the proportional position of each observation, and then transform the positions to a normal distribution; this method was used in Reynolds & Hewitt, 1996. See the sample R code below in PROCMiracle.

Once the distribution is normal, the problem has been turned on its head: it becomes a matter of adjusting kurtosis but not skew. A Google search suggested that one could follow the procedures of John & Draper, 1980 to adjust the kurtosis but not the skew, but I could not replicate that result. My attempts to develop a crude spreading/narrowing function that takes the input (normalized) value and adds or subtracts a value from it proportional to the position of the variable on the normal scale do result in a monotonic adjustment, but in practice tend to create a bimodal distribution, though one that has the desired skewness and kurtosis values.

I realize this is not a complete answer, but I thought it might provide a step in the right direction.

PROCMiracle <- function(datasource, normalrank="BLOM")
{
    switch(normalrank,
        "BLOM"  = { rmod <- -3/8; nmod <- 1/4 },
        "TUKEY" = { rmod <- -1/3; nmod <- 1/3 },
        "VW"    = { rmod <- 0;    nmod <- 1 },
        "NONE"  = { rmod <- 0;    nmod <- 0 }
    )
    print("This may be doing something strange with NA values! Beware!")
    return(scale(qnorm((rank(datasource)+rmod)/(length(datasource)+nmod))))
}
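For readers who don't use R, here is a Python sketch of the same rank-based inverse normal transform (Blom scores by default, as in PROCMiracle; function name and tie handling are my own simplifications, and NAs are not handled):

```python
import numpy as np
from statistics import NormalDist

def rank_to_normal(x, rmod=-3/8, nmod=1/4):
    """Rank-based inverse normal transform (Blom scores by default),
    a Python sketch of the PROCMiracle R function above."""
    x = np.asarray(x, dtype=float)
    ranks = np.argsort(np.argsort(x)) + 1        # 1-based ordinal ranks (ties not averaged)
    p = (ranks + rmod) / (len(x) + nmod)         # proportional positions
    z = np.array([NormalDist().inv_cdf(pi) for pi in p])
    return (z - z.mean()) / z.std(ddof=1)        # like R's scale()

x = np.random.default_rng(3).exponential(size=1000)   # heavily skewed input
z = rank_to_normal(x)
# z is approximately standard normal regardless of the input's shape
```

The transform is monotone in the input, which is what makes it a legitimate first step before any kurtosis-only adjustment.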
24,407
A transform to change skew without affecting kurtosis?
Another possibly interesting technique that has come to mind, though it doesn't quite answer the question, is to transform a sample to have a fixed sample L-skew and sample L-kurtosis (as well as a fixed mean and L-scale). These four constraints are linear in the order statistics. Keeping the transform monotonic on a sample of $n$ observations would then require another $n-1$ inequality constraints. The whole thing could then be posed as a quadratic optimization problem: minimize the $\ell_2$ norm between the sample order statistics and their transformed version, subject to the given constraints. This is a kind of wacky approach, though. In the original question I was looking for something more basic and fundamental, and I was also implicitly looking for a technique which could be applied to individual observations, independent of having an entire cohort of samples.
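To make the "linear in the order statistics" point concrete, here is a Python sketch computing the sample L-moments via the standard unbiased $b_r$ estimators (the function is my own illustration); each returned quantity is a fixed linear combination of the sorted sample, which is why fixing them imposes linear constraints.

```python
import numpy as np

def sample_l_moments(x):
    """Sample L-mean, L-scale, L-skew (tau3), L-kurtosis (tau4).
    Each is a linear combination of the order statistics."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)
    # unbiased probability-weighted moments b_r
    b0 = x.mean()
    b1 = np.sum((i - 1) * x) / (n * (n - 1))
    b2 = np.sum((i - 1) * (i - 2) * x) / (n * (n - 1) * (n - 2))
    b3 = np.sum((i - 1) * (i - 2) * (i - 3) * x) / (n * (n - 1) * (n - 2) * (n - 3))
    l1 = b0
    l2 = 2 * b1 - b0
    l3 = 6 * b2 - 6 * b1 + b0
    l4 = 20 * b3 - 30 * b2 + 12 * b1 - b0
    return l1, l2, l3 / l2, l4 / l2

rng = np.random.default_rng(4)
_, _, t3, t4 = sample_l_moments(rng.normal(size=100_000))
# for a normal sample: L-skew ≈ 0 and L-kurtosis ≈ 0.1226
```

A quadratic program over the transformed order statistics would then fix these four linear functionals while the monotonicity inequalities keep the mapping order-preserving.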
24,408
A transform to change skew without affecting kurtosis?
I would rather model this data set using a leptokurtic distribution instead of using data transformations. I like the sinh-arcsinh distribution from Jones and Pewsey (2009), Biometrika.
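As a quick sketch of what that family gives you (my own Python illustration; I'm using the standard Jones-Pewsey construction where $X = \sinh((\operatorname{arcsinh}(Z)+\varepsilon)/\delta)$ for $Z \sim N(0,1)$, with $\varepsilon$ controlling skewness and $\delta$ tailweight):

```python
import numpy as np

def sinh_arcsinh_sample(n, eps, delta, rng):
    """Draw from the sinh-arcsinh distribution of Jones & Pewsey (2009):
    X = sinh((arcsinh(Z) + eps) / delta), Z ~ N(0,1).
    eps skews the distribution; delta < 1 gives heavier-than-normal tails."""
    z = rng.normal(size=n)
    return np.sinh((np.arcsinh(z) + eps) / delta)

def excess_kurtosis(x):
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

rng = np.random.default_rng(5)
heavy = sinh_arcsinh_sample(200_000, eps=0.0, delta=0.5, rng=rng)
# eps = 0 keeps the sample symmetric, while delta = 0.5 makes it leptokurtic
```

So kurtosis and skewness can be steered separately at the modelling stage, which is exactly why fitting such a distribution can replace the transformation the question asks for.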
24,409
Why do people use Zero-Padding in Convolutional Neural Networks?
Zero-padding is a generic way to (1) control the shrinkage of dimensions after applying filters larger than 1x1, and (2) avoid losing information at the boundaries, e.g. when the weights in a filter drop rapidly away from its center. For a specific input, activation function, or loss function, a variant might perform better, i.e. one utilizing domain knowledge. However, the key property of zero-padding is "being generic". For example, a completely different padding is "reflection padding", which, instead of a specific value, puts a mirror of the input outside the boundaries. We could try reflection padding, and if it gives better results, then we might look for a justification based on the task, activation function, etc. As an example related to the comments: assuming black-and-white images with $\text{tanh}$ activation functions (between $-1$ and $1$), we may opt for $(-1)$-padding instead of $0$-padding. If we reverse the black and white in the image, then $1$-padding would be more justified, for the same reason.
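The difference between the two padding schemes mentioned here is easy to see on a single row of values (a small NumPy illustration; the numbers are arbitrary):

```python
import numpy as np

row = np.array([3, 7, 9])   # one row of a hypothetical input feature map

# Zero-padding: a fixed, input-independent value outside the boundary.
zero_pad = np.pad(row, 1, mode="constant", constant_values=0)
# -> [0 3 7 9 0]

# Reflection padding: mirrors the input about its edges instead.
reflect_pad = np.pad(row, 1, mode="reflect")
# -> [7 3 7 9 7]
```

Reflection padding keeps boundary statistics close to the interior's, at the cost of being less "generic" than a fixed constant.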
24,410
Why do people use Zero-Padding in Convolutional Neural Networks?
If you consider the central limit theorem, input data will follow a normal distribution with a constant mean. Thus, if the input data are normalized, the mean will be close to 0, so padding with 0 (the mean) doesn't affect the distribution. I have done some testing in my research, which shows that the output of batch normalization follows a normal distribution with mean close to 0. Also, if you know the convolution operation from traditional signal processing, you can see that zero-padding is simply the standard approach there.
24,411
If $X$ and $Y$ are independent Normal variables each with mean zero, then $\frac{XY}{\sqrt{X^2+Y^2}}$ is also a Normal variable
The original solution of the problem by Shepp uses the stable-law property, which seems a bit advanced for me at the moment, so I could not comprehend the hint given in the exercise I cited in my post. I guess a proof involving only the single variable $U=\frac{XY}{\sqrt{X^2+Y^2}}$, without a change of variables, is difficult to come up with. So I share three open-access papers I found that provide an alternate solution to the problem:

- A note on Normal functions of Normal random variables
- Normal functions of Normal random variables
- A Result of Shepp

The first one has convinced me not to go down the integration path I took with that choice of the variable $V$ to derive the density of $U$. It is the third paper that looks like something I can follow. I give a brief sketch of the proof here:

We assume without loss of generality $\sigma_1^2=1$, and set $\sigma_2^2=\sigma^2$. Noting that $X^2\sim\chi^2_1$ and $\frac{Y^2}{\sigma^2}\sim\chi^2_1$ are independent, we have the joint density of $(X^2,Y^2)$, which we denote by $f_{X^2,Y^2}$. Consider the transformation $(X^2,Y^2)\to(W,Z)$ where $W=\frac{X^2Y^2}{X^2+Y^2}$ and $Z=\frac{X^2+Y^2}{Y^2}$, giving the joint density $f_{W,Z}$ of $(W,Z)$. Following the standard procedure, we integrate $f_{W,Z}$ with respect to $z$ to get the marginal density $f_W$ of $W$. We find that $W=U^2$ is a Gamma variate with parameters $\frac{1}{2}$ and $2(1+\frac{1}{\sigma})^{-2}$, so that $(1+\frac{1}{\sigma})^2\,W\sim\chi^2_1$. Since the density of $U$ is symmetric about $0$, this implies $(1+\frac{1}{\sigma})U\sim\mathcal{N}(0,1)$, and hence $U\sim\mathcal{N}\left(0,\left(\frac{\sigma}{\sigma+1}\right)^2\right)$.
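A quick Monte Carlo sanity check of the final result (my own addition, not part of the proof) — for general $\sigma_1,\sigma_2$ the claim scales to $U\sim\mathcal{N}\left(0,\left(\frac{\sigma_1\sigma_2}{\sigma_1+\sigma_2}\right)^2\right)$:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000
s1, s2 = 1.0, 2.0                            # standard deviations of X and Y

x = rng.normal(0.0, s1, n)
y = rng.normal(0.0, s2, n)
u = x * y / np.sqrt(x**2 + y**2)

target_sd = s1 * s2 / (s1 + s2)              # claimed standard deviation of U
within_1sd = np.mean(np.abs(u) < target_sd)  # should be ~0.6827 if U is normal

print(u.std(), target_sd, within_1sd)
```

Both the standard deviation and the one-sigma mass come out on target, consistent with $U$ being exactly normal.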
24,412
Help Understanding Reconstruction Loss In Variational Autoencoder
Typically in VAE implementations, the output of the decoder is actually the mean $\mu_{x|z}$, which I will just call $\mu$, and people assume an identity covariance. The log of the pdf of a multivariate Gaussian is $\log P(x|z)=-\frac{1}{2}\left[\log|\Sigma|+k\log(2\pi)+(\mathbf{x}-\boldsymbol{\mu})^T\Sigma^{-1}(\mathbf{x}-\boldsymbol{\mu})\right]$, which with $\Sigma=I$ reduces to $-\frac{1}{2}\left[k\log(2\pi)+(\mathbf{x}-\boldsymbol{\mu})^T(\mathbf{x}-\boldsymbol{\mu})\right]$. Now you can see that since the first term is constant with respect to $\mu$, the optimization problem is equivalent to maximizing $-\frac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^T(\mathbf{x}-\boldsymbol{\mu})$, i.e. minimizing the squared L2 loss between $\mathbf{x}$ and $\boldsymbol{\mu}$. Finally, the expectation is just approximated by averaging over samples.
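Numerically, the equivalence is easy to check (a small numpy sketch of the algebra above):

```python
import numpy as np

rng = np.random.default_rng(0)
k = 5
x  = rng.normal(size=k)               # observed data point
mu = rng.normal(size=k)               # decoder output (mean), Sigma = I

# Log-density of a multivariate Gaussian with identity covariance.
log_p = -0.5 * (k * np.log(2 * np.pi) + np.sum((x - mu) ** 2))

# It differs from -0.5 * (squared L2 loss) only by a constant in mu,
# so maximizing log_p is the same as minimizing the L2 reconstruction loss.
l2 = np.sum((x - mu) ** 2)
const = -0.5 * k * np.log(2 * np.pi)
print(log_p, const - 0.5 * l2)
```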
24,413
How to calculate confidence scores in regression (with random forests/XGBoost) for each prediction in R?
What you are referring to as a confidence score can be obtained from the uncertainty in individual predictions (e.g. by taking its inverse). Quantifying this uncertainty was always possible with bagging, and is relatively straightforward in random forests, but these estimates were biased. Wager et al. (2014) described two procedures to get at these uncertainties more efficiently and with less bias, based on bias-corrected versions of the jackknife-after-bootstrap and the infinitesimal jackknife. You can find implementations in the R packages ranger and grf. More recently, this has been improved upon by using random forests built with conditional inference trees. Based on simulation studies (Brokamp et al. 2017), the infinitesimal jackknife estimator appears to more accurately estimate the error in predictions when conditional inference trees are used to build the random forests. This is implemented in the package RFinfer.

Wager, S., Hastie, T., & Efron, B. (2014). Confidence intervals for random forests: The jackknife and the infinitesimal jackknife. The Journal of Machine Learning Research, 15(1), 1625-1651.

Brokamp, C., Rao, M. B., Ryan, P., & Jandarov, R. (2017). A comparison of resampling and recursive partitioning methods in random forest for estimating the asymptotic variance using the infinitesimal jackknife. Stat, 6(1), 360-372.
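Those packages are in R; as a language-agnostic illustration of the underlying idea (not the bias-corrected estimators from the papers), the spread of predictions across a bag of bootstrap models gives a crude per-prediction uncertainty, which is larger where the model is less sure (e.g. near the edge of the data):

```python
import numpy as np

rng = np.random.default_rng(1)
n, B = 200, 300
x = np.sort(rng.uniform(-3, 3, n))
y = np.sin(x) + rng.normal(0, 0.3, n)

x_new = np.array([0.0, 2.9])          # one interior point, one boundary point

# Bag of simple base learners (cubic fits on bootstrap resamples);
# the spread of their predictions is a crude per-prediction uncertainty.
preds = np.empty((B, x_new.size))
for b in range(B):
    idx = rng.integers(0, n, n)
    coef = np.polyfit(x[idx], y[idx], 3)
    preds[b] = np.polyval(coef, x_new)

mean_pred = preds.mean(axis=0)
std_pred = preds.std(axis=0)          # larger std -> lower "confidence score"
print(mean_pred, std_pred)
```

A random forest version replaces the cubic fits with trees; the jackknife estimators in the packages above then correct the bias of this naive spread.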
24,414
How does XGBoost/lightGBM evaluate ndcg metric for ranking
I happened across this myself, and finally dug into the code to figure it out. The difference is the handling of a missing IDCG: your code returns 0, while LightGBM treats that case as a 1. The following code produced matching results for me:

import numpy as np

def dcg_at_k(r, k):
    r = np.asfarray(r)[:k]
    if r.size:
        return np.sum(np.subtract(np.power(2, r), 1) / np.log2(np.arange(2, r.size + 2)))
    return 0.

def ndcg_at_k(r, k):
    idcg = dcg_at_k(sorted(r, reverse=True), k)
    if not idcg:
        return 1.  # CHANGE THIS
    return dcg_at_k(r, k) / idcg
24,415
Is that possible to distill the knowledge of a stacked ensemble model?
In fact, Hinton et al. (2014), in their paper, refer as inspiration to the paper by Caruana et al. (2006), who described distilling knowledge from an ensemble of models. Moreover, distilling is a very basic and versatile approach that can be applied to any model. The point is that you have noisy data and train a big model on this data; this model learns some kind of approximate representation of the data that is smoother than the raw data. This representation should be less noisy than the data, so it should be easier to learn from it than from the noisy data.

Among other, later interpretations of why distillation works is the observation that in big models like deep neural networks, among the huge number of parameters they have, only a handful "does most of the work". One interpretation of why huge, deep neural networks work is that models with a huge number of parameters have a very big search space in which to find combinations of parameters that lead to useful functions. If that is the case, then we don't need the complicated model if only we could extract the small fraction of it that learned the most important characteristics of the problem. This is probably not only the case for neural networks, but also for other modern machine learning algorithms.

From a practical point of view, as noticed by Hinton et al., it is easier to learn from a smoother representation of the knowledge, so distillation should work better on things like predicted probabilities or logits than on hard classifications. Moreover, what Hinton et al. proposed is to further smooth the outputs of the model by including a temperature parameter in the softmax function. This may be more or less useful depending on how well calibrated the probabilities returned by your model are. If your model returns values that are clustered at very high and very low probabilities, those values are not much more discriminative than the hard classifications. On the other hand, with "smooth" outputs the model's knowledge about the variability of the data is better preserved.

Finally, the idea is pretty simple, so you could give it a try and see if the result achieved by the small "student" model is close to the big model. The whole point of the small model is that it is lighter than the big one, so it should also be faster to train, which makes experimenting much easier. As a word of caution, when looking at the results, remember to look not only at the general performance metrics, but also at the tails, i.e. how the small vs. the big model handles the atypical cases. There is always a risk that the simpler model learns to classify the "average" cases correctly but does not do well on the edge cases, so you need to double-check this (it is not easy).
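The temperature trick mentioned above can be sketched in a few lines of numpy (a minimal illustration, not code from the papers):

```python
import numpy as np

def softmax_T(logits, T=1.0):
    # Temperature-scaled softmax: larger T smooths the distribution,
    # exposing the relative information in the teacher's logits.
    z = logits / T
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

teacher_logits = np.array([8.0, 2.0, 0.0])

hard = softmax_T(teacher_logits, T=1.0)   # near one-hot: little to learn from
soft = softmax_T(teacher_logits, T=4.0)   # smoother targets for the student

print(np.round(hard, 3))
print(np.round(soft, 3))
```

The student is then trained against the soft targets (at the same temperature), which preserves the teacher's knowledge about class similarities.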
24,416
Is that possible to distill the knowledge of a stacked ensemble model?
Yes, it's possible. As long as you have logits (the inputs to the final softmax layer), you can represent your stacked ensemble model with a much smaller model (which could be a neural net, SVM, etc.). Initially, the smaller model (the "student") was designed to predict raw unnormalized logits; nowadays, it's more common to train the student to predict softmax outputs.
24,417
What is the difference between Econometrics and Machine Learning?
First things first: everything I say is my understanding only, hence, as usual, I can be wrong. Henry is partially right, but Econometrics is also a family of methods. There are a variety of different econometric methods that can be applied depending on the research question at hand as well as the data provided (cross-section vs. panel data, and so on). Machine learning, in my understanding, is a collection of methods which enables machines to learn patterns from past observations (oftentimes in a black-box manner). Regression is a standard tool in econometrics as well as machine learning, as it allows one to learn relationships between variables and extrapolate these relationships into the future. Not all econometricians are interested in a causal interpretation of parameter estimates (they rarely can claim a causal interpretation if observational, non-experimental data are used). Oftentimes, as in the case of time series data, econometricians also care only about predictive performance.

Essentially, both are the very same thing but developed in different sub-fields (machine learning being rooted in computer science). They are both collections of methods, and econometricians also increasingly use machine learning methods like decision trees and neural networks. You already touched on a very interesting point: causality. Essentially, both fields would like to know the true underlying relationships, but as you already mentioned, oftentimes predictive performance is the main KPI used in machine learning tasks. That is, having a low generalization error is the main goal. Of course, if you know the true causal relationships, these should have the lowest generalization error out of all possible formulations. Reality is very complex, however, and there is no free lunch. Hence, most of the time we have only partial knowledge of the underlying system and sometimes cannot even measure the most important influences. But we can use proxy variables that correlate with the true underlying variables we would like to measure.

Long story short, and very superficially: both fields are related. Econometricians are mostly interested in finding the true causal relationships (that is, testing some hypothesis), whereas machine learning is rooted in computer science and is mostly interested in building systems with low generalization error.

PS: Using the whole data set in econometrics should generally be avoided too. Econometricians are becoming more aware that relationships found in-sample do not necessarily generalize to new data. Hence, replication of econometric studies is, and always was, very important. Hope this helps in any way.
24,418
What is the difference between Econometrics and Machine Learning?
Some notes in addition to @JustMe's: First, there is a lot of arrogance on both sides of Econometrics and Machine Learning. Discussing which of the two may be a sub-discipline of the other is futile. In fact they are both strongly overlapping sub-disciplines of the field of statistics (which is best described as applied mathematics). Both have their own foci and preferences: econometrics focuses on estimation and testing hypotheses, often in smaller samples, while ML focuses on best functional approximation, often in huge samples. The first focuses on parametric methods making distributional assumptions, the second more often (but by far not exclusively) on non-parametric, distribution-free methods. And so on.

Second, if the goal is prediction, there is no inherent need to understand causality, as long as random samples from the same population are available. However, understanding causality is of central interest if we want to understand a system (i.e. theory development/testing) or change it (i.e. act on theory by an intervention). This type of research goal is much more common in econometrics (and other fields like biostatistics) than in machine learning. However, there are machine learning researchers interested in causality as well. The primary difference between the fields here is, once again, that econometricians have hypotheses about interventions and try to estimate their effects (e.g. from observational or experimental data, using techniques from causal inference theory such as weighting, matching, or selection models), whereas machine learning would rather try to learn causal relationships from the data (e.g. using search algorithms in directed acyclic causal graphs), and the focus is less strongly put on a single intervention.
24,419
What is the difference between Econometrics and Machine Learning?
Something that I think could be stressed more is that econometric modelling often assumes that the model chosen is in fact the true model, in the sense that this model is equivalent to the data generating process (DGP). This is needed to derive powerful distributional results in order to do inference and express uncertainty, and to make statements such as OLS being the best linear unbiased estimator (BLUE) under standard assumptions. The obtained results are incredibly useful for testing model hypotheses, which also helps explain why this framework is so useful for testing economic theory. On the other hand, machine learning often makes less restrictive assumptions, which does not allow for these kinds of results; machine learning also focuses more on the approximation error, which is defined as the error between the best predictor in a chosen model and the best predictor among all predictors (often called the Bayes predictor). More general "learning guarantees" can be proven, which allow one to bound the estimation error of the model under very weak assumptions. It is then often more natural in machine learning to take a very flexible approach to modelling, which is sensible as machine learning researchers often focus on predictive modelling. I would like to add that advanced methodology in econometrics allows for (among many other things) expressing model uncertainty: e.g., using Bayesian modelling, the posterior probabilities of candidate models can be compared. My point here is that, depending on the methodology chosen, econometrics can incorporate more uncertainty than the uncertainty inherent to the parameters and the errors, which is often the only uncertainty expressed in many types of analysis.
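The Bayesian model-comparison idea mentioned above can be sketched numerically. The following Python sketch is illustrative and not from the answer: it uses the BIC approximation to the marginal likelihood to turn two candidate linear models into approximate posterior model probabilities under equal prior weights. The simulated data and helper names are hypothetical.

```python
import numpy as np

def fit_ols_bic(X, y):
    """Fit OLS and return the BIC (Gaussian likelihood, ML variance estimate)."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                      # ML estimate of error variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return -2 * loglik + (k + 1) * np.log(n)        # +1 for the variance parameter

rng = np.random.default_rng(0)
n = 500
x1, x2 = rng.normal(size=(2, n))
y = 1.0 + 2.0 * x1 + rng.normal(size=n)            # true model uses only x1

ones = np.ones(n)
bic1 = fit_ols_bic(np.column_stack([ones, x1]), y)        # candidate 1: x1 only
bic2 = fit_ols_bic(np.column_stack([ones, x1, x2]), y)    # candidate 2: x1 and x2

# BIC approximates -2 log p(data | model); convert to posterior model
# probabilities under equal prior model weights.
b = np.array([bic1, bic2])
post = np.exp(-0.5 * (b - b.min()))
post /= post.sum()
print(post)   # most weight should fall on the true (x1-only) model
```

In practice exact marginal likelihoods or Bayes factors would be used; BIC is a common large-sample shortcut.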
What is the difference between Econometrics and Machine Learning?
Something that I think could be stressed more is that econometric modelling often assumes that the model chosen is in fact the true model, in the sense that this model is equivalent to the data genera
What is the difference between Econometrics and Machine Learning? Something that I think could be stressed more is that econometric modelling often assumes that the model chosen is in fact the true model, in the sense that this model is equivalent to the data generating process (DGP). This is needed to derive powerful distributional results in order to do inference and express uncertainty, and to make statements such as OLS being the best linear unbiased estimator (BLUE) under standard assumptions. The obtained results are incredibly useful for testing model hypotheses, which also helps explain why this framework is so useful for testing economic theory. On the other hand, machine learning often makes less restrictive assumptions, which does not allow for these kinds of results; machine learning also focuses more on the approximation error, which is defined as the error between the best predictor in a chosen model and the best predictor among all predictors (often called the Bayes predictor). More general "learning guarantees" can be proven, which allow one to bound the estimation error of the model under very weak assumptions. It is then often more natural in machine learning to take a very flexible approach to modelling, which is sensible as machine learning researchers often focus on predictive modelling. I would like to add that advanced methodology in econometrics allows for (among many other things) expressing model uncertainty: e.g., using Bayesian modelling, the posterior probabilities of candidate models can be compared. My point here is that, depending on the methodology chosen, econometrics can incorporate more uncertainty than the uncertainty inherent to the parameters and the errors, which is often the only uncertainty expressed in many types of analysis.
What is the difference between Econometrics and Machine Learning? Something that I think could be stressed more is that econometric modelling often assumes that the model chosen is in fact the true model, in the sense that this model is equivalent to the data genera
24,420
What exactly is a Residual Learning block in the context of Deep Residual Networks in Deep Learning?
Yes, that's true; you can take a look at their Caffe model to see how it is implemented.
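To make the block concrete, here is a minimal numpy sketch of the identity-shortcut idea; it is an illustration, not the actual Caffe implementation, and the weight shapes and names are hypothetical.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """y = ReLU(F(x) + x): the block learns the residual F(x) = W2·ReLU(W1·x)
    and adds the identity shortcut x before the final non-linearity."""
    fx = W2 @ relu(W1 @ x)      # the learned residual mapping F(x)
    return relu(fx + x)         # identity shortcut: add the input back

rng = np.random.default_rng(1)
d = 8
x = rng.normal(size=d)
W1, W2 = rng.normal(size=(2, d, d)) * 0.1

y = residual_block(x, W1, W2)
# With F = 0 (zero weights), the block reduces to ReLU(x): the shortcut makes
# learning the identity mapping trivial, which is the point of residual blocks.
print(np.allclose(residual_block(x, np.zeros((d, d)), np.zeros((d, d))), relu(x)))
```

Real ResNet blocks use convolutions and batch normalisation, but the additive shortcut shown here is the defining feature.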
What exactly is a Residual Learning block in the context of Deep Residual Networks in Deep Learning?
Yes that's true, you can take a look at their caffe model to see how it is implemented.
What exactly is a Residual Learning block in the context of Deep Residual Networks in Deep Learning? Yes that's true, you can take a look at their caffe model to see how it is implemented.
What exactly is a Residual Learning block in the context of Deep Residual Networks in Deep Learning? Yes that's true, you can take a look at their caffe model to see how it is implemented.
24,421
Is Kernel Regression similar to Gaussian Process Regression?
Yes, there is a connection, depending on the GP covariance function and the kernel of the smoother. It's discussed in chapter 2 (section 2.6) of Gaussian Processes for Machine Learning. Note that even a simple covariance function, such as the squared exponential, results in complex equivalent kernels due to the spectral properties of the function. Other things to note are:
- in the multivariate setting, the N-WKR boils down to univariate regression in each dimension (see this answer), whereas GPs can model the full multivariate covariance
- there is no equivalent to the GP mean function
- the kernel in N-WKR needn't be a valid GP covariance function, and there may not be an equivalent covariance function for every kernel
- there is no obvious equivalent for e.g. periodic covariance functions as a kernel smoother
- in GPs you are free to combine covariance functions (e.g. through multiplication or addition), see e.g. the kernel cookbook
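A toy numpy sketch (illustrative, not from the book) makes the connection concrete: both the Nadaraya-Watson estimate and the GP posterior mean are linear smoothers, computed here on the same data with the same Gaussian/squared-exponential kernel. All names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 30)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)
Xs = np.linspace(0, 2 * np.pi, 100)          # test inputs

def rbf(a, b, ell=0.5):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Nadaraya-Watson kernel smoother: locally weighted average of y
W = rbf(Xs, X)
nw_mean = (W @ y) / W.sum(axis=1)

# GP regression posterior mean with the same squared-exponential kernel
K = rbf(X, X) + 0.1 ** 2 * np.eye(X.size)    # noise variance on the diagonal
gp_mean = rbf(Xs, X) @ np.linalg.solve(K, y)

# Both are linear smoothers y -> L y, but with different weight matrices:
# N-W weights are normalised kernel evaluations; GP weights also involve K^{-1}.
print(np.max(np.abs(nw_mean - np.sin(Xs))), np.max(np.abs(gp_mean - np.sin(Xs))))
```

The "equivalent kernel" of the GP is exactly the row of weights applied to y, which generally differs from the raw smoothing kernel.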
Is Kernel Regression similar to Gaussian Process Regression?
Yes, there is a connection, depending on the GP covariance function and the kernel of the smoother. It's discussed in chapter 2 (section 2.6) of Gaussian Processes for Machine Learning. Note that even
Is Kernel Regression similar to Gaussian Process Regression? Yes, there is a connection, depending on the GP covariance function and the kernel of the smoother. It's discussed in chapter 2 (section 2.6) of Gaussian Processes for Machine Learning. Note that even a simple covariance function, such as the squared exponential, results in complex equivalent kernels due to the spectral properties of the function. Other things to note are:
- in the multivariate setting, the N-WKR boils down to univariate regression in each dimension (see this answer), whereas GPs can model the full multivariate covariance
- there is no equivalent to the GP mean function
- the kernel in N-WKR needn't be a valid GP covariance function, and there may not be an equivalent covariance function for every kernel
- there is no obvious equivalent for e.g. periodic covariance functions as a kernel smoother
- in GPs you are free to combine covariance functions (e.g. through multiplication or addition), see e.g. the kernel cookbook
Is Kernel Regression similar to Gaussian Process Regression? Yes, there is a connection, depending on the GP covariance function and the kernel of the smoother. It's discussed in chapter 2 (section 2.6) of Gaussian Processes for Machine Learning. Note that even
24,422
Is Kernel Regression similar to Gaussian Process Regression?
There is a connection in that Gaussian Process Modeling is a kernel technique, meaning that GPMs use a kernel function to describe a multivariate Gaussian covariance among observed data points, and regression is used to find the kernel parameters (hyperparameters) that best describe the observed data. Gaussian Process Modeling can extrapolate from observed data to produce an interpolating mean function (with associated uncertainty dictated by the kernel function) for any point in the space. Below are some resources on GPM that describe in detail what types of kernel functions are typically employed as well as the approaches used to estimate kernel hyperparameters: http://www.gaussianprocess.org/gpml/ http://www.eurandom.tue.nl/events/workshops/2010/YESIV/Prog-Abstr_files/Ghahramani-lecture2.pdf
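As a hedged illustration of estimating kernel hyperparameters, the sketch below (hypothetical names and data, not from the linked resources) evaluates the GP log marginal likelihood on a grid of lengthscales and picks the best one; in practice a gradient-based optimiser would be used instead of a grid.

```python
import numpy as np

def log_marginal_likelihood(X, y, ell, noise=0.1):
    """log p(y | X, ell) for a zero-mean GP with a squared-exponential kernel."""
    K = np.exp(-0.5 * (X[:, None] - X[None, :]) ** 2 / ell ** 2)
    K += noise ** 2 * np.eye(X.size)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha                     # data-fit term
            - np.log(np.diag(L)).sum()           # complexity penalty (log det)
            - 0.5 * X.size * np.log(2 * np.pi))  # normalising constant

rng = np.random.default_rng(3)
X = np.linspace(0, 10, 50)
y = np.sin(X) + 0.1 * rng.normal(size=X.size)

# Simple grid search over the lengthscale hyperparameter
grid = np.linspace(0.1, 5.0, 50)
best = max(grid, key=lambda ell: log_marginal_likelihood(X, y, ell))
print(best)   # should land at a lengthscale on the order of sin's variation
```

The marginal likelihood automatically trades data fit against model complexity, which is why neither the tiniest nor the largest lengthscale wins.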
Is Kernel Regression similar to Gaussian Process Regression?
There is a connection in that Gaussian Process Modeling is a kernel technique, meaning that GPMs use a kernel function to describe a multivariate Gaussian covariance among observed data points, and re
Is Kernel Regression similar to Gaussian Process Regression? There is a connection in that Gaussian Process Modeling is a kernel technique, meaning that GPMs use a kernel function to describe a multivariate Gaussian covariance among observed data points, and regression is used to find the kernel parameters (hyperparameters) that best describe the observed data. Gaussian Process Modeling can extrapolate from observed data to produce an interpolating mean function (with associated uncertainty dictated by the kernel function) for any point in the space. Below are some resources on GPM that describe in detail what types of kernel functions are typically employed as well as the approaches used to estimate kernel hyperparameters: http://www.gaussianprocess.org/gpml/ http://www.eurandom.tue.nl/events/workshops/2010/YESIV/Prog-Abstr_files/Ghahramani-lecture2.pdf
Is Kernel Regression similar to Gaussian Process Regression? There is a connection in that Gaussian Process Modeling is a kernel technique, meaning that GPMs use a kernel function to describe a multivariate Gaussian covariance among observed data points, and re
24,423
MCMC; Can we be sure that we have a ''pure'' and ''large enough'' sample from the posterior? How can it work if we are not?
TL;DR: You can't estimate $L$, since $L = \infty$. Thus, the simplifying assumption can never truly be possible. (There may be some cases where it is, but not in the general world of MCMC). You can however decide what $N$ will make the early bias small. Essentially, your question boils down to "how can we estimate burn-in time?". Burn-in is the act of throwing away the beginning samples because the Markov chain has not converged. There are many MCMC diagnostics that help you estimate "burn-in" time; you can see a review of them here. There are two schools of thought regarding burn-in; the popular one is to use one of those diagnostics to decide what $L$ is, and throw away the first $L$ samples, and the second school of thought is that the first $L$ samples shouldn't matter, so don't worry about them. Charlie Geyer has a rant about this that I agree with. Now, I turn to the more technical details of your question. A simplifying assumption you make in your question is that eventually (after $L$ steps), the sampler will start drawing from the limiting distribution. So your samples after $L$ steps are pure draws, albeit correlated. This is untrue. Strictly speaking, $L$ is $\infty$. The Markov chain never truly converges to the limiting distribution in finite time. So estimating $L$ is almost pointless. A different way of posing this question is: what is $L$ such that after $L$ steps, the Markov chain is "close enough" to the limiting distribution? This is the question most diagnostics try to answer. It is increasingly agreed upon that the diagnostics above are generally extremely liberal, and can diagnose "convergence" much earlier than they should. Here is a paper that demonstrates some of the weaknesses of diagnostics. What the above asks users to do instead is: don't worry about $L$, worry about $N$. Generally, users are not interested in the full posterior distribution, but in a specific quantity. 
Often this quantity is the mean of the posterior, or any other function that can be written down as an expectation. This is where the "Monte Carlo" part of MCMC comes in, since Monte Carlo indicates estimating an integral with summation. So if $X_1, X_2, X_3, \dots, X_N$ is your Markov chain (notice how I am ignoring $L$, since $L$ is $\infty$), and we want to estimate the posterior mean ($\theta$), then $$ \bar{\theta}_N = \dfrac{1}{N} \sum_{i=1}^{N}X_i. $$ The idea is that if $N$ is large enough, then the initial bias of the sample will be insignificant. Of course, if the starting value was pathetically far away from the high-probability region of the limiting distribution, a user can eyeball and throw away the first couple of samples. This is different from estimating $L$, since it is not an estimation, but an educated disregard for clearly corrupted samples. Now the question of course is: how large should $N$ be? The answer should depend on how well we want to estimate $\theta$. If we want a great estimate, then we want more samples; if an OK estimate suffices, then we might be fine with a smaller sample. This is also exactly what happens in standard statistical problems. The way we quantify the "goodness" of an estimate is to ask: "what can we say about $(\bar{\theta}_N - \theta)$, the Monte Carlo error?" Under reasonable conditions, there is in fact a Markov chain CLT that says that as $N \to \infty$, for any initial distribution, $$\sqrt{N}(\bar{\theta}_N - \theta) \overset{d}{\to} N_p(0, \Sigma), $$ where $\theta \in \mathbb{R}^p$ and $\Sigma$ is the asymptotic covariance matrix. The key here is that the result is true for any initial distribution. When $\Sigma/N$ is small, we know that the estimator is good. This paper presents this idea of stopping, and my answer here summarizes their method. The results in their paper also hold regardless of the initial distribution of the process.
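The "don't worry about $L$, worry about $N$" recipe can be sketched in code. This is an illustrative Python example, not from the cited papers: a random-walk Metropolis chain targeting a standard normal, started far from the target, whose posterior-mean estimate and batch-means Monte Carlo standard error are computed without discarding any burn-in. All names and settings are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def metropolis(n, start, step=1.0):
    """Random-walk Metropolis targeting a standard normal."""
    x = np.empty(n)
    x[0] = start
    for i in range(1, n):
        prop = x[i - 1] + step * rng.normal()
        # accept with probability min(1, pi(prop) / pi(x))
        if np.log(rng.uniform()) < 0.5 * (x[i - 1] ** 2 - prop ** 2):
            x[i] = prop
        else:
            x[i] = x[i - 1]
    return x

chain = metropolis(50_000, start=10.0)   # deliberately bad starting value

# Monte Carlo estimate of the posterior mean, no burn-in discarded
theta_hat = chain.mean()

# Batch-means estimate of the asymptotic variance Sigma in the Markov-chain
# CLT: split the chain into b batches and use the variance of batch means.
b = int(np.sqrt(chain.size))
batches = chain[: b * b].reshape(b, b).mean(axis=1)
mcse = batches.std(ddof=1) / np.sqrt(b)
print(theta_hat, mcse)   # mean near 0, with a small Monte Carlo standard error
```

With $N$ this large, the corrupted early samples from the bad start contribute negligibly to $\bar{\theta}_N$, which is exactly the point of the answer.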
MCMC; Can we be sure that we have a ''pure'' and ''large enough'' sample from the posterior? How can
TL DR; You can't estimate $L$ since $L = \infty$. Thus, the simplifying assumption can never truly be possible. (There maybe some cases where it is, but not in the general world of MCMC). You can howe
MCMC; Can we be sure that we have a ''pure'' and ''large enough'' sample from the posterior? How can it work if we are not? TL;DR: You can't estimate $L$, since $L = \infty$. Thus, the simplifying assumption can never truly be possible. (There may be some cases where it is, but not in the general world of MCMC). You can however decide what $N$ will make the early bias small. Essentially, your question boils down to "how can we estimate burn-in time?". Burn-in is the act of throwing away the beginning samples because the Markov chain has not converged. There are many MCMC diagnostics that help you estimate "burn-in" time; you can see a review of them here. There are two schools of thought regarding burn-in; the popular one is to use one of those diagnostics to decide what $L$ is, and throw away the first $L$ samples, and the second school of thought is that the first $L$ samples shouldn't matter, so don't worry about them. Charlie Geyer has a rant about this that I agree with. Now, I turn to the more technical details of your question. A simplifying assumption you make in your question is that eventually (after $L$ steps), the sampler will start drawing from the limiting distribution. So your samples after $L$ steps are pure draws, albeit correlated. This is untrue. Strictly speaking, $L$ is $\infty$. The Markov chain never truly converges to the limiting distribution in finite time. So estimating $L$ is almost pointless. A different way of posing this question is: what is $L$ such that after $L$ steps, the Markov chain is "close enough" to the limiting distribution? This is the question most diagnostics try to answer. It is increasingly agreed upon that the diagnostics above are generally extremely liberal, and can diagnose "convergence" much earlier than they should. Here is a paper that demonstrates some of the weaknesses of diagnostics. What the above asks users to do instead is: don't worry about $L$, worry about $N$. 
Generally, users are not interested in the full posterior distribution, but in a specific quantity. Often this quantity is the mean of the posterior, or any other function that can be written down as an expectation. This is where the "Monte Carlo" part of MCMC comes in, since Monte Carlo indicates estimating an integral with summation. So if $X_1, X_2, X_3, \dots, X_N$ is your Markov chain (notice how I am ignoring $L$, since $L$ is $\infty$), and we want to estimate the posterior mean ($\theta$), then $$ \bar{\theta}_N = \dfrac{1}{N} \sum_{i=1}^{N}X_i. $$ The idea is that if $N$ is large enough, then the initial bias of the sample will be insignificant. Of course, if the starting value was pathetically far away from the high-probability region of the limiting distribution, a user can eyeball and throw away the first couple of samples. This is different from estimating $L$, since it is not an estimation, but an educated disregard for clearly corrupted samples. Now the question of course is: how large should $N$ be? The answer should depend on how well we want to estimate $\theta$. If we want a great estimate, then we want more samples; if an OK estimate suffices, then we might be fine with a smaller sample. This is also exactly what happens in standard statistical problems. The way we quantify the "goodness" of an estimate is to ask: "what can we say about $(\bar{\theta}_N - \theta)$, the Monte Carlo error?" Under reasonable conditions, there is in fact a Markov chain CLT that says that as $N \to \infty$, for any initial distribution, $$\sqrt{N}(\bar{\theta}_N - \theta) \overset{d}{\to} N_p(0, \Sigma), $$ where $\theta \in \mathbb{R}^p$ and $\Sigma$ is the asymptotic covariance matrix. The key here is that the result is true for any initial distribution. When $\Sigma/N$ is small, we know that the estimator is good. This paper presents this idea of stopping, and my answer here summarizes their method. 
The results in their paper also hold regardless of the initial distribution of the process.
MCMC; Can we be sure that we have a ''pure'' and ''large enough'' sample from the posterior? How can TL DR; You can't estimate $L$ since $L = \infty$. Thus, the simplifying assumption can never truly be possible. (There maybe some cases where it is, but not in the general world of MCMC). You can howe
24,424
Is it a mistaken idea to use standardized coefficients to assess the relative importance of regression predictors?
gung's answer is, in my view, a critique of the idea of comparing the relative strength of different variables in an empirical analysis without having a model in mind of how those variables interact, or of what the (true) joint distribution of all relevant variables looks like. Think of the example of the importance of an athlete's height and weight that gung mentions. Nobody can prove that, for example, an additive linear regression is a good approximation of the conditional expectation function; in other words, height and weight might matter for an athlete's performance in a very complicated manner. You can run a linear regression including both variables and compare the standardized coefficients, but you do not know whether the results really make sense. To give a Mickey Mouse example, looking at sport climbers (my favorite sport), here is a list of top male climbers according to some performance measure taken from the site 8a.nu, with information about their height, weight and year born (only those with available information). We standardize all variables beforehand so we can directly compare the association of one-standard-deviation changes in the predictors with a one-standard-deviation change in the performance distribution. Excluding for the illustration the number one, Adam Ondra, who is unusually tall, we get the following result. 
rm(list=ls(all=TRUE))
# Show only two decimal places
options(digits=2)
# Read Data and attach
climber<-read.table("https://drive.google.com/uc?export=&confirm=no_antivirus&id=0B70aDwYo0zuGNGJCRHNrY0ptSW8",sep="\t",header=T)
head(climber)
# Drop best climber Adam Ondra who is very tall (kind of outlier)
climber<-subset(climber,name!="Adam Ondra")
# Standardize Predictors
climber$performance_std<-(climber$performance-mean(climber$performance))/sd(climber$performance)
climber$height_std<-(climber$height-mean(climber$height))/sd(climber$height)
climber$weight_std<-(climber$weight-mean(climber$weight))/sd(climber$weight)
climber$born_std<-(climber$born-mean(climber$born))/sd(climber$born)
# Simple Regression, excluding intercept because of the standardization
lm(performance_std~height_std+weight_std-1,data=climber)$coef
height_std weight_std 
     -0.16      -0.25 

Ignoring standard errors etc., it seems that weight is more important than height, or equally important. But one could argue that climbers have become better over time. Perhaps we should control for cohort effects, e.g. training opportunities through better indoor facilities? Let us include year of birth!

# Add year of birth
lm(performance_std~height_std+weight_std+born_std-1,data=climber)$coef
height_std weight_std   born_std 
    -0.293     -0.076      0.256 

Now we find that being young and being small is more important than being slim. But another person could argue that this holds only for top climbers. It could make sense to compare the standardized coefficients across the whole performance distribution (for example via quantile regression). And of course it might differ for female climbers, who are much smaller and slimmer. Nobody knows. This is a Mickey Mouse example of what I think gung refers to. I am not so skeptical; I think it can make sense to look at standardized coefficients if you think that you have specified the right model, or that additive separability makes sense. 
But this depends, as so often, on the question at hand. Regarding the other questions:

Is this equivalent to saying that we shouldn't use standardized coefficients to assess importance because we might have randomly sampled a restricted range of X1 values and a wider range of X2 values? Then when we standardize this problem hasn't gone away, and we end up spuriously thinking that X1 is a weaker predictor than X2?

Yes, I think you could say it like this. The "wider range of X2 values" could arise through omitted variable bias, by including important variables correlated with X1 but omitting those which are correlated with X2.

Why does the problem go away if the true r is exactly 0?

Omitted variable bias is again a good example of why this holds. Omitted variables only cause problems (or bias) if they are correlated with the predictors as well as with the outcome, see the formula in the Wikipedia entry. If the true $r$ is exactly 0 then the variable is uncorrelated with the outcome and there is no problem (even if it is correlated with the predictors).

How do other methods (e.g. looking at semipartial coefficients) do away with this problem?

Other methods, such as semipartial coefficients, face the same problem. If your dataset is large enough, you can for example do nonparametric regression and try to estimate the full joint distribution without assumptions about the functional form (e.g. additive separability) to justify what you are doing, but this is never a proof. Summing up, I think it can make sense to compare standardized or semipartial coefficients, but it depends, and you have to justify to yourself or others why you think it makes sense.
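The omitted-variable point can be illustrated with simulated data. The Python sketch below (hypothetical data, not the climber example) shows that in simple regression the standardized slope equals the Pearson correlation, and that the standardized coefficient of the first predictor changes once a correlated predictor enters the model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)     # x2 correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def standardise(v):
    return (v - v.mean()) / v.std(ddof=1)

ys, x1s, x2s = map(standardise, (y, x1, x2))

# Simple regression: the standardized slope IS the Pearson correlation
b_simple = np.linalg.lstsq(x1s[:, None], ys, rcond=None)[0][0]
r = np.corrcoef(x1, y)[0, 1]
print(b_simple, r)                            # essentially identical

# Multiple regression: the standardized coefficient of x1 changes once the
# correlated predictor x2 enters the model (omitted-variable effect)
X = np.column_stack([x1s, x2s])
b_multi = np.linalg.lstsq(X, ys, rcond=None)[0]
print(b_multi)                                # coefficient of x1 drops
```

Which of the two numbers for x1 measures "importance" depends entirely on which model you believe, which is the point of the answer.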
Is it a mistaken idea to use standardized coefficients to assess the relative importance of regressi
gungs answer is in my view a critique of the idea to compare the relative strength of different variables in an empirical analyses without having a model in mind how those variables interact or how th
Is it a mistaken idea to use standardized coefficients to assess the relative importance of regression predictors? gung's answer is, in my view, a critique of the idea of comparing the relative strength of different variables in an empirical analysis without having a model in mind of how those variables interact, or of what the (true) joint distribution of all relevant variables looks like. Think of the example of the importance of an athlete's height and weight that gung mentions. Nobody can prove that, for example, an additive linear regression is a good approximation of the conditional expectation function; in other words, height and weight might matter for an athlete's performance in a very complicated manner. You can run a linear regression including both variables and compare the standardized coefficients, but you do not know whether the results really make sense. To give a Mickey Mouse example, looking at sport climbers (my favorite sport), here is a list of top male climbers according to some performance measure taken from the site 8a.nu, with information about their height, weight and year born (only those with available information). We standardize all variables beforehand so we can directly compare the association of one-standard-deviation changes in the predictors with a one-standard-deviation change in the performance distribution. Excluding for the illustration the number one, Adam Ondra, who is unusually tall, we get the following result. 
rm(list=ls(all=TRUE))
# Show only two decimal places
options(digits=2)
# Read Data and attach
climber<-read.table("https://drive.google.com/uc?export=&confirm=no_antivirus&id=0B70aDwYo0zuGNGJCRHNrY0ptSW8",sep="\t",header=T)
head(climber)
# Drop best climber Adam Ondra who is very tall (kind of outlier)
climber<-subset(climber,name!="Adam Ondra")
# Standardize Predictors
climber$performance_std<-(climber$performance-mean(climber$performance))/sd(climber$performance)
climber$height_std<-(climber$height-mean(climber$height))/sd(climber$height)
climber$weight_std<-(climber$weight-mean(climber$weight))/sd(climber$weight)
climber$born_std<-(climber$born-mean(climber$born))/sd(climber$born)
# Simple Regression, excluding intercept because of the standardization
lm(performance_std~height_std+weight_std-1,data=climber)$coef
height_std weight_std 
     -0.16      -0.25 

Ignoring standard errors etc., it seems that weight is more important than height, or equally important. But one could argue that climbers have become better over time. Perhaps we should control for cohort effects, e.g. training opportunities through better indoor facilities? Let us include year of birth!

# Add year of birth
lm(performance_std~height_std+weight_std+born_std-1,data=climber)$coef
height_std weight_std   born_std 
    -0.293     -0.076      0.256 

Now we find that being young and being small is more important than being slim. But another person could argue that this holds only for top climbers. It could make sense to compare the standardized coefficients across the whole performance distribution (for example via quantile regression). And of course it might differ for female climbers, who are much smaller and slimmer. Nobody knows. This is a Mickey Mouse example of what I think gung refers to. I am not so skeptical; I think it can make sense to look at standardized coefficients if you think that you have specified the right model, or that additive separability makes sense. 
But this depends, as so often, on the question at hand. Regarding the other questions:

Is this equivalent to saying that we shouldn't use standardized coefficients to assess importance because we might have randomly sampled a restricted range of X1 values and a wider range of X2 values? Then when we standardize this problem hasn't gone away, and we end up spuriously thinking that X1 is a weaker predictor than X2?

Yes, I think you could say it like this. The "wider range of X2 values" could arise through omitted variable bias, by including important variables correlated with X1 but omitting those which are correlated with X2.

Why does the problem go away if the true r is exactly 0?

Omitted variable bias is again a good example of why this holds. Omitted variables only cause problems (or bias) if they are correlated with the predictors as well as with the outcome, see the formula in the Wikipedia entry. If the true $r$ is exactly 0 then the variable is uncorrelated with the outcome and there is no problem (even if it is correlated with the predictors).

How do other methods (e.g. looking at semipartial coefficients) do away with this problem?

Other methods, such as semipartial coefficients, face the same problem. If your dataset is large enough, you can for example do nonparametric regression and try to estimate the full joint distribution without assumptions about the functional form (e.g. additive separability) to justify what you are doing, but this is never a proof. Summing up, I think it can make sense to compare standardized or semipartial coefficients, but it depends, and you have to justify to yourself or others why you think it makes sense.
Is it a mistaken idea to use standardized coefficients to assess the relative importance of regressi gungs answer is in my view a critique of the idea to compare the relative strength of different variables in an empirical analyses without having a model in mind how those variables interact or how th
24,425
Question about subtracting mean on train/valid/test set
Let's assume you have 100 images in total; 90 are training data and 10 are test data. The authors correctly assert that using the whole 100-image sample to compute the sample mean $\hat{\mu}$ is wrong. That is because in this case you would have information leakage: information from your "out-of-sample" elements would leak into your training set. In particular, for the estimation of $\hat{\mu}$, if you use 100 instead of 90 images you allow your training set to have a more informed mean than it should. As a result your training error would potentially be lower than it should be. The estimated $\hat{\mu}$ is common throughout the training/validation/testing procedure. The same $\hat{\mu}$ is to be used to centre all your data. (I mention this because I have the slight impression that you use the mean of each separate image to centre that image.)
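A minimal numpy sketch of the procedure (illustrative; the array shapes and names are hypothetical): the mean is estimated on the 90 training images only, and that same $\hat{\mu}$ centres both splits.

```python
import numpy as np

rng = np.random.default_rng(0)
images = rng.uniform(size=(100, 32, 32, 3))   # 100 toy "images"

train, test = images[:90], images[90:]        # 90/10 split, as in the example

# The per-pixel mean is estimated on the TRAINING images only ...
mu = train.mean(axis=0)

# ... and that same mu centres training, validation and test data alike.
train_c = train - mu
test_c = test - mu

# Using all 100 images for mu would leak test information into training:
mu_leaky = images.mean(axis=0)
print(np.abs(mu - mu_leaky).max())   # the two means differ, hence the leakage
```

Note that the mean is a per-pixel array shared across all splits, not a per-image quantity.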
Question about subtracting mean on train/valid/test set
Let's assume you have 100 images in total; 90 are training data and 10 are test data. The authors correctly asserts that using the whole 100 image sample to compute the sample mean $\hat{\mu}$ is wro
Question about subtracting mean on train/valid/test set Let's assume you have 100 images in total; 90 are training data and 10 are test data. The authors correctly assert that using the whole 100-image sample to compute the sample mean $\hat{\mu}$ is wrong. That is because in this case you would have information leakage: information from your "out-of-sample" elements would leak into your training set. In particular, for the estimation of $\hat{\mu}$, if you use 100 instead of 90 images you allow your training set to have a more informed mean than it should. As a result your training error would potentially be lower than it should be. The estimated $\hat{\mu}$ is common throughout the training/validation/testing procedure. The same $\hat{\mu}$ is to be used to centre all your data. (I mention this because I have the slight impression that you use the mean of each separate image to centre that image.)
Question about subtracting mean on train/valid/test set Let's assume you have 100 images in total; 90 are training data and 10 are test data. The authors correctly asserts that using the whole 100 image sample to compute the sample mean $\hat{\mu}$ is wro
24,426
GAM model summary: What is meant by "significance of smooth terms"?
As not_bonferroni mentions, help(summary.gam) does have useful information. Do see the references therein, or Wood, Simon N., Generalized Additive Models: An Introduction with R, Second Edition (Chapman & Hall/CRC Texts in Statistical Science), in particular section 6.12. To give a brief and simple answer to "When I take a summary of the model, I get a chart that indicates the 'significance of smooth terms' (which is quite significant). What does this represent?", let us suppose that you only have one covariate $x_i$ and an outcome variable $y_i\in\{0,1\}$ which is $1$ if observation $i$ is overall happy and $0$ otherwise. The model you fit is $$ g\left(E\left(y_i \mid x_i\right)\right) = \alpha + f(x_i) $$ where $g$ is a link function, and $f$ is an unknown smooth function. Then the $p$-value is for the null hypothesis $H_0:\, f(x)=0$. To give a simple example, we run some simulations below where $f(x)=2\sin(x)$, $f(x)=x$ and $f(x)=0$.

library(mgcv)
set.seed(2160179)
n <- 100
x <- seq(-pi, pi, length.out = n)

# f(x) = 2sin(x)
y <- 1/(1 + exp(-(1 + 2 * sin(x)))) > runif(n)
fit <- gam(y ~ s(x, k = 20), binomial())
summary(fit)
#R ...
#R Approximate significance of smooth terms:
#R        edf Ref.df Chi.sq  p-value
#R s(x) 4.285  5.344  32.61 8.33e-06 ***
#R ---
#R ...

# f(x) = x
y <- 1/(1 + exp(-(1 + x))) > runif(n)
fit <- gam(y ~ s(x, k = 20), binomial())
summary(fit)
#R ...
#R Approximate significance of smooth terms:
#R      edf Ref.df Chi.sq  p-value
#R s(x)   1      1  24.45 7.63e-07 ***
#R ---
#R ...

# f(x) = 0
y <- 1/(1 + exp(-1)) > runif(n)
fit <- gam(y ~ s(x, k = 20), binomial())
summary(fit)
#R ...
#R Approximate significance of smooth terms:
#R        edf Ref.df Chi.sq p-value
#R s(x) 6.532  8.115  11.04    0.21
#R ---
#R ...

We reject the null hypothesis in the first two cases but not in the last, as expected. 
Suppose now that we add two additional covariates to the model such that $$ g\left(E\left(y_i \mid x_{1i}, x_{2i}, x_{3i}\right)\right) = \alpha + f_1(x_{1i}) + f_2(x_{2i}) + \beta x_{3i} $$ The null hypothesis for $f_1$ is then that there is no (potentially non-linear) association with covariate one, $x_{1i}$, given a (potentially non-linear) association with covariate two, $x_{2i}$, and a linear association with covariate three, $x_{3i}$, on the link scale. One final comment (which is stressed in help(summary.gam)) is that the $p$-values do not account for uncertainty in the smoothing parameter estimates. Thus, you may need to be careful when the $p$-value is close to your threshold.
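The data-generating step of the first simulation can also be sketched outside R. Here is a quick Python translation using only the standard library; the seed and coefficients mirror the R code above, but this is just an illustrative sketch, not part of the original answer:

```python
import math
import random

random.seed(2160179)
n = 100
# An evenly spaced grid on [-pi, pi], like seq(-pi, pi, length.out = n) in R.
x = [-math.pi + 2 * math.pi * i / (n - 1) for i in range(n)]

# P(y = 1 | x) on the probability scale via the inverse logit link,
# with intercept 1 and smooth term f(x) = 2*sin(x):
p = [1 / (1 + math.exp(-(1 + 2 * math.sin(v)))) for v in x]
y = [random.random() < prob for prob in p]

print(sum(y) / n)  # the overall happiness rate implied by this f
```

The point is only that the binary outcome is generated by pushing $\alpha + f(x)$ through the inverse link; the GAM fit itself would still be done with mgcv.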
24,427
GAM model summary: What is meant by "significance of smooth terms"?
The significance of the smooth terms is exactly what the name says: how significant the smooth terms of your model are. Perhaps the question was really about what the smooth terms are (since you seem to understand what significance is)? Your model includes various terms; some of them are "smooth" terms, basically penalized cubic regression splines. Those are the terms with an "s", e.g. s(salary, k=3). Other terms are parametric, for instance num_siblings or num_vacation. Each of these terms is more or less important in explaining the variance of your response variable "overall_happy". Some of them seem quite unimportant, like num_vacation, which has low significance (a large p-value of 0.132343). This means that this variable probably has no mechanistic, deterministic, or physical influence on your response variable and, thus, you can ignore it and remove it from your model. Other terms have high significance (a small p-value), like the smooth term s(salary). This means that, most probably, the salary of a person is in reality one of the major factors contributing to their happiness.
24,428
What is the name of the density estimation method where all possible pairs are used to create a Normal mixture distribution?
This is an intriguing idea, because the estimator of the standard deviation appears to be less sensitive to outliers than the usual root-mean-square approaches. However, I doubt this estimator has been published. There are three reasons why: it is computationally inefficient, it is biased, and even when the bias is corrected, it is statistically inefficient (but only a little). These can be seen with a little preliminary analysis, so let's do that first and then draw the conclusions. Analysis The ML estimators of the mean $\mu$ and standard deviation $\sigma$ based on data $(x_i, x_j)$ are $$\hat\mu(x_i,x_j) = \frac{x_i+x_j}{2}$$ and $$\hat\sigma(x_i,x_j) = \frac{|x_i-x_j|}{2}.$$ Therefore the method described in the question is $$\hat\mu(x_1, x_2, \ldots, x_n) = \frac{2}{n(n-1)} \sum_{i\gt j} \frac{x_i+x_j}{2} = \frac{1}{n}\sum_{i=1}^nx_i,$$ which is the usual estimator of the mean, and $$\hat\sigma(x_1, x_2, \ldots, x_n) = \frac{2}{n(n-1)}\sum_{i\gt j}\frac{|x_i-x_j|}{2} = \frac{1}{n(n-1)}\sum_{i,j}|x_i-x_j|.$$ The expected value of this estimator is readily found by exploiting the exchangeability of the data, which implies $E = \mathbb{E}(|x_i-x_j|)$ is independent of $i$ and $j$. Whence $$\mathbb{E}(\hat\sigma(x_1, x_2, \ldots, x_n)) = \frac{1}{n(n-1)}\sum_{i,j}\mathbb{E}(|x_i-x_j|) = E.$$ But since $x_i$ and $x_j$ are independent Normal variates, their difference is a zero-mean Normal with variance $2\sigma^2$. Its absolute value therefore is $\sqrt{2}\sigma$ times a $\chi(1)$ distribution, whose mean is $\sqrt{2/\pi}$. Consequently $$E = \frac{2}{\sqrt{\pi}} \sigma.$$ The coefficient $2/\sqrt{\pi} \approx 1.128$ is the bias in this estimator. In the same way, but with considerably more work, one could compute the variance of $\hat\sigma$, but--as we will see--there's unlikely to be much interest in this, so I will just estimate it with a quick simulation. Conclusions The estimator is biased. $\hat\sigma$ has a substantial constant bias of about +13%. 
This could be corrected. In this example with a sample size of $n=20,000$, both the biased and bias-corrected estimators are plotted over the histogram. The 13% error is apparent.

It is computationally inefficient. Because the sum of absolute values, $\sum_{i,j}|x_i-x_j|$, has no algebraic simplification, its calculation requires $O(n^2)$ effort instead of the $O(n)$ effort for almost any other estimator. This scales badly, making it prohibitively expensive once $n$ exceeds $10,000$ or so. For instance, computing the previous figure required 45 seconds of CPU time and 8 GB RAM in R. (On other platforms the RAM requirements would be much smaller, perhaps at a slight cost in computation time.)

It is statistically inefficient. To give it the best showing, let's consider the unbiased version and compare it to the unbiased version of either the least squares or maximum likelihood estimator $$\hat\sigma_{OLS} = \sqrt{\frac{1}{n-1} \sum_{i=1}^n \left(x_i - \hat\mu\right)^2}\; \sqrt{\frac{n-1}{2}}\,\frac{\Gamma((n-1)/2)}{\Gamma(n/2)}.$$ The R code below demonstrates that the unbiased version of the estimator in the question is surprisingly efficient: across a range of sample sizes from $n=3$ to $n=300$, its variance is usually about 1% to 2% greater than the variance of $\hat\sigma_{OLS}$. This means you should plan on paying an extra 1% to 2% more for samples in order to achieve any given level of precision in estimating $\sigma$.

Afterward

The form of $\hat\sigma$ is reminiscent of the robust and resistant Theil-Sen estimator--but instead of using the medians of the absolute differences, it uses their means. If the objective is to have an estimator that is resistant to outlying values or one that is robust to departures from the Normality assumption, then using the median would be more advisable.

Code

sigma <- function(x) sum(abs(outer(x, x, '-'))) / (2*choose(length(x), 2))
#
# sigma is biased.
#
y <- rnorm(1e3) # Don't exceed 2E4 or so!
mu.hat <- mean(y)
sigma.hat <- sigma(y)

hist(y, freq=FALSE,
     main="Biased (dotted red) and Unbiased (solid blue) Versions of the Estimator",
     xlab=paste("Sample size of", length(y)))
curve(dnorm(x, mu.hat, sigma.hat), col="Red", lwd=2, lty=3, add=TRUE)
curve(dnorm(x, mu.hat, sqrt(pi/4)*sigma.hat), col="Blue", lwd=2, add=TRUE)
#
# The variance of sigma is too large.
#
N <- 1e4
n <- 10
y <- matrix(rnorm(n*N), nrow=n)
sigma.hat <- apply(y, 2, sigma) * sqrt(pi/4)
sigma.ols <- apply(y, 2, sd) / (sqrt(2/(n-1)) * exp(lgamma(n/2)-lgamma((n-1)/2)))
message("Mean of unbiased estimator is ", format(mean(sigma.hat), digits=4))
message("Mean of unbiased OLS estimator is ", format(mean(sigma.ols), digits=4))
message("Variance of unbiased estimator is ", format(var(sigma.hat), digits=4))
message("Variance of unbiased OLS estimator is ", format(var(sigma.ols), digits=4))
message("Efficiency is ", format(var(sigma.ols) / var(sigma.hat), digits=4))
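As a cross-check, the bias factor $2/\sqrt{\pi}$ can also be reproduced with a short Python sketch of the same quantity the sigma() function computes (the mean absolute pairwise difference); the sample size and seed here are arbitrary choices, not from the original answer:

```python
import math
import random

random.seed(1)
n = 2000
# Standard Normal data, so the true sigma is 1.
x = [random.gauss(0.0, 1.0) for _ in range(n)]

# Mean of |x_i - x_j| over all unordered pairs -- the biased estimator.
total = sum(abs(a - b) for i, a in enumerate(x) for b in x[i + 1:])
sigma_hat = total / (n * (n - 1) / 2)

print(sigma_hat)                           # close to 2/sqrt(pi), about 1.128
print(sigma_hat * math.sqrt(math.pi) / 2)  # bias-corrected, close to 1
```

Note the double loop: this is exactly the $O(n^2)$ cost criticized above, which is why the sample size is kept modest.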
24,429
Maximum likelihood estimator of joint distribution given only marginal counts
This kind of problem was studied in the paper "Data Augmentation in Multi-way Contingency Tables With Fixed Marginal Totals" by Dobra et al (2006). Let $\theta$ denote the parameters of the model, let $\mathbf{n}$ denote the unobserved integer table of counts for each $(x,y)$ pair, and let $C(S,T)$ be the set of integer tables whose marginal counts equal $(S,T)$. Then the probability of observing the marginal counts $(S,T)$ is: $$ p(S,T | \theta) = \sum_{\mathbf{n} \in C(S,T)} p(\mathbf{n} | \theta) $$ where $p(\mathbf{n} | \theta)$ is the multinomial sampling distribution. This defines the likelihood function for ML, but direct evaluation is infeasible except for small problems. The approach they recommend is MCMC, where you alternately update $\mathbf{n}$ and $\theta$ by sampling from a proposal distribution and accepting the change according to the Metropolis-Hastings acceptance ratio. This could be adapted to find an approximate maximum over $\theta$ using Monte Carlo EM. A different approach would use variational methods to approximate the sum over $\mathbf{n}$. The marginal constraints can be encoded as a factor graph and inference over $\theta$ could be carried out using Expectation Propagation. To see why this problem is difficult and does not admit a trivial solution, consider the case $S=(1,2), T=(2,1)$. Taking $S$ as the row sums and $T$ as the column sums, there are two possible tables of counts: $$ \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix} \qquad \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix} $$ Therefore the likelihood function is $$ p(S,T|\theta) = 3 p_{12} p_{21}^2 + 6 p_{11} p_{21} p_{22} $$ The MLE for this problem is $$ \hat{p}_{x,y} = \begin{bmatrix} 0 & 1/3 \\ 2/3 & 0 \end{bmatrix} $$ which corresponds to assuming the table on the left. 
By contrast, the estimate that you would get by assuming independence is $$ q_{x,y} = \begin{bmatrix} 1/3 \\ 2/3 \end{bmatrix} \begin{bmatrix} 2/3 & 1/3 \end{bmatrix} = \begin{bmatrix} 2/9 & 1/9 \\ 4/9 & 2/9 \end{bmatrix} $$ which has a smaller likelihood value.
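The closing comparison is easy to verify numerically. A small Python check of the two likelihood values, using the likelihood expression derived above:

```python
# L(p) = 3*p12*p21^2 + 6*p11*p21*p22 for the marginals S = (1, 2), T = (2, 1).
def likelihood(p11, p12, p21, p22):
    return 3 * p12 * p21 ** 2 + 6 * p11 * p21 * p22

mle = likelihood(0, 1 / 3, 2 / 3, 0)            # the MLE table (left table)
indep = likelihood(2 / 9, 1 / 9, 4 / 9, 2 / 9)  # the independence estimate

print(mle, indep)  # 4/9 ~ 0.444 versus 144/729 ~ 0.198
```

So the independence estimate, while reasonable, does attain a strictly smaller likelihood than the MLE, as claimed.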
24,430
Maximum likelihood estimator of joint distribution given only marginal counts
As has been pointed out by @Glen_b, this is insufficiently specified. I do not think you can use maximum likelihood unless you can fully specify the likelihood. If you were willing to assume independence, then the problem is quite simple (incidentally, I think the solution would be the maximum entropy solution that has been suggested). If you are neither willing nor able to impose additional structure on your problem and you still want some kind of approximation to the values of the cells, maybe you could use the Fréchet–Hoeffding copula bounds. Without additional assumptions, I do not think you can go any further.
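For illustration, the Fréchet–Hoeffding bounds on a single joint cell are simple to compute; here is a minimal Python sketch (the marginal values are invented for the example):

```python
def frechet_hoeffding(px, py):
    # Bounds on the joint probability P(X = x, Y = y) given only the
    # marginals P(X = x) = px and P(Y = y) = py:
    # max(0, px + py - 1) <= P(X = x, Y = y) <= min(px, py).
    return max(0.0, px + py - 1.0), min(px, py)

lo, hi = frechet_hoeffding(0.75, 0.5)
print(lo, hi)  # (0.25, 0.5): the cell probability must lie in this interval
```

Without further assumptions, any value in that interval is consistent with the observed marginals, which is exactly why the problem is under-determined.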
24,431
Maximum likelihood estimator of joint distribution given only marginal counts
Edit: This answer is based on the incorrect assumption that the likelihood of the marginal counts given $p_{x,y}$ is only a function of the marginal probabilities $p_x = \sum_y p_{x,y}$ and $p_y = \sum_x p_{x,y}$. I'm still thinking about it. Wrong stuff follows:

As mentioned in a comment, the problem with finding "the" maximum-likelihood estimator for $p_{x,y}$ is that it's not unique. For instance, consider the case with binary $X, Y$ and marginals $S_1 = S_2 = T_1 = T_2 = 10$. The two estimators $$p = \left(\begin{array}{cc} \frac12 & 0 \\ 0 & \frac12\end{array}\right), \qquad p = \left(\begin{array}{cc} \frac14 & \frac14 \\ \frac14 & \frac14\end{array}\right)$$ have the same marginal probabilities $p_x$ and $p_y$ in all cases, and hence have equal likelihoods (both of which maximize the likelihood function, as you can verify).

Indeed, no matter what the marginals are (as long as two of them are nonzero in each dimension), the maximum-likelihood solution is not unique. I'll prove this for the binary case. Let $p = \left(\begin{array}{cc}a & b \\ c & d\end{array}\right)$ be a maximum-likelihood solution. Without loss of generality suppose $0 < a \le d$. Then $p = \left(\begin{array}{cc}0 & b + a \\ c + a & d - a\end{array}\right)$ has the same marginals and is thus also a maximum-likelihood solution.

If you want to additionally apply a maximum-entropy constraint, then you do get a unique solution, which as F. Tussell stated is the solution in which $X, Y$ are independent.
You can see this as follows: The entropy of the distribution is $H(p) = -\sum_{x,y} p_{x,y} \log p_{x,y}$; maximizing subject to $\sum_x p_{x,y} = p_y$ and $\sum_{y} p_{x,y} = p_x$ (equivalently, $\vec g(p) = 0$ where $g_x(p) = \sum_y p_{x,y} - p_x$ and $g_y(p) = \sum_x p_{x,y} - p_y$) using Lagrange multipliers gives the equation: $$\nabla H(p) = \sum_{ k \in X \cup Y} \lambda_k \nabla g_k(p) $$ All the gradients of each $g_k$ are 1, so coordinate-wise this works out to $$-1 - \log p_{x,y} = \lambda_x + \lambda_y \implies p_{x,y} = e^{-1-\lambda_x-\lambda_y}$$ plus the original constraints $\sum_x p_{x,y} = p_y$ and $\sum_{y} p_{x,y} = p_x$. You can verify that this is satisfied when $e^{-1/2 - \lambda_x} = p_x$ and $e^{-1/2 - \lambda_y} = p_y$, giving $$p_{x,y} = p_x p_y.$$
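A quick Python check illustrates the point: among tables with the same marginals, the independent table has the larger entropy. The two tables below both have uniform marginals $(1/2, 1/2)$, like the first example above:

```python
import math

def entropy(cells):
    # Shannon entropy in nats, with the convention 0 * log 0 = 0.
    return -sum(p * math.log(p) for p in cells if p > 0)

# Both tables have marginals p_x = p_y = (1/2, 1/2):
diagonal = [0.5, 0.0, 0.0, 0.5]         # X = Y with probability 1
independent = [0.25, 0.25, 0.25, 0.25]  # p_{x,y} = p_x * p_y

print(entropy(diagonal), entropy(independent))  # log 2 < log 4
```

The maximum-entropy constraint therefore selects the independent table among the (many) equal-likelihood candidates.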
24,432
How to choose the cutoff probability for a rare event Logistic Regression
I disagree that a 50% cut-off is either inherently valid or supported by the literature. The only case where such a cut-off might be justified is in a case-control design where the prevalence of the outcome is exactly 50%, but even then the choice would be subject to a few conditions.

I think the principal rationale for the choice of cut-off is the desired operating characteristic of the diagnostic test. A cut-off may be chosen to achieve a desired sensitivity or specificity. For an example of this, consult the medical devices literature. Sensitivity is often set to a fixed amount: examples include 80%, 90%, 95%, 99%, 99.9%, or 99.99%.

The sensitivity/specificity trade-off should be weighed against the harms of Type I and Type II errors. Oftentimes, as with statistical testing, the harm of a Type I error is greater, and so we control that risk. Still, these harms are rarely quantifiable. Because of that, I have major objections to cut-off selection methods which rely on a single measure of predictive accuracy: they convey, incorrectly, that harms can and have been quantified. Your issue of too many false positives is an example of the contrary: Type II error may be more harmful. Then you may set the threshold to achieve a desired specificity, and report the achieved sensitivity at that threshold. If you find both are too low to be acceptable for practice, your risk model does not work and should be rejected.

Sensitivity and specificity are easily calculated or looked up from a table over an entire range of possible cut-off values. The trouble with the ROC is that it omits the specific cut-off information from the graphic. The ROC is therefore irrelevant for choosing a cut-off value.
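As a sketch of the specificity-first procedure described above, here is a minimal Python example. The score distributions are invented purely for illustration; any resemblance to a real risk model is coincidental:

```python
import random

random.seed(0)
# Hypothetical risk scores: events tend to score higher than non-events.
controls = [random.betavariate(2, 8) for _ in range(900)]  # non-events
cases = [random.betavariate(5, 5) for _ in range(100)]     # events

target_specificity = 0.95
# Smallest cut-off that classifies at least 95% of controls as negative:
cutoff = sorted(controls)[int(target_specificity * len(controls))]

sensitivity = sum(s >= cutoff for s in cases) / len(cases)
specificity = sum(s < cutoff for s in controls) / len(controls)
print(cutoff, sensitivity, specificity)
```

The cut-off is fixed by the desired specificity, and the achieved sensitivity at that cut-off is then reported; if it is unacceptably low, the model itself is rejected rather than the threshold fiddled with.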
How to choose the cutoff probability for a rare event Logistic Regression
I disagree that a 50% cutoff is either inherently valid or supported by the literature. The only case where such a cutoff might be justified is in a case-control design where the prevalence of the outcome is exactly 50%, but even then the choice would be subject to a few conditions. I think the principal rationale for the choice of cut-off is the desired operating characteristic of the diagnostic test. A cut-off may be chosen to achieve a desired sensitivity or specificity. For an example of this, consult the medical devices literature. Sensitivity is often set to a fixed amount: examples include 80%, 90%, 95%, 99%, 99.9%, or 99.99%. The sensitivity/specificity tradeoff should be compared to the harms of Type I and Type II errors. Often, as with statistical testing, the harm of a Type I error is greater, and so we control that risk. Still, these harms are rarely quantifiable. Because of that, I have major objections to cut-off selection methods which rely on a single measure of predictive accuracy: they convey, incorrectly, that harms can and have been quantified. Your issue of too many false positives is an example of the contrary: Type II error may be more harmful. Then you may set the threshold to achieve a desired specificity, and report the achieved sensitivity at that threshold. If you find both are too low to be acceptable for practice, your risk model does not work and it should be rejected. Sensitivity and specificity are easily calculated, or looked up from a table, over an entire range of possible cut-off values. The trouble with the ROC is that it omits the specific cut-off information from the graphic. The ROC is therefore irrelevant for choosing a cutoff value.
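The "set the threshold to achieve a desired specificity, then report the achieved sensitivity" approach can be sketched as follows (an illustrative sketch with toy data; the function names are mine, not from the answer):

```python
import numpy as np

def sens_spec(y_true, p_hat, cutoff):
    """Sensitivity and specificity of the rule 'predict 1 if p_hat >= cutoff'."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(p_hat) >= cutoff
    sensitivity = np.mean(y_pred[y_true])       # true-positive rate
    specificity = np.mean(~y_pred[~y_true])     # true-negative rate
    return sensitivity, specificity

def cutoff_for_specificity(y_true, p_hat, target):
    """Smallest cutoff whose specificity reaches the target, together with
    the sensitivity achieved there (the quantity to report)."""
    for c in np.unique(p_hat):                  # candidate cutoffs, ascending
        sens, spec = sens_spec(y_true, p_hat, c)
        if spec >= target:
            return float(c), float(sens), float(spec)
    return 1.0, 0.0, 1.0                        # no cutoff reaches the target

y = [0, 0, 0, 0, 1, 1]
p = [0.10, 0.20, 0.30, 0.90, 0.80, 0.95]
print(cutoff_for_specificity(y, p, target=0.95))  # (0.95, 0.5, 1.0)
```

If the achieved sensitivity is unacceptably low at every cutoff that meets the target specificity, that is the signal, per the answer, to reject the risk model.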
24,433
building a classification model for strictly binary data
would support vector machine help? as far as i know, SVM only deals with continuous variables as the predictors ...

Binary variables are not a problem for SVM. Even specialized kernels exist for exactly such data (Hamming kernel, Tanimoto/Jaccard kernel), though I don't recommend using those if you're not intimately familiar with kernel methods.

would logistic regression apply here? as far as i know, the predictors in logistic regression also are continuous

Logistic regression works with binary predictors. It is probably your best option.

how to explain the relationships in these models (especially to clinicians)?

If you use a linear SVM it is fairly straightforward to explain what's going on. Logistic regression is a better option, though, since most clinicians actually know these models (and by "know" I mean "have heard of").
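Since the answer recommends logistic regression for binary predictors, here is a minimal self-contained sketch on simulated data (the fitting routine, variable names, and data are mine, not from the answer):

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Plain Newton-Raphson logistic regression; returns [intercept, coefs]."""
    Xd = np.column_stack([np.ones(len(X)), X])        # add intercept column
    beta = np.zeros(Xd.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xd @ beta))          # fitted probabilities
        grad = Xd.T @ (y - p)                         # score vector
        hess = (Xd * (p * (1 - p))[:, None]).T @ Xd   # observed information
        beta += np.linalg.solve(hess, grad)           # Newton step
    return beta

rng = np.random.default_rng(0)
n = 2000
X = rng.integers(0, 2, size=(n, 3)).astype(float)     # three 0/1 predictors
logit = -0.5 + 1.5 * X[:, 0] - 1.0 * X[:, 1]          # X[:, 2] is pure noise
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

beta = fit_logistic(X, y)
odds_ratios = np.exp(beta[1:])
```

The fitted coefficients are log odds ratios, which is a big part of why logistic regression is easy to explain to clinicians.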
24,434
building a classification model for strictly binary data
I would like to share my experience of classifying about 0.3 million binary records with a majority of false values. I have used linear SVM, complex trees, LDA, QDA, logistic regression, etc. All these methods had an accuracy of about 54%, which is not good. According to my professor, the classification methods that could help me in this problem are neural networks and quadratic SVM, but I haven't tested these. I hope this could help.
24,435
Does a canonical link function always exist for a Generalized Linear Model (GLM)?
For these distributions, $A'(\theta) = E(Y)$ and $A''(\theta)=\operatorname{Var}(Y)/d(\tau)$. Since the variance and the dispersion parameter are non-zero (and in fact positive), $A'(\theta)$ is a strictly increasing function and must be invertible. However, I am not sure whether there are distributions in this family that have infinite variance; I was not able to find such examples.
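A concrete example (mine, not the answer's): for the Poisson family, with dispersion $d(\tau)=1$, the claim can be verified directly.

```latex
% Poisson in exponential-family form, with natural parameter \theta = \log\lambda:
%   f(y;\theta) = \exp\{ y\theta - A(\theta) - \log y! \}, \qquad A(\theta) = e^{\theta}.
A'(\theta) = e^{\theta} = E(Y), \qquad
A''(\theta) = e^{\theta} = \operatorname{Var}(Y) > 0,
% so A' is strictly increasing, hence invertible, and the canonical link is
g(\mu) = (A')^{-1}(\mu) = \log\mu .
```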
24,436
Classifier performance measure that combines sensitivity and specificity?
I would say that there might not be any single measure which you should take into account. The last time I did probabilistic classification I used the R package ROCR and explicit cost values for the false positives and false negatives. I considered all cutoff points from 0 to 1 and used many measures, such as expected cost, when selecting the cutoff point. Of course I already had the AUC as a general measure of classification accuracy, but for me this was not the only possibility. Values for the FP and FN costs must come from outside your particular model; maybe these are provided by some subject-matter expert? For example, in customer churn analysis it might be expensive to incorrectly infer that a customer is not churning, but it will also be expensive to give a general reduction in prices for services without the accuracy to target these to the correct groups. -Analyst
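The expected-cost idea can be sketched like this (illustrative code of my own; ROCR itself is an R package, so this is just the arithmetic behind the approach):

```python
import numpy as np

def expected_cost(y_true, p_hat, cutoff, cost_fp, cost_fn):
    """Average misclassification cost of the decisions made at a given cutoff."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(p_hat) >= cutoff
    fp = np.sum(y_pred & ~y_true)     # false positives
    fn = np.sum(~y_pred & y_true)     # false negatives
    return (cost_fp * fp + cost_fn * fn) / len(y_true)

def best_cutoff(y_true, p_hat, cost_fp, cost_fn):
    """Cutoff in [0, 1] minimizing the expected cost on the data."""
    grid = np.linspace(0.0, 1.0, 101)
    costs = [expected_cost(y_true, p_hat, c, cost_fp, cost_fn) for c in grid]
    return float(grid[int(np.argmin(costs))])
```

When the FN cost dominates (e.g. churn that goes unnoticed), the chosen cutoff moves down; when the FP cost dominates (e.g. blanket discounts for customers who would have stayed), it moves up.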
24,437
Classifier performance measure that combines sensitivity and specificity?
Classification accuracy, sensitivity, specificity, and any simple combination of them are all improper scoring rules. That is, they are optimized by a bogus model. Using them will make you choose the wrong features, give the wrong weights, and make suboptimal decisions. One of many ways decisions are suboptimal is the false confidence you get when predicted probabilities are near the threshold implied by the use of these measures. In short, everything that can go wrong does go wrong with these measures. Using them to compare even two well-fitted models will mislead you.
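To make the "optimized by a bogus model" point concrete, here is a small simulation of my own (not from the answer): accuracy cannot tell an honest probability model from a degenerate one, while a proper scoring rule such as the Brier score can.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
p_true = 0.2                          # every subject has a 20% event risk
y = rng.random(n) < p_true

p_honest = np.full(n, p_true)         # calibrated probability model
p_bogus = np.full(n, 0.0)             # degenerate model: "nothing ever happens"

def accuracy(p, y, cutoff=0.5):       # an improper rule
    return float(np.mean((p >= cutoff) == y))

def brier(p, y):                      # a proper scoring rule
    return float(np.mean((p - y) ** 2))

# Accuracy cannot separate the two models (both classify 'no event' everywhere)...
assert accuracy(p_honest, y) == accuracy(p_bogus, y)
# ...but the Brier score correctly prefers the honest probabilities.
print(brier(p_honest, y), brier(p_bogus, y))   # about 0.16 vs 0.20
```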
24,438
Classifier performance measure that combines sensitivity and specificity?
EDIT: Whoops, didn't realise how old this was; hope this is useful for anyone who stumbles upon it.
You've got a bunch of options which make sense depending on what fits your exact context:
- Balanced accuracy is just the mean of sensitivity and specificity, but still usually preferable to accuracy
- F1 is the harmonic mean of sensitivity and PPV, and as you said is what you want when you care about the positive class, but not the negative class so much
- MCC is more balanced than F1, and may be what you're after: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7863449/
In terms of general advice - perhaps interrogate what your context demands a little bit more. What is more harmful - FN or FP? You can tailor which metrics suit your need a little more, and a holistic approach to what they all represent (in words even) would be more sensible than relying on a single value.
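For reference, the three measures mentioned above, computed from a 2x2 confusion matrix (a straightforward sketch; the helper name is mine):

```python
import math

def metrics(tp, fp, fn, tn):
    """Balanced accuracy, F1 and MCC from confusion-matrix counts."""
    sens = tp / (tp + fn)                  # sensitivity / recall
    spec = tn / (tn + fp)                  # specificity
    ppv = tp / (tp + fp)                   # positive predictive value / precision
    balanced_acc = (sens + spec) / 2       # mean of sensitivity and specificity
    f1 = 2 * ppv * sens / (ppv + sens)     # harmonic mean of PPV and sensitivity
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return balanced_acc, f1, mcc
```

For example, `metrics(40, 10, 20, 30)` returns roughly (0.708, 0.727, 0.408), and only a perfect classifier scores 1.0 on all three.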
24,439
How to choose the number of trees in a generalized boosted regression model?
This is GBM:
http://rss.acs.unt.edu/Rdoc/library/gbm/html/gbm.html
http://cran.r-project.org/web/packages/dismo/vignettes/brt.pdf
"I don't think that ... " has been the dangerous first part of many sentences. Good enough is meaningless without a measure of goodness, a rubric. What are the measures of goodness for any other method?
- Difference between model and data (SSE, ...)
- Divergence of error in a holdout set (training error vs. test error)
- Parameter count to sample count ratio (most folks like 5 samples per parameter, or 30 samples per parameter)
- Cross-validation (ensemble methods on divergence-of-error tests)
Like a neural network or a spline, you can perform piecewise linear interpolation on the data and get a model that cannot generalize. You need to give up some of the "low error" in exchange for general applicability - generalization.
More links:
http://yaroslavvb.com/papers/moody-effective.pdf
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.48.529
24,440
How to choose the number of trees in a generalized boosted regression model?
I did find some insight into the problem: http://cran.r-project.org/web/packages/dismo/vignettes/brt.pdf The gbm.step function can be used to determine the optimal number of trees. I'm still not sure what causes model deviance to increase after a certain number of trees, so I'm still willing to accept a response that answers this part of the question!
24,441
How to choose the number of trees in a generalized boosted regression model?
This is the working guide to boosted regression trees from Elith et al.: http://onlinelibrary.wiley.com/doi/10.1111/j.1365-2656.2008.01390.x/full Very helpful! You should use at least 1000 trees. As far as I understood, you should use the combination of learning rate, tree complexity and number of trees that achieves the minimum predictive error. Smaller values of the learning rate lead to larger training risk for the same number of iterations, while each iteration reduces the training risk. If the number of trees is large enough, the risk can be made arbitrarily small (see: Hastie et al., 2001, "The Elements of Statistical Learning: Data Mining, Inference, and Prediction").
24,442
How to choose the number of trees in a generalized boosted regression model?
As is common with some machine learning algorithms, boosting is subject to the bias-variance trade-off with regard to the number of trees. Loosely speaking, this trade-off tells you that:
(i) weak models tend to have high bias and low variance: they are too rigid to capture variability in the training dataset, so they will not perform well on the test set either (high test error);
(ii) very strong models tend to have low bias and high variance: they are too flexible and they overfit the training set, so on the test set (as the data points differ from the training set) they will also not perform well (high test error).
The concept of boosting trees is to start with shallow trees (weak models) and keep adding more shallow trees that try to correct the previous trees' weaknesses. As you do this, the test error tends to go down (because the overall model gets more flexible/powerful). However, if you add too many of those trees, you start overfitting the training data and therefore the test error increases. Cross-validation helps with finding the sweet spot.
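The U-shaped test error, and picking the number of trees by held-out error, can be demonstrated with a toy boosted-stump regressor (a from-scratch sketch of my own; gbm itself is an R package):

```python
import numpy as np

rng = np.random.default_rng(1)

def stump_fit(x, r):
    """Best single-split regression stump for residuals r (least squares)."""
    best_sse, best = np.inf, None
    for s in np.quantile(x, np.linspace(0.05, 0.95, 19)):   # candidate splits
        left, right = r[x <= s], r[x > s]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best = sse, (s, left.mean(), right.mean())
    return best

def stump_predict(x, split, left_mean, right_mean):
    return np.where(x <= split, left_mean, right_mean)

def make_data(n):                        # noisy sine curve
    x = rng.uniform(0, 6, n)
    return x, np.sin(x) + rng.normal(0, 0.5, n)

x_tr, y_tr = make_data(200)
x_te, y_te = make_data(200)

lr, n_trees = 0.1, 400
pred_tr, pred_te = np.zeros(200), np.zeros(200)
test_mse = []
for _ in range(n_trees):
    s, lm, rm = stump_fit(x_tr, y_tr - pred_tr)   # fit stump to current residuals
    pred_tr += lr * stump_predict(x_tr, s, lm, rm)
    pred_te += lr * stump_predict(x_te, s, lm, rm)
    test_mse.append(float(np.mean((y_te - pred_te) ** 2)))

best_m = int(np.argmin(test_mse)) + 1   # number of trees minimizing holdout error
```

The held-out error first falls sharply and then typically drifts back up as the ensemble starts fitting noise; `best_m` marks the sweet spot, which is what cross-validation estimates more robustly.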
24,443
Denormalize value after prediction
You have $n$ inputs and 1 output. What you need is to compare the predicted results $\hat y$ (a vector as long as the number of your test rows) with the real results $y$ (a vector as long as $\hat y$). Originally you normalized the data set using min-max normalization with $\min Y$ and $\max Y$ (the minimum and maximum values taken by the output). In order to evaluate your model you need to denormalize only the outputs. Since $\hat y_\text{norm}$ is the normalized test output, you can do: $$ \hat y = \hat y_\text{norm} \times (\max Y - \min Y) + \min Y $$ Then you'll compare $\hat y$ with $y$.
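In code, the denormalization is just the inverse of the min-max scaling (a minimal sketch; remember that $\min Y$ and $\max Y$ must be the values computed on the training data):

```python
import numpy as np

def minmax_normalize(v, y_min, y_max):
    """Forward transform used before training."""
    return (v - y_min) / (y_max - y_min)

def minmax_denormalize(v_norm, y_min, y_max):
    """Invert min-max scaling: y_hat = y_hat_norm * (max Y - min Y) + min Y."""
    return v_norm * (y_max - y_min) + y_min

y_train = np.array([3.0, 10.0, 7.5])
y_min, y_max = y_train.min(), y_train.max()     # min Y and max Y

y_hat_norm = np.array([0.0, 0.5, 1.0])          # normalized model outputs
y_hat = minmax_denormalize(y_hat_norm, y_min, y_max)
```

Here `y_hat` comes back on the original scale (3.0, 6.5, 10.0) and can be compared directly with the real test outputs.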
24,444
Denormalize value after prediction
You could run your prediction again, with normalisation and without normalisation and compare the output?
24,445
R neuralnet - compute give a constant answer
I'm not an expert in neural nets but I think the following points might be helpful to you. There are also some nice posts, e.g. this one on hidden units, that you can search for on this site about what neural nets do that you might find useful.

1 Large errors: why didn't your example work at all

why errors are so big and why all predicted values are almost constant?

This is because the neural network was unable to compute the multiplication function you gave it, and outputting a constant number in the middle of the range of y, regardless of x, was the best way to minimize errors during training. (Notice how 58749 is pretty close to the mean of multiplying two numbers between 1 and 500 together.)

It's very hard to see how a neural network could compute a multiplication function in a sensible way. Think about how each node in the network combines previously computed results: you take a weighted sum of the outputs from previous nodes (and then apply a sigmoidal function to it, see, e.g. an Introduction to Neural Networks, to scrunch the output in between $-1$ and $1$). How are you going to get a weighted sum to give you multiplication of two inputs? (I suppose, however, that it might be possible to take a large number of hidden layers to get multiplication working in a very contrived way.)

2 Local minima: why a theoretically reasonable example might not work

However, even trying to do addition you run into problems in your example: the network doesn't train successfully. I believe that this is because of a second problem: getting stuck in local minima during training. In fact, for addition, using two layers of 5 hidden units is a network much too complicated for computing addition.
A network with no hidden units trains perfectly well:

x <- cbind(runif(50, min=1, max=500), runif(50, min=1, max=500))
y <- x[, 1] + x[, 2]
train <- data.frame(x, y)
n <- names(train)
f <- as.formula(paste('y ~', paste(n[!n %in% 'y'], collapse = ' + ')))
net <- neuralnet(f, train, hidden = 0, threshold = 0.01)
print(net)  # Error 0.00000001893602844

Of course, you could transform your original problem into an addition problem by taking logs, but I don't think this is what you want, so onwards...

3 Number of training examples compared to number of parameters to estimate

So what would be a reasonable way to test your neural net with two layers of 5 hidden units as you originally had? Neural nets are often used for classification, so deciding whether $\mathbf{x}\cdot\mathbf{k} > c$ seemed a reasonable choice of problem. I used $\mathbf{k} = (1, 2, 3, 4, 5)$ and $c = 3750$. Notice that there are several parameters to be learnt. In the code below I take a very similar approach to yours, except that I train two neural nets, one with 50 examples from the training set, and one with 500.
library(neuralnet)
set.seed(1)  # make results reproducible
N = 500
x <- cbind(runif(N, min=1, max=500), runif(N, min=1, max=500),
           runif(N, min=1, max=500), runif(N, min=1, max=500),
           runif(N, min=1, max=500))
y <- ifelse(x[,1] + 2*x[,1] + 3*x[,1] + 4*x[,1] + 5*x[,1] > 3750, 1, 0)
trainSMALL <- data.frame(x[1:(N/10),], y=y[1:(N/10)])
trainALL <- data.frame(x, y)
n <- names(trainSMALL)
f <- as.formula(paste('y ~', paste(n[!n %in% 'y'], collapse = ' + ')))
netSMALL <- neuralnet(f, trainSMALL, hidden = c(5,5), threshold = 0.01)
netALL <- neuralnet(f, trainALL, hidden = c(5,5), threshold = 0.01)
print(netSMALL)  # error 4.117671763
print(netALL)    # error 0.009598461875

# get a sense of accuracy w.r.t. small training set (in-sample)
cbind(y, compute(netSMALL, x)$net.result)[1:10,]

      y
 [1,] 1  0.587903899825
 [2,] 0  0.001158500142
 [3,] 1  0.587903899825
 [4,] 0  0.001158500281
 [5,] 0 -0.003770868805
 [6,] 0  0.587903899825
 [7,] 1  0.587903899825
 [8,] 0  0.001158500142
 [9,] 0  0.587903899825
[10,] 1  0.587903899825

# get a sense of accuracy w.r.t. full training set (in-sample)
cbind(y, compute(netALL, x)$net.result)[1:10,]

      y
 [1,] 1  1.0003618092051
 [2,] 0 -0.0025677656844
 [3,] 1  0.9999590121059
 [4,] 0 -0.0003835722682
 [5,] 0 -0.0003835722682
 [6,] 0 -0.0003835722199
 [7,] 1  1.0003618092051
 [8,] 0 -0.0025677656844
 [9,] 0 -0.0003835722682
[10,] 1  1.0003618092051

It is apparent that netALL does a lot better! Why is this? Take a look at what you get with a plot(netALL) command: I make it 66 parameters that are estimated during training (5 inputs and 1 bias input to each of 11 nodes). You can't reliably estimate 66 parameters with 50 training examples. I suspect in this case you might be able to cut down on the number of parameters to estimate by cutting down on the number of units. And you can see from constructing a neural network to do addition that a simpler neural network may be less likely to run into problems during training.
But as a general rule in any machine learning (including linear regression) you want to have a lot more training examples than parameters to estimate.
R neuralnet - compute give a constant answer
I'm not an expert in neural nets but I think the following points might be helpful to you. There are also some nice posts, e.g. this one on hidden units, that you can search for on this site about wha
R neuralnet - compute give a constant answer I'm not an expert in neural nets but I think the following points might be helpful to you. There are also some nice posts, e.g. this one on hidden units, that you can search for on this site about what neural nets do that you might find useful. 1 Large errors: why didn't your example work at all why errors are so big and why all predicted values are almost constant? This is because the neural network was unable to compute the multiplication function you gave it and outputting a constant number in the middle of the range of y, regardless of x, was the best way to minimize errors during training. (Notice how 58749 is pretty close to the mean of multiplying two numbers between 1 and 500 together.) It's very hard to see how a neural network could compute a multiplication function in a sensible way. Think about how each node in the network combines previously computed results: you take a weighted sum of the outputs from previous nodes (and then apply a sigmoidal function to it, see, e.g. an Introduction to Neural Networks, to scrunch the output inbetween $-1$ and $1$). How are you going to get a weighted sum to give you multiplication of two inputs? (I suppose, however, that it might be possible to take a large number of hidden layers to get multiplication working in a very contrived way.) 2 Local minima: why a theoretically reasonable example might not work However, even trying to do addition you run into problems in your example: the network doesn't train successfully. I believe that this is because of a second problem: getting local minima during the training. In fact, for addition, using two layers of 5 hidden units is much too complicated to compute addition. 
A network with no hidden units trains perfectly well: x <- cbind(runif(50, min=1, max=500), runif(50, min=1, max=500)) y <- x[, 1] + x[, 2] train <- data.frame(x, y) n <- names(train) f <- as.formula(paste('y ~', paste(n[!n %in% 'y'], collapse = ' + '))) net <- neuralnet(f, train, hidden = 0, threshold=0.01) print(net) # Error 0.00000001893602844 Of course, you could transform your original problem into an addition problem by taking logs, but I don't think this is what you want, so onwards... 3 Number of training examples compared to number of parameters to estimate So what would be a reasonable way to test your neural net with two layers of 5 hidden units as you originally had? Neural nets are often used for classification, so deciding whether $\mathbf{x}\cdot\mathbf{k} > c$ seemed a reasonable choice of problem. I used $\mathbf{k} = (1, 2, 3, 4, 5)$ and $c = 3750$. Notice that there are several parameters to be learnt. In the code below I take a very similar approach to yours except that I train two neural nets, one with 50 examples from the training set, and one with 500. 
library(neuralnet) set.seed(1) # make results reproducible N=500 x <- cbind(runif(N, min=1, max=500), runif(N, min=1, max=500), runif(N, min=1, max=500), runif(N, min=1, max=500), runif(N, min=1, max=500)) y <- ifelse(x[,1] + 2*x[,1] + 3*x[,1] + 4*x[,1] + 5*x[,1] > 3750, 1, 0) trainSMALL <- data.frame(x[1:(N/10),], y=y[1:(N/10)]) trainALL <- data.frame(x, y) n <- names(trainSMALL) f <- as.formula(paste('y ~', paste(n[!n %in% 'y'], collapse = ' + '))) netSMALL <- neuralnet(f, trainSMALL, hidden = c(5,5), threshold = 0.01) netALL <- neuralnet(f, trainALL, hidden = c(5,5), threshold = 0.01) print(netSMALL) # error 4.117671763 print(netALL) # error 0.009598461875 # get a sense of accuracy w.r.t small training set (in-sample) cbind(y, compute(netSMALL,x)$net.result)[1:10,] y [1,] 1 0.587903899825 [2,] 0 0.001158500142 [3,] 1 0.587903899825 [4,] 0 0.001158500281 [5,] 0 -0.003770868805 [6,] 0 0.587903899825 [7,] 1 0.587903899825 [8,] 0 0.001158500142 [9,] 0 0.587903899825 [10,] 1 0.587903899825 # get a sense of accuracy w.r.t full training set (in-sample) cbind(y, compute(netALL,x)$net.result)[1:10,] y [1,] 1 1.0003618092051 [2,] 0 -0.0025677656844 [3,] 1 0.9999590121059 [4,] 0 -0.0003835722682 [5,] 0 -0.0003835722682 [6,] 0 -0.0003835722199 [7,] 1 1.0003618092051 [8,] 0 -0.0025677656844 [9,] 0 -0.0003835722682 [10,] 1 1.0003618092051 It is apparent that the netALL does a lot better! Why is this? Take a look at what you get with a plot(netALL) command: I make it 66 parameters that are estimated during training (5 inputs and 1 bias input to each of 11 nodes). You can't reliably estimate 66 parameters with 50 training examples. I suspect in this case you might be able to cut down on the number of parameters to estimate by cutting down on the number of units. And you can see from constructing a neural network to do addition that a simpler neural network may be less likely to run into problems during training. 
But as a general rule in any machine learning (including linear regression) you want to have a lot more training examples than parameters to estimate.
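The 66-parameter count above generalizes: a fully connected layer taking n_in inputs into n_out units contributes (n_in + 1) * n_out weights, the +1 being the bias. A quick sketch of that bookkeeping (in Python, purely as an illustration of the counting rule, not part of the original R session):

```python
def mlp_param_count(layer_sizes):
    """Weights (including biases) in a fully connected feed-forward net.

    layer_sizes lists every layer width, inputs first: [5, 5, 5, 1]
    is 5 inputs, two hidden layers of 5 units, and 1 output unit.
    """
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# The network trained above: 5 inputs, hidden = c(5, 5), 1 output
print(mlp_param_count([5, 5, 5, 1]))  # 66 parameters vs. only 50 training examples
```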
24,446
Computation of polynomial contrast variables
As a segue to my prior post on this topic I want to share some tentative (albeit incomplete) exploration of the functions behind the linear algebra and related R functions. This is supposed to be a work in progress. Part of the opaqueness of the functions has to do with the "compact" form of the Householder $\mathbf {QR}$ decomposition. The idea behind the Householder decomposition is to reflect vectors across a hyperplane determined by a unit vector $\mathbf u$ as in the diagram below, but picking this plane in a purposeful way so as to project every column vector of the original matrix $\bf A$ onto the $\bf e_1$ standard unit vector. The unit (norm-2 equal to $1$) vector $\bf u$ can be used to compute the different Householder transformations $\left(\mathbf I - 2\,\mathbf u\mathbf u^T\right)\mathbf x$. The reflection vector $\bf v$ can be expressed as $\mathbf v=\text{sign}(x_1)\times \lVert \mathbf x \rVert \begin{bmatrix}1\\0\\0\\\vdots\\0\end{bmatrix}+\begin{bmatrix}x_1\\x_2\\x_3\\\vdots\\x_m\end{bmatrix}$ The vector $\bf v$ represents the difference between the column vectors $\bf x$ of the matrix $\bf A$ that we want to decompose and the vectors $\bf y$ corresponding to their reflection across the subspace or "mirror" determined by $\bf u$. The method used by LAPACK obviates the need to store the first entry of the Householder reflectors by turning it into a $1$. Instead of normalizing the vector $\bf v$ to $\bf u$ with $\lVert \mathbf u\rVert= 1$, it is just the first entry that is converted to a $1$; yet these new vectors - call them $\bf w$ - can still be used as directional vectors. The beauty of the method is that, given that $\bf R$ in a $\bf QR$ decomposition is upper triangular, we can actually take advantage of the $0$ elements of $\bf R$ below the diagonal and fill them in with these $\bf w$ reflectors.
Thankfully, the leading entries in these vectors all equal $1$, preventing a clash on the "disputed" diagonal of the matrix: knowing that they are all $1$, they don't need to be included, and the diagonal can be yielded to the entries of $\bf R$. The "compact QR" matrix in the function qr()$qr can be understood as roughly the addition of the $\mathbf R$ matrix and the lower triangular "storage" matrix for the "modified" reflectors. The Householder projection will still have the form $\left(\mathbf I - 2\,\mathbf u\mathbf u^T\right)\mathbf x$, but we won't be working with $\bf u$ (for which $\lVert \mathbf u \rVert=1$), but rather with a vector $\bf w$, of which only the first entry is guaranteed to be $1$, and $\left(\mathbf I - 2\,\mathbf u\mathbf u^T\right)\mathbf x=\left(\mathbf I - 2\, \frac{\mathbf w}{\lVert \mathbf w \rVert} \frac{\mathbf w^T}{\lVert \mathbf w \rVert}\right)\mathbf x=\left(\mathbf I - 2\, \frac{\mathbf w\,\mathbf w^T}{\lVert \mathbf w \rVert^2}\right)\mathbf x\tag{1}$ One would assume that it would be just fine to store these $\bf w$ reflectors below the diagonal of $\bf R$, excluding the first entry of $1$, and call it a day. However, things are never so easy. Instead, what is stored below the diagonal in qr()$qr is a combination of $\bf w$ and the coefficients in the Householder transformation expressed as (1), such that, defining $\tau$ as $\Large \tau = \frac{\mathbf w^T\,\mathbf w}{2}= \frac{\lVert \mathbf w \rVert^2}{2},$ the reflectors can be expressed as $\text{reflectors}= \bf w/\tau$. These "reflector" vectors are the ones stored right under $\bf R$ in the so-called "compact $\bf QR$". Now we are one degree away from the $\bf w$ vectors, and the first entry is no longer $1$; hence the output of qr() will need to include the key to restore them, since we insist on excluding the first entry of the "reflector" vectors to fit everything in qr()$qr. So are we seeing the $\tau$ values in the output? Well, no; that would be too predictable. Instead, in the output of qr()$qraux (where this key is stored) we find $\rho=\frac{\sum \text{reflectors}^2}{2}= \frac{\mathbf w^T\mathbf w}{2\,\tau^2}.$
So framed in red below, we see the "reflectors" ($\bf w/\tau$), excluding their first entry. All the code is here, but since this answer is about the intersection of coding and linear algebra, I will paste the output for ease:

options(scipen=999)
set.seed(13)
(X = matrix(c(rnorm(16)), nrow=4, byrow=F))

           [,1]      [,2]       [,3]       [,4]
[1,]  0.5543269 1.1425261 -0.3653828 -1.3609845
[2,] -0.2802719 0.4155261  1.1051443 -1.8560272
[3,]  1.7751634 1.2295066 -1.0935940 -0.4398554
[4,]  0.1873201 0.2366797  0.4618709 -0.1939469

Now I wrote the function House() as follows:

House = function(A){
  Q = diag(nrow(A))
  reflectors = matrix(0, nrow=nrow(A), ncol=ncol(A))
  for(r in 1:(nrow(A) - 1)){
    # We apply Householder progressively to the columns of A, decreasing 1 element at a time.
    x = A[r:nrow(A), r]
    # We now get the vector v, starting with first entry = norm-2 of x[i] times 1
    # The sign is to avoid computational issues
    first = (sign(x[1]) * sqrt(sum(x^2))) + x[1]
    # We get the rest of v, which is x unchanged, since e1 = [1, 0, 0, ..., 0]
    # We go to the last column / row, hence the if statement:
    v = if(length(x) > 1){c(first, x[2:length(x)])}else{v = c(first)}
    # Now we make the first entry unitary:
    w = v/first
    # Tau will be used in the Householder transform, so here it goes:
    t = as.numeric(t(w) %*% w) / 2
    # And the "reflectors" are stored as in the R qr()$qr function:
    reflectors[r:nrow(A), r] = w/t
    # The Householder transformation is:
    I = diag(length(r:nrow(A)))
    H.transf = I - 1/t * (w %*% t(w))
    H_i = diag(nrow(A))
    H_i[r:nrow(A), r:ncol(A)] = H.transf
    # And we apply the Householder reflection - we left-multiply the entire A or Q
    A = H_i %*% A
    Q = H_i %*% Q
  }
  DECOMPOSITION = list(
    "Q" = t(Q),
    "R" = round(A, 7),
    "compact Q as in qr()$qr" = ((A*upper.tri(A, diag=T)) + (reflectors*lower.tri(reflectors, diag=F))),
    "reflectors" = reflectors,
    "rho" = c(apply(reflectors[, 1:(ncol(reflectors)-1)], 2, function(x) sum(x^2) / 2),
              A[nrow(A), ncol(A)]))
  return(DECOMPOSITION)
}

Let's compare the output to the R built-in
functions. First the home-made function:

(H = House(X))
$Q
            [,1]        [,2]       [,3]       [,4]
[1,] -0.29329367 -0.73996967  0.5382474  0.2769719
[2,]  0.14829152 -0.65124800 -0.5656093 -0.4837063
[3,] -0.93923665  0.13835611 -0.1947321 -0.2465187
[4,] -0.09911084 -0.09580458 -0.5936794  0.7928072

$R
          [,1]       [,2]       [,3]      [,4]
[1,] -1.890006 -1.4517318  1.2524151 0.5562856
[2,]  0.000000 -0.9686105 -0.6449056 2.1735456
[3,]  0.000000  0.0000000 -0.8829916 0.5180361
[4,]  0.000000  0.0000000  0.0000000 0.4754876

$`compact Q as in qr()$qr`
            [,1]        [,2]       [,3]      [,4]
[1,] -1.89000649 -1.45173183  1.2524151 0.5562856
[2,] -0.14829152 -0.96861050 -0.6449056 2.1735456
[3,]  0.93923665 -0.67574886 -0.8829916 0.5180361
[4,]  0.09911084  0.03909742  0.6235799 0.4754876

$reflectors
            [,1]        [,2]      [,3] [,4]
[1,]  1.29329367  0.00000000 0.0000000    0
[2,] -0.14829152  1.73609434 0.0000000    0
[3,]  0.93923665 -0.67574886 1.7817597    0
[4,]  0.09911084  0.03909742 0.6235799    0

$rho
[1] 1.2932937 1.7360943 1.7817597 0.4754876

to the R functions:

qr.Q(qr(X))
            [,1]        [,2]       [,3]       [,4]
[1,] -0.29329367 -0.73996967  0.5382474  0.2769719
[2,]  0.14829152 -0.65124800 -0.5656093 -0.4837063
[3,] -0.93923665  0.13835611 -0.1947321 -0.2465187
[4,] -0.09911084 -0.09580458 -0.5936794  0.7928072

qr.R(qr(X))
          [,1]       [,2]       [,3]      [,4]
[1,] -1.890006 -1.4517318  1.2524151 0.5562856
[2,]  0.000000 -0.9686105 -0.6449056 2.1735456
[3,]  0.000000  0.0000000 -0.8829916 0.5180361
[4,]  0.000000  0.0000000  0.0000000 0.4754876

$qr
            [,1]        [,2]       [,3]      [,4]
[1,] -1.89000649 -1.45173183  1.2524151 0.5562856
[2,] -0.14829152 -0.96861050 -0.6449056 2.1735456
[3,]  0.93923665 -0.67574886 -0.8829916 0.5180361
[4,]  0.09911084  0.03909742  0.6235799 0.4754876

$qraux
[1] 1.2932937 1.7360943 1.7817597 0.4754876
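The algebra above is easy to sanity-check numerically: $\bf Q$ should have orthonormal columns, $\bf R$ should be upper triangular, and their product should reproduce the original matrix. A minimal check (sketched here in Python with NumPy, whose qr also wraps the LAPACK Householder routines; the compact storage is simply hidden behind the API):

```python
import numpy as np

rng = np.random.default_rng(13)
X = rng.normal(size=(4, 4))

# Householder-based QR, as in LAPACK / R's qr()
Q, R = np.linalg.qr(X)

assert np.allclose(Q.T @ Q, np.eye(4))   # Q has orthonormal columns
assert np.allclose(R, np.triu(R))        # R is upper triangular
assert np.allclose(Q @ R, X)             # the product reconstructs X
```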
24,447
Name of the "paradox" reported by Gelman
It's called the Red/Blue paradox; see the reference to it on the Freakonomics web site.
Name of the "paradox" reported by Gelman
It's called Red/Blue paradox, see here the reference to Freakanomics web site
Name of the "paradox" reported by Gelman It's called Red/Blue paradox, see here the reference to Freakanomics web site
Name of the "paradox" reported by Gelman It's called Red/Blue paradox, see here the reference to Freakanomics web site
24,448
Name of the "paradox" reported by Gelman
There is no "ecological paradox." Inference is specific to the unit of analysis. To take Robinson's (1950) analysis of 1930 US Census data as an example, it is true that: Individuals who reported being immigrants were slightly more likely to be illiterate (individual illiteracy and individual immigrant status were slightly positively correlated $r=0.12$); and States with a higher prevalence of illiteracy had a considerably lower prevalence of immigrants (state-level illiteracy and state-level immigrant status were moderately negatively correlated $r=-0.53$). Robinson used these and similar relationships to make the case that extrapolating from relationships between populations (e.g. states) to individuals was a kind of logical fallacy, and he bestowed upon us the term ecological fallacy for describing such. However, the opposite extrapolation—assuming that the relationships at the individual level must also apply at the population level—as also a logical fallacy... specifically the atomistic fallacy. So how could both these relationships ($r=0.12$ for individuals and $r=-0.53$ for states) be true? Well... while individuals who were immigrants may have been more likely to be illiterate, states with high rates of immigration (e.g. New York) had the kind of services, and economic & cultural opportunity that drew in new immigrants. Coincidentally, "services and economic and cultural" opportunity tend to arise in commercial and industrial regional economies characterized by higher prevalence of literacy than, for example, in the agricultural heartland which was less an immigrant destination. Red/blue states' association with state affluence versus red/blue individuals' association with individual affluence raises precisely the same issue: the logical fallacy of extrapolating relationships at one level of inference onto another level of inference. Incidentally, Robinsons' tacit assumption that individual relationships were the ones that really mattered (i.e. 
his focus on only the population-to-individual direction of fallacious inference) is itself a kind of psychologistic fallacy, as Diez-Roux (1998) and Subramanian, et al. (2009) make clear. The tl;dr: statistical relationships are specific to the level of inference of their data and analysis. "'Why do some individuals have hypertension?' is a quite different question from 'Why do some populations have much hypertension, whilst in others it is rare?'"—Rose, 1985

References
Diez-Roux, A. V. (1998). Bringing context back into epidemiology: variables and fallacies in multilevel analysis. American Journal of Public Health, 88(2):216–222.
Robinson, W. (1950). Ecological correlation and the behavior of individuals. American Sociological Review, 15(3):351–357.
Rose, G. (1985). Sick individuals and sick populations. International Journal of Epidemiology, 14(1):32–38.
Subramanian, S. V., Jones, K., Kaddour, A., and Krieger, N. (2009). Revisiting Robinson: The perils of individualistic and ecologic fallacy. International Journal of Epidemiology, 38(2):342–360.
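The sign reversal Robinson observed is easy to reproduce by simulation: give each "state" a positive individual-level relationship while the state-level means run the opposite way. A sketch in Python with synthetic data (the group means and slope are purely illustrative, not Robinson's figures):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200                                  # individuals per "state"
state_mx = np.array([0.0, 5.0, 10.0])    # state-level means of x
state_my = np.array([10.0, 5.0, 0.0])    # y means run the opposite way

xs, ys = [], []
for mx, my in zip(state_mx, state_my):
    x = mx + rng.normal(size=n)
    y = my + 0.8 * (x - mx) + rng.normal(size=n)  # positive slope WITHIN each state
    xs.append(x)
    ys.append(y)

x_all, y_all = np.concatenate(xs), np.concatenate(ys)

pooled = np.corrcoef(x_all, y_all)[0, 1]  # "ecological" view: strongly negative
within = np.mean([np.corrcoef(xi, yi)[0, 1] for xi, yi in zip(xs, ys)])  # positive
print(pooled < 0, within > 0)  # True True
```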
Name of the "paradox" reported by Gelman
There is no "ecological paradox." Inference is specific to the unit of analysis. To take Robinson's (1950) analysis of 1930 US Census data as an example, it is true that: Individuals who reported bei
Name of the "paradox" reported by Gelman There is no "ecological paradox." Inference is specific to the unit of analysis. To take Robinson's (1950) analysis of 1930 US Census data as an example, it is true that: Individuals who reported being immigrants were slightly more likely to be illiterate (individual illiteracy and individual immigrant status were slightly positively correlated $r=0.12$); and States with a higher prevalence of illiteracy had a considerably lower prevalence of immigrants (state-level illiteracy and state-level immigrant status were moderately negatively correlated $r=-0.53$). Robinson used these and similar relationships to make the case that extrapolating from relationships between populations (e.g. states) to individuals was a kind of logical fallacy, and he bestowed upon us the term ecological fallacy for describing such. However, the opposite extrapolation—assuming that the relationships at the individual level must also apply at the population level—as also a logical fallacy... specifically the atomistic fallacy. So how could both these relationships ($r=0.12$ for individuals and $r=-0.53$ for states) be true? Well... while individuals who were immigrants may have been more likely to be illiterate, states with high rates of immigration (e.g. New York) had the kind of services, and economic & cultural opportunity that drew in new immigrants. Coincidentally, "services and economic and cultural" opportunity tend to arise in commercial and industrial regional economies characterized by higher prevalence of literacy than, for example, in the agricultural heartland which was less an immigrant destination. Red/blue states' association with state affluence versus red/blue individuals' association with individual affluence raises precisely the same issue: the logical fallacy of extrapolating relationships at one level of inference onto another level of inference. 
Incidentally, Robinsons' tacit assumption that individual relationships were the ones that really mattered (i.e. his focus on only the population to individual direction of fallacious inference) is itself a kind of psychologistic fallacy, as Diez-Roux (1998) and Subramanian, et al. (2009) make clear. The tl;dr: statistical relationships are specific to the level of inference of their data and analysis. "'Why do some individuals have hypertension?' is a quite different question from 'Why do some populations have much hypertension, whilst in others it is rare?'"—Rose, 1985 References Diez-Roux, A. V. (1998). Bringing context back into epidemiology: variables and fallacies in multilevel analysis. American Journal of Public Health, 88(2):216–222. Robinson, W. (1950). Ecological correlation and the behavior of individuals. American Sociological Review, 15(3):351–357. Rose, G. (1985). Sick individuals and sick populations. International Journal of Epidemiology, 14(1):32–28. Subramanian, S. V., Jones, K., Kaddour, A., and Krieger, N. (2009). Revisit- ing Robinson: The perils of individualistic and ecologic fallacy. International Journal of Epidemiology, 38(2):342–360.
Name of the "paradox" reported by Gelman There is no "ecological paradox." Inference is specific to the unit of analysis. To take Robinson's (1950) analysis of 1930 US Census data as an example, it is true that: Individuals who reported bei
24,449
Limits to tree-based ensemble methods in small n, large p problems?
I suspect there won't be a definitive answer to this question until some simulation studies are conducted. In the meantime, I found that Genuer et al.'s Random Forests: some methodological insights helped put this question in perspective, at least in terms of testing RF against a variety of "low n, high p" datasets. Several of these datasets have >5000 predictors and <100 observations!!
24,450
Limits to tree-based ensemble methods in small n, large p problems?
The failure mode you'll encounter is that, with enough random features, there will exist features that relate to the target within the bagged samples used for each tree but not within the larger dataset: a similar issue to that seen in multiple testing. Rules of thumb for this are hard to develop, since the exact point at which this happens depends on the amount of noise and the strength of the signal in the data. There also exist methods that address this by using multiple-test-corrected p-values as splitting criteria, doing a feature selection step based on variable importance and/or comparison of feature importances to artificial contrast features produced by randomly permuting the actual feature, use of out-of-bag cases to validate split selection, and other methods. These can be extremely effective. I've used random forests (including some of the above methodological tweaks) on data sets with ~1000 cases and 30,000-1,000,000 features (data sets in human genetics with varying levels of feature selection or engineering). They can certainly be effective in recovering a strong signal (or batch effect) in such data, but they don't do well piecing together something like a disease with heterogeneous causes, as the amount of random variation overwhelms each signal.
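The multiple-testing flavour of this failure mode can be seen without any forest at all: with few cases and enough pure-noise features, some feature will always look strongly associated with the target. A small sketch in Python (synthetic noise; the feature counts are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 50                              # cases ("small n")
y = rng.normal(size=n)              # a pure-noise target

def max_abs_cor(p):
    """Largest |correlation| between the target and p pure-noise features."""
    X = rng.normal(size=(n, p))
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    r = (Xc.T @ yc) / (np.linalg.norm(Xc, axis=0) * np.linalg.norm(yc))
    return float(np.max(np.abs(r)))

print(max_abs_cor(10))      # modest, as expected under the null
print(max_abs_cor(10_000))  # large -- noise that a tree would happily split on
```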
24,451
Limits to tree-based ensemble methods in small n, large p problems?
It will also depend on the signal and noise in your data. If your dependent variable is pretty well explained by a combination of the variables in your model, then I think you can get away with a lower n/p ratio. I suspect an absolute minimum number of n will also be required to get a decent model, apart from just the ratio. One way to look at it is that each tree is built using about sqrt(p) variables, and if that number is large while the number of points is small, trees can be fitted without there really being a real model. Hence a lot of such over-fitted trees will give false variable importance. Usually, if in the variable importance chart I see a lot of top variables with almost the same level of importance, I conclude that it is giving me just noise.
24,452
Mathematics base for data mining and artificial intelligence algorithms
That may actually sound a little strange within a community of statisticians, but I am pretty sure that most machine learning algorithms can be formulated as functional minimization problems. That means this is going to be covered by mathematical optimization. The other thing is that you will probably need calculus and linear algebra to understand what optimization is. And to interpret your results you had better have some background in probability theory and statistics.
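To make the "ML as functional minimization" claim concrete, even ordinary least squares can be framed and solved this way: pick the squared-error objective and run plain gradient descent on it. A minimal sketch in Python (the learning rate and iteration count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one feature
beta_true = np.array([2.0, -3.0])
y = X @ beta_true + rng.normal(scale=0.1, size=n)

# Minimize f(b) = ||Xb - y||^2 / n by gradient descent
b = np.zeros(2)
lr = 0.1
for _ in range(2000):
    grad = (2.0 / n) * X.T @ (X @ b - y)  # gradient of the mean squared error
    b -= lr * grad

print(np.round(b, 2))  # recovers something very close to beta_true
```

Calculus gives the gradient, linear algebra gives the matrix form, and probability/statistics tell you how close the recovered coefficients should be to the truth.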
24,453
Mathematics base for data mining and artificial intelligence algorithms
This question is maybe too broad; you should say something more about what you will use data mining for! But data mining is essentially statistics, and much of the use of AI that I have seen is statistics as well. So, the math you need is the math you need for statistics: 1) calculus and real analysis 2) probability 3) linear algebra! In practical terms, 3) may be the most important: almost whatever you will be doing (including uses of 1) and 2)) you will depend heavily on linear algebra. So be sure to get not only the concepts, but manipulative skill! A lot more is used, but maybe more specialized, so it doesn't make sense to give more detailed advice until you have specialized your question (and learnt 1), 2) & 3)).
24,454
Mathematics base for data mining and artificial intelligence algorithms
It seems a fair question: what mathematics should I learn as a foundation for machine learning? Maybe it is the answer that is broad, as ML draws from so many disciplines. Others have suggested linear algebra, probability theory, statistics, metric spaces and many others, which are all relevant. Perhaps a workable approach is to list some of the most popular ML algorithms, take a look at them, and fill in the mathematics you feel you are less comfortable with.
24,455
Schoenfeld residuals
Judgement of proportional hazards (PH) should be based on the results from a formal statistical test and the Schoenfeld residuals (SR) plot together. If the SR plot for a given variable shows deviation from a straight line while it stays flat for the rest of the variables, then it is something you shouldn't ignore. The first thing you can do is look at the results of the global test. The global test might indicate whether the overall assumption of PH holds true [or not]. If the global test is fine, then by switching the reference category of the variable for which the assumption didn't hold true, you might be able to achieve PH. The hazards may be proportional when compared to one reference category but not the other. Hence, by switching the reference categories, you might be able to find the category which results in the PH assumption being true. If the switching doesn't solve your problem, and assuming you have got the right variables in your model, then this indicates that the hazards are not proportional for this particular variable, i.e. different hazard ratios at different time points. Hence, you may want to introduce an interaction between the variable and time in your model.
24,456
Logistic regression residual analysis
You can't really assess the bias that way in logistic regression. Logistic regression is only expected to be unbiased on log odds or logit scores, log(p/(1-p)). The proportions will be skewed and therefore look biased. You need to plot the residuals in terms of log odds.
24,457
Logistic regression residual analysis
There is unlikely to exist any general software for doing this, most likely because there is no general theory for fixing issues in regression. Hence this is more of a "what I would do" type of answer than a theoretically grounded procedure. The plot you produce is basically a visual HL test with 100 bins, but using a single predictor instead of the predicted probability to do the binning. This means your procedure is likely to inherit some of the properties of the HL test. Your procedure sounds reasonable, although you should be aware of "overfitting" your criteria. Your criteria are also less useful as a diagnostic because they have become part of the estimation process. Also, whenever you do something by intuition, you should write down your decision-making process in as much detail as is practical. This is because you may discover the seeds of a general process or theory, which when developed leads to a better procedure (more automatic and optimal with respect to some theory). I think one way to go is to first reduce the number of plots you need to investigate. One way to do this is to fit each variable as a cubic spline, and then investigate the plots which have non-zero non-linear estimates. Given the number of data points this is also an easy automatic fix for non-linearities. This will expand your model from 50 to 200+50k terms, where k is the number of knots. You could think of this as applying a "statistical Taylor series expansion" of the "true" transformation. If your diagnostic still looks bad after this, then I would try adding interaction terms. Parts of your question seem more about writing an interactive program, which is more the domain of Stack Overflow than here. It may also be useful to search for exploratory data analysis tools, as these are more likely to have features you can "piggy-back" off.
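As a sketch of the spline-expansion idea (Python/numpy, a truncated-power cubic basis with a hypothetical helper name; illustrated with ordinary least squares on a continuous response for simplicity — in the logistic setting the same basis columns would feed the logit model):

```python
import numpy as np

def cubic_spline_basis(x, knots):
    """Truncated-power basis for a cubic spline: 1, x, x^2, x^3,
    plus (x - k)^3_+ for each knot k.  One way (of several) to expand
    a predictor so a linear/logistic fit can pick up smooth
    non-linearities."""
    x = np.asarray(x, dtype=float)
    cols = [np.ones_like(x), x, x**2, x**3]
    for k in knots:
        cols.append(np.where(x > k, (x - k) ** 3, 0.0))
    return np.column_stack(cols)

# If the coefficients on the non-linear columns come out ~0, the
# relationship is effectively linear and that plot needs no fix.
x = np.linspace(0, 10, 200)
y = 2.0 + 0.5 * x                          # a genuinely linear signal
B = cubic_spline_basis(x, knots=[2.5, 5.0, 7.5])
coef, *_ = np.linalg.lstsq(B, y, rcond=None)
```

Note the truncated-power basis is numerically ill-conditioned for many knots or wide ranges; B-splines (e.g. `scipy.interpolate.BSpline`) are the usual production choice.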
24,458
How best to communicate uncertainty?
That's what Gerd Gigerenzer has been working on in the past: http://www.amazon.com/Reckoning-With-Risk-Gerd-Gigerenzer/dp/0140297863/ref=sr_1_1?s=books&ie=UTF8&qid=1335941282&sr=1-1 Edit, to summarize what I think Gigerenzer means: As I understand it, Gigerenzer proposes to communicate risk differently. In the traditional way, a treatment (he's into medical statistics) is reported as having an effect of reducing an illness by a certain percentage, e.g. "eating 100 bananas a day reduces your risk of getting toe nail cancer by 50%". This seems like a huge benefit of eating bananas. The problem is that the prevalence of toe nail cancer isn't exactly high. Let's assume there is a disease called "toe nail cancer" and its prevalence is 1 in 100000 people. Gigerenzer proposes to report the absolute probability of getting toe nail cancer before and after - e.g. "reduces the risk of getting toe nail cancer from 0.001% to 0.0005%" - which is a lot less impressive in the case of rare diseases.
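The arithmetic behind that contrast is trivial but worth spelling out; a small sketch (the function name `risk_summary` is hypothetical, the numbers are the toe-nail-cancer example from the answer):

```python
def risk_summary(baseline_risk, relative_reduction):
    """Contrast the headline relative risk reduction with the
    absolute change Gigerenzer recommends reporting."""
    treated_risk = baseline_risk * (1 - relative_reduction)
    return {
        "relative_reduction": relative_reduction,           # "50% lower risk"
        "absolute_reduction": baseline_risk - treated_risk, # what you actually gain
        "baseline_risk": baseline_risk,
        "treated_risk": treated_risk,
    }

# Prevalence 1 in 100000: a "50% reduction" moves the risk
# from 0.001% to 0.0005% - an absolute gain of 5 in a million.
s = risk_summary(baseline_risk=1 / 100_000, relative_reduction=0.5)
```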
24,459
How best to communicate uncertainty?
In 2003 there was a series in the Journal of the Royal Statistical Society (A) on the Communication of Risk. The reference that I have for the 1st one is: J. R. Statist. Soc. A (2003) 166, Part 2, pp. 205-206 From there you could probably find the entire series and they may be of interest for this question.
24,460
How best to communicate uncertainty?
I think bookmakers' racing terminology may be more easily understood by the general public. For example, the chances of some specific event happening might be said to be 50-50; or, as another example, there may be odds of 9-1 that an effect will be within a stated range, with a risk of 100-1 that some rather unlikely specified event will happen. This needs to be balanced with risk, in the sense of the potential benefit or damage that may arise. For example, if one crosses a road as a pedestrian without looking, one may be lucky 75% of the time, but the consequences of an accident could be catastrophic.
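A small sketch of converting "a-to-b against" odds into a probability (a hypothetical helper, assuming the usual bookmakers' convention that 9-1 against means one success per ten trials):

```python
from fractions import Fraction

def odds_against_to_prob(a, b):
    """Convert bookmakers' 'a-to-b against' odds to a probability:
    a failures for every b successes, so p = b / (a + b)."""
    return Fraction(b, a + b)

# 50-50 -> 1/2, 9-1 against -> 1/10, 100-1 against -> 1/101
p_even = odds_against_to_prob(1, 1)
p_nine = odds_against_to_prob(9, 1)
p_long = odds_against_to_prob(100, 1)
```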
24,461
Crossed random effects and unbalanced data
As for unbalanced data, glmer is able to handle unbalanced groups: that was actually the point of developing mixed-models approaches, as compared to repeated-measures ANOVAs, which are restricted to balanced designs. Including clients or providers with few events (even only one) is still better than omitting them, as it improves the estimation of the residual variance (see Martin et al. 2011). If you want to use BLUPs (ranef(model)) as a proxy for skills, you will indeed have to estimate the uncertainty around your point predictions. This can be done in a frequentist framework using ranef(model, postVar=TRUE) or through the posterior distribution in a Bayesian framework. You should however not use BLUPs as the response variable in further regression models: see Hadfield et al. (2010) for examples of misuses of BLUPs and different methods to adequately take into account their uncertainty. As for the correlation of skills between clients and providers, this imbalance might be problematic if it is very strong, as it would prevent correctly estimating the variance due to each random effect. There does not seem to be a mixed-models framework that would easily handle the correlation between random intercepts (see here for a formal expression of your problem). Could you maybe specify how correlated the average successes of clients and providers are?
24,462
Understanding no free lunch theorem in Duda et al's Pattern Classification
I will answer the questions that I think I know the answers to. The answer to the first is no, because you are picking an $x$ that wasn't part of the fit set $D$, and so $h$ depends on $x$. $h$ is only evaluated at the values $x$ in the test set to obtain the expected error rate, so it is not evaluated over the entire set $H$ but only at the discrete set of $x$'s in the test set. $\mathcal{E}_i(E|F, D)$ is the expected off-training-set error rate given the function $F$ and the training set $D$. But $\mathcal{E}_i(E|F, n)$ I think is different, because you are only conditioning on the number of training points $n$ and not the actual $x$ values. This is puzzling given the subsequent statements, though. $D$ is the set of training vectors. There are $n$ training vectors in $D$, so you are summing over the fixed $n$ training vectors in $D$. There is only one set $D$. I think the answer to 5 is no. The notation seems to be a bit confusing. I can't comment on 6 and 7.
24,463
Error to report with median and graphical representations?
You can report a confidence interval for the median. In R, you can use wilcox.test with the argument conf.int=TRUE. There's a tiny discussion of this in John Verzani's simpleR notes: see here. Regarding plots: I don't really like using bar plots even for representing a set of means. I'd prefer to just plot little line segments for the CI: The plot on the right was made with errbar() from the Hmisc package [CRAN page]. You could make the same sort of plot for the medians and the related confidence intervals, or you could use box plots (which, in the same amount of space, describe the entire distribution).
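For those not using R, a distribution-free confidence interval for the median can be built from order statistics alone; a sketch (Python standard library only; `median_ci` is a hypothetical helper, and this is the sign-test/binomial interval, not identical to `wilcox.test`'s Hodges-Lehmann-based interval):

```python
import math

def median_ci(data, conf=0.95):
    """Distribution-free CI for the median via order statistics.
    The number of observations below the median is Binomial(n, 1/2),
    so (x_(j), x_(n-j+1)) covers the median with probability at
    least `conf` (conservative) for a suitably chosen j."""
    x = sorted(data)
    n = len(x)
    alpha = 1.0 - conf
    # largest j with P(Bin(n, 1/2) <= j - 1) <= alpha / 2
    cdf, j = 0.0, 0
    while j < n // 2:
        nxt = cdf + math.comb(n, j) * 0.5 ** n
        if nxt > alpha / 2:
            break
        cdf = nxt
        j += 1
    if j == 0:
        raise ValueError("sample too small for this confidence level")
    return x[j - 1], x[n - j]   # j-th and (n-j+1)-th order statistics
```

For n = 20 this reproduces the textbook result that the 95% interval runs from the 6th to the 15th order statistic.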
24,464
How to parameterize the ratio of two normally distributed variables, or the inverse of one?
You might want to look at some of the references under the Wikipedia article on Ratio Distribution. It's possible you'll find better approximations or distributions to use. Otherwise, your approach seems sound. Update I think a better reference might be: Ratios of Normal Variables and Ratios of Sums of Uniform Variables (Marsaglia, 1965) See formulas 2-4 on page 195. Update 2 On your updated question regarding variance from a Cauchy -- as John Cook pointed out in the comments, the variance doesn't exist. So, taking a sample variance simply won't work as an "estimator". In fact, you'll find that your sample variance does not converge at all and fluctuates wildly as you keep taking samples.
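A quick simulation illustrates the non-convergence (Python/numpy; the seed makes the run deterministic, but the qualitative behaviour, not the particular numbers, is the point):

```python
import numpy as np

rng = np.random.default_rng(42)
# The ratio of two independent standard normals is standard Cauchy.
samples = rng.standard_normal(200_000) / rng.standard_normal(200_000)

# The running sample variance never settles down as n grows, because
# the Cauchy distribution has no finite variance: it is dominated by
# whatever the largest |sample| happens to be so far.
running_var = [samples[:n].var() for n in (1_000, 10_000, 100_000, 200_000)]

# Quantiles, by contrast, are perfectly well behaved: the median of
# |Cauchy| is 1, and the sample median converges to it.
median_abs = np.median(np.abs(samples))
```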
24,465
How to parameterize the ratio of two normally distributed variables, or the inverse of one?
Could you not assume that $y^{-1} \sim N(.,.)$ for the inverse of a normal random variable and do the necessary Bayesian computation after identifying the appropriate parameters for the normal distribution? My suggestion below to use the Cauchy does not work, as pointed out in the comments by ars and John. The ratio of two normal random variables follows the Cauchy distribution. You may want to use this idea to identify the parameters of the Cauchy that most closely fits the data you have.
24,466
Example of a non-measurable maximum likelihood estimator
Here is a contrived example. Let $(\mathcal{X}, \mathcal{B})$ be the interval $[0, 2]$ with its Borel $\sigma$-algebra, let $(\Theta, \mathcal{F})$ be the interval $[1, 2]$ with its Borel $\sigma$-algebra, and let $P_\theta$ be the uniform distribution on $[0, \theta]$ for each $\theta \in \Theta$. The family $(P_\theta)_{\theta \in \Theta}$ is dominated by the restriction of Lebesgue measure to $\mathcal{X}$ (call it $\mu$ as in the question), and a sensible choice of density for $P_\theta$ with respect to $\mu$ is $$ f_\theta = \theta^{-1} \mathbf{1}_{[0, \theta]}. $$ The maximum likelihood estimator $\hat{\theta} : \mathcal{X} \to \Theta$ is then given by $$ \hat{\theta}(x) = \max\{1, x\}, $$ which is certainly measurable. However, suppose I'm not sensible, and that I have a favorite Vitali set $V \subseteq [0, 1]$, and I define a family $(g_\theta)_{\theta \in \Theta}$ of functions $\mathcal{X} \to \mathbb{R}$ as follows: if $\theta - 1\in V$, then $g_\theta = 3 \mathbf{1}_{\{\theta - 1\}} + \theta^{-1} \mathbf{1}_{[0, \theta] \setminus \{\theta - 1\}}$; if $\theta - 1\notin V$, then $g_\theta = f_\theta$. In other words, $$ g_\theta(x) = \begin{cases} 3, & \text{if $x \in V$ and $x = \theta - 1$,} \\ f_\theta(x), & \text{otherwise} \end{cases} $$ for all $x \in \mathcal{X}$ and $\theta \in \Theta$. Since each $g_\theta$ is a modification of $f_\theta$ on at most a singleton, the family $(g_\theta)_{\theta \in \Theta}$ is another family of densities for $(P_\theta)_{\theta \in \Theta}$ with respect to $\mu$. 
Now one can check that there is a unique function $\hat{\vartheta} : \mathcal{X} \to \Theta$ such that $$ g_{\hat{\vartheta}(x)}(x) = \sup_{\theta \in \Theta} g_\theta(x) $$ for each $x \in \mathcal{X}$; this function is given by $$ \hat{\vartheta}(x) = \begin{cases} \max\{1, x\}, & \text{if $x \notin V$,} \\ x + 1, & \text{if $x \in V$,} \end{cases} $$ and it is not measurable (for example, since $\{x \in \mathcal{X} : \hat{\vartheta}(x) = x + 1\} = \{0\} \cup V$). Thus, in the setup with the family of densities $(g_\theta)_{\theta \in \Theta}$, there exists a function $\hat{\vartheta} : \mathcal{X} \to \Theta$ such that $$ g_{\hat{\vartheta}(x)}(x) = \sup_{\theta \in \Theta} g_\theta(x) $$ for all $x \in \mathcal{X}$, but no such measurable function exists. This example merely shows that the question of whether a measurable maximum likelihood estimator exists is sensitive to the choice of version of the Radon-Nikodym derivative $dP_\theta / d\mu$. When we used the much more reasonable family of densities $(f_\theta)_{\theta \in \Theta}$ we had no trouble obtaining a measurable maximum likelihood estimator, but when we artificially perturbed each of these densities on a null set to obtain $(g_\theta)_{\theta \in \Theta}$, we ran into problems. This example is inspired by Example 2.2 in Pfanzagl (1969), wherein a similar but much more subtle modification of densities yields a scenario in which no maximum likelihood estimator exists at all, measurable or not.
Example of a non-measurable maximum likelihood estimator
Here is a contrived example. Let $(\mathcal{X}, \mathcal{B})$ be the interval $[0, 2]$ with its Borel $\sigma$-algebra, let $(\Theta, \mathcal{F})$ be the interval $[1, 2]$ with its Borel $\sigma$-alg
Example of a non-measurable maximum likelihood estimator Here is a contrived example. Let $(\mathcal{X}, \mathcal{B})$ be the interval $[0, 2]$ with its Borel $\sigma$-algebra, let $(\Theta, \mathcal{F})$ be the interval $[1, 2]$ with its Borel $\sigma$-algebra, and let $P_\theta$ be the uniform distribution on $[0, \theta]$ for each $\theta \in \Theta$. The family $(P_\theta)_{\theta \in \Theta}$ is dominated by the restriction of Lebesgue measure to $\mathcal{X}$ (call it $\mu$ as in the question), and a sensible choice of density for $P_\theta$ with respect to $\mu$ is $$ f_\theta = \theta^{-1} \mathbf{1}_{[0, \theta]}. $$ The maximum likelihood estimator $\hat{\theta} : \mathcal{X} \to \Theta$ is then given by $$ \hat{\theta}(x) = \max\{1, x\}, $$ which is certainly measurable. However, suppose I'm not sensible, and that I have a favorite Vitali set $V \subseteq [0, 1]$, and I define a family $(g_\theta)_{\theta \in \Theta}$ of functions $\mathcal{X} \to \mathbb{R}$ as follows: if $\theta - 1\in V$, then $g_\theta = 3 \mathbf{1}_{\{\theta - 1\}} + \theta^{-1} \mathbf{1}_{[0, \theta] \setminus \{\theta - 1\}}$; if $\theta - 1\notin V$, then $g_\theta = f_\theta$. In other words, $$ g_\theta(x) = \begin{cases} 3, & \text{if $x \in V$ and $x = \theta - 1$,} \\ f_\theta(x), & \text{otherwise} \end{cases} $$ for all $x \in \mathcal{X}$ and $\theta \in \Theta$. Since each $g_\theta$ is a modification of $f_\theta$ on at most a singleton, the family $(g_\theta)_{\theta \in \Theta}$ is another family of densities for $(P_\theta)_{\theta \in \Theta}$ with respect to $\mu$. 
Now one can check that there is a unique function $\hat{\vartheta} : \mathcal{X} \to \Theta$ such that $$ g_{\hat{\vartheta}(x)}(x) = \sup_{\theta \in \Theta} g_\theta(x) $$ for each $x \in \mathcal{X}$; this function is given by $$ \hat{\vartheta}(x) = \begin{cases} \max\{1, x\}, & \text{if $x \notin V$,} \\ x + 1, & \text{if $x \in V$,} \end{cases} $$ and it is not measurable (for example, since $\{x \in \mathcal{X} : \hat{\vartheta}(x) = x + 1\} = \{0\} \cup V$). Thus, in the setup with the family of densities $(g_\theta)_{\theta \in \Theta}$, there exists a function $\hat{\vartheta} : \mathcal{X} \to \Theta$ such that $$ g_{\hat{\vartheta}(x)}(x) = \sup_{\theta \in \Theta} g_\theta(x) $$ for all $x \in \mathcal{X}$, but no such measurable function exists. This example merely shows that the question of whether a measurable maximum likelihood estimator exists is sensitive to the choice of version of the Radon-Nikodym derivative $dP_\theta / d\mu$. When we used the much more reasonable family of densities $(f_\theta)_{\theta \in \Theta}$ we had no trouble obtaining a measurable maximum likelihood estimator, but when we artificially perturbed each of these densities on a null set to obtain $(g_\theta)_{\theta \in \Theta}$, we ran into problems. This example is inspired by Example 2.2 in Pfanzagl (1969), wherein a similar but much more subtle modification of densities yields a scenario in which no maximum likelihood estimator exists at all, measurable or not.
Example of a non-measurable maximum likelihood estimator Here is a contrived example. Let $(\mathcal{X}, \mathcal{B})$ be the interval $[0, 2]$ with its Borel $\sigma$-algebra, let $(\Theta, \mathcal{F})$ be the interval $[1, 2]$ with its Borel $\sigma$-alg
24,467
How to sample uniformly from the surface of a hyper-ellipsoid (constant Mahalanobis distance)?
When the ellipsoid axes do not differ too much, it is feasible to use rejection sampling (with large differences you reject a lot, making it less feasible): (1) sample on a hyper-sphere (2) squeeze it into a hyper-ellipsoid (3) compute the rate by which the surface area was squeezed (4) reject samples according to that rate. 2D example set.seed(1) #some matrix to transform n-sphere (in this case 2x2) m <- matrix(c(1, 0.55, 0.55, 0.55), 2) # sample bivariate normal with identity covariance matrix x <- cbind(rnorm(3000, 0, 1), rnorm(3000, 0, 1)) l1 <- sqrt(x[,1]^2 + x[,2]^2) # perpendicular vector per <- cbind(x[,2], -x[,1]) # transform x x <- x %*% m # transform perpendicular vector (to see how the area transforms) per2 <- per %*% m # get onto unit-"sphere"/ellipsoid x <- x/l1 # this is how the area contracted contract <- sqrt(per2[,1]^2 + per2[,2]^2) / sqrt(per[,1]^2 + per[,2]^2) # then this is how we should choose to reject samples p <- contract/max(contract) # rejecting choose <- which( rbinom(n=length(p), size=1, p=p) == 1) #plotting plot(x[1:length(choose), 1], x[1:length(choose), 2], xlim=c(-1.2, 1.2), ylim=c(-1.2, 1.2), xlab = expression(x[1]), ylab = expression(x[2]), bg=rgb(0, 0, 0, 0.01), cex=0.6, pch=21, col=rgb(0, 0, 0, 0.01)) title("squeezed uniform circle \n ") #plotting plot(x[choose,1], x[choose,2], xlim=c(-1.2, 1.2), ylim=c(-1.2, 1.2), xlab = expression(x[1]), ylab = expression(x[2]), bg=rgb(0, 0, 0, 0.01), cex=0.6, pch=21, col=rgb(0, 0, 0, 0.01)) title("squeezed uniform circle \n with rejection sampling")
How to sample uniformly from the surface of a hyper-ellipsoid (constant Mahalanobis distance)?
When the different ellipsoid axes are not too much different then it is feasible to use rejection sampling (with large differences you reject a lot making it less feasible) (1) sample on a hyper-sph
How to sample uniformly from the surface of a hyper-ellipsoid (constant Mahalanobis distance)? When the different ellipsoid axes are not too much different then it is feasible to use rejection sampling (with large differences you reject a lot making it less feasible) (1) sample on a hyper-sphere (2) squeezing it into a hyper-ellipsoid (3) compute the rate by which the surface area was squeezed (4) reject samples according to that rate. 2D example set.seed(1) #some matrix to transform n-sphere (in this case 2x2) m <- matrix(c(1, 0.55, 0.55, 0.55), 2) # sample multinomial with identity covariance matrix x <- cbind(rnorm(3000, 0, 1), rnorm(3000, 0, 1)) l1 <- sqrt(x[,1]^2 + x[,2]^2) # perpendicular vector per <- cbind(x[,2], -x[,1]) # transform x x <- x %*% m # transform perpendicular vector (to see how the area transforms) per2 <- per %*% m # get onto unit-"sphere"/ellipsoid x <- x/l1 # this is how the area contracted contract <- sqrt(per2[,1]^2 + per2[,2]^2) / sqrt(per[,1]^2 + per[,2]^2) # then this is how we should choose to reject samples p <- contract/max(contract) # rejecting choose <- which( rbinom(n=length(p), size=1, p=p) == 1) #plotting plot(x[1:length(choose), 1], x[1:length(choose), 2], xlim=c(-1.2, 1.2), ylim=c(-1.2, 1.2), xlab = expression(x[1]), ylab = expression(x[2]), bg=rgb(0, 0, 0, 0.01), cex=0.6, pch=21, col=rgb(0, 0, 0, 0.01)) title("squeezed uniform circle \n ") #plotting plot(x[choose,1], x[choose,2], xlim=c(-1.2, 1.2), ylim=c(-1.2, 1.2), xlab = expression(x[1]), ylab = expression(x[2]), bg=rgb(0, 0, 0, 0.01), cex=0.6, pch=21, col=rgb(0, 0, 0, 0.01)) title("squeezed uniform circle \n with rejection sampling")
How to sample uniformly from the surface of a hyper-ellipsoid (constant Mahalanobis distance)? When the different ellipsoid axes are not too much different then it is feasible to use rejection sampling (with large differences you reject a lot making it less feasible) (1) sample on a hyper-sph
24,468
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the same distribution?
This is not really an answer, but at least it does not seem to be easy to create such an example from a stable distribution. We would need to produce a r.v. whose characteristic function is the same as that of its average. In general, for an iid draw, the c.f. of the average is $$ \phi_{\bar{X}_n}(t)=[\phi_X(t/n)]^n $$ with $\phi_X$ the c.f. of a single r.v. For stable distributions with location parameter zero, we have $$ \phi_X(t)=\exp\{-|ct|^\alpha(1-i\beta \text{sgn}(t)\Phi)\}, $$ where $$ \Phi=\begin{cases}\tan\left(\frac{\pi\alpha}{2}\right)&\alpha\neq1\\-\frac{2}{\pi}\log|t|&\alpha=1\end{cases} $$ The Cauchy distribution corresponds to $\alpha=1$, $\beta=0$, so that $\phi_{\bar{X}_n}(t)=\phi_X(t)$ indeed for any scale parameter $c>0$. In general, $$ \phi_{\bar{X}_n}(t)=\exp\left\{-n\left|c\frac{t}{n}\right|^\alpha\left(1-i\beta \text{sgn}\left(\frac{t}{n}\right)\Phi\right)\right\}, $$ To get $\phi_{\bar{X}_n}(t)=\phi_X(t)$, $\alpha=1$ seems called for, so \begin{eqnarray*} \phi_{\bar{X}_n}(t)&=&\exp\left\{-n\left|c\frac{t}{n}\right|\left(1-i\beta \text{sgn}\left(\frac{t}{n}\right)\left(-\frac{2}{\pi}\log\left|\frac{t}{n}\right|\right)\right)\right\}\\ &=&\exp\left\{-\left|ct\right|\left(1-i\beta \text{sgn}\left(t\right)\left(-\frac{2}{\pi}\log\left|\frac{t}{n}\right|\right)\right)\right\}, \end{eqnarray*} but $$ \log\left|\frac{t}{n}\right|\neq\log\left|t\right|, $$ so the two characteristic functions differ for any $\beta\neq0$; only $\beta=0$ removes the $\log$ term, which is exactly the Cauchy case.
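The Cauchy case ($\alpha=1$, $\beta=0$) is easy to check numerically. A minimal pure-Python sketch (standard Cauchy draws via inverse-transform sampling; the sample size, replication count, and seed are arbitrary choices): the quartiles of the sample mean of $n$ draws should match the $\pm 1$ quartiles of a single standard Cauchy draw.

```python
import math
import random

random.seed(0)

def std_cauchy():
    # inverse-CDF sampling: tan(pi * (U - 1/2)) is standard Cauchy
    return math.tan(math.pi * (random.random() - 0.5))

n, reps = 50, 20000
# empirical distribution of the mean of n standard Cauchy draws
means = sorted(sum(std_cauchy() for _ in range(n)) / n for _ in range(reps))

# the quartiles of a standard Cauchy are -1 and +1; the sample mean
# should reproduce them despite averaging over n draws
q1, q3 = means[reps // 4], means[3 * reps // 4]
print(q1, q3)  # both should be near -1 and +1
```

Note that means and standard deviations are useless diagnostics here (a Cauchy has neither), which is why the check uses robust quantiles.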
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the
This is not really an answer, but at least it does not seem to be easy to create such an example from a stable distribution. We would need to produce a r.v. whose characteristic function is the same a
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the same distribution? This is not really an answer, but at least it does not seem to be easy to create such an example from a stable distribution. We would need to produce a r.v. whose characteristic function is the same as that of its average. In general, for an iid draw, the c.f. of the average is $$ \phi_{\bar{X}_n}(t)=[\phi_X(t/n)]^n $$ with $\phi_X$ the c.f. of a single r.v. For stable distributions with location parameter zero, we have $$ \phi_X(t)=\exp\{-|ct|^\alpha(1-i\beta \text{sgn}(t)\Phi)\}, $$ where $$ \Phi=\begin{cases}\tan\left(\frac{\pi\alpha}{2}\right)&\alpha\neq1\\-\frac{2}{\pi}\log|t|&\alpha=1\end{cases} $$ The Cauchy distribution corresponds to $\alpha=1$, $\beta=0$, so that $\phi_{\bar{X}_n}(t)=\phi_X(t)$ indeed for any scale parameter $c>0$. In general, $$ \phi_{\bar{X}_n}(t)=\exp\left\{-n\left|c\frac{t}{n}\right|^\alpha\left(1-i\beta \text{sgn}\left(\frac{t}{n}\right)\Phi\right)\right\}, $$ To get $\phi_{\bar{X}_n}(t)=\phi_X(t)$, $\alpha=1$ seems called for, so \begin{eqnarray*} \phi_{\bar{X}_n}(t)&=&\exp\left\{-n\left|c\frac{t}{n}\right|\left(1-i\beta \text{sgn}\left(\frac{t}{n}\right)\left(-\frac{2}{\pi}\log\left|\frac{t}{n}\right|\right)\right)\right\}\\ &=&\exp\left\{-\left|ct\right|\left(1-i\beta \text{sgn}\left(t\right)\left(-\frac{2}{\pi}\log\left|\frac{t}{n}\right|\right)\right)\right\}, \end{eqnarray*} but $$ \log\left|\frac{t}{n}\right|\neq\log\left|t\right| $$
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the This is not really an answer, but at least it does not seem to be easy to create such an example from a stable distribution. We would need to produce a r.v. whose characteristic function is the same a
24,469
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the same distribution?
Normal distribution and shifted Poisson are examples. The shifted Poisson is $s=x-\lambda$, where $\lambda$ is the Poisson intensity. There's a whole family of distributions such that any linear combination (not just the sample mean) of variables follows the same distribution up to location and scale; it's called the family of stable distributions.
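For the normal case this is easy to verify by simulation; a quick stdlib sketch (the parameters $\mu=5$, $\sigma=3$, $n=9$ and the seed are arbitrary choices): the mean of $n$ draws from $N(\mu, \sigma^2)$ is again normal, with standard deviation $\sigma/\sqrt{n}$.

```python
import random
import statistics

random.seed(42)
mu, sigma, n, reps = 5.0, 3.0, 9, 20000

# empirical distribution of the sample mean of n normal draws
means = [statistics.fmean(random.gauss(mu, sigma) for _ in range(n))
         for _ in range(reps)]

m = statistics.fmean(means)   # should be close to mu = 5.0
s = statistics.stdev(means)   # should be close to sigma / sqrt(n) = 1.0
print(m, s)
```

So the sample mean stays in the normal family but with a different scale, whereas for the Cauchy it has exactly the same distribution.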
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the
Normal distribution and shifted Poisson are examples. The shifted Poisson is $s=x-\lambda$, where $\lambda$ is Poisson intensity. There's a whole family of distribution such that the linear combinatio
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the same distribution? Normal distribution and shifted Poisson are examples. The shifted Poisson is $s=x-\lambda$, where $\lambda$ is Poisson intensity. There's a whole family of distribution such that the linear combination (not just the sample mean) of variables follows the same distribution, it's called stable distribution.
Are there any distributions other than Cauchy for which the arithmetic mean of a sample follows the Normal distribution and shifted Poisson are examples. The shifted Poisson is $s=x-\lambda$, where $\lambda$ is Poisson intensity. There's a whole family of distribution such that the linear combinatio
24,470
Predicting Uncertainty in Random Forest Regression [duplicate]
As far as I know, the uncertainty of the RF predictions can be estimated using several approaches; one of them is the quantile regression forests method (Meinshausen, 2006), which estimates prediction intervals. Other methods include the U-statistics approach of Mentch & Hooker (2016) and the Monte Carlo simulation approach of Coulston (2016).
Predicting Uncertainty in Random Forest Regression [duplicate]
As far as I know, the uncertainty of the RF predictions can be estimated using several approaches, one of them is the quantile regression forests method(Meinshausen, 2006), which estimates the predict
Predicting Uncertainty in Random Forest Regression [duplicate] As far as I know, the uncertainty of the RF predictions can be estimated using several approaches, one of them is the quantile regression forests method(Meinshausen, 2006), which estimates the prediction intervals. Other methods include U-statistics approach of Mentch & Hooker (2016) and monte carlo simulations approach of Coulston (2016).
Predicting Uncertainty in Random Forest Regression [duplicate] As far as I know, the uncertainty of the RF predictions can be estimated using several approaches, one of them is the quantile regression forests method(Meinshausen, 2006), which estimates the predict
24,471
Predicting Uncertainty in Random Forest Regression [duplicate]
The problem of constructing prediction intervals for random forest predictions has been addressed in the following paper: Zhang, Haozhe, Joshua Zimmerman, Dan Nettleton, and Daniel J. Nordman. "Random Forest Prediction Intervals." The American Statistician, 2019. The R package "rfinterval" is its implementation available at CRAN. Installation To install the R package rfinterval: #install.packages("devtools") #devtools::install_github(repo="haozhestat/rfinterval") install.packages("rfinterval") library(rfinterval) ?rfinterval Usage Quickstart: train_data <- sim_data(n = 1000, p = 10) test_data <- sim_data(n = 1000, p = 10) output <- rfinterval(y~., train_data = train_data, test_data = test_data, method = c("oob", "split-conformal", "quantreg"), symmetry = TRUE, alpha = 0.1) ### print the marginal coverage of OOB prediction interval mean(output$oob_interval$lo < test_data$y & output$oob_interval$up > test_data$y) ### print the marginal coverage of Split-conformal prediction interval mean(output$sc_interval$lo < test_data$y & output$sc_interval$up > test_data$y) ### print the marginal coverage of Quantile regression forest prediction interval mean(output$quantreg_interval$lo < test_data$y & output$quantreg_interval$up > test_data$y)
Predicting Uncertainty in Random Forest Regression [duplicate]
The problem of constructing prediction intervals for random forest predictions has been addressed in the following paper: Zhang, Haozhe, Joshua Zimmerman, Dan Nettleton, and Daniel J. Nordman. "Random
Predicting Uncertainty in Random Forest Regression [duplicate] The problem of constructing prediction intervals for random forest predictions has been addressed in the following paper: Zhang, Haozhe, Joshua Zimmerman, Dan Nettleton, and Daniel J. Nordman. "Random Forest Prediction Intervals." The American Statistician, 2019. The R package "rfinterval" is its implementation available at CRAN. Installation To install the R package rfinterval: #install.packages("devtools") #devtools::install_github(repo="haozhestat/rfinterval") install.packages("rfinterval") library(rfinterval) ?rfinterval Usage Quickstart: train_data <- sim_data(n = 1000, p = 10) test_data <- sim_data(n = 1000, p = 10) output <- rfinterval(y~., train_data = train_data, test_data = test_data, method = c("oob", "split-conformal", "quantreg"), symmetry = TRUE, alpha = 0.1) ### print the marginal coverage of OOB prediction interval mean(output$oob_interval$lo < test_data$y & output$oob_interval$up > test_data$y) ### print the marginal coverage of Split-conformal prediction interval mean(output$sc_interval$lo < test_data$y & output$sc_interval$up > test_data$y) ### print the marginal coverage of Quantile regression forest prediction interval mean(output$quantreg_interval$lo < test_data$y & output$quantreg_interval$up > test_data$y)
Predicting Uncertainty in Random Forest Regression [duplicate] The problem of constructing prediction intervals for random forest predictions has been addressed in the following paper: Zhang, Haozhe, Joshua Zimmerman, Dan Nettleton, and Daniel J. Nordman. "Random
24,472
Interpreting a 95% confidence interval
The confusion comes from this sentence: And yet, the consensus seems to be that a 95% confidence interval can NOT be interpreted as there being a 95% probability that the interval contains the true mean. It is a partial misunderstanding of the real consensus. The confusion comes from not being specific about what probability we talk about. Not as a philosophical question but as "what exact probability are we speaking of in this context". As @ratsalad says, it's all about conditioning. Call $\theta$ your parameter, $X$ your data, $I$ an interval that is a function of $X$: saying that $I$ is a confidence interval means $P(\theta\in I\mid\theta)>0.95$ for all possible $\theta$, including the true one. Probability averages over all possible $X$ at fixed $\theta$. This is what you explain in your interpretation. Saying that $I$ is a (Bayesian) credible interval means $P(\theta\in I\mid X)>0.95$. Probability averages over all possible $\theta$ at fixed $X$. Both are probabilities of the same event but conditioned differently. The reason why one discourages saying "the probability that $\theta$ is in $I$ is 0.95" for confidence intervals is that this sentence implicitly means the second point: when we say "the probability that..." the conditioning is implicitly on what has been observed before: "I have seen some $X$, now what is the probability that $\theta$ is..." is formally "what is $P(\theta...\mid X)$". This implicit conditioning is reinforced by the (again implicit) suggestion you experience when reading "probability that $\theta$ is in $I$" that $\theta$ is the variable and $I$ the fixed object, while in frequentist analysis it is the opposite. Finally, this is made even worse when you replace $I$ by your calculated interval. If you write "the probability that $\theta$ is in $[4;5]$ is 0.95" then this is simply false. In frequentist analysis "$\theta$ is in $[4;5]$" is either true or false; it is not a random event, and thus it does not have a probability (other than 0 or 1). 
Thus the sentence could only be meaningfully interpreted as the Bayesian one.
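The pre-data, frequentist reading $P(\theta\in I\mid\theta)\geq 0.95$ can be illustrated by simulation; a stdlib sketch using a known-$\sigma$ z-interval (the true mean, $\sigma$, sample size, and seed are all arbitrary choices): across repeated samples, about 95% of the computed intervals cover the true mean, even though any one computed interval either contains it or not.

```python
import math
import random

random.seed(1)
mu, sigma, n, reps = 10.0, 2.0, 25, 4000
z = 1.96  # two-sided 95% normal quantile

hits = 0
for _ in range(reps):
    xs = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    half = z * sigma / math.sqrt(n)  # known-sigma z-interval half-width
    if xbar - half <= mu <= xbar + half:
        hits += 1

coverage = hits / reps
print(coverage)  # close to 0.95
```

The 95% here is a property of the interval-producing procedure over repeated $X$, not of any single realized interval.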
Interpreting a 95% confidence interval
The confusion comes from this sentence: And yet, the consensus seems to be that a 95% confidence interval can NOT be interpreted as there being a 95% probability that the interval contains the true m
Interpreting a 95% confidence interval The confusion comes from this sentence: And yet, the consensus seems to be that a 95% confidence interval can NOT be interpreted as there being a 95% probability that the interval contains the true mean. It is a partial misunderstanding of the real consensus. The confusion comes from not being specific about what probability we talk about. Not as a philosophical question but as "what exact probability we are speaking of in the context". As @ratsalad says it's all about conditioning. Call $\theta$ your parameter, $X$ your data, $I$ an interval that is a function of $X$: $I$ is a confidence interval means $P(\theta\in I\mid\theta)>0.95$ for all possible $\theta$ including the true one. Probability averages over all possible $X$ at fixed $\theta$. This is what you explain in your interpretation. $I$ being a (Bayesian) credible interval says $P(\theta\in I\mid X)>0.95$. Probability averages over all possible $\theta$ at fixed $X$. Both are probability of the same event but conditioned differently. The reason why one discourages saying "the probability that $\theta$ is in $I$ is 0.95" for confidence intervals is because this sentence implicitly means the second point: when we say "the probability that..." the conditioning is implicitly to what has been observed before: "I have seen some $X$, now what is the probability that $\theta$ is..." is formally "what is $P(\theta...\mid X)$". This implicit is reinforced by the (again implicit) suggestion you experience when reading "probability that $\theta$ is in $I$" that $\theta$ is the variable and $I$ the fixed object, while in frequentist analysis it is the opposite. Finally this is made even worse when you replace $I$ by your calculated interval. If you write: "The probability that $\theta$ is in $[4;5]$ is 0.95" then this is simply false. 
In frequentist analysis "$\theta$ is in $[4;5]$" is either true or false but is not a random event thus it does not have a probability (other than 0 or 1). Thus the sentence could only be meaningfully interpreted as the Bayesian one.
Interpreting a 95% confidence interval The confusion comes from this sentence: And yet, the consensus seems to be that a 95% confidence interval can NOT be interpreted as there being a 95% probability that the interval contains the true m
24,473
Interpreting a 95% confidence interval
Part of the difference comes down to conditioning, the difference between pre-data probabilities and post-data probabilities. Before you do your single experiment (before you obtain your sample), you know that there is a 95% chance that the 95% CI will contain the true mean (this is the definition of a 95% CI). However, after you obtain your sample, you are in a different state of knowledge: you have not learned the true mean, but you have seen a particular sample of data, which may give you some new knowledge and which can affect your probability calculations. Analogously, before you draw a card, you know that there is a 25% chance that the card will be a club. Now to make the analogy work, you cannot learn the true suit of the card when you draw it (because likewise the true mean is always hidden from you). But you may learn something new from drawing the card, for instance the color of the suit. Let's say that you draw the card, and through some mechanism (it doesn't matter for the point), you learn that the card is from a black suit. This changes your probability: from prior information, you know that clubs are black, and that half the cards are from black suits, so now you know that the card has a 50% chance of being a club. If, on the other hand, you discovered a red card, from your prior information you know that clubs are not red, so you would now know that there is a 0% chance of your card being a club. Both these probabilities are consistent with a 25% chance of a club before drawing the card. If you were to ignore your prior information, or if you were not told that the card was black, you would still have a 25% chance of being correct. However, you can do better if you take advantage of your prior information. There are many examples of this with real CIs, where seeing the data gives a coverage probability that is different from the CI %. This classic example (halfway down the post) of a "misleading" CI from David MacKay may help. 
A similar example is given by Berger. To continue with your example of heights of people: let's say that you know that your population under study is from the Netherlands, which has the tallest average height of any country in the world (about $1.84 \pm 0.02$ m). However, let's say your sample has a 95% CI of $1.7 \pm 0.02$ m. Do you still think there is a 95% probability that the true population mean lies in that interval? I would say that, based on the prior knowledge, your specific sample was a stochastic fluke and anomalously low. In other words, the probability is much less than 95% that the true mean lies in your calculated CI. Note, before you obtained your sample, and calculated your specific CI, your chance of obtaining a CI that encompassed the true mean was 95%. Afterwards, if you use no prior information, and assume that all mean heights are equally probable a priori, then you could, if you wanted, make a Bayesian statement that there is 95% probability that your interval contains the true mean. But realize that such a statement does not follow from the definition of a CI, and that it crucially depends on a particular assumed prior for the mean. It also depends on your normality assumption, as most frequentist CIs cannot be re-interpreted in a Bayesian manner so easily.
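The card arithmetic above can be checked by direct enumeration over a standard 52-card deck; a small sketch with exact fractions:

```python
from fractions import Fraction

suits = {"clubs": "black", "spades": "black",
         "hearts": "red", "diamonds": "red"}
# 13 cards per suit in a standard deck
deck = [(suit, color) for suit, color in suits.items() for _ in range(13)]

def prob(event, given=lambda card: True):
    # conditional probability by counting within the conditioning pool
    pool = [card for card in deck if given(card)]
    return Fraction(sum(1 for card in pool if event(card)), len(pool))

p_club = prob(lambda c: c[0] == "clubs")                 # pre-data: 1/4
p_club_black = prob(lambda c: c[0] == "clubs",
                    lambda c: c[1] == "black")           # post-data: 1/2
p_club_red = prob(lambda c: c[0] == "clubs",
                  lambda c: c[1] == "red")               # post-data: 0
print(p_club, p_club_black, p_club_red)
```

Both conditional probabilities are consistent with the 25% pre-draw probability, which is exactly the pre-data vs. post-data distinction the answer is making.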
Interpreting a 95% confidence interval
Part of the difference comes down to conditioning, the difference between pre-data probabilities and post-data probabilities. Before you do your single experiment (before you obtain your sample), you
Interpreting a 95% confidence interval Part of the difference comes down to conditioning, the difference between pre-data probabilities and post-data probabilities. Before you do your single experiment (before you obtain your sample), you know that there is a 95% chance that the 95% CI will contain the true mean (this is the definition of a 95% CI). However, after you obtain your sample, you are in a different state of knowledge: you have not learned the true mean, but you have seen a particular sample of data, which may give you some new knowledge and which can affect your probability calculations. Analogously, before you draw a card, you know that there is a 25% chance that the card will be a club. Now to make the analogy work, you cannot learn the true suit of the card when you draw it (because likewise the true mean is always hidden from you). But you may learn something new from drawing the card, for instance the color of the suit. Let's say that you draw the card, and through some mechanism (it doesn't matter for the point), you learn that the card is from a black suit. This changes your probability: from prior information, you know that clubs are black, and that half the cards are from black suits, so now you know that the card has a 50% chance of being a club. If, on the other hand, you discovered a red card, from your prior information you know that clubs are not red, so you would now know that there is a 0% chance of your card being a club. Both these probabilities are consistent with a 25% chance of a club before drawing the card. If you were to ignore your prior information, or if you were not told that the card was black, you would still have a 25% chance of being correct. However, you can do better if you take advantage of your prior information. There are many examples of this with real CIs, where seeing the data gives a coverage probability that is different from the CI %. 
This classic example (halfway down the post) of a "misleading" CI from David McKay may help. A similar example is given by Berger. To continue with your example of heights of people: lets say that you know that your population under study is from the Netherlands, which has the tallest average height of any country in the world (about $1.84 \pm 0.02$ m). However, lets say your sample has a 95% CI of $1.7 \pm 0.02$ m. Do you still think there is a 95% probability that the true population mean lies in that interval? I would say that, based on the prior knowledge, your specific sample was a stochastic fluke and anomalously low. In other words, the probability is much less than 95% that the true mean lies in your calculated CI. Note, before you obtained your sample, and calculated your specific CI, your chance of obtaining a CI that encompassed the true mean was 95%. Afterwards, if you use no prior information, and assume that all mean heights are equally probable a priori, then you could, if you wanted, make a Bayesian statement that there is 95% probability that your interval contains the true mean. But realize that such a statement does not follow from the definition of a CI, and that it crucially depends on a particular assumed prior for the mean. It also depends on your normality assumption, as most frequentist CIs cannot be re-interpreted in a Bayesian manner so easily.
Interpreting a 95% confidence interval Part of the difference comes down to conditioning, the difference between pre-data probabilities and post-data probabilities. Before you do your single experiment (before you obtain your sample), you
24,474
Interpreting a 95% confidence interval
Your question is more philosophy than statistics. It has been discussed ad nauseam in the form of a cat in a box. https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat I will add, regarding 95% confidence interval should be interpreted in terms of repeating an experiment multiple times and the calculated interval will contain the true mean 95% of the time This is one interpretation. You could also say that, before you create the interval, there is a 95% chance that the process will result in an interval that captures the true mean.
Interpreting a 95% confidence interval
Your question is more philosophy than statistics. It has been discussed ad nauseam in the form of a cat in a box. https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat I will add, regarding 95% con
Interpreting a 95% confidence interval Your question is more philosophy than statistics. It has been discussed ad nauseam in the form of a cat in a box. https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat I will add, regarding 95% confidence interval should be interpreted in terms of repeating an experiment multiple times and the calculated interval will contain the true mean 95% of the time This is one interpretation. You could also say that, before you create the interval, there is a 95% chance that the process will result in an interval that captures the true mean.
Interpreting a 95% confidence interval Your question is more philosophy than statistics. It has been discussed ad nauseam in the form of a cat in a box. https://en.wikipedia.org/wiki/Schr%C3%B6dinger%27s_cat I will add, regarding 95% con
24,475
Can a Linear Regression Model (with no higher order coefficients) over-fit?
This is an old post but I just came across it. I think the question refers to how a line can become "curvy" when overfitting occurs. If we fit a plane to 2 points in 3D, we should be overfitting: an infinite number of planes can go through 2 points, so the algorithm can 'tilt' the plane any way to fit a 3rd point. If we add another dimension and another point, it will tilt towards it, and so on. That's where the problem is. The 2D plots they use to illustrate the concept are an abstraction, or refer to polynomial regression.
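The geometry can be made concrete with a tiny numeric sketch (the two data points and the candidate planes are invented for illustration): with only 2 training points and 3 plane parameters, several different planes all achieve zero training error, yet disagree wildly at an unseen point — which is overfitting without any higher-order terms.

```python
# two training points (x, y, z) the plane z = a*x + b*y + c must pass through
train = [(0.0, 0.0, 1.0), (1.0, 1.0, 3.0)]

# three different parameter vectors (a, b, c) -- all interpolate the data
planes = [(2.0, 0.0, 1.0), (0.0, 2.0, 1.0), (1.0, 1.0, 1.0)]

def sse(plane, points):
    # sum of squared residuals of the plane on the given points
    a, b, c = plane
    return sum((a * x + b * y + c - z) ** 2 for x, y, z in points)

train_errors = [sse(p, train) for p in planes]   # all exactly 0.0
# ...but predictions at an unseen point (x, y) = (1, 0) diverge:
preds = [a * 1.0 + b * 0.0 + c for a, b, c in planes]
print(train_errors, preds)  # [0.0, 0.0, 0.0] and [3.0, 1.0, 2.0]
```

Zero training error with arbitrary test-time behavior is exactly the p ≥ n regime the answer describes.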
Can a Linear Regression Model (with no higher order coefficients) over-fit?
This is an old post but I just came across it. I think the question refers to how can a line become "curvy" when over estimation occurs. If we have 2 3D points we should be over fitting. The algorithm
Can a Linear Regression Model (with no higher order coefficients) over-fit? This is an old post but I just came across it. I think the question refers to how can a line become "curvy" when over estimation occurs. If we have 2 3D points we should be over fitting. The algorithm will try to fit a plane through 2 points. An infinite number of planes can go through 2 points. It can 'tilt' any way to fit a 3rd point. If we add another dimension and another point it will tilt towards it and so on. That's where the problem is. The 2D plots they use to illustrate the concept are an abstraction or refer to polynomial regression.
Can a Linear Regression Model (with no higher order coefficients) over-fit? This is an old post but I just came across it. I think the question refers to how can a line become "curvy" when over estimation occurs. If we have 2 3D points we should be over fitting. The algorithm
24,476
Can a Linear Regression Model (with no higher order coefficients) over-fit?
Deleted, see question comment for content. (This post remains here because I can't delete an accepted answer, see https://meta.stackexchange.com/questions/14932/allow-author-of-accepted-answer-to-delete-it-in-certain-circumstances).
Can a Linear Regression Model (with no higher order coefficients) over-fit?
Deleted, see question comment for content. (This post remains here because I can't delete an accepted answer, see https://meta.stackexchange.com/questions/14932/allow-author-of-accepted-answer-to-dele
Can a Linear Regression Model (with no higher order coefficients) over-fit? Deleted, see question comment for content. (This post remains here because I can't delete an accepted answer, see https://meta.stackexchange.com/questions/14932/allow-author-of-accepted-answer-to-delete-it-in-certain-circumstances).
Can a Linear Regression Model (with no higher order coefficients) over-fit? Deleted, see question comment for content. (This post remains here because I can't delete an accepted answer, see https://meta.stackexchange.com/questions/14932/allow-author-of-accepted-answer-to-dele
24,477
Generalization bounds on SVM
I do not know the literature you are referring to in detail, but I think a comprehensive summary of generalization bounds that should be up to date can be found in Boucheron et al. (2004) (Link: https://www.researchgate.net/profile/Olivier_Bousquet/publication/238718428_Advanced_Lectures_on_Machine_Learning_ML_Summer_Schools_2003_Canberra_Australia_February_2-14_2003_Tubingen_Germany_August_4-16_2003_Revised_Lectures/links/02e7e52c5870850311000000/Advanced-Lectures-on-Machine-Learning-ML-Summer-Schools-2003-Canberra-Australia-February-2-14-2003-Tuebingen-Germany-August-4-16-2003-Revised-Lectures.pdf#page=176) I will sketch part of the SVM bound in the following, leaving out details and proofs. Before elaborating on the SVM bound specifically, we need to understand what generalization bounds are trying to achieve. First let us assume that the true probability $P(Y = +1 \mid X = x)$ is known; then the best possible classifier would be the Bayes classifier, i.e. \begin{align} g^* = \begin{cases} + 1 \ \ \text{if } P(Y = 1 \mid X = x) > 0.5 \\ -1 \ \ \text{otherwise} \end{cases} \end{align} The goal of statistical learning theory is to bound the difference between a classifier of class $C$ (e.g. SVM) \begin{align} \hat{g}_n = \arg \min_{g \in C} L_n(g) \end{align} and the Bayes classifier, i.e. \begin{align} L(\hat{g}_n) - L(g^*) = L(\hat{g}_n) - L(g^{*}_c) + L(g^{*}_c) - L(g^*). \end{align} Note that $L(g) = \mathbb{E}\,l(g(X),Y)$ is the expected loss given the data and $g^{*}_c$ is the best possible classifier in the model class $C$. The term $Z := L(\hat{g}_n) - L(g^{*}_c)$ is called the estimation error and is often the focus because it can be bounded much more easily than the approximation error (the other term). I will also omit the approximation error here. The estimation error $Z$ can be further decomposed as \begin{align} Z = Z - \mathbb{E}Z + \mathbb{E}Z. 
\end{align} Now this can be bounded in two steps: Bound $Z - \mathbb{E}Z$ using McDiarmid's inequality. Bound $\mathbb{E}Z$ with the Rademacher complexity $R_n(C) = \mathbb{E}\sup_{g \in C}|\frac{1}{n} \sum_{i=1}^{n} \sigma_i\, l(g(X_i),Y_i)|$, where the $\sigma_i$ are i.i.d. Rademacher signs. Using McDiarmid's inequality one can show that if the loss function ranges in an interval of length no more than $B$, step one results in a bound of \begin{align} Z - \mathbb{E}Z \leq 2 B \sqrt{\dfrac{\ln(1/\delta)}{2n}}, \end{align} where $\delta$ is the confidence level. For the second step we can show that \begin{align} \mathbb{E}Z \leq 2R_n(C). \end{align} If you have a discrete, i.e. non-Lipschitz, loss function such as the 0-1 loss, you would need the VC dimension to further bound the Rademacher complexity. However, for $L$-Lipschitz functions such as the hinge loss it can be further bounded by \begin{align} R_n(C) \leq \lambda L R/\sqrt{n}, \end{align} where $\lambda$ denotes the regularizer. Since for the hinge loss $L = 1$ and $B = 1 + \lambda R$ (proved with the Cauchy-Schwarz inequality), this simplifies further. Finally, putting all results together, we obtain a bound of \begin{align} L(\hat{g}_n) - L(g^{*}_c) \leq 2(1 + \lambda R) \sqrt{\dfrac{\ln(1/\delta)}{2n}} + 4 \lambda L R/\sqrt{n} \end{align}
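To make the final bound concrete, here is a minimal numeric sketch (not from the cited lectures; the function and argument names are my own) that evaluates the right-hand side for the hinge loss, where $L = 1$ and $B = 1 + \lambda R$:

```python
import math

def svm_estimation_error_bound(n, lam, R, delta):
    """Evaluate 2*(1 + lam*R)*sqrt(ln(1/delta)/(2n)) + 4*lam*L*R/sqrt(n)."""
    L = 1.0                    # Lipschitz constant of the hinge loss
    B = 1.0 + lam * R          # range of the hinge loss
    concentration = 2 * B * math.sqrt(math.log(1 / delta) / (2 * n))
    complexity = 4 * lam * L * R / math.sqrt(n)
    return concentration + complexity

# the bound decays at rate O(1/sqrt(n)) as the sample size grows
for n in (100, 1000, 10000):
    print(n, svm_estimation_error_bound(n, lam=1.0, R=1.0, delta=0.05))
```

Note how both terms shrink like $1/\sqrt{n}$, which is the usual rate for Rademacher-based bounds.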
24,478
Downweight outliers in mean
Rounding up the comments that have the value of an answer, several methods can be used here. 1. Trimmed mean (by @Bernhard) Calculates the average of the data that lies between the 5th and 95th percentiles, effectively discarding the extreme values. https://en.wikipedia.org/wiki/Truncated_mean 2. Winsorized mean (by @kjetil b halvorsen) Sets the bottom 5% to the 5th percentile, sets the top 5% to the 95th percentile, then calculates the average of all of that. https://en.wikipedia.org/wiki/Winsorizing 3. M-estimator (by @Michael M) I'm sorry I can't provide a concise explanation. Better see https://en.wikipedia.org/wiki/M-estimator 4. E-M algorithm (by @Tim) https://en.wikipedia.org/wiki/Expectation%E2%80%93maximization_algorithm 5. Median (by @Tim) The median is less affected by outliers and is more robust than the mean. Consider the set of numbers $\{1, 2, 3, 4, 5\}$, with mean 3 and median 3. If I were to change 5 to 50, the mean changes to 12, but the median stays 3. https://en.wikipedia.org/wiki/Median
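As a quick sketch of options 1, 2 and 5 in plain NumPy (the 5%/95% cutoffs are hard-coded for illustration; the function names are my own):

```python
import numpy as np

def trimmed_mean(x, lower=5, upper=95):
    # keep only the values between the 5th and 95th percentile, then average
    lo, hi = np.percentile(x, [lower, upper])
    return x[(x >= lo) & (x <= hi)].mean()

def winsorized_mean(x, lower=5, upper=95):
    # clip the extreme values to the percentile bounds, then average everything
    lo, hi = np.percentile(x, [lower, upper])
    return np.clip(x, lo, hi).mean()

data = np.array([1, 2, 3, 4, 5, 50], dtype=float)
print(np.mean(data))          # dragged upward by the outlier 50
print(trimmed_mean(data))     # outlier discarded
print(winsorized_mean(data))  # outlier pulled in, not discarded
print(np.median(data))        # barely moved by the outlier
```

SciPy users could reach for `scipy.stats.trim_mean` and `scipy.stats.mstats.winsorize` instead of rolling these by hand.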
24,479
In general, is doing inference more difficult than making prediction?
First, I would offer a different perspective on machine learning. What you mentioned, Andrew Ng's Coursera lecture and Kaggle competitions, is not 100% of machine learning but branches of it targeted at practical applications. Real machine learning research includes the work that invented the random forest / SVM / gradient boosting models, which is fairly close to statistics / math. I would agree machine learning practitioners focus more on accuracy compared to statisticians / economists. There are reasons that people are interested in getting better accuracy rather than "inference about the true distribution." The major reason is that the way we collect and use data has changed over the past decades. Statistics was established hundreds of years ago, but in the past no one would have thought about having billions of data points for training and other billions for testing. (For example, the number of images on the Internet.) Therefore, with a relatively small amount of data, assumptions from domain knowledge are needed to do the work. Or you can think of this as "regularizing" the model. Once the assumptions are made, there are inference problems about the "true" distribution. However, if we think about it carefully, can we make sure these assumptions are true and the inferences are valid? I would like to cite George Box: All models are wrong but some are useful Now, let's go back to thinking about the practical approach of putting more emphasis on accuracy than on assumptions / inference. It is a good approach when we have a huge amount of data. Suppose we are building a model for all images containing human faces at the pixel level. First, it is very hard to propose assumptions at the pixel level for billions of images: no one has that domain knowledge. Second, we can think about all possible ways to fit the data, and because the data is huge, all the models we have may not be sufficient (it is almost impossible to overfit). 
This is also why "deep learning / neural networks" got popular again. Under the condition of big data, we can pick a model that is really complex and fit it as best as we can, and we may still be OK, because our computational resources are limited compared to all the real data in the world. Finally, if the model we built is good on a huge testing data set, then it is good and valuable, although we may not know the underlying assumptions or the true distribution. I want to point out that the word "inference" has different meanings in different communities. In the statistics community, it usually means getting information about the true distribution in a parametric or non-parametric way. In the machine learning community, it usually means computing certain probabilities from a given distribution. See Murphy's Graphical Models Tutorial for examples. In machine learning, people use the word "learning" to mean "getting the parameters of the true distribution", which is similar to "inference" in the statistics community. So, you can see that, essentially, many people in machine learning are also doing "inference". In addition, you may also consider that people in academia like to "re-brand their work and re-sell it": coming up with new terms may be helpful for showing the novelty of the research. In fact, there are many overlaps among artificial intelligence, data mining and machine learning, and they are closely related to statistics and algorithm design. Again, there are no clear boundaries for doing "inference" or not.
24,480
Predicting CPU and GPU memory requirements of DNN training
The answer of @ik_vision describes how to estimate the memory space needed for storing the weights, but you also need to store the intermediate activations, and especially for convolutional networks working with 3D data this is the main part of the memory needed. To analyze your example: The input needs 1000 elements After each of layers 1-4 you have 100 elements, 400 in total After the final layer you have 10 elements In total, for 1 sample you need 1410 elements for the forward pass. Except for the input, you also need gradient information about each of them for the backward pass, that is 410 more, totaling 1820 elements per sample. Multiply by the batch size (256 here) to get 465 920. I said "elements" because the size required per element depends on the data type used. For single-precision float32 it is 4 B, and the total memory needed to store the data blobs will be around 1.8 MB.
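The bookkeeping above can be sketched as a small helper (the function and variable names are mine; a batch size of 256 is assumed, which reproduces the 465 920 figure):

```python
def activation_elements(layer_sizes, batch_size):
    # the forward pass stores the input plus every layer's output
    forward = sum(layer_sizes)
    # the backward pass needs a gradient for everything except the input
    backward = sum(layer_sizes[1:])
    return (forward + backward) * batch_size

# input of 1000, four 100-unit layers, 10-unit output
sizes = [1000, 100, 100, 100, 100, 10]
elems = activation_elements(sizes, batch_size=256)
print(elems)                 # 465920 elements
print(elems * 4 / 1e6)       # ~1.86 MB at 4 bytes per float32
```

This counts only the activation/gradient blobs; weight storage comes on top, as in @ik_vision's answer.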
24,481
Predicting CPU and GPU memory requirements of DNN training
I see two options: The network is loaded from disk The network is created on the fly In both cases the memory needed on the GPU has to be multiplied by the batch size, as most of the network is copied for each sample. Rule of thumb if loaded from disk: if the DNN takes X MB on disk, the network will take 2X in GPU memory for batch size 1. If the network is created on the fly, for batch size 1: count the parameters and multiply by 4 bytes (float32): Counting the number of parameters manually: fc1 : 1000x100 (weights) + 100 (biases) fc2 : 100x100 (weights) + 100 (biases) fc3 : 100x100 (weights) + 100 (biases) fc4 : 100x100 (weights) + 100 (biases) output : 100x10 (weights) + 10 (biases) Counting the number of parameters using Keras: model.count_params()
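The manual count above can be checked in a few lines (layer shapes taken from the list; the variable names are mine):

```python
# (n_in, n_out) for each fully connected layer listed above
layers = [(1000, 100), (100, 100), (100, 100), (100, 100), (100, 10)]

# weights (n_in * n_out) plus one bias per output unit
n_params = sum(n_in * n_out + n_out for n_in, n_out in layers)
print(n_params)        # 131410 parameters
print(n_params * 4)    # bytes at 4 B per float32 parameter
```

This matches what `model.count_params()` would report for the same Keras architecture.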
24,482
Measure of "deviance" for zero-inflated Poisson or zero-inflated negative binomial?
The deviance is a GLM concept; ZIP and ZINB models are not GLMs but are formulated as finite mixtures of distributions which are GLMs, and can therefore be solved easily via the EM algorithm. These notes describe the theory of deviance concisely. If you read those notes you'll see the proof that the saturated model for Poisson regression has log-likelihood $$\ell(\lambda_s)= \sum_{i:\, y_i\neq 0} \left[ y_i\log(y_i)-y_i -\log(y_i!)\right]$$ which results from the plug-in estimates $\hat{\lambda}_i = y_i$. I'll proceed now with the ZIP likelihood because the math is simpler; similar results hold for the ZINB. Unfortunately for the ZIP, there is no simple relationship like in the Poisson. The $i$th observation's log-likelihood is $$\ell_i(\phi, \lambda)=Z_i\log(\phi+(1-\phi)e^{-\lambda})+ (1-Z_i)\left[-\lambda +y_i\log(\lambda) -\log(y_i!)\right].$$ The $Z_i$ are not observed, so to solve this you'd need to take partial derivatives w.r.t. both $\lambda$ and $\phi$, set the equations to 0 and then solve for $\lambda$ and $\phi$. The difficulty here are the $y_i=0$ values: these can go into $\hat{\lambda}$ or into $\hat{\phi}$, and without observing $Z_i$ it isn't possible to know which component to put the $y_i=0$ observations into. However, if we knew the $Z_i$ values we wouldn't need a ZIP model, because we would have no missing data. The likelihood above, including the $Z_i$, corresponds to the "complete data" likelihood in the EM formalism. One approach that might be reasonable is to work with the expectation w.r.t. $Z_i$ of the complete-data log-likelihood, $\mathbb{E}(\ell_i(\phi, \lambda))$, which removes the $Z_i$ and replaces them with an expectation; this is part of what the EM algorithm calculates (the E step) with the most recent updates. I'm unaware of any literature that has studied this approach to an expected deviance, though. Also, this question was asked first, so I answered this post. 
However, there is another question on the same topic with a nice comment by Gordon Smyth here: deviance for zero-inflated compound poisson model, continuous data (R), where he made the same point (this is an elaboration of that comment, I'd say), plus in the comments to the other post a paper was mentioned which you may want to read. (Disclaimer: I have not read the paper referenced.)
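For concreteness, the E step this suggests (the standard ZIP E step, not spelled out in the original answer) replaces each unobserved $Z_i$ by its conditional expectation given the current estimates $(\hat\phi, \hat\lambda)$:

$$\hat{Z}_i = \mathbb{E}\big[Z_i \mid y_i, \hat\phi, \hat\lambda\big] = \begin{cases} \dfrac{\hat\phi}{\hat\phi + (1-\hat\phi)e^{-\hat\lambda}} & \text{if } y_i = 0, \\ 0 & \text{if } y_i > 0, \end{cases}$$

and plugging these $\hat{Z}_i$ into $\ell_i(\phi, \lambda)$ gives the expected complete-data log-likelihood that the M step maximizes — the same quantity one would work with for an expected deviance.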
24,483
In general, does normalization mean to normalize the samples or features?
Normalization is much trickier than most people think. Consider categorical and nonlinear predictors. Categorical (multinomial; polytomous) predictors are represented by indicator variables and should not be normalized. For continuous predictors, most relationships are nonlinear, and we fit them by expanding the predictor with nonlinear basis functions. The simplest case is perhaps a quadratic relationship $\beta_{1}x + \beta_{2}x^2$. Do we normalize $x$ by its standard deviation then square the normalized value for the second term? Do we normalize the second term by the standard deviation of $x^2$? The mere use of normalizing so that the sum of squares for a column equals one, or normalizing by the standard deviation assumes that the predictor is one such that squaring it is the right thing to do. In general this only works correctly when the predictor has a symmetric distribution. For asymmetric distributions, the standard deviation is not an appropriate summary statistic for dispersion. One might just as easily entertain Gini's mean difference or the interquartile range. It's all arbitrary.
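To see that the two choices genuinely disagree, here is a small illustration on synthetic data (not from the original answer; the variable names are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.exponential(size=1000)   # an asymmetric (right-skewed) predictor

# option A: standardize x, then square the standardized values
a = ((x - x.mean()) / x.std()) ** 2
# option B: standardize x**2 by its own mean and standard deviation
b = (x**2 - (x**2).mean()) / (x**2).std()

# the two candidate "normalized" quadratic terms are different columns:
# option A is always non-negative, option B is centered at zero
print(np.abs(a - b).max())
```

Neither choice is canonically "the" normalized quadratic term, which is the arbitrariness the answer points at.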
24,484
In general, does normalization mean to normalize the samples or features?
In general, normalizing the features of one sample. I would not really talk much about rows and columns here, since the feature matrix can obviously be transposed. I almost always span features over the rows, as this makes it easier to perform calculations on the matrix in, e.g., C++. Normalizing along the samples (I think this is your first bullet point) does indeed not make much sense. I think it is sometimes done in auto-encoder/decoder methods (edit: actually only on the weight matrix) when the weights are shared in a particular way. Think about it like this: if you normalize along the samples, how do you normalize a new sample that should be classified? Using the normalization term you obtained during training, or by re-calculating the norm over the training examples plus the new example? Certainly the second one will eventually make the classifier fail. The first one will no longer guarantee that your normalization sums to one.
24,485
In general, does normalization mean to normalize the samples or features?
That depends on the analysis steps following the normalization. If nothing else is said, it commonly refers to normalizing the features under consideration across all samples (e.g. to afterwards classify samples or to predict their value w.r.t. some quantitative attributes, or to conduct dimensionality reduction techniques while avoiding the bias introduced by the heterogeneous ranges of the attributes). In specific fields however, in particular in the analysis of microarray data, normalization along the samples is a widely used preprocessing step to remove unwanted variation during quality control (hopefully mostly technical noise, but it also affects real biological differences of course). You may e.g. want to have a look at https://en.wikipedia.org/wiki/Quantile_normalization. This normalization technique even affects both directions at the same time (samples and features): Look for the feature with the smallest value within each sample (may be a different attribute for each of the samples) Collect all these smallest values and calculate their average Assign this new value to the original places you took it from, so that all samples now have the same value at the attribute that originally showed the smallest value within the respective sample Do the same with the 2nd smallest value, 3rd, ... until all data are processed this way Finally, all samples share exactly the same set of values. This data set is then the basis for further processing.
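The steps above can be sketched in a few lines of NumPy (a rough illustration only: ties are broken arbitrarily here rather than averaged, as a careful implementation would do):

```python
import numpy as np

def quantile_normalize(X):
    # X: features in rows, samples in columns
    ranks = np.argsort(np.argsort(X, axis=0), axis=0)  # rank of each value within its sample
    means = np.sort(X, axis=0).mean(axis=1)            # average k-th smallest value across samples
    return means[ranks]                                # place each mean back by rank

X = np.array([[5., 4., 3.],
              [2., 1., 4.],
              [3., 4., 6.],
              [4., 2., 8.]])
print(quantile_normalize(X))
```

After this transformation, every column (sample) of the result contains exactly the same set of values, only arranged according to each sample's original ordering.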
24,486
Prove/Disprove probability of 0 or 1 (almost surely) will never change and has never been different
Your argument seems to be valid, but you start off by assuming that $E[1_A | \mathscr{F_t}] = 1$. However, the question states that $E[1_A | \mathscr{F_t}] \in \{0, 1\}$, which I would take to mean that the random variable $E[1_A | \mathscr{F_t}]$ takes values in the set $\{0, 1\}$ i.e. $E[1_A | \mathscr{F_t}]=1_B$ where $B\in\mathscr{F_t}$. The defining property of this conditional expectation is that $\int_F 1_B d\mathbb{P}=\int_F 1_A d\mathbb{P}$ for all $F\in\mathscr{F_t}$. In particular, taking $F=B$ leads to $P(B)=P(A\cap B)$, from which we can conclude that $B\subset A$ (except possibly on a set of probability zero). However, we also know (as in the argument you have written) that $E[E[1_A | \mathscr{F_t}]] = E[1_B]$ i.e. $P(A)=P(B)$, so the only possible conclusion is that $A=B$ (except possibly for a set of probability zero). For $s\gt t$, $\mathscr{F_t}\subset\mathscr{F_s}$, so the tower law for conditional expectations implies that $E[1_A | \mathscr{F_t}]=E[E[1_A | \mathscr{F_t}] | \mathscr{F_s}]$. But $E[1_A | \mathscr{F_t}]=1_A$, so $E[1_A | \mathscr{F_s}]=1_A$. So all the conditional expectations for $s>t$ are equal (to $1_A$). For $s<t$, if $A\in\mathscr{F_s}$ then we will still have $E[1_A | \mathscr{F_s}]=1_A$. On the other hand, if we go back to a time where $A$ is not in $\mathscr{F_s}$, then I don't think anything can be said about $E[1_A | \mathscr{F_s}]$ in general. For a concrete example, see this paper, Figure 1. Taking $A=\{\omega_2\}\in\mathscr{F_2}\setminus\mathscr{F_1}$, for example, gives the sequence of conditional expectations $E[1_A | \mathscr{F_0}]=\frac{1}{8} 1_\Omega$, $E[1_A | \mathscr{F_1}]=\frac{1}{2}1_{\{\omega_1,\omega_2\}}$, $E[1_A | \mathscr{F_2}]=1_{\{\omega_2\}}$, $E[1_A | \mathscr{F_3}]=1_{\{\omega_2\}}$.
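The mechanics are easy to check on a finite probability space. The sketch below uses a uniform measure on eight outcomes and a dyadic filtration chosen so that the quoted values come out; the partitions are my reconstruction consistent with those values, not the cited paper's actual Figure 1. Conditioning $1_{\{\omega_2\}}$ on coarser and finer σ-algebras gives $\frac18 1_\Omega$, then $\frac12 1_{\{\omega_1,\omega_2\}}$, then $1_{\{\omega_2\}}$:

```python
from fractions import Fraction

# Uniform measure on eight outcomes; F0 trivial, F1 = pairs, F2 = singletons
omega = list(range(1, 9))
P = {w: Fraction(1, 8) for w in omega}

def cond_expectation(A, partition):
    """E[1_A | sigma(partition)] on a finite space: on each atom B of the
    partition, the value is P(A intersect B) / P(B)."""
    out = {}
    for atom in partition:
        p_atom = sum(P[w] for w in atom)
        val = sum(P[w] for w in atom if w in A) / p_atom
        for w in atom:
            out[w] = val
    return out

A = {2}
F0 = [set(omega)]
F1 = [{1, 2}, {3, 4}, {5, 6}, {7, 8}]
F2 = [{w} for w in omega]

e0 = cond_expectation(A, F0)  # constant 1/8
e1 = cond_expectation(A, F1)  # 1/2 on {1,2}, 0 elsewhere
e2 = cond_expectation(A, F2)  # the indicator 1_A itself
```

Once $A$ is measurable (here from $\mathscr{F}_2$ onwards), conditioning on any finer σ-algebra leaves $1_A$ unchanged, matching the tower-law argument above.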
24,487
Do statisticians use the Jeffreys' prior in actual applied work?
A partial answer to this is found in Gelman et al., Bayesian Data Analysis, 3rd ed. Jeffreys' principle can be extended to multiparameter models, but the results are more controversial. Simpler approaches based on assuming independent noninformative prior distributions for the components of the vector parameter $\theta$ can give different results than are obtained with Jeffreys' principle. When the number of parameters in a problem is large, we find it useful to abandon pure noninformative prior distributions in favor of hierarchical models, as we discuss in Chapter 5. When Gelman writes that the results are "controversial," I believe he means that a prior that is noninformative in one dimension tends to become strongly informative in several. If memory serves, this was a claim made in the same section of BDA 2nd ed., but I don't have a copy with me at the moment.
Do statisticians use the Jeffreys' prior in actual applied work?
A partial answer to this is found in Gelman et al., Bayesian Data Analysis, 3rd ed. Jeffreys' principle can be extended to multiparameter models, but the results are more controversial. Simpler appro
Do statisticians use the Jeffreys' prior in actual applied work? A partial answer to this is found in Gelman et al., Bayesian Data Analysis, 3rd ed. Jeffreys' principle can be extended to multiparameter models, but the results are more controversial. Simpler approaches based on assuming independent noninformative prior distributions for the components of the vector parameter $\theta$ can give different results than are obtained with Jeffreys' principle. When the number of parameters in a problem is large, we find it useful to abandon pure noninformative prior distributions in favor of hierarchical models, as we discuss in Chapter 5. When Gelman writes that the results are "controversial," I believe he means that a prior that is noninformative in one dimension tends to become strongly informative in several. If memory serves, this was a claim made in the same section of BDA 2nd ed., but I don't have a copy with me at the moment.
Do statisticians use the Jeffreys' prior in actual applied work? A partial answer to this is found in Gelman et al., Bayesian Data Analysis, 3rd ed. Jeffreys' principle can be extended to multiparameter models, but the results are more controversial. Simpler appro
24,488
Random forest vs Adaboost
Interesting question. A bunch of work on explaining AdaBoost via a few different tactics has been done since then. I did a quick literature search, and this somewhat odd paper appears to be the most recent one on the subject; it also reviews a bunch of the intervening work by Leo Breiman and others: http://arxiv.org/pdf/1212.1108.pdf I have no idea if their results are valid, but they claim to have failed to prove Breiman's conjecture, instead proving a weakened version of it: that AdaBoost is measure preserving but not necessarily ergodic. They also present some empirical evidence that AdaBoost does in fact sometimes overfit. I think that suggests AdaBoost may be related to a random forest but not entirely (or not always) equivalent in the way Breiman conjectured.
24,489
When should I be worried about the Jeffreys-Lindley paradox in Bayesian model choice?
Sorry for being unclear on my blog! Note: I provided some background on Bayesian model choice and the Jeffreys-Lindley paradox in this other answer on Cross Validated. The Jeffreys-Lindley paradox is related to Bayesian model choice in that the marginal likelihood $$m(x)=\int \pi(\theta) f(x|\theta)\,\text{d}\theta$$ becomes meaningless when $\pi$ is a $\sigma$-finite measure (i.e., a measure with infinite mass) rather than a probability measure. The reason for this difficulty is that the infinite mass makes $\pi$ and $\mathfrak{c}\pi$ indistinguishable for any positive constant $\mathfrak{c}$. In particular, the Bayes factor cannot be used and should not be used when one model is endowed with a "flat" prior. The original Jeffreys-Lindley paradox uses the normal distribution as an example. When comparing the models $$x\sim\mathcal{N}(0,1)$$ and $$x\sim\mathcal{N}(\theta,1)$$ the Bayes factor is $$\mathfrak{B}_{12}=\dfrac{\exp\{-n(\bar{x}_n)^2/2\}}{\int_{-\infty}^{+\infty}\exp\{-n(\bar{x}_n-\theta)^2/2\}\pi(\theta)\,\text{d}\theta}$$ It is well defined when $\pi$ is a proper prior, but if you take a Normal prior $\mathcal{N}(0,\tau^2)$ on $\theta$ and let $\tau$ go to infinity, the denominator goes to zero for any value of $\bar{x}_n$ different from zero and any value of $n$. (Unless $\tau$ and $n$ are related, but this gets more complicated!) If instead you use directly $$\pi(\theta)=\mathfrak{c}$$where $\mathfrak{c}$ is a necessarily arbitrary constant, the Bayes factor $\mathfrak{B}_{12}$ will be $$\mathfrak{B}_{12}=\dfrac{\exp\{-n(\bar{x}_n)^2/2\}}{\mathfrak{c}\int_{-\infty}^{+\infty}\exp\{-n(\bar{x}_n-\theta)^2/2\}\,\text{d}\theta}=\dfrac{\exp\{-n(\bar{x}_n)^2/2\}}{\mathfrak{c}\sqrt{2\pi/n}}$$ hence directly dependent on $\mathfrak{c}$. Now, if your priors are informative (and hence proper), there is no reason for the Jeffreys-Lindley paradox to occur. 
With a sufficient number of observations, the Bayes factor will consistently select the model that generated the data. (Or, more precisely, the model within the collection of models considered for model choice that is closest to the "true" model that generated the data.)
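A quick numeric illustration of the paradox, with hypothetical values: with a $\mathcal{N}(0,\tau^2)$ prior, both marginal likelihoods reduce to normal densities of $\bar{x}_n$, namely $\mathcal{N}(0,1/n)$ under the first model and $\mathcal{N}(0,\tau^2+1/n)$ under the second. For fixed data, $\mathfrak{B}_{12}$ grows without bound as $\tau\to\infty$, so the flat-prior limit always ends up favoring the point-null model:

```python
import math

def bayes_factor(xbar, n, tau):
    """B12 for M1: x ~ N(0,1) vs M2: x ~ N(theta,1), theta ~ N(0, tau^2).

    Both marginal likelihoods reduce to normal densities of the sample
    mean: N(0, 1/n) under M1 and N(0, tau^2 + 1/n) under M2.
    """
    def normal_pdf(x, var):
        return math.exp(-x * x / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    return normal_pdf(xbar, 1.0 / n) / normal_pdf(xbar, tau * tau + 1.0 / n)
```

With, say, $\bar{x}_n = 0.5$ and $n = 100$ (five standard errors from zero, so a frequentist test strongly rejects the null), the Bayes factor still climbs in favor of the null as $\tau$ grows, which is precisely the Jeffreys-Lindley behavior.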
24,490
Residual Diagnostics and Homogeneity of variances in linear mixed model
I think Questions 1 and 2 are interconnected. First, the homogeneity of variance assumption comes from here: $\boldsymbol \epsilon \ \sim \ N(\mathbf{0, \sigma^2 I})$. But this assumption can be relaxed to more general variance structures, in which the homogeneity assumption is not necessary. That means it really depends on how the distribution of $\boldsymbol \epsilon$ is assumed. Second, the conditional residuals are used to check the distribution of (and thus any assumptions related to) $\boldsymbol \epsilon$, whereas the marginal residuals can be used to check the total variance structure.
24,491
Residual Diagnostics and Homogeneity of variances in linear mixed model
This is a really broad topic and I will only provide a general picture about the connection to standard linear regression. In the model listed in the question, $$ \mathbf{y_i \sim N(X_i\boldsymbol \beta, Z_i \boldsymbol D Z'_i + \boldsymbol \sigma^2 I)}, $$ if $\boldsymbol \gamma_i \sim N(\mathbf{0, D})$, where $i$ denotes a subject or cluster. Let $\mathbf{\Sigma_i=Z_i \boldsymbol D Z'_i + \boldsymbol \sigma^2 I}$. Using the Cholesky decomposition $\mathbf{\Sigma_i=L_i L'_i}$, we can transform the outcome and design matrix, $$\mathbf{y^*_i=L_i^{-1}y_i; X^*_i=L_i^{-1}X_i}.$$ As noted in Applied Longitudinal Analysis (Page 268), the generalized least squares (GLS) estimate of $\boldsymbol \beta$ (regressing $\mathbf y_i$ on $\mathbf X_i$) can be re-estimated from the OLS regression of $\mathbf y^*_i$ on $\mathbf X^*_i$. So all the built-in residual diagnostics from the resulting OLS can be used here. What we need to do is: estimate $\boldsymbol \Sigma_i$ from the (marginal) residual or variance component estimates in linear mixed model; re-fit an OLS regression using the transformed data. The OLS regression assumes independent observations with homogeneous variance, so standard diagnostics techniques can be applied to its residuals. Much more details can be found in Chapter 10 "Residual analyses and diagnostics" of the book Applied Longitudinal Analysis. They also discussed transforming the residual with $\mathbf L_i$, and there are some plots of (transformed) residuals (vs predicted values or predictors). More readings are listed in 10.8 "Further readings" and Bibliographic notes therein. Furthermore, in my opinion, given we assume $\boldsymbol \epsilon$ are independent with homogeneous variance, we can test these assumptions on the conditional residuals using the tools from standard regression.
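The key identity, that GLS on $(\mathbf y_i, \mathbf X_i)$ equals OLS on the whitened $(\mathbf y^*_i, \mathbf X^*_i)$, is easy to verify numerically. The sketch below uses a made-up compound-symmetry covariance as a stand-in for $\mathbf{Z_i \boldsymbol D Z'_i + \sigma^2 I}$ (the specific numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.normal(size=(n, p))
beta = np.array([1.0, -2.0, 0.5])

# Made-up compound-symmetry covariance, a stand-in for Z_i D Z_i' + sigma^2 I
Sigma = 0.5 * np.ones((n, n)) + np.eye(n)
L = np.linalg.cholesky(Sigma)          # Sigma = L L'
y = X @ beta + L @ rng.normal(size=n)  # correlated errors

# Direct GLS estimate: (X' Sigma^-1 X)^-1 X' Sigma^-1 y
Sinv = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Sinv @ X, X.T @ Sinv @ y)

# OLS on the whitened data y* = L^-1 y, X* = L^-1 X
ystar = np.linalg.solve(L, y)
Xstar = np.linalg.solve(L, X)
beta_ols, *_ = np.linalg.lstsq(Xstar, ystar, rcond=None)
```

Because $\mathbf{X^{*\prime} X^* = X' \Sigma^{-1} X}$ and $\mathbf{X^{*\prime} y^* = X' \Sigma^{-1} y}$, the two estimates coincide, and the OLS residuals of the whitened fit are the ones to feed into standard diagnostics.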
24,492
Generalised additive model: What is ref.df in R's output?
Simon Wood, the author of the mgcv package and a book on GAMs, explains this here: These [Ref.df] are a bit of a throwback really, and are not very useful - they are reference degrees of freedom used in computing test statistic and the p-values, but since the null distributions are non-standard the reference DoF is not very interpretable.
24,493
How many observations do you need within each level of a random factor to fit a random effect?
There's nothing about a random or mixed effects model that requires having a certain number of observations per level. In fact, if you have many observations per level then the random effect may not be necessary and you may be able to just include that as a factor variable. You should be able to just specify the site as a random effect variable and proceed, and it shouldn't negatively affect the quality of your inference. As you note in the comment, the standard GLMM with the lme4 package ought to work fine for this.
24,494
How many observations do you need within each level of a random factor to fit a random effect?
You can include the sites with only two counts in your analysis. One way is to create a multi-level model, assign prior distributions to all the observables, and then use the data to update to the posterior distribution (essentially a Bayesian analysis). For guidance, look at Gelman and Hill (2006) and the tutorials for using Stan. One keyword to look for is "data imputation" to handle missing values.
24,495
Kernel density estimation on asymmetric distributions
First of all, KDE with symmetric kernels can also work very well when your data is asymmetric; otherwise it would be completely useless in practice. Secondly, have you considered rescaling your data to fix the asymmetry, if you believe this is causing the problem? For example, it may be a good idea to try going to $\log(x)$, as this is known to help in many problems.
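As a sketch of the $\log(x)$ idea (toy lognormal data; the bandwidth $h=0.3$ is an arbitrary choice): estimate the density of $U=\log X$ with an ordinary symmetric Gaussian kernel, then map it back via the change of variables $f_X(x) = f_U(\log x)/x$, which keeps all the mass on $(0,\infty)$:

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # right-skewed, positive

def gaussian_kde(points, h):
    """Plain KDE with a symmetric Gaussian kernel and fixed bandwidth h."""
    def f(x):
        x = np.atleast_1d(x)[:, None]
        k = np.exp(-((x - points) ** 2) / (2.0 * h * h))
        return k.sum(axis=1) / (len(points) * h * np.sqrt(2.0 * np.pi))
    return f

# Estimate the density of U = log(X) in log space ...
f_u = gaussian_kde(np.log(data), h=0.3)

# ... then map back: f_X(x) = f_U(log x) / x, supported on (0, inf)
def f_x(x):
    x = np.atleast_1d(x).astype(float)
    return f_u(np.log(x)) / x

# Sanity check: the back-transformed density still integrates to ~1,
# since the change of variables preserves total mass
u = np.linspace(-6.0, 6.0, 2001)
fv = f_u(u)
mass = np.sum((fv[1:] + fv[:-1]) / 2.0 * np.diff(u))
```

A symmetric kernel in log space corresponds to an asymmetric, boundary-respecting kernel in the original space, which is exactly why the transform often helps with skewed positive data.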
24,496
Kernel density estimation on asymmetric distributions
Hmm. You might want a kernel width that changes as a function of location. If I were looking at the problem in eCDF then I might try and make a numeric slope of the CDF relate to the Kernel size. I think that if you are going to do a coordinate transform, then you need to have a pretty good idea of the start and end points. If you know the target distribution that well, then you don't need the Kernel approximation.
24,497
How to reduce number of items using factor analysis, internal consistency, and item response theory in conjunction?
I don't have any citations, but here's what I'd suggest: Zeroth: If at all possible, split the data into a training and test set. First do EFA. Look at various solutions to see which ones make sense, based on your knowledge of the questions. You'd have to do this before Cronbach's alpha, or you won't know which items go into which factor. (Running alpha on ALL the items is probably not a good idea). Next, run alpha and delete items that have much poorer correlations than the others in each factor. I wouldn't set an arbitrary cutoff, I'd look for ones that were much lower than the others. See if deleting those makes sense. Finally, choose items with a variety of "difficulty" levels from IRT. Then, if possible, redo this on the test set, but without doing any exploring. That is, see how well the result found on the training set works on the test set.
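For the alpha step, the statistic itself is just $\alpha = \frac{k}{k-1}\bigl(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_{\text{total}}}\bigr)$, computed per factor. A minimal sketch (the item-drop loop for "alpha if item deleted" is left implicit):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x k_items) score matrix,
    computed for the items of a single factor."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item sample variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)
```

Recomputing alpha with each column dropped in turn, and flagging items whose removal raises alpha noticeably, is the deletion comparison described above; as noted, look for items much lower than the rest rather than applying an arbitrary cutoff.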
24,498
How to reduce number of items using factor analysis, internal consistency, and item response theory in conjunction?
All three of your suggested criteria actually could be performed in IRT, more specifically multidimensional IRT. If your sample size is fairly large, this would probably be a consistent way to go about it for each subscale. In this way you could get the benefits of IRT for modelling items independently (using nominal models for some items, generalized partial credit or graded models for others, or if possible even setting up rating scales to help interpret polytomous items in a more parsimonious way). MIRT is conceptually equivalent to item-level factor analysis and therefore has a linear EFA equivalent relationship for dichotomous and polytomous items.

I'm not sure if I would buy into the <.3 criterion for dropping items, though, since it really depends on the context and factor structure. Small loadings/slopes don't provide as much information about the intercept locations, but may still be useful since they can offer a wider and less peaked information function across levels of $\theta$. Some applications in CAT make use of these types of items early on as well, since they give a wider band of information early in the test.

Dropping items based on the Cronbach criterion is roughly the same as dropping items whose removal gives a better marginal/empirical reliability in IRT, so if the software you are using supports these statistics then you could follow the same strategy without leaving the IRT paradigm. I'd be more inclined to check the information functions, however, to see if removing an item severely affects the measurement at various $\theta$ levels (related to where the intercepts are). Relative information plots are useful here as well.

You could attempt to remove items that don't conform to the unidimensional requirements of most IRT software, but I wouldn't necessarily recommend this if it affects the theoretical representation of the constructs at hand. In empirical applications it's usually better to try and make our models fit our theory, not the other way around. Also, this is where the bifactor/two-tier models tend to be appropriate, since you would like to include all possible items while accounting for multidimensionality in a systematic and theoretically desirable way.
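The point that small slopes give a wider, less peaked information function is easy to see numerically. Here is a minimal sketch under the 2PL model, where the item information is $a^2\,P(\theta)\,(1-P(\theta))$; the parameter values are made up for illustration:

```python
import numpy as np

def p_2pl(theta, a, b):
    """2PL response probability: P(theta) = 1 / (1 + exp(-a (theta - b)))."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a**2 * p * (1 - p)

theta = np.linspace(-4, 4, 161)
steep = item_information(theta, a=2.0, b=0.0)  # high slope: tall, peaked information
flat = item_information(theta, a=0.5, b=0.0)   # low slope: lower but broader information

# The high-slope item dominates near theta = b (peak value a^2 / 4 = 1.0),
# but the low-slope item actually provides MORE information in the tails --
# which is why such items can still be useful, e.g. early in a CAT.
```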
24,499
Assumptions of GAM
The similarity between the two methods is the link function and the additive structure, but otherwise the generalized additive model is more general because the functions of the covariates need not be linear: they are nonparametric smooth functions, whereas in the generalized linear model the predictor is linear in the parameters. I think that if you are fitting by least squares, then in both cases you would test normality and constant variance of the residuals just as you would for OLS linear regression.
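Those residual checks can be sketched without committing to any particular GAM package. The following hypothetical example (simulated data) fits by plain least squares, then applies a Shapiro-Wilk test for normality and a crude Breusch-Pagan-style regression of the squared residuals for constant variance:

```python
import numpy as np
from scipy import stats

# Simulated, well-behaved data for illustration.
rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=300)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=300)

# Least-squares fit (intercept + slope) and residuals.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Normality check: Shapiro-Wilk test on the residuals.
w_stat, p_norm = stats.shapiro(resid)

# Constant-variance check: regress squared residuals on x;
# a small p-value on the slope would signal heteroscedasticity.
bp = stats.linregress(x, resid**2)
```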
24,500
Assumptions of GAM
This might be a bit late, but for a GLM the residuals won't in general be normally distributed (Faraway, 2006). Using the halfnorm function (from the faraway R package) is a good way to detect outliers: they show up as points that break off the trend with noticeable jumps.
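Outside R, the half-normal plot coordinates are easy to compute by hand: sorted absolute residuals against half-normal quantiles, mirroring what faraway's halfnorm plots. A sketch on made-up data with one planted outlier:

```python
import numpy as np
from scipy import stats

def halfnorm_points(resid):
    """Half-normal plot coordinates: sorted |residuals| against
    quantiles Phi^-1((n + i) / (2n + 1)), i = 1..n, as in faraway's halfnorm."""
    r = np.sort(np.abs(np.asarray(resid, dtype=float)))
    n = r.size
    q = stats.norm.ppf((n + np.arange(1, n + 1)) / (2 * n + 1))
    return q, r

rng = np.random.default_rng(2)
resid = rng.normal(size=100)
resid[0] = 6.0  # plant one gross outlier
q, r = halfnorm_points(resid)
# Plotting r against q, the outlier appears at the far right,
# jumping well above the trend of the other points.
```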